id | title | abstract | authors | published_date | link | markdown
---|---|---|---|---|---|---
2302.10728 | Intergranular Hotspots: A Molecular Dynamics Study on the Influence of
Compressive and Shear Work | Numerous crystal- and microstructural-level mechanisms are at play in the
formation of hotspots, which are known to govern high explosive initiation
behavior. Most of these mechanisms, including pore collapse, interfacial
friction, and shear banding, involve both compressive and shear work done
within the material and have thus far remained difficult to separate. We assess
hotspots formed at shocked crystal-crystal interfaces using quasi-1D molecular
dynamics simulations that isolate effects due to compression and shear. Two
high explosive materials are considered (TATB and PETN) that exhibit distinctly
different levels of molecular conformational flexibility and crystal packing
anisotropy. Temperature and intra-molecular strain energy localization in the
hotspot is assessed through parametric variation of the crystal orientation and
two velocity components that respectively modulate compression and shear work.
The resulting hotspots are found to be highly localized to a region within 5-20
nm of the crystal-crystal interface. Compressive work plays a considerably
larger role in localizing temperature and intra-molecular strain energy for
both materials and all crystal orientations considered. Shear induces a
moderate increase in energy localization relative to unsheared cases only for
relatively weak compressive shock pressures of approximately 10 GPa. These
results help isolate and rank the relative importance of hotspot generation
mechanisms and are anticipated to guide the treatment of crystal-crystal
interfaces in coarse-grained models of polycrystalline high explosive
materials. | Brenden W. Hamilton, Matthew P. Kroonblawd, Jalen Macatangay, H. Keo Springer, Alejandro Strachan | 2023-02-21T15:25:42Z | http://arxiv.org/abs/2302.10728v1 | # Intergranular Hotspots: A Molecular Dynamics Study on the Influence of Compressive and Shear Work
###### Abstract
Numerous crystal- and microstructural-level mechanisms are at play in the formation of hotspots, which are known to govern high explosive initiation behavior. Most of these mechanisms, including pore collapse, interfacial friction, and shear banding, involve both compressive and shear work done within the material and have thus far remained difficult to separate. We assess hotspots formed at shocked crystal-crystal interfaces using quasi-1D molecular dynamics simulations that isolate effects due to compression and shear. Two high explosive materials are considered (TATB and PETN) that exhibit distinctly different levels of molecular conformational flexibility and crystal packing anisotropy. Temperature and intra-molecular strain energy localization in the hotspot is assessed through parametric variation of the crystal orientation and two velocity components that respectively modulate compression and shear work. The resulting hotspots are found to be highly localized to a region within 5-20 nm of the crystal-crystal interface. Compressive work plays a considerably larger role in localizing temperature and intra-molecular strain energy for both materials and all crystal orientations considered. Shear induces a moderate increase in energy localization relative to unsheared cases only for relatively weak compressive shock pressures of approximately 10 GPa. These results help isolate and rank the relative importance of hotspot generation mechanisms and are anticipated to guide the treatment of crystal-crystal interfaces in coarse-grained models of polycrystalline high explosive materials.
## 1 Introduction
Shockwaves in solids can induce a variety of complex responses such as plasticity[1, 2, 3], melting[4, 5], fracture[6, 7], and chemical reactions[8, 9, 10]. The shock initiation of chemistry, which can eventually lead to a run to detonation in high explosives (HEs), is governed by the formation of hotspots, which are local regions of excess temperature[11, 12]. These pockets of high energy density can be formed through many mechanisms such as shear bands, cracking, friction, void collapse, and the interaction of shock waves[12]. Shock desensitization experiments have shown that the collapse of voids and porosity is the dominant mechanism behind the formation of critical hotspots that can result in the run to detonation[13]. Direct numerical simulations performed across a range of time and length scales have helped unravel the governing physics of hotspot formation and how that links to initial material microstructure[14, 15, 24, 16, 17, 18, 19, 20, 21, 22, 23], yet a complete mapping between initial microstructure and hotspot formation remains a grand challenge for the shock physics community.
Atomistic scale simulations have shown that, for strong shocks, the pressure-volume (PV) work done via recompression of ejected material during pore collapse is a key mechanism for reaching high temperatures that result in prompt chemistry[25]. As a shockwave reaches the upstream face of the pore, material accelerates and expands into the void. Once the expanded material propagates across the void, it recompresses as it collides with the downstream face of the pore, leading to excess PV work compared to the shock in bulk material, forming a hotspot. The more material can expand in the void, the more work will be done during recompression. Shock focusing and compression along the major axis of high-aspect-ratio defects can eject molecules into the void to a gas-like density, which maximizes the PV work done during recompression[26].
Pore geometry can have substantial effects on the collapse process. With 2D and 3D void geometries (i.e., surfaces/faces with curvature), shock focusing can create a laterally inwards flow of material to form a jet[27, 28]. For simple 1D cases or a planar surface, material accelerates to twice the shock's particle velocity when the wave reaches a free surface. Such 1D "voids" serve as a reductionist model for the expansion and compression along the minor axis of high-aspect-ratio pores and for intergranular geometries encountered in real HE samples, which are almost always polycrystalline in nature. Holian et al. derived an expression for the maximum increase in temperature, \(\Delta T_{\rm max}\), for hotspots formed in the 1D case, where PV work is the main mechanism due to a lack of jetting and limited effects from friction and shear[25]. The scaling law for this maximum is \(k_{B}\Delta T_{\rm max}=mU_{s}U_{p}/d\), where \(m\) is atomic mass, \(d\) is dimensionality, \(U_{s}\) and \(U_{p}\) are the shock and particle velocities, and \(k_{B}\) is Boltzmann's constant.
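To give a rough sense of the magnitudes this scaling law implies, the short sketch below evaluates \(\Delta T_{\rm max}\) for representative inputs; the average atomic mass and the shock/particle velocities used are illustrative assumptions rather than values taken from the simulations reported here, and the result is an upper bound by construction.

```python
# Worked example of the Holian et al. scaling law  k_B * dT_max = m * U_s * U_p / d
# for a quasi-1D (d = 1) hotspot. All input numbers are illustrative assumptions.

K_B = 1.380649e-23    # Boltzmann constant, J/K
AMU = 1.66053907e-27  # atomic mass unit, kg

def delta_T_max(m_avg_amu, U_s_km_s, U_p_km_s, d=1):
    """Upper-bound hotspot temperature rise from PV work, in Kelvin."""
    return (m_avg_amu * AMU) * (U_s_km_s * 1e3) * (U_p_km_s * 1e3) / (d * K_B)

# TATB (C6H6N6O6, 24 atoms, ~258 g/mol) has an average atomic mass of ~10.8 amu.
# U_p = 2 km/s matches the strongest compressive velocity considered in the text;
# U_s ~ 6 km/s is an assumed, representative shock velocity.
print(f"dT_max ~ {delta_T_max(10.8, 6.0, 2.0, d=1):.0f} K (upper bound)")
```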
Recent molecular dynamics (MD) studies have shown that while high temperatures trigger prompt reactions within a hotspot, an additional mechanism can further accelerate and alter these reactions: mechanochemistry resulting from latent intra-molecular potential energy. Mechanochemistry, or chemistry that results from the straining and deformation of covalent bonds, has been well documented for a variety of systems[29, 30, 31, 32, 33, 34]. Reactive MD simulations have shown that hotspots formed during the shock compression of voids are significantly more reactive than those formed in the perfect crystal under equivalent temperature and pressure conditions[35]. Adding a shear component during planar crystal-crystal impact (or '1D pore collapse') can also increase reactivity within the hotspot[36].
Nonreactive MD modeling has shown that plastic flow during pore collapse in the molecular crystalline HE TATB (1,3,5-triamino-2,4,6-trinitrobenzene) can lead to large intra-molecular strains in which the strain energy is stored directly in the modes relevant to prompt chemistry[37, 38]. Reactive calculations of the same intra-molecular strain phenomena showed that these strains both accelerate initial reactions and alter the first-step reaction pathways [39, 40]. Nanoscale shear banding in bulk TATB was shown to also induce large intra-molecular strains, which lead to a significant acceleration of kinetics [41] that occurs essentially homogeneously throughout the material for shocks above approximately 20 GPa [42]. A steered MD approach designed to capture the complex many-body response observed in these strained molecular states was able to systematically extract reaction kinetics and paths, showing up to a 25% decrease in activation energy for deformations found in hotspots and the possibility of alternate reaction pathways that have a higher energy barrier under unstrained conditions [43]. Previous works have shown these molecular strains and the associated mechanochemical response to be driven by plastic flow during the formation of the hotspot [38], and that varying shock strength, pore size, and crystal orientation in TATB induces different levels of molecular strain.
The above prior work indicates the importance of expansion and recompression to generate high temperatures in hotspots, as well as of localized high-rate plastic flow to deform molecules leading to their "mechanochemical activation". Shock-induced pore collapse exhibits both processes in a highly coupled manner and typically involves complex geometries, which makes it challenging to separate their contributions to energy localization and the initiation of chemistry.
To address this challenge, we use MD simulations specially designed to independently control expansion/recompression and shear deformation. We adopt a quasi-1D simulation geometry in which two spatially separated crystals are subjected to a compressive shock with prescribed lateral load that induces shearing at the crystal-crystal interface formed upon impact. By varying the longitudinal and transverse loads as well as the crystallographic orientation of the HE crystals, we map the processes that localize energy and find conditions that enhance intra-molecular strain and temperature of the hotspots that form at these interfaces. The generality of the trends is assessed by comparison of two HE materials that differ in their molecular shape and conformational flexibility.
## 2 Methods
All simulations in this work were performed with all-atom non-reactive MD using the LAMMPS software [44, 45]. Two representative HE materials were considered, TATB and pentaerythritol tetranitrate (PETN). Both materials were modeled using non-reactive force fields with similar class-I functional forms.
The force field used for TATB is based on that of Bedrov et al. [46], and includes tailored harmonic bond stretch and angle bend terms for flexible molecules [47], and an intra-molecular O-H repulsion term that was implemented as a bonded interaction [48]. The covalent bond vibrations, angle bends, and improper dihedrals are modeled using harmonic functions. Proper dihedrals are modeled using a cosine series. Van der Waals interactions are modeled using the Buckingham potential (exponential repulsion and an r\({}^{-6}\) attractive term) combined with short-ranged r\({}^{-12}\) potentials that compensate for the divergence in the Buckingham potential at small separation. The non-bonded terms were evaluated in real space within an 11 Å cutoff. Electrostatic interactions were calculated between constant partial charges located on the nuclei and were evaluated using the short-ranged Wolf potential with a damping parameter of 0.2 Å\({}^{-1}\) and an 11 Å cutoff [49]. The TATB force field excludes all intra-molecular nonbonded interactions by design.
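For readers unfamiliar with this functional form, the minimal sketch below illustrates how the exp-6 Buckingham term behaves with and without the short-ranged r\({}^{-12}\) correction; the parameter values are placeholders chosen only for illustration and are not the published TATB force-field coefficients.

```python
import numpy as np

def buckingham_exp6(r, A, rho, C):
    """Plain Buckingham (exp-6) pair energy; the -C/r^6 term diverges to -inf as r -> 0."""
    return A * np.exp(-r / rho) - C / r**6

def exp6_with_r12(r, A, rho, C, D):
    """Buckingham term plus a short-ranged r^-12 repulsion that removes the
    unphysical small-separation divergence (the strategy described in the text)."""
    return A * np.exp(-r / rho) - C / r**6 + D / r**12

# Placeholder parameters (illustrative only): energies in kcal/mol, distances in Angstrom.
A, rho, C, D = 4.0e4, 0.30, 600.0, 1.0e3
r = np.linspace(0.5, 11.0, 200)   # evaluated within an 11 Angstrom cutoff
print(buckingham_exp6(r[:3], A, rho, C))
print(exp6_with_r12(r[:3], A, rho, C, D))
```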
The force field used for PETN was developed by Borodin et al. [50]. We used the same implementation of the PETN force field as described in Ref. [51]. There are three key differences between the TATB and PETN force field forms and implementation in LAMMPS. First, the PETN
force field does not include short-ranged r\({}^{-12}\) potentials that compensate for the divergence in the Buckingham potential. Second, it employs standard intra-molecular nonbonded exclusions in which only the 1-2 and 1-3 nonbonded interactions are set to zero. Third, electrostatics in the PETN force field were evaluated with the PPPM method[51, 52] with the relative accuracy set to \(1~{}\times~{}10^{-6}\) rather than with the Wolf potential. We note that while our prior testing has shown negligible differences in atomic forces obtained with the Wolf potential and PPPM for TATB[41], the present simulations were not sufficiently large to motivate testing the Wolf potential as applied to the PETN force field. Implementing the PETN force field using the Wolf potential would provide significant computational speedup for larger systems.
Our simulation approach was designed to assess the localization of energy at crystal-crystal interfaces subject to axial compression and transverse shearing. Simulation cells were modeled after the work in Ref. [36] in which a 1D void (or gap) was placed at the center of the cell between two slabs of crystal with periodic boundary conditions in the lateral directions, as shown in Figure 1. The sample (both crystals) is launched into a fixed wall by assigning a particle velocity to all atoms; shear loading is controlled by a lateral velocity applied to one of the crystals, as described in detail below. Gaps with a length of \(\sim\)40 nm were created using multiples of whole unit cells, with initial cell sizes of \(\sim\)250 nm in length along the compression direction and ranging from 4-6 nm along the transverse directions. Cells were thermalized at 300 K using a Nosé-Hoover thermostat[53] for 100 ps. Atomic velocities were re-initialized every 10 ps to attenuate breathing modes that form upon the creation of a free surface.
Shock simulations were performed with the reverse ballistic approach[54] by holding the leftmost 5 nm of the sample rigid, which forms a piston that drives a shock wave in the remaining fully flexible portion of the sample. Two translational velocity components were added to the thermal velocities of the flexible molecules in each crystal to impose an initial condition leading to compression and shear. Compressive work was controlled by adding the shock particle velocity (i.e., compressive velocity) \(U_{\mathrm{p}}=V_{\mathrm{z}}\) along the z direction to both crystals. Shearing work was controlled by adding a second lateral velocity \(U_{\mathrm{\tau}}=V_{\mathrm{y}}\) to the second crystal on the right-hand side along the y direction. This leads to a situation where the second crystal moves laterally at a uniform velocity until complete closure of the void space, when the two crystals impact each other, leading to shear friction at the interface.
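Conceptually, the initial condition amounts to superimposing two translational components on the thermalized atomic velocities; the sketch below is an illustrative reconstruction of that step (the array layout, sign convention, and magnitudes are assumptions, and it is not the actual LAMMPS input used in this work).

```python
import numpy as np

def initialize_velocities(thermal_v, crystal_id, U_p, U_tau):
    """Superimpose translational components on 300 K thermal velocities.

    thermal_v  : (N, 3) array of thermal velocities (vx, vy, vz) of flexible molecules
    crystal_id : (N,) array, 1 for the first (upstream) crystal, 2 for the second
    U_p        : compressive particle velocity added along z to BOTH crystals
    U_tau      : lateral (shear) velocity added along y to the SECOND crystal only
    The rigid piston atoms are not modeled here; the sign convention is illustrative.
    """
    v = thermal_v.copy()
    v[:, 2] += U_p                      # compression component for both crystals
    v[crystal_id == 2, 1] += U_tau      # shear component for the downstream crystal
    return v

# Toy usage with random "thermal" velocities for 10 atoms (placeholder magnitudes, km/s).
rng = np.random.default_rng(0)
v0 = rng.normal(scale=0.3, size=(10, 3))
ids = np.array([1] * 5 + [2] * 5)
print(initialize_velocities(v0, ids, U_p=2.0, U_tau=3.0))
```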
Three crystallographic orientations were considered for TATB, and two for PETN, as shown in Figure 2. Constructions for both HEs used equilibrium lattice parameters determined with their respective force fields at 1 atm and 300 K, with the TATB parameters coming from Ref. [48] and the
Figure 1: Simulation setups for crystal-crystal shock impacts with varying compressive and lateral velocities. Dashed line represents periodic boundary conditions.
PETN parameters from Ref. [51]. The TATB oriented cells were produced with the generalized crystal cutting method (GCCM) [55]. TATB orientations were defined by the inclination angle \(\theta\) of the shock direction vector \(\mathbf{S}\), which varied between the [100] direction (\(\theta=0^{\mathrm{o}}\)) and the basal plane normal vector given by \(\mathbf{a}\times\mathbf{b}\) (\(\theta=90^{\mathrm{o}}\)). The GCCM solutions used to obtain oriented TATB supercells from the triclinic \(P\bar{1}\) structure were as follows:
TATB 0\({}^{\mathrm{o}}\):
\(\mathbf{A}=-1\mathbf{a}-2\mathbf{b}-1\mathbf{c}\)
\(\mathbf{B}=0\mathbf{a}+0\mathbf{b}-1\mathbf{c}\)
\(\mathbf{C}=1\mathbf{a}+0\mathbf{b}+0\mathbf{c}\)
TATB 45\({}^{\mathrm{o}}\):
\(\mathbf{A}=-1\mathbf{a}+1\mathbf{b}-2\mathbf{c}\)
\(\mathbf{B}=0\mathbf{a}+3\mathbf{b}+4\mathbf{c}\)
\(\mathbf{C}=15\mathbf{a}+11\mathbf{b}+0\mathbf{c}\)
TATB 90\({}^{\mathrm{o}}\):
\(\mathbf{A}=-5\mathbf{a}-3\mathbf{b}+0\mathbf{c}\)
\(\mathbf{B}=1\mathbf{a}-7\mathbf{b}+0\mathbf{c}\)
\(\mathbf{C}=1\mathbf{a}+2\mathbf{b}+6\mathbf{c}\)
The supercells were oriented such that \(\mathbf{A}\) was along x, \(\mathbf{B}\) was in the x-y plane with a positive y component, and \(\mathbf{C}\) was in the positive z half-space. The shock direction, z, was therefore nominally along \(\mathbf{C}\), and the lateral velocity direction, y, was nominally along \(\mathbf{B}\). PETN orientations were chosen to span its observed [56, 57] directional sensitivity to shock initiation, with [100] being less sensitive than [001]. The shear direction for both is the [010] direction. Noted orientations in Figure 2 are aligned with the vertical axis of the cell, which is the shock direction and is parallel to the z-axis arrow.
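A minimal sketch of how these integer combinations of the triclinic lattice vectors map to Cartesian space is given below; the TATB cell parameters used are approximate experimental literature values (the actual constructions used the force-field equilibrium parameters from Ref. [48]), and the check simply confirms that the 0\({}^{\mathrm{o}}\) construction places \(\mathbf{C}\) along [100] while the 90\({}^{\mathrm{o}}\) construction places it nominally along \(\mathbf{a}\times\mathbf{b}\).

```python
import numpy as np

def triclinic_vectors(a, b, c, alpha, beta, gamma):
    """Cartesian lattice vectors from cell parameters (angles in degrees),
    using the standard convention with a along x and b in the x-y plane."""
    al, be, ga = np.radians([alpha, beta, gamma])
    va = np.array([a, 0.0, 0.0])
    vb = np.array([b * np.cos(ga), b * np.sin(ga), 0.0])
    cx = c * np.cos(be)
    cy = c * (np.cos(al) - np.cos(be) * np.cos(ga)) / np.sin(ga)
    vc = np.array([cx, cy, np.sqrt(c**2 - cx**2 - cy**2)])
    return va, vb, vc

def angle_deg(u, v):
    cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# Approximate TATB cell parameters (Angstrom, degrees); assumed literature values.
va, vb, vc = triclinic_vectors(9.01, 9.03, 6.81, 108.6, 91.8, 120.0)
basal_normal = np.cross(va, vb)

C_0deg = 1 * va + 0 * vb + 0 * vc    # 0 deg construction: C = a, i.e. along [100]
C_90deg = 1 * va + 2 * vb + 6 * vc   # 90 deg construction: nominally along a x b

print("angle(C_0deg, [100])  =", round(angle_deg(C_0deg, va), 1), "deg")
print("angle(C_90deg, a x b) =", round(angle_deg(C_90deg, basal_normal), 1), "deg")
```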
Two main properties were assessed on a per-molecule basis: the roto-vibrational kinetic energy (\(K_{ro-vib}\)), measured relative to the center of mass (C.O.M.) of each molecule, which we interpret as an intra-molecular temperature (\(T\)) expressed in units of Kelvin; and the intra-molecular potential energy as calculated by the force field (\(U_{Intra}\)).
\[K_{ro-vib}=\frac{3N-3}{2}k_{B}T\]
\[U_{Intra}=\sum PE_{Bond}+\sum PE_{Ang}+\sum PE_{Dih}+\sum PE_{Imp}+\sum PE_{NB}\]
For TATB, the force field excludes all non-bonded intra-molecular interactions by design, so the intra-molecular potential energy can be directly computed from the sum of all bonded terms. For PETN, the force field includes intra-molecular non-bonded interactions, so the total per-molecule non-bonded energy contains both intra- and inter-molecular contributions in condensed-phase systems. We determined \(U_{Intra}\) for PETN by calculating the total potential energy of each molecule through single-point calculations in which the molecule was placed in isolation in a large cubic cell, which effectively eliminates inter-molecular non-bonded contributions. The difference between \(U_{Intra}\) and \(K_{ro-vib}\) was also assessed, where each was referenced by the equilibrium value at 300 K and 1 atm. This difference gives a measure of the intra-molecular strain energy associated
Figure 2: Crystallographic structure of each of the oriented cells in relation to the compression (z) and shear (y) directions. Listed angles or crystallographic directions are aligned with the vertical (z) axis of the cell. Atoms are colored grey, blue, red, and white for C, N, O, and H.
with deforming molecule conformations from the equilibrium structure, which we denote as \(U_{Latent}\):
\[U_{Latent}=[U_{Intra}-U_{o}]-\left[\frac{3N-3}{2}k_{B}(T-300\ \mathrm{K})\right]\]
For an undeformed system in equilibrium, this value is zero on average due to the equipartition of energy. Supplemental Materials section SM-1 shows the \(U_{Intra}\) increase in the bulk material for each compressive velocity applied.
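In pseudocode form, the per-molecule analysis reduces to evaluating the definitions above; the sketch below mirrors the temperature and \(U_{Latent}\) expressions, treating the per-molecule \(U_{Intra}\) as a precomputed input (the arrays, units, and example numbers are assumptions for illustration, not the actual analysis script used in this work).

```python
import numpy as np

KB = 0.0019872041  # Boltzmann constant, kcal/(mol K)

def rovib_temperature(masses, velocities):
    """Roto-vibrational temperature of one molecule from velocities relative to its
    center of mass, via K_rovib = (3N - 3)/2 * kB * T."""
    m = np.asarray(masses, dtype=float)[:, None]          # g/mol
    v_rel = velocities - (m * velocities).sum(axis=0) / m.sum()
    # kinetic energy in kcal/mol, with velocities in km/s: 1 (g/mol)(km/s)^2 = 0.239 kcal/mol
    ke = 0.5 * (m * v_rel**2).sum() * 0.2390057
    n = len(masses)
    return 2.0 * ke / ((3 * n - 3) * KB)

def u_latent(u_intra, u_intra_300K, T, n_atoms):
    """Intra-molecular strain energy: excess potential energy beyond the thermal
    (equipartition) expectation, referenced to the 300 K, 1 atm equilibrium value."""
    return (u_intra - u_intra_300K) - (3 * n_atoms - 3) / 2.0 * KB * (T - 300.0)

# Toy usage for one fictitious 4-atom molecule (all numbers are placeholders).
masses = [12.0, 14.0, 16.0, 1.0]
vels = np.random.default_rng(1).normal(scale=0.5, size=(4, 3))
T = rovib_temperature(masses, vels)
print(round(T, 1), round(u_latent(120.0, 80.0, T, n_atoms=4), 2))
```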
## 3 General Shock Response
Figure 3 shows position-time diagrams (colloquially referred to as x-t diagrams) for a representative simulation with a 2.0 km/s compressive velocity and 3.0 km/s lateral velocity for the TATB 0\({}^{\circ}\) orientation case. These are colored by the compressive-direction (particle) velocity, the lateral (shear) velocity, and the local density, all of which are calculated using Eulerian binning along the compression direction z with bins of 2 nm. Figure 4 shows individual snapshots at key times in the same simulation as Figure 3. The view in Figure 4 captures material motions, and in particular highlights how the respective wave fronts and free surfaces coincide with phases of material expansion and recompression. We note that the plots in Figures 3 and 4 do not change qualitatively for different TATB or PETN orientations. Quantitative changes observed for these materials are discussed in Sections 4 and 5, respectively.
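The x-t diagrams amount to a simple Eulerian reduction of per-particle quantities into fixed bins along the compression direction at each saved frame; a generic version of that reduction is sketched below with hypothetical inputs (it is not the actual post-processing code used for Figure 3).

```python
import numpy as np

def eulerian_profile(z, values, z_max, bin_width=2.0):
    """Average a per-particle quantity in fixed (Eulerian) bins along z.

    z         : (N,) particle positions along the compression direction (nm)
    values    : (N,) quantity to average (e.g. v_z, v_y, or mass for a density proxy)
    z_max     : extent of the cell along z (nm)
    bin_width : Eulerian bin size along z (nm); 2 nm is used in the text
    """
    edges = np.arange(0.0, z_max + bin_width, bin_width)
    idx = np.clip(np.digitize(z, edges) - 1, 0, len(edges) - 2)
    sums = np.bincount(idx, weights=values, minlength=len(edges) - 1)
    counts = np.bincount(idx, minlength=len(edges) - 1)
    return np.where(counts > 0, sums / np.maximum(counts, 1), np.nan)

# Toy usage; an x-t diagram is this profile stacked over all saved frames.
z = np.array([1.0, 2.5, 3.0, 5.5])
vz = np.array([0.1, 0.2, 0.3, 0.4])
print(eulerian_profile(z, vz, z_max=8.0))
```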
Focusing first on the compressive velocity, the left-most panel of Figure 3 shows the evolution of multiple wave fronts. The initial shockwave (a) transits the first crystal before reaching the first free surface. Once the wave reaches the surface (I), part of it is reflected back into the sample (b) while the material expands freely into the void (c). The expanding material impacts the second crystal at (II), which leads to a second reflection (d) and transmission of another supported shockwave (e). Comparison against the lateral velocity plot shows that even at high compression, the lateral velocity does not attenuate outside of the immediate crystal-crystal interface at II on picosecond timescales. Transverse material flow continues to occur well after crystal-crystal impact, a trend that differs significantly from previous 2D void collapse work [37, 39]. This indicates relatively small friction in the shear band created around the impact plane. The density plot shows similar wave profiles as seen in the compressive velocity, but adds two additional pieces of information: (1) the material from the first crystal that expands into the void space (c) reaches a density state similar to the uncompressed crystal before it impacts the downstream surface at (II); and (2) the density is similar on either side of the downstream interface after impact at (II).
Figure 3: x-t diagrams for the 2 km/s compressive velocity and 3.0 km/s lateral velocity, colored by compressive velocity, lateral velocity, and density.
## 4 Tatb
We first focus on hotspots formed at crystal-crystal interfaces in TATB. A total of 105 distinct cases were considered in which the shock direction was set to values of \(\theta=0^{\circ}\), 45\({}^{\circ}\), and 90\({}^{\circ}\), the compressive velocity ranged from \(V_{\rm z}=1\) km/s to 2 km/s in steps of 0.25 km/s, and the transverse velocity ranged from \(V_{\rm y}=0\) km/s to 3 km/s by steps of 0.5 km/s. Figure 5 shows configuration snapshots that qualitatively highlight the orientation-dependent crystal deformations induced for a subset of these compressive and lateral velocities, denoted V\({}_{\rm z}\) and V\({}_{\rm y}\) respectively. Compressive velocities shown are 1.25 km/s and 2 km/s, lateral velocities shown are 0 and 3 km/s. These two compressive velocities lead to shocks with approximate pressures of 10 and 25 GPa, respectively, with the precise value being somewhat dependent on shock direction.
The 0\({}^{\circ}\) case results in significant disorder that varies with both V\({}_{\rm y}\) and V\({}_{\rm z}\). The crystal layers buckle, molecules rotate, and increasing shear velocity generally increases the degree of disorder at the interface for both weak and strong compressive shocks. Distinctly spaced planes where the layers buckle are more evident for the weaker compressive shock velocity. This is consistent with Ref. [58], which found that the spacing between buckling planes generally decreases with increasing strain rate.
In the 45\({}^{\circ}\) case, the crystal layers are oriented such that they align with the anticipated plane of maximum resolved shear at \(\pm\)45\({}^{\circ}\) relative to the compression direction. This activates basal slip in which the crystal layers slide past each other without inducing more significant deformations that destroy local lattice packing. Similar basal glide was observed under shock and non-shock axial loads in Refs. [42, 58, 59]. For this orientation, the two cases with nonzero lateral velocity lead to modest disordering that is localized to the crystal-crystal interface.
The 90\({}^{\circ}\) case, which compresses normal to the crystal layers along the most compliant direction in the crystal[42], does not induce much disorder until both the compressive and lateral velocities
Figure 4: Molecular renderings of the 0\({}^{\circ}\) case for V\({}_{\rm z}\) of 2.0 km/s and V\({}_{\rm y}\) of 3.0 km/s. Coloring is based on molecule center-of-mass velocities.
reach high levels. Although difficult to discern with the image resolution, the crystal layers remain largely intact even in the case with V\({}_{\mathrm{y}}\) = 3.0 km/s and V\({}_{\mathrm{z}}\) = 2.0 km/s. Regions of disorder are observed behind the shock front and typically consume the entire width of the sample through the periodic boundary. This contrasts with the extensive nanoscale shear band network that has been shown to form for compression along this direction with much larger MD simulations[41, 42]. Based on an earlier finite size effect study[41], it should be noted that the system cross-section sizes used here are expected to somewhat suppress the formation of shear bands in the bulk. This removes an important source for additional intra-molecular strain in the bulk far away from the interface. For non-overdriven shocks, the secondary plastic front where shear bands form will not reach the free surface before the primary elastic front causes the crystal to expand into the void space. Thus, we would not necessarily expect shear bands to form in the first crystal near the free surface with the present simulation geometry, even for cells with larger cross sections. Thus, we anticipate that the suppression of shear bands will not lead to qualitative differences in the structure of the hotspot formed at the crystal-crystal interface for this orientation case.
Figure 5: Snapshots showing TATB packing structure within and around the hotspots formed at crystal-crystal interfaces for selected orientations, compressive velocities, and lateral velocities. Only C atoms in the TATB ring are visualized.
Figure 6 contains x-t diagrams for all three orientations, showing the temperature and \(U_{Latent}\) for the 2.0 km/s compressive velocity and 3.0 km/s lateral velocity case. Focusing first on the temperature, it can be seen in all cases that heating is largely uniform except on the upstream half of the crystal interface after impact of the two crystals (\(x\approx 100\) nm, \(t\geq 25\) ps). This is expected due to the significantly greater PV work done there, as the material on the upstream surface expanded to near the initial density in the void space before it was recompressed on the downstream face of the second crystal. It is perhaps interesting that free expansion of the first crystal into the void leads to very little temperature dissipation. This contrasts with the response of \(U_{Latent}\). While shock compression leads to large and positive \(U_{Latent}\) in the first crystal, it dissipates to approximately zero upon free expansion into the void. Recompression leads to a similar-magnitude increase in \(U_{Latent}\) that is largely symmetric across the crystal-crystal interface.
The most salient orientation-independent features of hotspots formed at crystal-crystal interfaces are the apparent asymmetry of the temperature field and symmetry of the intra-molecular strain energy field. Additional differences in the general responses of these two fields are also apparent. In these high shear velocity cases, a thin band of extreme temperature (\(>\)2500K) exists directly at the interface where the material has experienced large levels of friction. In terms of absolute magnitude, the average \(U_{Intra}\) does not reach significantly higher values in the hotspot at the interface compared to its value in the compressed bulk.
Comparison of the three orientations shows that the hotspot temperature and bulk shock temperature are largely independent of orientation. Temperature in the bulk is governed by the input energy (compressive velocity) and plastic work, and the hotspot temperature is a function of the PV work done during expansion and recompression. However, for \(U_{Latent}\), there is considerably more excess energy in the \(0^{\circ}\) case, with \(90^{\circ}\) being the lowest. This tracks well with the level of disorder shown in Figure 5, as the cases with the most apparent disorder result in the highest intra-molecular strain energy. Previous work on compression-induced energy localization in TATB[42] showed that the \(0^{\circ}\) case exhibits a moderately homogeneous localization of intra-molecular strain energy driven by buckling/twinning defects in the crystal, which is also noted here. For cases in which \(\theta\geq 45^{\circ}\), shear banding begins to be the dominant deformation mechanism, which nucleates significantly more intra-molecular strain energy than the buckling/twinning mechanism. However, the small cell sizes in the lateral directions here suppress shear band formation, leading to significantly lower bulk energy localization than expected. While this does imply that orientation-dependent trends in \(U_{Latent}\) should be interpreted carefully, we expect these finite size effects to have limited influence over the general features of the interfacial hotspot identified across the different orientations.
Figure 7 shows distributions of \(T\) and \(U_{Intra}\) (note the distinction with \(U_{Latent}\)), where all molecules within the system are plotted 5ps after crystal-crystal impact. The cyan line represents the equipartition of energy, the expected value of \(U_{Intra}\) if all potential energy is thermal (where \(U_{Latent}\) is zero). Deviations from this line represent the \(U_{Latent}\). Panel a) shows the 0\({}^{\circ}\) case with no lateral velocity, with increasing compressive velocity. With increasing compressive work, the temperature and \(U_{Intra}\) increase, as expected, and an increase in the deviation from the equipartition line is also seen. Thus, it is clear that \(U_{Latent}\) also increases with increasing shock strength. Panel b) shows the 2.0 km/s compressive velocity for various lateral velocities. This shows that increasing the lateral velocity increases the peak \(T\) and \(U_{Intra}\) reached, due to imparting more total energy into the system. However, by adding interfacial shear during recompression, there is not a noticeable difference in the deviation from equipartition. Root mean-square (RMS) deviations from the equipartition line for increasing lateral velocity at a constant 2.0 km/s compressive velocity are 35.32, 35.66, and 36.03 kcal/mol for 0.0, 1.5, and 3.0 km/s lateral velocity, respectively. Conversely, increasing the compressive velocity at a constant lateral velocity of 0.0 km/s gives RMS deviations of 27.96, 30.49, and 35.32 kcal/mol, for the 1.0, 1.5, and 2.0 km/s cases, respectively.
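The quoted RMS values measure the scatter of per-molecule \(U_{Intra}\) about the equipartition line; the sketch below shows the corresponding calculation for generic inputs (the synthetic data and the 300 K reference energy are placeholders, not the data behind Figure 7).

```python
import numpy as np

KB = 0.0019872041  # kcal/(mol K)

def rms_from_equipartition(T, U_intra, U_intra_300K, n_atoms):
    """RMS deviation of per-molecule U_intra from the equipartition expectation
    U_eq(T) = U_intra(300 K) + (3N - 3)/2 * kB * (T - 300 K), in kcal/mol."""
    T = np.asarray(T, dtype=float)
    U = np.asarray(U_intra, dtype=float)
    U_eq = U_intra_300K + (3 * n_atoms - 3) / 2.0 * KB * (T - 300.0)
    return float(np.sqrt(np.mean((U - U_eq) ** 2)))

# Synthetic per-molecule data for a TATB-sized molecule (24 atoms); placeholder values.
rng = np.random.default_rng(2)
T = rng.uniform(1000.0, 3000.0, size=5000)
U = 80.0 + (3 * 24 - 3) / 2.0 * KB * (T - 300.0) + rng.normal(0.0, 35.0, size=5000)
print(round(rms_from_equipartition(T, U, U_intra_300K=80.0, n_atoms=24), 2))
```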
In previous work on TATB using a reactive force field [39], we identified clear links between the reaction kinetics and both the molecular temperature and intra-molecular strain energy. Clustering analysis based on \(T\) and \(U_{Latent}\) identified distinct regimes where chemistry was accelerated due to mechanical strain of the molecules, which provides a measure of "mechanochemical activity". Thresholds for different levels of mechanochemical activity based on this clustering analysis are denoted by the orange and pink lines in Figure 7b, which respectively correspond to \(U_{Latent}=50\) kcal/mol and 100 kcal/mol. Molecules above the pink line are in highly mechanochemically active states, and molecules below pink but above orange are moderately mechanochemically active.
Figure 6: x-t diagrams for the 2 km/s compressive and 3 km/s lateral velocity case for each orientation, colored by vibrational temperature and intra-molecular strain energy, \(U_{Latent}\).
Figure 8 shows a 1D binning, with 2 nm bins, of the population of the mechanochemical states along the compression direction for the 0\({}^{\circ}\) case and four sets of velocities that bookend our sampled ranges, i.e., compressive velocities of 1 and 2 km/s and lateral velocities of 0 and 3 km/s. For the 1 km/s compressive velocity (P \(\sim\)10 GPa) along the top row of Figure 8, the inclusion of a lateral velocity causes a thin spike of highly mechanochemical molecular states directly at the interface. These mechanochemical states are localized to molecules within a few unit cells of the surface, as the z-dependent profiles show this region to be only 4-6 nm thick. For 2 km/s compressive velocity (P \(\sim\)25 GPa), there is very little effect from the lateral velocity, but the population of highly mechanochemical molecular states is significantly higher than for the weaker compressive velocity. It is perhaps surprising that lateral shearing has an almost negligible effect for this stronger compressive velocity. This implies that the compressive work is what creates this \(\sim\)15 nm thick region of almost all highly mechanochemical states. Consistent with the observation that the number of high \(U_{Intra}\) outliers and the RMS deviation from equipartition in Figure 7b do not greatly increase with lateral velocity, the lateral velocity does not impart a significant difference in the intra-molecular strain energy of the molecules in this mechanochemically activated interfacial region. We also note that there is an (artificially) enhanced mechanochemical activity near the piston at z = 0 nm compared to the bulk, especially for the cases with stronger compressive velocity.
Qualitatively similar trends to those just identified for the 0\({}^{\circ}\) case also hold for the 45\({}^{\circ}\) and 90\({}^{\circ}\) cases. We find that shear velocity enhances the population of mechanochemically activated molecules at the interface for weak compressive velocities, but not for strong compressive velocities. Figure 9 shows profiles of mechanochemical molecule populations for the weak and strong compressive velocities (top and bottom rows) at a lateral velocity of 3 km/s for the 45\({}^{\circ}\) and 90\({}^{\circ}\) cases in the left and right columns, respectively. These directly track the right-hand column in Figure 8 for the 0\({}^{\circ}\) case. A complete set of figures for the 45\({}^{\circ}\) and 90\({}^{\circ}\) cases analogous to Figure 8 can be found in the Supplemental Materials section SM-2.
For both orientations and compressive velocities, the left crystal, which expanded into the void space and was then recompressed, is made of predominantly mechanochemical molecules, albeit at mostly "moderate" levels. In contrast, the downstream (right) crystal has mainly non-
Figure 7: PE-T (\(U_{intra}\) and roto-vibrational T, respectively) distributions for the 0\({}^{\circ}\) TATB case. Panel a) shows increasing compressive velocity with no lateral velocity, panel b) shows increasing lateral velocity at 2.0 km/s compressive velocity. Cyan dashed lines represent perfect equipartition of energy. Purple and orange lines in panel b) represent thresholds of mechanochemical activity as defined in Ref. [39]; molecules above these lines are mechanochemically active (Reference Clusters 3+4 and 5+6).
mechanochemical molecules with some moderately active mechanochemical states. All three orientations show a similar-magnitude population of highly mechanochemical molecules in a 10-15 nm interfacial region with a strong 2.0 km/s compressive velocity. While cross-inspection with the bottom row plots of \(U_{Latent}\) in Figure 6 shows that there is symmetry of the average intra-molecular strain energy across the interface, the population analysis in Figures 8 and 9 indicates that the two crystals exhibit distinctly different distributions of molecular \(U_{Latent}\) states on either side of the interface for all three orientations. A similar piston effect as was identified for the 0\({}^{\circ}\) case is also seen for the stronger compressive velocity with both the 45\({}^{\circ}\) and 90\({}^{\circ}\) orientations.
Orientation effects are most pronounced at the slower compressive velocity. The 0\({}^{\circ}\) and 45\({}^{\circ}\) cases exhibit a thin 4-6 nm wide peak with \(\sim\)80% of the molecules in this region being in highly mechanochemically activated states. The 90\({}^{\circ}\) case exhibits a similar peak at the interface, but with only \(\sim\)25% of the molecules reaching high levels of mechanochemical activation. This is potentially due to the resilience of the hydrogen-bonded TATB crystal layers. The 90\({}^{\circ}\) case compresses directly perpendicular to the layers, with the lateral velocity running across them. These layers remain largely intact even during expansion and recompression of the first crystal, given the suppression of shear banding, which helps hold the molecules in a planar geometry and hence a low-\(U_{Latent}\) state. Significantly more compressive velocity may be needed to break crystal layers and deform molecules at spatial scales below typical shear band dimensions, which are roughly 10 nm wide and are spaced tens of nm apart[41, 42]. Comparison against Figure 5 shows that there is not much deformation of the layers at a compressive velocity of 1.25 km/s and a lateral velocity of 3.0 km/s. The degree of deformation increases substantially as the compressive velocity is increased to 2.0 km/s, which directly tracks with the enhanced population of mechanochemically activated molecules under those conditions seen in Figure 9.
Using the same cluster-based thresholds for counting mechanochemically activated molecules discussed above, we performed a comprehensive population analysis to identify quantitative trends with compressive and lateral velocity. Our population analysis considered a 50nm wide region centered on the crystal-crystal interface, which is shown in Figure 10 for the 0\({}^{\circ}\) case with each line corresponding to a different compressive velocity. It is immediately apparent that the total population of mechanochemically activated molecules is more dependent on compressive velocity than shear velocity, with almost no influence from shear velocity above a compressive velocity of 1.5 km/s. Similar qualitative trends follow for the 45\({}^{\circ}\) and 90\({}^{\circ}\) cases, which are shown in Supplemental Materials section SM-3. Pearson correlation coefficients for spatially resolved properties of the hotspot are also provided in Supplemental Materials section SM-4, which consider the cross-correlation of initial velocities, temperature, \(U_{Latent}\), and the compressive and shear work. These corroborate the conclusions from Figures 7-10, showing that \(U_{Latent}\) is highly correlated with the compressive velocity and work, but much less so with the lateral velocity and shear work.
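The population analysis itself is a straightforward counting of molecules against the \(U_{Latent}\) thresholds inside a window around the interface; a generic sketch is shown below (the 50 and 100 kcal/mol thresholds and the 50 nm window come from the text, while the input data are placeholders).

```python
import numpy as np

def mechanochemical_fractions(z_mol, u_latent, z_interface, window=50.0,
                              moderate=50.0, high=100.0):
    """Fractions of moderately and highly mechanochemically activated molecules
    within a window (nm) centered on the crystal-crystal interface."""
    z_mol = np.asarray(z_mol)
    u = np.asarray(u_latent)
    in_window = np.abs(z_mol - z_interface) <= window / 2.0
    n = max(int(in_window.sum()), 1)
    frac_high = np.count_nonzero(in_window & (u >= high)) / n
    frac_moderate = np.count_nonzero(in_window & (u >= moderate) & (u < high)) / n
    return frac_moderate, frac_high

# Placeholder molecule positions and strain energies around an interface at z = 100 nm.
rng = np.random.default_rng(3)
z = rng.uniform(0.0, 250.0, size=20000)
u = rng.exponential(20.0, size=20000) + 60.0 * np.exp(-np.abs(z - 100.0) / 5.0)
print(mechanochemical_fractions(z, u, z_interface=100.0))
```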
Figure 8: Spatially resolved populations of mechanochemically activated molecules for the 0\({}^{\circ}\) TATB case at time \(t_{o}\) + 5 ps. Populations are based on the regions plotted in Figure 7b, which correspond to the mechanochemical model from Ref. [39]. Population fractions are computed with respect to the total number of molecules in each Eulerian bin. Panels correspond to 1.0 and 2.0 km/s compressive velocity, top and bottom row respectively, and 0.0 and 3.0 km/s lateral velocity, left and right columns, respectively.
Figure 9: Spatially resolved populations of mechanochemically activated molecules for the 45\({}^{o}\) and 90\({}^{o}\) TATB cases at time \(t_{o}\) + 5 ps. Populations are based on the regions plotted in Figure 7b, which correspond to the mechanochemical model from Ref. [39]. Population fractions are computed with respect to the total number of molecules in each Eulerian bin. Panels correspond to 1.0 and 2.0 km/s compressive velocity, top and bottom row respectively, at 3.0 km/s lateral velocity, with the left and right columns showing the 45\({}^{o}\) and 90\({}^{o}\) cases, respectively.
## 5 PETN
Recent studies on mechanochemical activation of HEs have largely focused on TATB [37, 38, 39, 41, 42]. As TATB is a noted outlier in many respects (e.g., its nearly unique safety-performance tradeoffs), and because its unusual chemical reactivity [60] has been tied at least in part to mechanochemical effects [41], it remains an open question whether similar mechanochemical effects arise in other HEs. In this respect, we consider PETN as a model HE without mechanically stiff molecular ring structures to assess the generality of the observations made for TATB (compare molecular structures in Figure 2). PETN crystal-crystal impact simulations were conducted for the same sets of velocities as were used for TATB, using the PETN [001] and [100] directions.
Figure 11 shows snapshots of the molecular deformations at the interfacial hotspot for both PETN orientations for a variety of compressive and lateral velocity conditions, analogous to the results shown for TATB in Figure 5. For both orientations, increasing both compressive and lateral velocity results in a local increase in plasticity and apparent amorphization at the interface. Additionally, the [100] cases appear to exhibit deformation due to increasing lateral velocity at locations further away from the interface as compared to the [001] case. This is especially true for the 2.0 km/s compressive velocity case in which distortions to the lattice exist within the entire region shown. Strongly shocking along [100] activates slip along the \(\{110\}\) slip system [51, 61], which falls at a \(\pm\)45\({}^{\circ}\) angle with respect to the compression direction. Similar slip-mediated "shear bands" have been noted in large-scale MD simulations of PETN shocked along this direction [51].
Figure 10: The percentage of TATB molecules with \(U_{\text{Latent}}\geq 100\) kcal/mol, the threshold for significant mechanochemical activation (i.e., above the pink line in Figure 7b), for the 0\({}^{\circ}\) case. Each line corresponds to a different compressive velocity. The population analysis considers a 50 nm region centered at the interface.
Guided by our previous assessments for TATB in Section 4, we calculated distributions of the per-molecule intra-molecular potential energy and temperature states. Figure 12 shows the resulting \(U_{Intra}\)-\(T\) distributions for the [001] PETN case as parametric functions of compressive and lateral velocity. The PETN results shown here (as well as for the [100] case) are quite similar to the results shown for TATB in Figure 7, where increasing compressive velocity increases peak values of both \(U_{Intra}\) and \(T\) as well as the deviation of \(U_{Intra}\) from equipartition. However, increasing lateral velocity only increases the peak values and does not appear to increase the deviation of \(U_{Intra}\) from equipartition, aside from a few outlier molecules with the largest lateral velocity. This indicates that the intra-molecular strain energy \(U_{Latent}\) does not depend strongly on the lateral velocity, which is quite similar to the response seen for TATB.
Figure 13 displays similar \(U_{Intra}\)-\(T\) distributions as Figure 12, but compares the two PETN orientations for the case with 2.0 km/s compressive velocity and 3.0 km/s lateral velocity. Compression along the [100] direction, which is relatively insensitive to shock initiation[56], reaches the same peak values as the [001] case, and both exhibit similar deviations from equipartition. However, the [001] case has a significantly greater density of points at large \(T\) and \(U_{Intra}\) values. While these results do not indicate a directionally dependent mechanochemical effect, they do indicate that more energy is localized in molecules for the direction known to be more sensitive to initiation. It should be noted that the net \(U_{Latent}\) of a molecule does not fully determine whether that molecule
Figure 11: Snapshots showing PETN packing structure within and around the hotspots formed at crystal-crystal interfaces for selected orientations, compressive velocities, and lateral velocities. Only backbone atoms are plotted (no H or nitro group O) to better visualize local lattice deformations.
undergoes mechanochemistry. Recent reactive MD studies indicate that the specific degree of freedom in the molecule that gains the strain energy is a critical factor influencing both chemical dynamics and kinetics[43].
In comparison to TATB, PETN reaches nearly the same quantitative levels of molecular \(T\), \(U_{Intra}\), and \(U_{Latent}\) states in hotspots formed at crystal-crystal interfaces. We also find for both materials that the thermomechanical state of the hotspot and degree of intra-molecular strains are stronger functions of the compressive velocity compared to the lateral shearing velocity. These comparisons indicate that the intra-molecular strain states formed under shock conditions, and their potential for inducing mechanochemical effects, are not just a product of TATB's many unusual physical and chemical characteristics.
Figure 12: PE-T (\(U_{intra}\) and roto-vibrational \(T\), respectively) distributions for the [001] PETN case. Panel a) shows increasing compressive velocity with no lateral velocity, panel b) shows increasing lateral velocity at 2.0 km/s compressive velocity. Cyan dashed lines represent perfect equipartition of energy.
Figure 13: PE-T (\(U_{intra}\) and roto-vibrational \(T\), respectively) distributions showing the effect the compression direction orientation in PETN. Each case is for compressive and lateral velocities of 2.0 km/s and 3.0 km/s, respectively. Cyan dashed lines represent perfect equipartition of energy.
## 6 Conclusions
We investigated the characteristics of hotspots formed at shocked crystal-crystal interfaces using quasi-1D MD simulations designed to isolate effects due to compression and shear. The generality of the trends identified was surveyed through consideration of two crystalline HE materials (TATB and PETN) with different degrees of molecular conformational flexibility and multiple compression and shearing directions for each material. The simulations contained two crystals separated by a gap. Compressive shocks were generated in the sample through a reverse ballistic approach, and a variable lateral velocity component was assigned to one of the crystals, which imposes a shearing/friction effect upon impact. By independently varying the compressive and shear velocities, as well as the material and crystallographic orientation, we assessed the resulting interfacial energy localization in terms of kinetic energy (temperature), potential energy, and intra-molecular strain energy. The intra-molecular strain energy has recently been linked to mechanochemical acceleration of reaction kinetics.
Increasing both the compressive and lateral velocity results in an increase in hotspot temperature and potential energy. However, only the compressive work is found to strongly affect the intra-molecular strain energy. Assignment of a shearing velocity increases plastic deformation at crystal-crystal interfaces but does not significantly increase intra-molecular strain energy for comparatively strong compressive shocks (P \(\sim\) 25 GPa). Shearing does increase intra-molecular strain energy for weaker compressive shocks (P \(\sim\) 10 GPa), but the effect is found to be highly localized. By mapping previous reactive MD results onto the intra-molecular strain energy states found here for TATB, we show that regions of significant mechanochemical activity are localized to the crystal-crystal interface. The width of these regions is 4-6 nm for a weak compressive velocity (P \(\sim\) 10 GPa) and 10-15 nm for a strong compressive velocity (P \(\sim\) 25 GPa).
Both TATB and PETN are found to exhibit similar trends regarding the sensitivity of intra-molecular strain energy localization to the compressive and lateral shearing velocity for all crystal orientations considered. Quantitative differences were found for orientations that undergo more extensive plastic deformations, which tend to yield more mechanochemically active molecules at the interface. Overall, this strain energy is shown to be highly correlated with compressive work and considerably less so with shear work. The trends identified here with quasi-1D simulations motivate further elucidation of the convolution of compression and shearing with 2D and 3D interfacial and surface effects, such as shock focusing and jetting.
## Acknowledgements
The authors thank Tommy Sewell and Andrey Pereverzev for providing the LAMMPS implementation of the PETN force field used here.
This work was supported by the U.S. Department of Energy (DOE) through the Los Alamos National Laboratory. The Los Alamos National Laboratory is operated by Triad National Security, LLC, for the National Nuclear Security Administration of the U.S. Department of Energy (Contract No. 89233218CNA000001). Approved for unlimited release: LA-UR-23-21420.
This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. Work by Purdue University was supported by LLNL subcontract B648789. Approved for unlimited release: LLNL-JRNL-844913-DRAFT.
BWH contributed to this work while at Purdue University and Los Alamos National Laboratory. BWH acknowledges funding provided by the Los Alamos National Laboratory Director's Postdoctoral Fellowship program, project LDRD 20220705PRD1 with partial funding provided by the Advanced Simulation and Computing Physics and Engineering Models project (ASC-PEM).
JM and AS acknowledge funding from the US Office of Naval Research, Multidisciplinary University Research Initiatives (MURI) Program, Contract: N00014-16-1-2557. Program managers: Chad Stoltz and Kenny Lipkowitz.
We acknowledge computational resources from nanoHUB and Purdue University through the Network for Computational Nanotechnology.
**Supplemental Materials to:**
**Intergranular Hotspots: A Molecular Dynamics Study on the Influence of Compressive and Shear Work**
Brenden W. Hamilton\({}^{1,2}\), Matthew P. Kroonblawd\({}^{3}\), Jalen Macatangay\({}^{1}\),
H. Keo Springer\({}^{3}\), and Alejandro Strachan\({}^{1}\)*
Affiliations
\({}^{1}\)School of Materials Engineering and Birck Nanotechnology Center, Purdue University, West Lafayette, Indiana, 47907 USA
\({}^{2}\)Theoretical Division, Los Alamos National Laboratory, Los Alamos, New Mexico 87545, USA
\({}^{3}\)Physical and Life Sciences Directorate, Lawrence Livermore National Laboratory, Livermore, California 94550, USA
* [email protected]
## Section SM-1
SM Figure 1 shows the baseline rise in \(U_{Intra}\) for all materials and orientations considered in the main manuscript plotted as a function of compressive (particle) velocity. Values were averaged over 2nm Eulerian bins in the upstream (right-hand) crystal grain, prior to the shockwave reaching the free surface.
## Section SM-2
SM Figures 2 and 3 mirror main manuscript Figure 8. The main manuscript figure shows results for the TATB 0\({}^{\text{o}}\) case whereas SM Figures 2 and 3 show results for the 45\({}^{\text{o}}\) and 90\({}^{\text{o}}\) cases, respectively. Slices were taken at 5 ps after crystal-crystal impact using 2 nm Eulerian bins, and mechanochemical groups are based on reactive MD results.
## Section SM-3
SM Figure 4 mirrors main manuscript Figure 10. The main manuscript figure shows results for the TATB 0\({}^{\text{o}}\) case whereas SM Figures 4a and 4b show results for the 45\({}^{\text{o}}\) and 90\({}^{\text{o}}\) cases, respectively. These show that the trends with lateral velocity are consistent across orientations.
## Section SM-4
SM Table 1 shows Pearson correlation coefficients for selected quantities averaged in spatial Eulerian bins that were 2 nm wide along the compression direction. Only the four nearest bins on either side of the interface were considered in the correlation analysis (4 bins from each of the 105 runs). Quantities Vy and Vz are the initial lateral and compressive velocity, respectively. W\({}_{\text{C}}\) and W\({}_{\text{T}}\) are respectively the work done by compression and by lateral (shear) motion up to 5 ps after impact. These work components were computed using
\[W=\int_{0}^{t}\sigma_{ij}\cdot\frac{dv_{j}}{dz}\cdot V\ \cdot dt\]
Here \(v_{j}\) is the velocity in the \(j\) direction, \(\sigma_{ij}\) is the \(ij\) component of the stress tensor, and \(V\) is the bin volume. Quantities \(T\) and \(U_{Latent}\) are respectively the temperature (roto-vibrational KE) and the intra-molecular strain energy, which were computed using the approach described in Section 2 of the main text.
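In discrete form, the per-bin work integral can be accumulated frame by frame; the sketch below follows the notation of the equation above but is illustrative only (per-bin stress, velocity, and volume arrays are assumed to be available at each saved time step, and it is not the actual analysis script).

```python
import numpy as np

def accumulate_bin_work(stress_zt, velocity_zt, bin_width, bin_volume, dt):
    """Discrete form of W = int sigma_ij * (dv_j/dz) * V dt for each Eulerian bin.

    stress_zt   : (n_frames, n_bins) relevant stress component per bin
    velocity_zt : (n_frames, n_bins) relevant velocity component per bin
                  (v_z for compressive work W_C, v_y for shear work W_T)
    bin_width   : Eulerian bin width along z (e.g. 2 nm)
    bin_volume  : volume of each bin
    dt          : time between saved frames
    """
    dv_dz = np.gradient(velocity_zt, bin_width, axis=1)   # velocity gradient along z
    power = stress_zt * dv_dz * bin_volume                # instantaneous power per bin
    return np.trapz(power, dx=dt, axis=0)                 # time-accumulated work per bin

# Toy usage with random data (5 frames x 10 bins); all units are placeholders.
rng = np.random.default_rng(4)
W = accumulate_bin_work(rng.random((5, 10)), rng.random((5, 10)),
                        bin_width=2.0, bin_volume=40.0, dt=1.0)
print(W.shape, W[:3])
```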
**0\({}^{\circ}\)**

| | **Vz** | **Vy** | **Wc** | **WT** | **T** | **ULatent** |
|---|---|---|---|---|---|---|
| **Vz** | 1.000 | 0.000 | 0.403 | 0.023 | 0.668 | 0.969 |
| **Vy** | | 1.000 | 0.098 | 0.835 | 0.663 | 0.165 |
| **Wc** | | | 1.000 | 0.286 | 0.416 | 0.401 |
| **WT** | | | | 1.000 | 0.733 | 0.145 |
| **T** | | | | | 1.000 | 0.755 |
| **ULatent** | | | | | | 1.000 |

**45\({}^{\circ}\)**

| | **Vz** | **Vy** | **Wc** | **WT** | **T** | **ULatent** |
|---|---|---|---|---|---|---|
| **Vz** | 1.000 | 0.000 | 0.578 | 0.367 | 0.588 | 0.844 |
| **Vy** | | 1.000 | 0.385 | 0.757 | 0.666 | 0.419 |
| **Wc** | | | 1.000 | 0.680 | 0.810 | 0.735 |
| **WT** | | | | 1.000 | 0.935 | 0.677 |
| **T** | | | | | 1.000 | 0.844 |
| **ULatent** | | | | | | 1.000 |

**90\({}^{\circ}\)**

| | **Vz** | **Vy** | **Wc** | **WT** | **T** | **ULatent** |
|---|---|---|---|---|---|---|
| **Vz** | 1.000 | 0.000 | 0.878 | 0.442 | 0.868 | 0.957 |
| **Vy** | | 1.000 | 0.187 | 0.657 | 0.330 | 0.148 |
| **Wc** | | | 1.000 | 0.697 | 0.953 | 0.874 |
| **WT** | | | | 1.000 | 0.796 | 0.539 |
| **T** | | | | | 1.000 | 0.892 |
| **ULatent** | | | | | | 1.000 |

SM Table 1: Pearson correlation tables for values of selected material properties obtained in Eulerian bins near the crystal-crystal interface 5 ps after impact. |
2303.10401 | Smart ROI Detection for Alzheimer's disease prediction using explainable
AI | Purpose Predicting the progression of MCI to Alzheimer's disease is an
important step in reducing the progression of the disease. Therefore, many
methods have been introduced for this task based on deep learning. Among these
approaches, the methods based on ROIs are in a good position in terms of
accuracy and complexity. In these techniques, some specific parts of the brain
are extracted as ROI manually for all of the patients. Extracting ROI manually
is time-consuming and its results depend on human expertness and precision.
Method To overcome these limitations, we propose a novel smart method for
detecting ROIs automatically based on Explainable AI using Grad-Cam and a 3DCNN
model that extracts ROIs per patient. After extracting the ROIs automatically,
Alzheimer's disease is predicted using extracted ROI-based 3D CNN. Results We
implement our method on 176 MCI patients of the famous ADNI dataset and obtain
remarkable results compared to the state-of-the-art methods. The accuracy
acquired using 5-fold cross-validation is 98.6 and the AUC is 1. We also
compare the results of the ROI-based method with the whole brain-based method.
The results show that the performance is impressively increased. Conclusion The
experimental results show that the proposed smart ROI extraction, which
extracts the ROIs automatically, performs well for Alzheimer's disease
prediction. The proposed method can also be used for Alzheimer's disease
classification and diagnosis. | Atefe Aghaei, Mohsen Ebrahimi Moghaddam | 2023-03-18T11:58:56Z | http://arxiv.org/abs/2303.10401v1 | # Smart ROI Detection for Alzheimer's Disease prediction using explainable AI
###### Abstract
**Purpose**
**Predicting the progression of MCI to Alzheimer's disease is an important step in reducing the progression of the disease. Therefore, many methods have been introduced for this task based on deep learning. Among these approaches, the methods based on ROIs are in a good position in terms of accuracy and complexity. In these techniques, some specific parts of the brain are extracted as ROI manually for all of the patients. Extracting ROI manually is time-consuming and its results depend on human expertise and precision.**
**Method**
**To overcome these limitations, we propose a novel smart method for detecting ROIs automatically, based on explainable AI using Grad-CAM and a 3D CNN model that extracts ROIs per patient. After extracting the ROIs automatically, Alzheimer's disease is predicted using a 3D CNN based on the extracted ROIs.**
**Results**
**We implement our method on 176 MCI patients of the well-known ADNI dataset and obtain remarkable results compared to the state-of-the-art methods. The accuracy acquired using 5-fold cross-validation is 98.6% and the AUC is 1. We also compare the results of the ROI-based method with the whole brain-based method. The results show that the performance is impressively increased.**
**Conclusion**
**The experimental results show that the proposed smart ROI extraction, which extracts the ROIs automatically, performs well for Alzheimer's Disease prediction. The proposed method can also be used for Alzheimer's disease classification and diagnosis.**
**Keywords: 3DCNN, Alzheimer's disease, explainable AI, ROI extraction, Structural MRI**
## 1 Introduction
Today, in developed countries, the population is aging, and even though this fact is positive, it brings unwanted consequences such as an increase in various diseases such as dementia [1]. Alzheimer's Disease (AD), the most common form of dementia, is a major health care challenge in the 21st century. Alzheimer's disease is the sixth leading cause of death in the United States [2]. This disease is a neurodegenerative brain disorder characterized by reduced cognitive function, and there is no cure for it. Therefore, the mortality rate caused by Alzheimer's has increased by 68%, and life expectancy for Alzheimer's patients is now less than 7 years [3].
As mentioned before, there is no definitive cure for this disease [4], [5]; only some treatments are available, which are effective for limited periods in subgroups of patients. Scientists believe that the most effective way to control the progression to AD is early diagnosis and an appropriate management strategy from the very beginning of cognitive decline. Therefore, many efforts have been made to find strategies for early diagnosis, especially in the early stages before the symptoms of the disease appear, in order to slow down or prevent the progress of the disease [6]. Since, according to researchers, the brain changes caused by Alzheimer's disease can be seen up to 20 years before the disease manifests [7], the disease can be diagnosed years before AD appears by analyzing signals and images taken from the brain, including Magnetic Resonance Imaging (MRI), Positron Emission Tomography (PET), Electroencephalogram (EEG), and functional Magnetic Resonance Imaging (fMRI). Alzheimer's disease has three stages: Cognitively Normal, Mild Cognitive Impairment (MCI), and Alzheimer's Dementia. MCI is a very critical stage for the diagnosis of Alzheimer's disease, which includes two outcomes: stable MCI (sMCI) patients are those who do not develop Alzheimer's disease in the future, and progressive MCI (pMCI) patients are those who will develop AD in the future. According to [8], 32% of people with MCI develop Alzheimer's dementia within five years.
Given the importance of AD prediction, AI techniques have received considerable attention from researchers in recent years, and many studies have been developed that use deep learning methods, including CNNs, on MRI images. In some studies, whole 3D MRI images have been used for AD detection or prediction [9, 10, 11]; in other studies, two-dimensional MRI images have been used to overcome the complexity and data leakage of 3D subject-level studies [12, 13, 14]. Also, pre-trained convolutional neural networks such as ResNet, designed for 2D natural images, have been applied to 2D medical images with transfer learning [15]. However, in such approaches, converting the 3D image into 2D slices causes some useful information in the 3D image to be lost. To overcome this problem, some studies focus on 3D patch-based classification. In these frameworks, the input consists of a set of 3D patches extracted from an image [16, 17, 18]. Many of these patches do not carry useful information because they may contain parts of the brain that are not affected by the disease. Therefore, methods based on regions of interest (ROI), focusing on regions known to be informative, are used in many studies [19, 20, 21].
The main challenge of region of interest-based methods is that only one (or a few) regions, most often the hippocampus, are extracted, while AD changes are widespread across several brain regions [15]. Also, selecting multiple regions at the same time may increase the complexity. To overcome this problem, in this article, a new method for smart ROI detection for Alzheimer's prediction, based on the analysis of the whole-brain 3D image of each patient, is proposed. Although in the proposed method the ROI is detected per patient, it should be noted that this does not mean a completely different region for every subject: the region detected for each person may be common to most of the patients. Therefore, the main contributions of the paper are listed as follows:
* To overcome the problems of selecting manual ROI, an automatic region of interest detection method based on explainable AI for Alzheimer's prediction from 3D MRI images is proposed.
* We propose a method to predict Alzheimer's disease using detected ROI for each patient as input data.
* Also, to obtain an appropriate ROI for each subject within our ROI detection method, we propose a 3D classification model based on a 3D convolutional neural network.
The rest of the paper is organized as follows: In section 2, a review of similar studies is presented. In section 3, the proposed method is described in detail. Section 4 contains the experimental results. Section 5 presents a discussion of the paper.
## 2 Related work
In this section, we first introduce some previous studies on the early detection of Alzheimer's disease using the whole brain or extracted patches. Then we review some ROI-based AD prediction methods.
### Alzheimer's disease prediction
In Alzheimer's disease, symptoms usually appear after the age of 60; however, some forms of the disease develop very early in people with the gene mutation, even in their 30s to 50s [4]. Alzheimer's disease causes structural and functional changes in the brain. As mentioned in the previous section, in Alzheimer's patients there are several years between a healthy state and Alzheimer's. In the early stages of the disease, patients do not show obvious cognitive decline; after a while, they develop mild cognitive impairment (MCI) and gradually develop Alzheimer's. However, not all MCI patients convert to AD. Therefore, a major focus of current research is on predicting the progression of MCI to AD. MRI is a neuroimaging technique that is commonly used to analyze and measure the structure of the brain and its changes. Recently, many studies have been conducted on the prediction of Alzheimer's disease from brain MRI images using deep learning methods. For example, in [22] a 3D CNN is proposed to predict the rate of cognitive decline. Since 3D CNNs require a lot of data to train, some papers use transfer learning. For instance, in [23] a pre-trained 3D convolutional neural network is used to predict AD; in this study, an age-adjusted neural network is also proposed. Researchers in another study proposed a modified version of ResNet, ResNet_3D, to predict the progression of AD [24]. Also, in [25] VoxCNN (a 3D version of VGG) and ResNet_3D are applied to predict the conversion of MCI to AD. Although the results of these studies show that a customized CNN performs better if there is enough data, transfer learning obtains good results.
Sometimes, to make the results more accurate, a combination of traditional methods and deep learning is used for AD prediction. For example, in [14] and [26] a combination of CNN and ensemble learning is proposed on whole-brain MRI images, and the results show that ensemble learning improves the accuracy. Generally, using the 3D whole brain increases computational complexity. Therefore, some studies divide the whole-brain image into patches. A patch-based AD prediction model is proposed in [27]: longitudinal data consisting of three sets of images is used in this study, and a dictionary of 2D patches from one 2D slice is made. Another patch-based AD prediction method has been presented in [28]. In this paper, the left hippocampus is extracted, and local patches from the left hippocampus are assembled into 2.5 dimensions and fed into a 2.5D CNN. In [29] the authors partitioned whole-brain MRI images into patches of the same size and applied a t-test to sort these patches and obtain informative features. They proposed Patch-Net to extract features using 3D convolutional neural networks. As mentioned in the introduction, some patches may carry no information for AD prediction; hence, the more important patches should be selected. To this aim, in [30] patches based on AD-related landmarks, which are obtained by a data-driven landmark discovery algorithm, are extracted. Although this method solves the described challenge, it requires some pre-processing, including registration in the test phase, and therefore the accuracy of the model is limited by the accuracy of the registration method.
### ROI-based Alzheimer's disease prediction
Unlike whole brain-based studies, in which the input of the model is a 2D or 3D brain MRI image, and patch-based studies, in which images are divided into patches and mostly all of the patches are the input of the model, some papers extract a Region of Interest related to AD. Since gray matter is more affected by AD, in most of these studies the whole brain is segmented into gray matter, white matter, and CSF, and the model is trained using gray matter only. For example, in [31], the gray matter of the brain is extracted and ensemble learning is used to classify the data; then, for the final prediction by voting, a group of deep belief networks is applied. To decrease computational complexity, some studies extract a smaller ROI instead of the gray matter [32]. In [19], the authors select the hippocampus, amygdalae, and insula from axial, sagittal, and coronal slices as Three View Patches (TVP) ROIs and feed these ROIs into a CNN for feature extraction. In [33], feature selection based on mutual information has been proposed, in which five key features, including the left and right hippocampus, the thickness of the cortex of the quadruple bodies of the brain, the left upper temporal part, and the right anterior part, have been identified that have a positive effect on the classification. In this study, the selected features are classified using a simple linear classifier. Since AD may affect different regions of the brain, selecting two or three ROIs may limit the performance; hence, in [34], 134 ROIs are selected and among them the most informative ROIs are identified, such as the caudal and rostral anterior cingulate gyrus, entorhinal, fusiform and insular cortex, and the subcortical ROIs anterior corpus callosum and the left vessel, an ROI comprising lacunar alterations in the inferior putamen and pallidum. In [35], two ROI-based networks for the conversion of NC to MCI are proposed: one network is single-ROI-based and the other is multiple-ROI-based. However, considering several ROIs increases computational complexity. Therefore, some papers select a single small ROI, such as the hippocampus, which is more important in Alzheimer's disease. For example, in [36] transfer learning focusing on a few slices of the hippocampus region is applied. To increase the accuracy, in [37], a recurrent neural network for time-based learning of the longitudinal cognitive values of each subject, combined with the hippocampus of the brain at the first examination, has been developed to build a prognostic model of the progression of Alzheimer's disease. In [38], the hippocampus and middle temporal gyrus are introduced as ROIs; in this study, an integrated regression framework combined with a CNN is proposed.
One of the challenges of the ROI-based approaches that extract ROIs manually is to select the best ROIs for every patient. To overcome this challenge and to establish a trade-off between accuracy and time complexity, we introduce an ROI-based AD prediction approach which detects ROIs automatically per patient. Therefore, to decrease computational complexity, only a few ROIs are used, and since the ROI is detected per patient, the accuracy is not decreased.
## 3 Proposed method
In this section, the proposed method is described. In section 3.1, an overview of the proposed method is presented. In section 3.2, image preprocessing is described. In section 3.3, the proposed method for extracting regions of interest based on explainable AI and the proposed 3D CNN for feature extraction are explained, and finally, in section
3.4, Alzheimer's prediction using the desired ROI is discussed.
### An overview of the method
In Fig. 1 an overview of our proposed method is presented. As shown in the figure, in the proposed method the images are first preprocessed before being fed into the model. The steps of the pre-processing are explained in section 3.2. After that, the 3D brain MRI images are fed into a 3D convolutional neural network performing a binary classification of MRI images into the sMCI and pMCI classes. Then, using the weights obtained from the last convolutional layer of the model, the parts of the images that are most important for decision making are extracted as Regions of Interest. The details of ROI detection are given in Section 3.3. After extracting the ROIs, these regions are fed into a 3D convolutional neural network and the final classification is obtained using these regions.
### Pre-processing
Raw MRI images include the skull and scalp. Therefore, in the first step of image pre-processing, the brain is extracted from the skull and scalp. Also, because of the imaging devices, MRI images may have artifacts like motion blur, intensity inhomogeneity, rotation, or translation. In the proposed method, to eliminate artifacts, pre-processing including B1 Correction, Grad Warp [39], and N3 [40] has been applied to the images. Moreover, the size of the images is 256\(\times\)256\(\times\)256, so, in the last step, to reduce the complexity of the model, the 32 slices from the middle of the image which include the whole brain are sampled and the images have been resized to 128\(\times\)128\(\times\)32.
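To make the last pre-processing step concrete, the following is a minimal sketch (assumed, not the authors' code) of the middle-slice sampling and in-plane resizing; the skull stripping and B1/Grad Warp/N3 corrections are assumed to have been applied with external tools, and the function name and interpolation order are illustrative choices.

```python
# A minimal, assumed sketch of the final resizing step: keep the central
# slices of a 256x256x256 volume and downsample in-plane to 128x128.
import numpy as np
from scipy.ndimage import zoom

def sample_and_resize(volume, n_slices=32, target_hw=(128, 128)):
    """Keep the central n_slices and resize each slice to target_hw."""
    depth = volume.shape[2]
    start = (depth - n_slices) // 2
    middle = volume[:, :, start:start + n_slices]      # central slices containing the brain
    factors = (target_hw[0] / middle.shape[0],
               target_hw[1] / middle.shape[1],
               1.0)                                     # keep the slice axis unchanged
    return zoom(middle, factors, order=1)               # linear interpolation

vol = np.zeros((256, 256, 256), dtype=np.float32)       # dummy volume for illustration
print(sample_and_resize(vol).shape)                      # -> (128, 128, 32)
```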
### ROI Detection
In this section, ROI detection is described in detail. The details of the method are shown in Figure 2. As shown in the figure, a 3D convolutional neural network which includes 3 CNN blocks is used. Each block consists of 3D convolution layers, a Leaky ReLU activation function, max pooling, batch normalization, and dropout. The number of 3D convolution layers of each block and the size of the feature map in each step are written under the convolution layers in Figure 2. Also, the last layers of the 3D CNN are a flatten layer, a dense layer, a Leaky ReLU activation function, BN, and dropout, and the final layer is a softmax layer for classification. The implementation details are given in the figure description. First, the features of the images are weighted using the Grad-CAM method [41] explained in section 3.3.1. Then a heatmap is generated for the pixels of the image based on the assigned weights. The heatmap, which has a value between zero and one (zero for the lowest weight and one for the highest weight), is multiplied with the image and a value is calculated for each pixel. A threshold is applied to the pixel values and the 30 percent most important pixels are
Figure 1: An overview of the proposed method
obtained. The details of the ROI extraction using feature importance are described in section 3.3.2. Finally, Alzheimer's disease is predicted using the extracted ROIs. The details of the prediction are explained in section 3.4.
#### 3.3.1 Grad-Cam
Neurons of convolutional layers extract semantic information for each class in each image. Therefore, it is possible to understand which parts of the image had the most effect in decision making for that specific class according to this information. For example, to put a natural image into the dog class, we need to find information about the dog in the image, which the last convolution layer extracts from the image. Therefore, feature importance can be extracted from the last convolution layer of the model which is used for classification. In the proposed method, in order to obtain the importance of the features, Grad-CAM is used. First, the gradient of the class score of the last layer before the softmax layer, \(S_{C}\), with
Figure 2: The details of ROI detection. 3D images are fed into the 3D CNN, which includes three 3D CNN blocks. The first block is designed using 32 3D convolutional layers with a kernel size of 3\(\times\)3\(\times\)3, a LeakyReLU activation function with \(\alpha\) = 0.1, MaxPooling with a size of 2\(\times\)2\(\times\)2, and dropout with rate = 0.3; the second and third blocks are designed using 64 3D convolutional layers with a kernel size of 3\(\times\)3\(\times\)3, a LeakyReLU activation function with \(\alpha\) = 0.1, MaxPooling with a size of 2\(\times\)2\(\times\)2, and dropout with rate = 0.3. The highest feature weights are extracted from the last 3D block and ROIs are detected by multiplying the feature weights with the original image.
respect to each feature map of the last convolution layer, named \(\mathbf{C}_{f}\), is obtained, and then global average pooling is applied to the result using Equation 1 and the feature importance is calculated using Equation 2.
\[W_{f}=\sum_{x}\sum_{y}\frac{\partial S_{C}}{\partial C_{f}} \tag{1}\]
\[H=\mathrm{ReLU}\left(\sum_{f}\Big{(}\frac{1}{N}W_{f}\Big{)}\times\mathbf{C}_{f}\right) \tag{2}\]
According to the paper [41] to prove the eq.2, consider the Equation 3 to get the final score before softmax:
\[S_{c}=\sum_{f}W_{f}\,\frac{1}{N}\sum_{x}\sum_{y}\mathbf{C}_{f} \tag{3}\]
If the output of the global average pooling is defined as follows:
\[G_{f}=\frac{1}{N}\sum_{x}\sum_{y}\mathbf{C}_{f} \tag{4}\]
Therefore, equation 3 is rewritten as follows:
\[S_{c}=\sum_{f}W_{f}G_{f} \tag{5}\]
Therefore, according to equation 4 and 5, equations 6 and 7 are obtained:
\[\frac{\partial G_{f}}{\partial C_{f}}=\frac{1}{N} \tag{6}\]
\[\frac{\partial S_{C}}{\partial G_{f}}=W_{f}=\frac{\partial S_{C}}{\partial C_{f}}\,N \tag{7}\]
Summing the values of both sides of equation 7 over the spatial locations, we get equation 8, and finally equation 9 is reached. To normalize the values, we multiply by \(1/N\), which yields equation 1.
\[\sum_{x}\sum_{y}W_{f}=\sum_{x}\sum_{y}N\,\frac{\partial S_{C}}{\partial\mathbf{C}_{f}} \tag{8}\] \[W_{f}=N\sum_{x}\sum_{y}\frac{\partial S_{C}}{\partial C_{f}} \tag{9}\]
The ReLU function is used to keep the importance of the features that have a positive effect on decision making. In this way, the heatmap \(H\), combining the weighted values of all the feature maps and showing the importance of the different parts of the image, is calculated using equation 2.
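As an illustration, a minimal Grad-CAM sketch along the lines of Eqs. (1)-(2) is given below for a trained 3D Keras classifier. The model, the layer name "last_conv", and the final normalization to [0, 1] are assumptions made for this example and are not taken from the paper.

```python
# An assumed Grad-CAM sketch for a 3D Keras model; `volume` is a (H, W, D) array.
import numpy as np
import tensorflow as tf

def gradcam_heatmap(model, volume, class_index, last_conv_name="last_conv"):
    grad_model = tf.keras.Model(model.inputs,
                                [model.get_layer(last_conv_name).output, model.output])
    with tf.GradientTape() as tape:
        conv_maps, preds = grad_model(volume[None, ..., None])  # add batch/channel dims
        score = preds[:, class_index]                            # class score S_c
    grads = tape.gradient(score, conv_maps)                      # dS_c / dC_f
    weights = tf.reduce_mean(grads, axis=(1, 2, 3))              # spatial average (Eq. 1 up to 1/N)
    heatmap = tf.nn.relu(tf.reduce_sum(weights[:, None, None, None, :] * conv_maps,
                                       axis=-1))                 # weighted sum + ReLU (Eq. 2)
    heatmap = heatmap[0] / (tf.reduce_max(heatmap) + 1e-8)       # normalize to [0, 1]
    return heatmap.numpy()
```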
#### 3.3.2 ROI Extraction
As shown in Figure 2, to extract the Region of Interest, we resize the feature heatmap, \(H_{i}\), and then multiply it with the original image, \(x_{i}\), where \(i\) is the index of the image. After that, a threshold function is applied to the intensity values and a binary mask, \(M_{i}\), is made for each image as follows:
\[M_{i}=\begin{cases}255&\text{if }H_{i}>\alpha\\ 0&\text{otherwise}\end{cases} \tag{10}\]
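A small sketch of this masking step (and of the subsequent application of the mask to the image, referred to later as equation 11) might look as follows. It assumes the heatmap has already been resized to the volume's shape and normalized to [0, 1], and it uses a 0/1 mask rather than the 0/255 values of Eq. (10); both are illustrative implementation choices, not the paper's.

```python
# An assumed sketch of Eq. (10): threshold the resized Grad-CAM heatmap at
# alpha and keep only the corresponding voxels of the original image.
import numpy as np

def extract_roi(volume, heatmap, alpha=0.7):
    """heatmap is assumed normalized to [0, 1] and resized to volume.shape."""
    mask = (heatmap > alpha).astype(volume.dtype)   # binary mask M_i of Eq. (10)
    return volume * mask                            # masked image fed to the second 3D CNN
```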
## 4 Experimental results
### 4.1 Dataset
Some T1-weighted 3D MRI images from the well-known and freely available ADNI (Alzheimer's Disease Neuroimaging Initiative) [42] dataset, which consists of about 85 thousand subjects, are selected for this paper. ADNI is a multicenter study designed for the early detection and tracking of Alzheimer's disease and contains clinical, imaging, genetic, and biochemical biomarkers. This study has been collecting data from volunteers since early 2004 in the ADNI1, ADNI2, ADNI3, and ADNIGo phases, and the subjects are classified into Cognitively Normal (CN), Mild Cognitive Impairment (MCI), and Alzheimer's disease (AD). In this paper, 176 MCI patients (88 MCI-Stable and 87 MCI-to-AD patients) are selected. The study is a longitudinal study, and five sets of images taken every six months (from baseline to 24 months) are used. Table 1 shows the details of the data.
### 4.2 Implementation details
The proposed networks are implemented with the Keras library in Python 3.9 using a Tesla T4 GPU and an Intel(R) Xeon(R) CPU @ 2.20GHz. The 3D CNN model contains three 3D CNN blocks and some dense layers. The first block is designed using 32 3D convolutional layers with a kernel size of 3\(\times\)3\(\times\)3, a LeakyReLU activation function with \(\alpha\) = 0.1, MaxPooling with a size of 2\(\times\)2\(\times\)2, and dropout with rate = 0.3; the second and third blocks are designed using 64 3D convolutional layers with a kernel size of 3\(\times\)3\(\times\)3, a LeakyReLU activation function with \(\alpha\) = 0.1, MaxPooling with a size of 2\(\times\)2\(\times\)2, and dropout with rate = 0.3. The size of the output feature maps of each layer is written under each feature map in Figure 2. We use the Adam optimizer with a dynamic learning rate, an initial learning rate of 0.01, and the categorical cross-entropy loss function.
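A minimal Keras sketch consistent with this description is given below. It interprets the "32/64 3D convolutional layers" as the number of filters per block, the dense-layer width (256) is an assumption not stated in the text, and the learning-rate schedule is omitted; none of this is the authors' released code.

```python
# An assumed Keras sketch of the described 3D CNN classifier (sMCI vs pMCI).
import tensorflow as tf
from tensorflow.keras import layers

def build_3dcnn(input_shape=(128, 128, 32, 1), n_classes=2):
    inputs = tf.keras.Input(shape=input_shape)
    x = inputs
    for filters in (32, 64, 64):                      # three convolutional blocks
        x = layers.Conv3D(filters, kernel_size=3, padding="same")(x)
        x = layers.LeakyReLU(alpha=0.1)(x)
        x = layers.MaxPooling3D(pool_size=2)(x)
        x = layers.BatchNormalization()(x)
        x = layers.Dropout(0.3)(x)
    x = layers.Flatten()(x)
    x = layers.Dense(256)(x)                          # assumed width, not given in the text
    x = layers.LeakyReLU(alpha=0.1)(x)
    x = layers.BatchNormalization()(x)
    x = layers.Dropout(0.3)(x)
    outputs = layers.Dense(n_classes, activation="softmax")(x)
    model = tf.keras.Model(inputs, outputs, name="roi_3dcnn")
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.01),
                  loss="categorical_crossentropy", metrics=["accuracy"])
    return model
```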
### 4.3 Analysis of the proposed method
In this section, the proposed method is evaluated on MCI conversion (MCI-Stable vs MCI-to-AD classes) using the introduced dataset. We use both hold-out and k-fold cross-validation to obtain reliable results. Since there are about 5 sets of MRI images per subject, to obtain more accurate results, we hold out 10 percent of the subjects as test data and then split the training data into five folds to create validation and training datasets. The model is trained five times using each set of training and validation data. Finally, the experimental results are the average of the 5 models' results on the test data. The evaluation criteria are Accuracy, Precision, Recall, and F1-Score, which are defined in Eqs. 12 to 15, respectively. Also, the Area Under the Receiver Operating Characteristic (ROC) curve is calculated using the True Positive Rate (TPR) and False Positive Rate (FPR) to verify the model.
\[Precision=\frac{TP}{TP+FP} \tag{12}\] \[Recall=\frac{TP}{TP+FN} \tag{13}\] \[F1=2\times\frac{Precision\times Recall}{Precision+Recall} \tag{14}\] \[Accuracy=\frac{TP+TN}{TP+FN+TN+FP} \tag{15}\]
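For completeness, a tiny helper implementing Eqs. (12)-(15) from the entries of a binary confusion matrix might look as follows; this is an illustrative sketch with made-up counts, not the authors' evaluation code.

```python
# A small, assumed helper computing the evaluation criteria of Eqs. (12)-(15).
def classification_metrics(tp, fp, tn, fn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)                          # sensitivity / true positive rate
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}

# Example with hypothetical counts for one fold:
print(classification_metrics(tp=44, fp=1, tn=43, fn=0))
```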
#### 4.3.1 The results on whole 3D brain MRI images
\begin{table}
\begin{tabular}{|c|c|c|} \hline & **Category** & **\% Patients** \\ \hline \multirow{4}{*}{**Age**} & 60-69 & 15.5\% \\ \cline{2-3} & 70-79 & 41\% \\ \cline{2-3} & 80-89 & 41\% \\ \cline{2-3} & 90-93 & 2.5\% \\ \hline \multirow{2}{*}{**Gender**} & Male & 67\% \\ \cline{2-3} & Female & 37\% \\ \hline \end{tabular}
\end{table}
Table 1: Distribution of volunteers by age and gender
First, the proposed 3D convolutional method is evaluated on the original whole-brain 3D MRI images. Since some slices do not have meaningful information and there is no brain in most of the slices, to decrease computational complexity and increase the accuracy, 70 slices from the middle of the MRI images, where the whole brain is present, are selected and then the images are resized to 128\(\times\)128\(\times\)32. After that, data augmentation is applied to the training set to increase the amount of data. Since every pixel of the brain image carries important information, flipping and rotation have been chosen as augmentation techniques. Therefore, the images are vertically and horizontally flipped. Also, to rotate the images, a vector of rotation angles such as -10, -5, 5, 10 is defined and each time one of these angles is randomly selected. Both the original and the created images are fed into the proposed 3D CNN and the model is trained using these images. Finally, the test set is fed into the trained model to validate the generalization of the model. The confusion matrix and the ROC curve of the average results on the test data are illustrated in Fig. 4. Also, the average accuracy, Precision, Recall, and F1-Score of the 5 models on the test data is shown in Table 2.
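The augmentation described above could be sketched as follows; this is an assumed illustration, since the exact implementation of the flips and rotations is not given in the paper.

```python
# A minimal, assumed sketch of the described augmentation: vertical and
# horizontal flips plus one random in-plane rotation from {-10, -5, 5, 10} degrees.
import random
import numpy as np
from scipy.ndimage import rotate

ROTATION_ANGLES = [-10, -5, 5, 10]

def augment(volume):
    """Return augmented copies of one 128x128x32 volume."""
    augmented = [np.flip(volume, axis=0),            # vertical flip
                 np.flip(volume, axis=1)]            # horizontal flip
    angle = random.choice(ROTATION_ANGLES)           # pick one angle at random
    augmented.append(rotate(volume, angle, axes=(0, 1), reshape=False, order=1))
    return augmented
```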
#### 4.3.2 Extracting ROI
As mentioned in the introduction, in order to avoid time complexity and sometimes even to increase accuracy, instead of using the whole-brain image, an ROI is extracted and classification is performed only according to those parts. One of the challenges in this field is that only one region, the hippocampus in most of the papers, is extracted, while AD changes are widespread across several brain regions, especially in the early stage. To address this challenge, in this article, instead of extracting the ROI manually, ROIs are detected automatically per patient. In this section, the results of ROI detection are explained. As mentioned in section 3.3.2, to extract the ROI automatically, after obtaining the heatmap of the feature map weights, a zero-one mask is created for every image where the top \((1-\alpha)\times 100\) percent of the feature map weights are set to one and the other regions are zero. In Figure 5, the heatmap of some images using this value is shown. The heatmap color goes from dark blue to dark orange, and the features with greater weights are dark orange, which means that the features in dark orange are the most important features.
According to equation 10, there is a threshold value, \(\alpha\), which is a number between zero and one that decides what percentage of the regions of the original image should be considered as the Region of Interest. In other words, the pixels of the image whose feature-weight heatmap values are greater than this threshold are selected as the ROI. Since this value is a hyperparameter, an appropriate value should be chosen for
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline
**Accuracy** & **Precision** & **Recall** & **F1-score** \\ \hline
0.80 & 0.78 & 0.84 & 0.81 \\ \hline \end{tabular}
\end{table}
Table 2: The average of accuracy, Precision, Recall and F1-Score using 5 folds on original whole brain images
Figure 4: confusion matrix and ROC curve of the proposed model on the average of 5 folds.
it. To this end, some values are tested for this hyperparameter, and for every one of them all of the introduced criteria are calculated using 5-fold cross-validation.
The ROIs extracted from the images are fed into the classification model and the results are reported. The training and test splits of the whole-brain images are kept for this section. The ROIs of all of the samples of the training sets (there are 5 training and 5 validation sets) that are classified correctly by the model are extracted; these ROIs are then fed into the 3D model as the training set and the model is trained again using the ROIs only. After that, the ROIs of the held-out test data are extracted and the trained model is tested using these ROIs. Five models using the 5-fold training and validation data are obtained. The experimental results are the average of the 5 models obtained from the 5 folds on the ROIs extracted from the test data. The evaluation criteria are Accuracy, Precision, Recall, and F1-Score, which are defined in Eqs. 12-15, respectively. The average of the results on the 5 folds is shown in Table 3.
As shown in the second to sixth rows of Table 3, to find the optimum percentage of super-pixels used as the ROI, we consider 0.5, 0.6, 0.7, 0.8, and 0.9 for \(\alpha\). According to the results, it can be seen that the performance of the model is enhanced as \(\alpha\) increases. However, the criteria remain fixed for values greater than 0.7; hence, any value in the range of 0.7 to 0.9 can be selected. Since this paper aims to find the important regions involved in Alzheimer's disease in different patients, we have chosen the value that retains the largest region among these, to show that the model extracts all important regions.
To verify the reliability of the results, we also consider heatmap values lower than 0.3 as the ROI instead of values higher than 0.7. In this case, since high weights have the greatest impact on the results, low accuracy should be obtained when considering low weights. The last row of Table 3 shows the results obtained under this assumption. According to the results, the accuracy of the model on the test data using the 30 percent of pixels with higher weights is about 98%, whereas the accuracy is 50% using the 30 percent of pixels with lower weights, which means the model obtains the lowest possible performance. Therefore, we can claim that our smart ROI extraction approach performs well.
The results of these criteria using the selected value of \(\alpha\) (0.7) for each fold are shown in Table 4. Also, the average Area Under the Receiver Operating Characteristic (ROC) curve and confusion matrix using \(\alpha\) equal to 0.7 are shown in Fig. 7.
According to Table 4, the averages of the accuracy, Precision, Recall, and F1-score on the best extracted ROIs are 98.6%, 98%, 99.2%, and 98.6%, respectively, and compared to Table 2, all of the results are improved when the best ROIs are used instead of the whole-brain images. Also, the AUC is 1 using the ROIs, which is an increase of 0.08 compared to the results on the whole-brain images.
#### 4.3.3 Representing the extracted ROI
According to the previous section, the best value for \(\alpha\) is 0.7. After creating the mask using \(\alpha\) based on equation 10, the ROI is obtained using the created mask and equation 11. In this section, the extracted ROIs are analyzed. Some of the extracted ROIs are shown in Fig. 6. To show the extracted ROI, the 3D ROIs are converted to 2D and one of the slices from the middle of the 3D image is chosen. Based on [43], the most important changes that occur in the MCI stage are shrinkage of the hippocampus, cortical thinning, and enlargement of the ventricles filled with cerebrospinal fluid. Also, according to recent studies, contrary to traditional belief, in the very early stage of Alzheimer's disease the entorhinal cortex, which is the location of tau protein, is affected before the hippocampus changes [44]. Therefore, in Fig. 6, these ROIs are marked in the six samples. As can be seen, in every patient, some of the introduced ROIs are extracted. Other parts of the images are also extracted as ROI, which are not shown in the figure since they are not repeated in many other images.
\begin{table}
\begin{tabular}{c|c c c c} _Fold_ & _Accuracy_ & _Precision_ & _Recall_ & _F1-score_ \\ \hline Fold 1 & 0.99 & 0.98 & 1 & 0.99 \\ Fold 2 & 0.98 & 0.96 & 1 & 0.98 \\ Fold 3 & 0.98 & 0.98 & 0.98 & 0.98 \\ Fold 4 & 0.98 & 0.98 & 0.98 & 0.98 \\ Fold 5 & 1 & 1 & 1 & 1 \\ Average & 0.986 & 0.98 & 0.992 & 0.986 \\ \end{tabular}
\end{table}
Table 4: The average of Accuracy, Precision, Recall and F1-Score using 5 models on extracted ROIs of test data.
Figure 7: The average confusion matrix and ROC curve using 5 folds on the extracted ROIs of the test data.
To analyze the obtained results, the percentage of repetition of each of the important regions affected by Alzheimer's disease according to the literature (as mentioned in the previous paragraph) is shown in Table 5. In the table, the sum of the percentages is not equal to 100 because, as seen in the figure, more than one ROI is extracted in some images. According to the table, the highest repetition is for cortical thinning, because cortical shrinkage occurs across the whole gray matter and therefore this ROI is extracted for most of the samples. After that, the hippocampus and the entorhinal cortex have the most repetitions in the samples. According to the literature, these two parts are the most important ones in the early stage of Alzheimer's disease. Since the images are not from a single time point, i.e. the images are taken six months, 12 months, 18 months, and 24 months before AD, the parts that are affected change slightly over time. For example, the shrinkage of the hippocampus is larger when the patients reach AD than 24 months earlier, and at 24 months before AD the entorhinal cortex is more affected. Therefore, the repetitions of these two ROIs are almost the same. The last ROIs with high repetition are the ventricles filled with cerebrospinal fluid. These ROIs, which are more important in the final stage, are repeated in 40% of the samples.
### Comparison with the other models
In this section, a comparison between the proposed model and some state-of-the-art models is discussed. In Table 6, some recent studies on AD prediction are introduced. All of the compared studies predict AD using structural MRI images from the ADNI dataset, to make the comparison fair. The state-of-the-art studies are some of the patch-based and ROI-based studies introduced in the related work section. As shown in Table 6, three patch-based studies and several ROI-based studies are selected. In the ROI-based studies, the parts of the brain most affected by AD according to the literature are extracted as ROIs. For example, in [45] the hippocampus, in [46] the hippocampus, fusiform, and inferior temporal gyrus, and in [33] key features including the left and right hippocampus, the thickness of the cortex of the
quadruple bodies of the brain, the left upper temporal part, and the right anterior part have been identified that have a positive effect on the classification. In [18], 3D patches are extracted and, to increase the accuracy, a dual multi-instance attention-based deep neural network is proposed. The results of this paper are better than those of the ROI-based studies, but, as mentioned before, some of these patches may not contain important information, and for this reason the accuracy may decrease. To solve this problem, in [28] the left hippocampus is extracted from the image and local 2.5D patches are extracted from this ROI only. In ROI-based studies, some ROIs are extracted manually and the prediction is done using those ROIs. The authors in [30] believe that manual ROI extraction limits the performance of the prediction because of 1) defining the specific ROIs and 2) extracting effective disease-related features. To solve this problem, they propose a landmark-based framework which extracts informative patches using AD-related landmarks, and they achieve good results. The aim of our proposed method, which extracts ROIs automatically, is the same, and it obtains the best results compared to the other studies. However, our proposed feature extraction model, even when based on the whole brain, achieves good results, as can be seen in the table.
## 5 Discussion
This section is divided into three parts. In the first part, the importance of Alzheimer's disease prediction and our dataset are discussed; in the second part, our proposed method and the automatic ROI extraction are explained; and the last part is about the limitations of the paper and our future work.
### Alzheimer's disease prediction
According to the literature, Alzheimer's disease, the most common form of dementia, is a neurodegenerative brain disorder with cognitive decline. Also, there is no cure for this disease; therefore, the most effective way to control its progress is early detection. This paper proposed an approach to the early detection of AD using T1-weighted MRI images of 176 MCI patients (including sMCI and pMCI) selected from the ADNI dataset. The data are longitudinal, including almost five sets of images per patient (some of the patients have fewer than five sets). First, the images are preprocessed and the brain is extracted from the images; after that, since deep learning methods achieve remarkable results in AD
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|} \hline
**Method** & **Number of patients** & **Whole brain** & **Patch-based** & **ROI** & **Accuracy (\%)** & **Sensitivity (\%)** & **Specificity (\%)** \\ \hline
Patch-based 2.5D CNN (2018) [28] & 264 & & & & 79.9 & 89.6 & 68 \\ \hline
LDA (2015) [33] & 388 & & & & 71.9 & 69.6 & 73.6 \\ \hline
Attention + MIL + CNN (2020) [18] & 289 & & & & 80.2 & 77.1 & 82.6 \\ \hline
LDMIL (2018) [30] & 164 & & & & 78.3 & 47.3 & 83.2 \\
 & 115 & & & & 74 & 62 & 78 \\ \hline
DCNN (2022) [45] & 381 & & & & 75.85 & 0.66 & 0.8 \\ \hline
**3D-CNN (Proposed, whole brain)** & **175** & & & & **80** & **0.84** & **0.78** \\ \hline
**3D-CNN (Proposed, ROI)** & **175** & & & & **98.6** & **99.2** & **98** \\ \hline \end{tabular}
\end{table}
Table 6: Comparison of the proposed model and some state-of-the-art methods on Accuracy, Sensitivity, and Specificity
prediction, we proposed a 3D CNN to extract the features. The results show that our model obtains good results compared to other methods.
### Automatic ROI detection
A region of interest is a part of the image which is more important for a specific task, and in some studies the ROI is extracted from the image and classification is performed using only those parts. As explained in the introduction and results sections, in some state-of-the-art studies one or a few ROIs are extracted for AD prediction, and these ROIs mostly include the hippocampus, cortex thickness, temporal gyrus, and two or three other parts. Results show that extracting some specific regions and omitting the rest of the brain affects the results. To overcome this issue, in some studies patches are extracted from the image instead of ROIs, but most of the patches do not carry useful information. As discussed in the previous section, some studies combine the patch-based and ROI-based approaches. In this paper, we propose an automatic ROI detection approach which extracts ROIs for each patient instead of considering some specific ROIs for all of the patients. In the proposed method, first the whole 3D brain images are fed into the 3D CNN model; after that, the most important regions of the image, based on the highest feature weights used in decision making, are extracted as the ROI. We consider a threshold to obtain the highest weights, and according to the results the best threshold is 0.7. Finally, the extracted ROIs are fed into the 3D CNN classifier model to make the final decision. According to the results, the accuracy and AUC of the proposed model using the extracted ROIs are 98.6% and 1, respectively, whereas the accuracy and the AUC of the proposed model using the whole-brain image are 80% and 0.92. The results show that the accuracy increased by 18.6% using ROIs instead of whole-brain images, and the AUC improved by 0.08 using the extracted ROIs.
### Limitations and future work
Although the proposed method obtains good results, there are some limitations in the approach. The first limitation is that we use MCI patients who convert (or do not convert) to AD within 24 months, whereas the most important issue is predicting AD at earlier stages, for example 10 years before conversion. Therefore, in future work we will consider normal subjects who will convert to AD after several years. The second limitation is that in our proposed method only MRI images are used, while metadata such as age, gender, and education are also important biomarkers. Hence, in the future we will use these biomarkers as well as MRI images.
|
2302.07254 | Fractal properties of the frontier in Poissonian coloring | We study a model of random partitioning by nearest-neighbor coloring from
Poisson rain, introduced independently by Aldous and Preater. Given two initial
points in $[0,1]^d$ respectively colored in red and blue, we let independent
uniformly random points fall in $[0,1]^d$, and upon arrival, each point takes
the color of the nearest point fallen so far. We prove that the colored regions
converge in the Hausdorff sense towards two random closed subsets whose
intersection, the frontier, has Hausdorff dimension strictly between $d-1$ and
$d$, thus answering a conjecture raised by Aldous. However, several topological
properties of the frontier remain elusive. | Anne-Laure Basdevant, Guillaume Blanc, Nicolas Curien, Arvind Singh | 2023-02-14T18:51:53Z | http://arxiv.org/abs/2302.07254v2 | # Fractal properties of the frontier in Poissonian coloring
###### Abstract
We study a model of random partitioning by nearest-neighbor coloring from Poisson rain, introduced independently by Aldous [2] and Preater [6]. Given two initial points in \([0,1]^{d}\) respectively colored in red and blue, we let independent uniformly random points fall in \([0,1]^{d}\), and upon arrival, each point takes the color of the nearest point fallen so far. We prove that the colored regions converge in the Hausdorff sense towards two random closed subsets whose intersection -- the _frontier_ -- has Hausdorff dimension strictly between \((d-1)\) and \(d\), thus answering a conjecture raised by Aldous in [2]. However, several topological properties of the frontier remain elusive.
## Introduction and main results
We consider a model of Poissonian coloring which is based on a dynamical construction in the \(d\)-dimensional hypercube \([0,1]^{d}\). Initially, two points \(R_{0}\neq B_{0}\) are planted in \([0,1]^{d}\): think of \(R_{0}\) as an initial red seed, and of \(B_{0}\) as an initial blue seed. All the randomness in the construction comes from a sequence \((X_{n})_{n\in\mathbb{N}^{*}}\) of independent random variables, uniformly distributed in \([0,1]^{d}\). Picturing \(X_{1},X_{2},\ldots\) as points falling consecutively in \([0,1]^{d}\), we let each point take the color of the closest point already present (nearest neighbor for the usual Euclidean metric \(d\)). Formally, define the initial red and blue sets as \(\mathcal{R}_{0}=\{R_{0}\}\) and \(\mathcal{B}_{0}=\{B_{0}\}\), respectively. Then, by induction, for each \(n\in\mathbb{N}\) such that the red and blue sets \(\mathcal{R}_{n}\) and \(\mathcal{B}_{n}\) have been constructed, proceed as follows: almost surely, we have \(d(X_{n+1},\mathcal{R}_{n})\neq d(X_{n+1},\mathcal{B}_{n})\), and
* if \(d(X_{n+1},\mathcal{R}_{n})<d(X_{n+1},\mathcal{B}_{n})\), then set \(\mathcal{R}_{n+1}=\mathcal{R}_{n}\cup\{X_{n+1}\}\) and \(\mathcal{B}_{n+1}=\mathcal{B}_{n}\);
* otherwise, if \(d(X_{n+1},\mathcal{R}_{n})>d(X_{n+1},\mathcal{B}_{n})\), then set \(\mathcal{R}_{n+1}=\mathcal{R}_{n}\) and \(\mathcal{B}_{n+1}=\mathcal{B}_{n}\cup\{X_{n+1}\}\).
Figure 1: Simulation of the Poisson coloring of space where a new incoming point takes the color of the nearest neighbor in the process so far, from left to right with \(10^{2},10^{3},10^{4},10^{6}\) and \(10^{7}\) points.
Letting \(n\to\infty\), the red and blue sets \(\mathcal{R}_{n}\) and \(\mathcal{B}_{n}\) respectively converge, for the Hausdorff distance between closed subsets of \([0,1]^{d}\), to
\[\mathcal{R}_{\infty}=\overline{\bigcup_{n\geqslant 0}\mathcal{R}_{n}}\quad \text{and}\quad\mathcal{B}_{\infty}=\overline{\bigcup_{n\geqslant 0}\mathcal{B}_{n}}.\]
The object we are interested in is the _frontier_\(\mathcal{F}_{\infty}=\mathcal{R}_{\infty}\cap\mathcal{B}_{\infty}\), which is also easily shown to be the limit for the Hausdorff distance of the discrete frontier \(\mathcal{F}_{n}=\big{\{}x\in[0,1]^{d}:d(x,\mathcal{R}_{n})=d(x,\mathcal{B}_{n} )\big{\}}\), as \(n\to\infty\) (_c.f._ Proposition 5). See Figure 1 for a simulation of the coloring process.
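For readers who want to reproduce such pictures, a brute-force simulation of the coloring process can be written in a few lines; the sketch below is our own illustration (not tied to the figures above), and a k-d tree would speed up the nearest-neighbor queries for large samples.

```python
# A minimal simulation sketch of the Poissonian coloring in [0,1]^d:
# each new uniform point takes the color of its nearest predecessor.
import numpy as np

def poisson_coloring(n, d=2, seed=0):
    rng = np.random.default_rng(seed)
    points = [rng.random(d), rng.random(d)]        # the seeds R_0 and B_0
    colors = [0, 1]                                # 0 = red, 1 = blue
    for _ in range(n):
        x = rng.random(d)                          # uniform arrival
        dists = np.linalg.norm(np.asarray(points) - x, axis=1)
        colors.append(colors[int(np.argmin(dists))])   # inherit the nearest color
        points.append(x)
    return np.asarray(points), np.asarray(colors)

pts, cols = poisson_coloring(10_000)
print(f"fraction of red points: {(cols == 0).mean():.3f}")
```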
This very natural model can be found in Aldous [2], which attributes it to Penrose & Wade [5, Section 7.6.8], although it may have been considered by other authors before. Recently, Lichev and Mitsche [4] studied the combinatorial properties of genealogical trees induced by the coloring procedure. Here, we focus instead on the geometric and topological properties of the model. After completion of this work, we learned from Aldous that Preater [6] had considered the same model, and in particular answered [2, Conjecture 3], showing that the frontier \(\mathcal{F}_{\infty}\) has zero Lebesgue measure (see [6, Theorem 2]). Our main result is the following, and settles a conjecture of Aldous [2, Section 5.3.3].
**Theorem 1** (The frontier is fractal).: Almost surely, the Hausdorff dimension of the frontier \(\mathcal{F}_{\infty}\) satisfies
\[d-1<\dim_{H}\mathcal{F}_{\infty}<d.\]
The proof is divided in two main steps, which we summarize below.
* Upper bound. We first show that for every \(x\in[0,1]^{d}\) and \(r>0\) such that the ball \(\overline{B}(x,r)\) does not contain the seeds \(R_{0}\) and \(\mathcal{B}_{0}\), there is a positive probability that the smaller ball \(\overline{B}(x,r/6)\) is monochromatic at the end of the coloring (Lemma 1). Together with a multi-scale argument, this shows that the Hausdorff dimension of the frontier \(\mathcal{F}_{\infty}\) is strictly less than \(d\) (see Proposition 1). A result with a similar flavor, also using a first-passage percolation argument, was obtained by Preater (see [6, Theorem 1]), who showed that \(\mathcal{F}_{\infty}\) has zero Lebesgue measure.
* Lower bound. The lower bound on the Hausdorff dimension of the frontier is based on ideas and techniques developed by Aizenman & Burchard in [1, Sections 5 and 6], where they introduce general conditions which allow to lower bound the Hausdorff dimension of random curves (see [1, Theorem 1.3]). Their result applies in particular to scaling limits of interfaces from critical statistical physics models such as percolation; random curves which have a positive probability, at each scale, of oscillating. Unfortunately, it is not clear that our frontier \(\mathcal{F}_{\infty}\) even contains curves, see Open question 1 below. We find a workaround by adapting the ideas of Aizenman & Burchard to get a Hausdorff dimension lower bound result for connected random closed subsets of \([0,1]^{d}\). The exact statement is given in Theorem 2. We hope that this extension will prove to be of independent interest.
A natural variant.There are natural variants of this coloring model, such as the following "segment" model (as opposed to the original "point" model): still thinking of \(R_{0}\) and \(B_{0}\) as initial red and blue seeds, and of \(X_{1},X_{2},\ldots\) as points falling consecutively in \([0,1]^{d}\), let as before \(\mathcal{R}_{0}=\{R_{0}\}\) and \(\mathcal{B}_{0}=\{B_{0}\}\) be the initial red and blue sets, respectively. Then, by induction, for each \(n\in\mathbb{N}\) such that the red and blue sets \(\mathcal{R}_{n}\) and \(\mathcal{B}_{n}\) have been constructed, proceed as follows: almost surely, we have \(d(X_{n+1},\mathcal{R}_{n})\neq d(X_{n+1},\mathcal{B}_{n})\), and
* if \(d(X_{n+1},\mathcal{R}_{n})<d(X_{n+1},\mathcal{B}_{n})\), then set \(\mathcal{R}_{n+1}=\mathcal{R}_{n}\cup[Y_{n},X_{n+1}]\) and \(\mathcal{B}_{n+1}=\mathcal{B}_{n}\), where \(Y_{n}\) denotes the point on \(\mathcal{R}_{n}\) which is closest to \(X_{n+1}\);
* otherwise, if \(d(X_{n+1},\mathcal{R}_{n})>d(X_{n+1},\mathcal{B}_{n})\), then set \(\mathcal{R}_{n+1}=\mathcal{R}_{n}\) and \(\mathcal{B}_{n+1}=\mathcal{B}_{n}\cup[Y_{n},X_{n+1}]\), where \(Y_{n}\) denotes the point on \(\mathcal{B}_{n}\) which is closest to \(X_{n+1}\).
Note that, by construction, the red and blue sets \(\mathcal{R}_{n}\) and \(\mathcal{B}_{n}\) are connected finite unions of line segments, so that \(Y_{n}\) is always well defined (such a point is almost surely unique because \(X_{n+1}\) is uniform and independent of \(X_{1},\dots,X_{n}\)). Upon minor technical modifications in the proofs, Theorem 1 holds for this coloring process as well.
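The only new geometric ingredient in this variant is the projection of the arrival point onto the current union of segments; a small sketch of that step (our own, with the degenerate case of the initial point seeds handled explicitly) is given below.

```python
# An assumed sketch of the elementary step in the segment model: find the
# point Y_n on a finite union of segments that is closest to a new point p.
import numpy as np

def closest_point_on_segment(p, a, b):
    """Orthogonal projection of p onto the segment [a, b], clamped to the ends."""
    ab = b - a
    denom = float(np.dot(ab, ab))
    if denom == 0.0:                     # degenerate segment, e.g. the initial seed
        return a
    t = np.clip(np.dot(p - a, ab) / denom, 0.0, 1.0)
    return a + t * ab

def nearest_point_on_tree(p, segments):
    """segments is a list of (a, b) endpoint pairs given as numpy arrays."""
    candidates = [closest_point_on_segment(p, a, b) for a, b in segments]
    return min(candidates, key=lambda q: float(np.linalg.norm(p - q)))
```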
Elusive topological properties of the frontier.Although our results show the convergence in a strong sense of the colored regions and establish the fractal nature of the frontier, many questions remain open, such as the existence of a \(0/1\)-law for the Hausdorff dimension of \(\mathcal{F}_{\infty}\). We focus here on the planar case \(d=2\), which concentrates the most interesting topological questions. Notice first that almost surely, the frontier \(\mathcal{F}_{\infty}\) is not connected, the reason being that it is possible for a point to get surrounded by points of the opposite color, thus eventually creating an "island" in the coloring. See [6, Theorem 3], and Figure 3. This island creation is not possible in the segment model, where the limiting frontier is almost surely connected.
**Open Question 1** (Curves).Is the frontier \(\mathcal{F}_{\infty}\) a countable union of curves?
It is natural to believe that \(\mathcal{F}_{\infty}\) is a countable union of curves (i.e., images of continuous paths from \([0,1]\) to \(\mathbb{R}^{2}\)), or that the limiting frontier in the segment model is a curve. Although Aizenman & Burchard [1] provide sufficient conditions (namely [1, Hypothesis **H1**]) which would allow to show that \(\mathcal{F}_{\infty}\)_contains_ a curve1, checking those estimates seems hard in our setup due to the lack of a
Figure 3: Illustration of the creation of an “island”. Such an island can be seen on Figure 1 on the top right corner. This shows that the limiting frontier \(\mathcal{F}_{\infty}\) is not connected almost surely.
Figure 2: Simulation of the variant Poisson coloring of space where a new incoming point is linked by a monochromatic segment to the nearest point in the process so far, from left to right with \(10^{2},10^{3},10^{4},10^{6}\) and \(10^{7}\) points. The arrivals points \(X_{i}\) are the same as those used for Figure 1. Notice that the red “island” on the top right part of the figure present in the original point model has disappeared.
correlation inequality. Yet, simulations suggest that the connected components of \(\mathcal{F}_{\infty}\) are _simple_ curves, meaning that "double points", i.e. points from which four alternating monochromatic non-trivial curves originate, do not exist.
**Open Question 2** (Simple curves).: If the above question has a positive answer, are those curves almost surely _simple_?
Accordingly, if this is true, then the frontier in the segment model should be made of a single simple curve. In fact, simulations suggest that in that model, the finite red and blue trees \(\mathcal{R}_{n}\) and \(\mathcal{B}_{n}\) are in the interior of the limiting red and blue regions \(\mathcal{R}_{\infty}\) and \(\mathcal{B}_{\infty}\) (it is possible to show that the arrival vertices \(X_{1},X_{2},\ldots\) are indeed in the interior of \(\mathcal{R}_{\infty}\) and \(\mathcal{B}_{\infty}\), with minor technical modifications in the proof of Lemma 1 below, but the same result for the whole segments is still out of scope). A more general question is the following.
**Open Question 3** (Safe margin).: Suppose that \(\mathcal{R}_{0}\) is made of a segment or a ball instead of a single point. Do we have \(\mathbb{P}(\mathcal{R}_{0}\cap\mathcal{B}_{\infty}\neq\emptyset)=0\)?
Our techniques (or those of Preater) only show that the above probability is strictly less than \(1\), see the discussion before Corollary 1 in [6].
**Acknowledgments.** We warmly thank David Aldous for discussions about [2] and for providing us with the reference [6]. The first and fourth authors were supported by ANR 19-CE40-0025 ProGraM. The second and third authors were supported by ERC 740943 GeoBrown and ANR RanTanPlan. We are grateful to the participants of the PizzaMa seminar, during which this work was initiated.
## 1 Monochromatic balls and upper bound on \(\dim_{H}\mathcal{F}_{\infty}\)
In this section we establish our key lemma, Lemma 1, which shows that for every \(x\in[0,1]^{d}\) and \(r>0\) such that \(\overline{B}(x,r)\) does not contain the seeds \(R_{0}\) and \(B_{0}\), there is a positive probability that the smaller ball \(\overline{B}(x,r/6)\) is monochromatic at the end of the coloring. Applying Lemma 1 at all scales yields the upper bound on the dimension of the frontier. In particular, it shows that for every \(x\in[0,1]^{d}\), almost surely there exists an \(r>0\) such that the ball \(\overline{B}(x,r)\) is monochromatic at the end of the coloring.
### Key lemma
Before stating the result, let us embed the model in continuous time to gain convenient independence properties.
Figure 4: **Left.** Illustration of a double point in the frontier \(\mathcal{F}_{\infty}\). **Right.** Can the frontier intersect the finite trees in the segment model (orange arrow)?
Poissonisation.Let \(\Lambda\) be a Poisson random measure with intensity \(\lambda\otimes\lambda_{d}\) on \(\mathbb{R}_{+}\times\mathbb{R}^{d}\), where \(\lambda\) and \(\lambda_{d}\) denote the Lebesgue measures on \(\mathbb{R}_{+}\) and \(\mathbb{R}^{d}\), respectively. Let \(X_{1},X_{2},\ldots\) be the points of \(\Lambda\) that fall in \([0,1]^{d}\), successively at times \(\tau_{1}<\tau_{2}<\ldots\). It is a standard fact that the \((X_{n})_{n\in\mathbb{N}^{*}}\) are independent random variables, uniformly distributed in \([0,1]^{d}\). Now, the coloring process can be defined in continuous time as follows. The sequence \((\mathcal{R}_{n},\mathcal{B}_{n})_{n\in\mathbb{N}^{*}}\) of the discrete setting will here correspond to \((\mathcal{R}_{\tau_{n}},\mathcal{B}_{\tau_{n}})_{n\in\mathbb{N}^{*}}\), and the sets \(\mathcal{R}_{t}\) and \(\mathcal{B}_{t}\) will be defined at all times \(t\in\mathbb{R}_{+}\) as follows: for each \(n\in\mathbb{N}\), we set \(\mathcal{R}_{t}=\mathcal{R}_{\tau_{n}}\) and \(\mathcal{B}_{t}=\mathcal{B}_{\tau_{n}}\) for every \(t\in[\tau_{n},\tau_{n+1}[\), with the convention \(\tau_{0}=0\).
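In practice, the restriction of \(\Lambda\) to a finite time window can be sampled directly, as in the following illustrative sketch: the number of points falling in \([0,T]\times[0,1]^{d}\) is Poisson of mean \(T\), with i.i.d. uniform arrival times and locations.

```python
# An illustrative sketch of sampling the Poisson rain on [0, T] x [0,1]^d.
import numpy as np

def poisson_rain(T, d=2, seed=0):
    rng = np.random.default_rng(seed)
    n = rng.poisson(T)                            # Lebesgue measure of [0,T] x [0,1]^d is T
    times = np.sort(rng.uniform(0.0, T, size=n))  # arrival times tau_1 < ... < tau_n
    locations = rng.random((n, d))                # i.i.d. uniform positions X_1, ..., X_n
    return times, locations
```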
For \(x\in\mathbb{R}^{d}\) and \(0<r<R\), we define the annulus \(\overline{A}(x;r,R)=\big{\{}y\in\mathbb{R}^{d}:r<|x-y|\leqslant R\big{\}}\), and denote by \(\mathcal{A}^{x}_{r,R}\) the \(\sigma\)-algebra generated by the restriction of the Poisson random measure \(\Lambda\) to the set \(\mathbb{R}_{+}\times\overline{A}(x;r,R)\). The point of Lemma 1 below is to describe an \(\mathcal{A}^{x}_{r/6,r}\)-measurable "good event" \(\mathcal{G}^{x}_{r/6,r}\), which has probability bounded away from \(0\) uniformly in \(x\) and \(r\), such that if \(\overline{B}(x,r)\) does not contain the seeds \(R_{0}\) and \(B_{0}\), then on \(\mathcal{G}^{x}_{r/6,r}\) the ball \(\overline{B}(x,r/6)\) is monochromatic at the end of the coloring. Figure 5 provides an overview of how such a good event is constructed.
**Lemma 1**.: _There is a constant \(p\in\,]0,1[\) for which the following holds. For every \(x\in[0,1]^{d}\) and \(r>0\), there exists an \(\mathcal{A}^{x}_{r/6,r}\)-measurable good event \(\mathcal{G}^{x}_{r/6,r}\), which has probability \(\mathbb{P}\left(\mathcal{G}^{x}_{r/6,r}\right)\geqslant p\), such that if \(\overline{B}(x,r)\) does not contain either \(R_{0}\) or \(B_{0}\), then on \(\mathcal{G}^{x}_{r/6,r}\) the ball \(\overline{B}(x,r/6)\) does not meet both \(\bigcup_{t\geqslant 0}\mathcal{R}_{t}\) and \(\bigcup_{t\geqslant 0}\mathcal{B}_{t}\)._
_Remark 1_.: It will be clear from the proof that the event \(\mathcal{G}^{x}_{r/6,r}\) also prevents the ball \(\overline{B}(x,r/6)\) from bichromaticity whenever \(R_{0}\in\overline{B}(x,r/3)\) and \(B_{0}\notin\overline{B}(x,r)\), or \(B_{0}\in\overline{B}(x,r/3)\) and \(R_{0}\notin\overline{B}(x,r)\). In particular, Lemma 1 allows to recover the result of Preater (see [6, proof of Theorem 2]) that almost surely, there exists an \(r>0\) such that \(\overline{B}(R_{0},r)\) does not contain a blue point, and \(\overline{B}(B_{0},r)\) does not contain a red point.
Figure 5: Illustration of the construction of the event \(G^{x}_{r/6,r}\). Suppose that the seeds \(R_{0},B_{0}\) lie outside \(\overline{B}(x,r)\).
Proof.: Fix \(x\in[0,1]^{d}\) and \(r>0\), and suppose that both \(R_{0}\) and \(B_{0}\) lie outside \(\overline{B}(x,r)\). We construct an \(\mathcal{A}_{r/6,r}^{x}\)-measurable _good event_\(G\) on which a "defense" is organized inside the annulus \(\overline{A}(x;r/6,r)\), preventing \(\overline{B}(x,r/6)\) from meeting both \(\bigcup_{t\geqslant 0}\mathcal{R}_{t}\) and \(\bigcup_{t\geqslant 0}\mathcal{B}_{t}\).
Definition of \(G\).: Let \(\rho_{k}=\left(1+2^{-k}\right)\cdot r/6\) for all \(k\in\mathbb{N}\), and let \((t_{k})_{k\in\mathbb{N}}\) be a sequence of positive real numbers to be adjusted later, with \(T_{k}:=t_{0}+\ldots+t_{k}\to\infty\) as \(k\to\infty\). We define \(\mathcal{A}_{r/6,r}^{x}\)-measurable events \((G_{k})_{k\in\mathbb{N}}\) such that for every \(k\in\mathbb{N}\), on \(G_{0}\cap\ldots\cap G_{k}\) the ball \(\overline{B}(x,\rho_{k})\) does not meet both \(\bigcup_{0\leqslant t<T_{k}}\mathcal{R}_{t}\) and \(\bigcup_{0\leqslant t<T_{k}}\mathcal{B}_{t}\). The good event \(G\) will then be defined as \(G=\bigcap_{k\geqslant 0}G_{k}\). For every \(k\in\mathbb{N}\), we denote by \(A_{k}\) the annulus \(\overline{A}(x;\rho_{k+1},\rho_{k})\). Let \(\delta_{k}=1/2\cdot(\rho_{k}-\rho_{k+1})\cdot(k+1)^{-2}\), and let \(\mathcal{Z}_{k}\subset A_{k}\) be a finite set of points with the following properties:
1. for every \(y\in A_{k}\), there exists \(z\in\mathcal{Z}_{k}\) such that \(y\in\overline{B}(z,3\delta_{k}/2)\),
2. for any \(z\neq z^{\prime}\in\mathcal{Z}_{k}\), we have \(|z-z^{\prime}|>\delta_{k}\),
3. for every \(z\in\mathcal{Z}_{k}\), we have \(\overline{B}(z,\delta_{k}/2)\subset A_{k}\).
It is clear that such a set \(\mathcal{Z}_{k}\) always exists: we keep adding points satisfying b. and c. until no more point can be added and then a. must also be satisfied by construction. Note also that, because the balls \(\left(\overline{B}(z,\delta_{k}/2)\right)_{z\in\mathcal{Z}_{k}}\) are disjoint and included in \(A_{k}\subset\overline{B}(x,r/3)\), a volume computation entails that
\[\#\mathcal{Z}_{k}\leqslant\left(\frac{r/3}{\delta_{k}/2}\right)^{d}=\left(8 \cdot 2^{k+1}\cdot(k+1)^{2}\right)^{d}. \tag{1}\]
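To spell out this volume computation (a routine check, writing \(v_{d}\) for the volume of the unit ball of \(\mathbb{R}^{d}\)): since \(\rho_{k}-\rho_{k+1}=2^{-k-1}\cdot r/6\), we have \(\delta_{k}=2^{-k-2}\cdot r/\big{(}6(k+1)^{2}\big{)}\), and comparing the total volume of the disjoint balls with that of \(\overline{B}(x,r/3)\) gives

\[\#\mathcal{Z}_{k}\cdot v_{d}\left(\frac{\delta_{k}}{2}\right)^{d}\leqslant v_{d}\left(\frac{r}{3}\right)^{d},\qquad\text{i.e.}\qquad\#\mathcal{Z}_{k}\leqslant\left(\frac{2r/3}{\delta_{k}}\right)^{d}=\left(2^{k+4}(k+1)^{2}\right)^{d}=\left(8\cdot 2^{k+1}\cdot(k+1)^{2}\right)^{d}.\]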
We define \(G_{0}\) as the event: "for every \(z\in\mathcal{Z}_{0}\), a point of \(\Lambda\) falls in \(\overline{B}(z,\delta_{0}/2)\) over the time interval \([0,t_{0}[\), meanwhile no point falls in \(\overline{A}(x;r/3,r)\)". We claim that on \(G_{0}\), the ball \(\overline{B}(x,r/3)\) does not meet both \(\bigcup_{0\leqslant t<t_{0}}\mathcal{R}_{t}\) and \(\bigcup_{0\leqslant t<t_{0}}\mathcal{B}_{t}\). In particular, all the points of \(\Lambda\) that have fallen in the spots \(\left(\overline{B}(z,\delta_{0}/2)\right)_{z\in\mathcal{Z}_{0}}\) over the time interval \([0,t_{0}[\) have the same _good_ color. Indeed, fix a realization of the event \(G_{0}\). Denote by \(y_{1},\ldots,y_{n}\) the points of \(\Lambda\) that fall in \(\overline{B}(x,r)\) over the time interval \([0,t_{0}[\), and by \(\tau_{1}<\ldots<\tau_{n}\in[0,t_{0}[\) their arrival times. Note that by the definition of \(G_{0}\), the points \(y_{1},\ldots,y_{n}\) land in \(\overline{B}(x,r/3)\). So \(y_{1}\) arrives in \(\overline{B}(x,r/3)\), with its color. Then when \(y_{2}\) arrives, it lands at distance at most \(2r/3\) from \(y_{1}\), and at distance more than \(2r/3\) from any other point of the process, since these all lie outside \(\overline{B}(x,r)\). Therefore, the nearest neighbor of \(y_{2}\) is \(y_{1}\), and \(y_{2}\) inherits its color. The argument iterates, proving the claim.
Next, in order to define \(G_{k}\) for \(k\geqslant 1\), we start with the following deterministic observation. Suppose by induction that, on the event \(G_{0}\cap\ldots\cap G_{k-1}\), the following holds: at time \(T_{k-1}\), each cell \(\left(\overline{B}(z,\delta_{k-1}/2)\right)_{z\in\mathcal{Z}_{k-1}}\) contains a point of the good color, and \(\overline{B}(x,\rho_{k-1})\) does not contain any point of the other bad color. Then, every \(y\in A_{k-1}\) is at distance at most \(2\delta_{k-1}\) from a point of the good color, and the only way of bringing a point of the bad color inside \(\overline{B}(x,\rho_{k})\) before time \(T_{k}\) is to have points of \(\Lambda\) -- say \(y_{1},\ldots,y_{j}\) -- falling in \(A_{k-1}\), at times say \(\tau_{1}<\ldots<\tau_{j}\in[T_{k-1},T_{k}[\), with:
* \(d\left(y_{1};\mathbb{R}^{d}\backslash\overline{B}(x,\rho_{k-1})\right)<2\delta _{k-1}\),
* \(|y_{i+1}-y_{i}|<2\delta_{k-1}\) for each \(i\in[\![1,j]\![\),
* \(d\left(y_{j};\overline{B}(x,\rho_{k})\right)<2\delta_{k-1}\).
Now, let us discretise this information. First, it follows from the inequality \(\rho_{k-1}-\rho_{k}<(j+1)\cdot 2\delta_{k-1}\) that such a path must have length \(j\geqslant k^{2}\). Then, for each \(i\in\left[\![1,k^{2}]\!\right]\), let \(z_{i}\in\mathcal{Z}_{k-1}\) be such that \(y_{i}\in\overline{B}(z_{i},3\delta_{k-1}/2)\). The following holds:
* for every \(i\in\llbracket 1,k^{2}\rrbracket\), we have \(z_{i}\in\mathcal{Z}_{k-1}\),
* for each \(i\in\llbracket 1,k^{2}\llbracket\), we have \(|z_{i+1}-z_{i}|\leqslant 5\delta_{k-1}\).
A sequence \(z_{1},\ldots,z_{k^{2}}\) satisfying the two properties above is said to be _admissible of order \(k\)_. Moreover, since for each \(i\in\llbracket 1,k^{2}\rrbracket\), a point of \(\Lambda\) falls in \(\overline{B}(z_{i},3\delta_{k-1}/2)\) at time \(\tau_{i}\), with \(\tau_{1}<\ldots<\tau_{k^{2}}\in[T_{k-1},\,T_{k}[\), we say that \(z_{1},\ldots,z_{k^{2}}\)_ring consecutively_ over the time interval \([T_{k-1},T_{k}[\). We can now formally define the event \(G_{k}\) by "for every \(z\in\mathcal{Z}_{k}\), a point of \(\Lambda\) falls in \(\overline{B}(z,\delta_{k}/2)\) over the time interval \([T_{k-1},T_{k}[\), meanwhile no admissible sequence of order \(k\) rings consecutively". By induction on \(k\), we see that on \(G_{0}\cap\ldots\cap G_{k}\), the ball \(\overline{B}(x,\rho_{k})\) does not meet both \(\bigcup_{0\leqslant t<T_{k}}\mathcal{R}_{t}\) and \(\bigcup_{0\leqslant t<T_{k}}\mathcal{B}_{t}\). Finally, we set \(G=\bigcap_{k\geqslant 0}G_{k}\).
The probability \(\mathbb{P}(G)\) is bounded away from 0. Because of the disjointness of the time intervals over which they are defined, the events \((G_{k})_{k\in\mathbb{N}}\) are independent:
\[\mathbb{P}(G)=\prod_{k\geqslant 0}\mathbb{P}(G_{k})=\mathbb{P}(G_{0}) \cdot\prod_{k\geqslant 1}[1-\mathbb{P}(F_{k})],\]
where \(F_{k}\) is the complement of the event \(G_{k}\). On \(F_{k}\),
* either there exists \(z\in\mathcal{Z}_{k}\) such that \(\Lambda\left([T_{k-1},T_{k}[\times\overline{B}(z,\delta_{k}/2)\right)=0\); let us call \(B_{k}\) the corresponding event,
* or there exists an admissible sequence of order \(k\) that rings consecutively. We call \(C_{k}\) the corresponding event.
Figure 6: Schematic description of the induction procedure.
We have \(\mathds{P}(F_{k})\leqslant\mathds{P}(B_{k})+\mathds{P}(C_{k})\). For the second term, a union bound and the Markov property for \(\Lambda\) show that
\[\mathds{P}(C_{k}) \leqslant\sum_{z_{1},\ldots,z_{k^{2}}\text{ admissible of order }k}\mathds{P}(z_{1},\ldots,z_{k^{2}}\text{ ring consecutively})\] \[=\#\{\text{admissible sequences of order }k\}\cdot\mathds{P}( \tau_{1}+\ldots+\tau_{k^{2}}<t_{k})\text{,}\]
where the \(\tau_{i}\)'s are independent exponential random variables with parameter (_i.e._, inverse mean) \(\lambda_{k}=v_{d}(3\delta_{k-1}/2)^{d}\). We set \(t_{k}=\alpha\cdot k^{2}\cdot\lambda_{k}^{-1}\) for all \(k\geqslant 1\), where \(\alpha\in\left]0,1\right[\) is a parameter to be adjusted later. On the one hand, a standard Chernoff bound yields
\[\mathds{P}(\tau_{1}+\ldots+\tau_{k^{2}}<t_{k})\leqslant e^{(1-\alpha+\ln\alpha )k^{2}}.\]
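One way to obtain this bound is the standard exponential-moment argument (recalled here for completeness, with the \(\tau_{i}\)'s i.i.d. of parameter \(\lambda_{k}\) and \(t_{k}=\alpha k^{2}\lambda_{k}^{-1}\)): for every \(\theta>0\), Markov's inequality gives

\[\mathds{P}(\tau_{1}+\ldots+\tau_{k^{2}}<t_{k})\leqslant e^{\theta t_{k}}\,\mathds{E}\left[e^{-\theta\tau_{1}}\right]^{k^{2}}=e^{\theta t_{k}}\left(\frac{\lambda_{k}}{\lambda_{k}+\theta}\right)^{k^{2}},\]

and the choice \(\theta=\lambda_{k}(1/\alpha-1)\) yields exactly \(e^{(1-\alpha)k^{2}}\cdot\alpha^{k^{2}}=e^{(1-\alpha+\ln\alpha)k^{2}}\).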
On the other hand, in order to choose an admissible sequence \(z_{1},\ldots,z_{k^{2}}\) of order \(k\), there are no more than \(\#\mathcal{Z}_{k-1}\) possibilities for the choice of \(z_{1}\), and then for each \(i\in\llbracket 1,k^{2}\llbracket\), there are at most \(\#\left(\mathcal{Z}_{k-1}\cap\overline{B}(z_{i},5\delta_{k-1})\right)\leqslant 11^{d}\) possibilities for the choice of \(z_{i+1}\); this upper bound holds because the disjoint balls \(\left(\overline{B}(z,\delta_{k-1}/2);\ z\in\mathcal{Z}_{k-1}\cap\overline{B}(z_{i},5\delta_{k-1})\right)\) are included in \(\overline{B}(z_{i},11\delta_{k-1}/2)\). Thus, we find that
\[\mathds{P}(C_{k})\leqslant\#\mathcal{Z}_{k-1}\cdot\left(11^{d}\right)^{k^{2}- 1}\cdot e^{(1-\alpha+\ln\alpha)k^{2}}.\]
We now fix \(\alpha\) such that \(11^{d}\cdot e^{1-\alpha+\ln\alpha}\leqslant e^{-1}\). Recalling (1), we obtain
\[\mathds{P}(C_{k})\leqslant\left(8\cdot 2^{k}\cdot k^{2}\right)^{d}\cdot 11^{-d} \cdot e^{-k^{2}}=:c_{k}.\]
Next, we upper bound \(\mathds{P}(B_{k})\). We have \(\mathds{P}(B_{k})\leq\#\mathcal{Z}_{k}\cdot p_{k}\), where
\[p_{k}=\exp\left[-v_{d}(\delta_{k}/2)^{d}\cdot t_{k}\right]=\exp\left[-\alpha \cdot\left(\frac{\delta_{k}}{3\delta_{k-1}}\right)^{d}\cdot k^{2}\right].\]
Using again (1), we see that
\[\#\mathcal{Z}_{k}\cdot p_{k}\leqslant\left(8\cdot 2^{k+1}\cdot(k+1)^{2} \right)^{d}\cdot\exp\left[-\alpha\cdot 24^{-d}\cdot k^{2}\right]=:b_{k},\]
which finally yields
\[\mathds{P}(F_{k})\leqslant b_{k}+c_{k}=:a_{k}.\]
We finally check that \(\mathds{P}(G)\) is bounded away from \(0\) uniformly in \(x\) and \(r\). Since \(\sum_{k\geqslant 1}a_{k}<\infty\), we can find \(K\in\mathds{N}\) (not depending on \(x\) or \(r\)) such that \(\prod_{k\geqslant K+1}(1-a_{k})\geqslant 1/2\). With that choice, we have
\[\mathds{P}(G)=\mathds{P}(G_{0})\cdot\ldots\cdot\mathds{P}(G_{K})\cdot\prod_{ k\geqslant K+1}[1-\mathds{P}(F_{k})]\geqslant\frac{\mathds{P}(G_{0})\cdot\ldots \cdot\mathds{P}(G_{K})}{2}.\]
Note that we have yet to specify the value of \(t_{0}\), which we now set to \(t_{0}=r^{-d}\). Given this choice, the probability \(\mathds{P}(G_{0})\) is bounded away from \(0\) uniformly in \(x\) and \(r\). Next, for each \(k\in\llbracket 1,K\rrbracket\), we claim that \(\mathds{P}(G_{k})\) is also bounded away from \(0\) uniformly in \(x\) and \(r\), because the same is true for the probability of the sub-event: "for each \(z\in\mathcal{Z}_{k}\), a point of \(\Lambda\) falls in \(\overline{B}(z,\delta_{k}/2)\) over the time interval \([T_{k-1},T_{k}[\), meanwhile no point falls in \(A_{k-1}\)". Thus, the quantity \(\mathds{P}(G_{0})\cdot\ldots\cdot\mathds{P}(G_{K})\cdot 1/2\) is bounded away from zero uniformly in \(x\) and \(r\), which completes the proof of the lemma.
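For the reader who wants orders of magnitude, the following Python sketch (purely illustrative, not part of the argument; the function names are ours) computes the largest admissible \(\alpha\) for small dimensions and evaluates \(\log_{10}c_{k}\) and \(\log_{10}b_{k}\) in log-space. It shows that both bounds eventually decay super-exponentially in \(k\), although for \(d=2\) the bound \(b_{k}\) becomes small only for very large \(k\), which is why \(K\) has to be taken large (but finite and independent of \(x\) and \(r\)).

```python
import math

def alpha_max(d):
    """Largest alpha in ]0,1[ with 11**d * exp(1 - alpha + log(alpha)) <= exp(-1),
    i.e. the unique root of f(a) = 2 - a + log(a) + d*log(11) (f is increasing on ]0,1[)."""
    f = lambda a: 2 - a + math.log(a) + d * math.log(11)
    lo, hi = 1e-300, 1.0
    for _ in range(200):                      # bisection on a logarithmic scale
        mid = math.sqrt(lo * hi)
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    return lo

LOG10_E = 1.0 / math.log(10.0)

def log10_c(k, d):
    # log10 of c_k = (8*2^k*k^2)^d * 11^(-d) * e^(-k^2), computed in log-space
    return d * (math.log10(8.0) + k * math.log10(2.0) + 2 * math.log10(float(k))) \
        - d * math.log10(11.0) - k**2 * LOG10_E

def log10_b(k, d, a):
    # log10 of b_k = (8*2^(k+1)*(k+1)^2)^d * e^(-a*24^(-d)*k^2), computed in log-space
    return d * (math.log10(8.0) + (k + 1) * math.log10(2.0) + 2 * math.log10(k + 1.0)) \
        - a * 24.0**(-d) * k**2 * LOG10_E

for d in (2, 3):
    a = alpha_max(d)
    print(f"d={d}: alpha_max ~ {a:.3e}, "
          f"log10 c_k at k=2,5,10: {[round(log10_c(k, d), 1) for k in (2, 5, 10)]}, "
          f"log10 b_k at k=1e5,1e6,3e6: {[round(log10_b(k, d, a)) for k in (10**5, 10**6, 3 * 10**6)]}")
```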
### Hausdorff dimension: upper bound
**Proposition 1**.: There exist constants \(C,\alpha>0\) such that for every \(x\in[0,1]^{d}\),
\[\mathbb{P}\left(\mathcal{F}_{\infty}\text{ meets }\overline{B}(x,\delta)\right) \leqslant C\cdot\delta^{\alpha}\quad\text{for all }\delta\in\left]0,1\right[ \text{ small enough so that }R_{0},B_{0}\notin\overline{B}\left(x,\sqrt{\delta}\right).\]
Proof.: Fix \(x\in[0,1]^{d}\), and let \(\delta\in\left]0,1\right[\) be small enough so that \(R_{0},B_{0}\notin\overline{B}\left(x,\sqrt{\delta}\right)\). Set \(r_{k}=\sqrt{\delta}\cdot 6^{-k}\) for all \(k\in\mathbb{N}\), and denote by \(K\) the largest integer \(k\) such that \(r_{k}>\delta\). By Lemma 1, we have the inclusion
\[\left(\mathcal{F}_{\infty}\text{ meets }\overline{B}(x,\delta)\right) \subset\left(\text{for every }k\in\llbracket 1,K\rrbracket,\text{ the event }G^{x}_{r_{k},r_{k-1}}\text{ fails to be realized}\right).\]
Thus, since those are independent events, we obtain
\[\mathbb{P}\left(\mathcal{F}_{\infty}\text{ meets }\overline{B}(x,\delta) \right)\leqslant(1-p)^{K}.\]
Plugging in the equality \(K=\left\lceil\log_{6}\left(\delta^{-1/2}\right)\right\rceil-1\), we find that
\[(1-p)^{K}\leqslant(1-p)^{-1}\cdot\delta^{\alpha},\quad\text{with }\alpha=-\frac{\ln(1-p)}{2\ln 6}>0,\]
which yields the required upper bound.
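In more detail (an elementary rewriting): \(K=\left\lceil\log_{6}\left(\delta^{-1/2}\right)\right\rceil-1\geqslant\log_{6}\left(\delta^{-1/2}\right)-1\), so that, since \(0<1-p<1\),

\[(1-p)^{K}\leqslant(1-p)^{-1}\cdot(1-p)^{\log_{6}(\delta^{-1/2})}=(1-p)^{-1}\cdot\exp\left(-\frac{\ln\delta}{2\ln 6}\cdot\ln(1-p)\right)=(1-p)^{-1}\cdot\delta^{-\frac{\ln(1-p)}{2\ln 6}}.\]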
**Proposition 2**.: There exists \(\varepsilon>0\) such that, almost surely,
\[\dim_{H}\mathcal{F}_{\infty}\leqslant d-\varepsilon<d.\]
Proof.: Let \(\varepsilon=\alpha\wedge(d/2)\), where \(\alpha\) is the exponent of Proposition 1. For each \(k\in\mathbb{N}\), set \(\delta_{k}=2^{-k}\), and let \(\left(\overline{B}(x,\delta_{k})\right)_{x\in\mathcal{X}_{k}}\) be a covering of \([0,1]^{d}\) by balls of radius \(\delta_{k}\), with centres \(x\in[0,1]^{d}\) more than \(\delta_{k}\) apart so that the \(\left(\overline{B}(x,\delta_{k}/2)\right)_{x\in\mathcal{X}_{k}}\) are disjoint. In particular, there exists a constant \(C^{\prime}=C^{\prime}(d)>0\) such that \(\#\mathcal{X}_{k}\leqslant C^{\prime}\cdot\delta_{k}^{-d}\). By definition, the \((d-\varepsilon)\)-dimensional Hausdorff measure of \(\mathcal{F}_{\infty}\) is bounded from above by the random variable
\[H=\varliminf_{k\to\infty}\sum_{x\in\mathcal{X}_{k}}(2\delta_{k})^{d- \varepsilon}\cdot\mathbf{1}\left(\mathcal{F}_{\infty}\text{ meets }\overline{B}(x,\delta_{k})\right).\]
We claim that \(H\) is almost surely finite, which implies that \(\dim_{H}\mathcal{F}_{\infty}\leqslant d-\varepsilon\) almost surely. Indeed, using Fatou's lemma, we get
\[\mathbb{E}[H]\leqslant\varliminf_{k\to\infty}\sum_{x\in\mathcal{X}_{k}}(2 \delta_{k})^{d-\varepsilon}\cdot\mathbb{P}\left(\mathcal{F}_{\infty}\text{ meets }\overline{B}(x,\delta_{k})\right),\]
and for every \(k\) we have, using Proposition 1:
\[\sum_{x\in\mathcal{X}_{k}}(2\delta_{k})^{d-\varepsilon}\cdot \mathbb{P}\left(\mathcal{F}_{\infty}\text{ meets }\overline{B}(x,\delta_{k})\right)\] \[\leqslant\#\left\{x\in\mathcal{X}_{k}:R_{0}\in\overline{B}\left(x,\sqrt{\delta_{k}}\right)\text{ or }B_{0}\in\overline{B}\left(x,\sqrt{\delta_{k}}\right) \right\}\cdot(2\delta_{k})^{d-\varepsilon}+\#\mathcal{X}_{k}\cdot(2\delta_{k })^{d-\varepsilon}\cdot C\cdot\delta_{k}^{\alpha}.\]
For the first term, we have
\[\#\left\{x\in\mathcal{X}_{k}:R_{0}\in\overline{B}\left(x,\sqrt{ \delta_{k}}\right)\text{ or }B_{0}\in\overline{B}\left(x,\sqrt{\delta_{k}}\right)\right\} \leqslant\#\mathcal{X}_{k}\cap\overline{B}\left(R_{0},\sqrt{ \delta_{k}}\right)+\#\mathcal{X}_{k}\cap\overline{B}\left(B_{0},\sqrt{\delta_ {k}}\right)\] \[\leqslant 2\cdot\left(\frac{\sqrt{\delta_{k}}}{\delta_{k}/2}+1 \right)^{d}=2\left(\frac{2}{\sqrt{\delta_{k}}}+1\right)^{d}.\]
Recalling the assumption that \(\varepsilon\leqslant d/2\), we deduce:

\[\varliminf_{k\to\infty}\#\left\{x\in\mathcal{X}_{k}:R_{0}\in\overline{B}\left(x,\sqrt{\delta_{k}}\right)\text{ or }B_{0}\in\overline{B}\left(x,\sqrt{\delta_{k}}\right)\right\}\cdot(2\delta_{k})^{d-\varepsilon}<\infty.\]

For the second term, we use that \(\#\mathcal{X}_{k}\leqslant C^{\prime}\cdot\delta_{k}^{-d}\), and \(\varepsilon\leqslant\alpha\) to check:

\[\varliminf_{k\to\infty}\#\mathcal{X}_{k}\cdot(2\delta_{k})^{d-\varepsilon}\cdot C\cdot\delta_{k}^{\alpha}<\infty.\]

Combining these two inequalities, we conclude that \(\mathbb{E}[H]<\infty\).
## 2 Hausdorff dimension lower bounds
In this section, we prove that the Hausdorff dimension of \(\mathcal{F}_{\infty}\) is strictly greater than \((d-1)\). A substantial part of this work consists in the adaptation of the lower bound [1, Theorem 1.3] of Aizenman & Burchard. Indeed, as the knowledgeable reader has undoubtedly noticed, it is not possible to invoke the above mentioned result directly because we do not know whether the frontier \(\mathcal{F}_{\infty}\) contains non-trivial curves. So instead, we modify the proof of Aizenman & Burchard to obtain a general Hausdorff dimension lower bound result for connected random closed subsets which satisfy Property (\(\varnothing\)), with the following definition.
**Definition 1** (Property (\(\varnothing\))).: Let \(\mathcal{F}\) be a random closed subset of \([0,1]^{d}\). We say that \(\mathcal{F}\) satisfies Property (\(\varnothing\)) if there exists a constant \(\zeta>1\) and two constants \(Q>0\) and \(q\in\left]0,1\right[\) such that the following holds: for every collection \(\left(\overline{B}(x_{i},r_{i});\;i\in[\![1,n]\!]\right)\) of balls with centres \(x_{1},\ldots,x_{n}\in[0,1]^{d}\) such that the dilated balls \(\left(\overline{B}(x_{i},\zeta r_{i});\;i\in[\![1,n]\!]\right)\) are disjoint (we say that the balls are \(\zeta\)-separated), we have
\[\operatorname{\mathbf{P}}\left(\text{for each }i\in[\![1,n]\!],\;\text{the set }\mathcal{F}\text{ meets } \overline{B}(x_{i},r_{i})\right)\leqslant Q\cdot q^{n}.\]
We point out that Property (\(\varnothing\)) is very similar to [1, Hypothesis **H2**].
**Theorem 2**.: Let \(\mathcal{F}\) be a random closed subset of \([0,1]^{d}\). Assume that it satisfies Property (\(\varnothing\)), and that almost surely \(\mathcal{F}\) has a connected component which is not reduced to a point. Then, there exists a constant \(s>1\) such that, almost surely,
\[\operatorname{\mathsf{dim}}_{H}\mathcal{F}\geqslant s>1.\]
The empty set, or the points of a homogeneous Poisson process, are obvious examples of random closed subsets of \([0,1]^{d}\) which satisfy Property (\(\varnothing\)). Both have Hausdorff dimension \(0\) but their connected components are singletons. The above result says that, as soon as we require a random closed subset to have a non-trivial connected component (and thus Hausdorff dimension at least \(1\)), the fact that it satisfies Property (\(\varnothing\)) implies that it is "delocalized" in some sense, and entails that its Hausdorff dimension is, in fact, strictly greater than \(1\).
_Remark 2_.: As will be clear from the proof of Theorem 2, and used later on, the following actually holds: there exists a constant \(s>1\) such that almost surely, for any closed subset \(\mathcal{G}\subset\mathcal{F}\) with a non-trivial connected component, we have \(\operatorname{\mathsf{dim}}_{H}\mathcal{G}\geqslant s\).
In Subsection 2.2, we will first apply Theorem 2 to the frontier \(\mathcal{F}_{\infty}\) in dimension \(d=2\), showing that \(\operatorname{\mathsf{dim}}_{H}\mathcal{F}_{\infty}>1\) almost surely. The lower bound \(\operatorname{\mathsf{dim}}_{H}\mathcal{F}_{\infty}>d-1\) in higher dimensions will then follow from Theorem 2 together with a slicing lemma, as detailed at the end of Section 2.2. Let us now present the proof of Theorem 2.
### 2.1 Proof of Theorem 2
As mentioned before, the proof is adapted from [1] and thus uses similar ingredients. Still, we provide here a self-contained proof, recalling and adapting the necessary results from [1] whenever required. At its core, the proof employs the usual "energy method" (see [3, Theorem 6.4.6]) to lower bound the Hausdorff dimension of a set. There are two main parts:
1. We first describe a deterministic splitting procedure for curves which produces, when the curve oscillates enough, a large number of disjoint sub-curves.
2. Next, we show that if a connected random closed subset \(\mathcal{F}\) satisfies Property (\(\varnothing\)), then curves located in shrinking neighborhoods of \(\mathcal{F}\) will necessarily oscillate enough so that we can use the splitting procedure above to create many sub-curves. This will enable us to create a sequence of measures with good integrability properties and finally, by compactness, extract a measure \(\nu\) supported on \(\mathcal{F}\) such that \(\iint|x-y|^{-s}\mathrm{d}\nu(y)\mathrm{d}\nu(x)<\infty\) for some \(s>1\), which in turn implies that \(\dim_{H}\mathcal{F}\geqslant s\).
#### 2.1.1 A deterministic splitting procedure for curves
Given a small parameter \(\alpha\in\left]0,1\right[\), we describe the splitting procedure (\(\mathrm{P}_{\alpha}\)) mentioned in [1, Lemma 5.2]. It takes as input a continuous path \(\gamma:[0,1]\to\mathbb{R}^{d}\) with \(\gamma(1)\neq\gamma(0)\), and outputs a collection \(\gamma_{1},\ldots,\gamma_{\kappa}\) of subpaths of \(\gamma\), with the following properties:
* for every \(i\in\left[\![1,\kappa]\!\right]\), we have \(|\gamma_{i}(0)-\gamma_{i}(1)|=\alpha\cdot|\gamma(0)-\gamma(1)|=:\delta\),
* for any \(i\neq j\in\left[\![1,\kappa]\!\right]\), we have \(d(\gamma_{i}[0,1];\gamma_{j}[0,1])\geqslant\alpha\delta\).
The splitting procedure \((\mathrm{P}_{\alpha})\) goes as follows; see Figure 7 for an illustration. Set \(\Delta=|\gamma(0)-\gamma(1)|>0\) and let \(\delta=\alpha\cdot\Delta<|\gamma(0)-\gamma(1)|\). Initially, set \(\sigma_{1}=0\), and let
\[\tau_{1}=\inf\left\{t\in[0,1]:\gamma(t)\notin\overline{B}(\gamma(0),\delta) \right\}.\]
By induction, for \(i\in\mathbb{N}^{*}\), assuming that \(\sigma_{1},\tau_{1};\ldots;\sigma_{i},\tau_{i}\) have been constructed, if
\[d(\gamma(t);\gamma[\sigma_{1},\tau_{1}]\cup\ldots\cup\gamma[\sigma_{i},\tau_{ i}])\leqslant(1+\alpha)\delta\quad\text{for all }t\in[\tau_{i},1],\]
then we set \(\sigma_{i+1}=1\) and \(\tau_{i+1}=1\). Otherwise, we set
\[\tau_{i+1}=\inf\{t\in[\tau_{i},1]:d(\gamma(t);\gamma[\sigma_{1},\tau_{1}] \cup\ldots\cup\gamma[\sigma_{i},\tau_{i}])>(1+\alpha)\delta\},\]
and let \(\sigma_{i+1}=\sup\left\{t\in[\tau_{i},\tau_{i+1}[:\gamma(t)\notin\overline{B}( \gamma(\tau_{i+1}),\delta)\right\}\). Finally, let \(\kappa\) be the largest integer \(i\in\mathbb{N}^{*}\) such that \(\tau_{i}<1\), and for each \(i\in\left[\![1,\kappa]\!\right]\) denote by \(\gamma_{i}\) the path \(\theta\in[0,1]\mapsto\gamma((1-\theta)\sigma_{i}+\theta\tau_{i})\). For every \(i\in\left[\![1,\kappa]\!\right]\), we have \(|\gamma_{i}(0)-\gamma_{i}(1)|=\delta\), and for any \(i\neq j\in\left[\![1,\kappa]\!\right]\), we have \(d(\gamma_{i}[0,1];\gamma_{j}[0,1])\geqslant\alpha\delta\).
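For intuition, here is a discretised sketch of \((\mathrm{P}_{\alpha})\) in Python, acting on a path given as an array of sample points (array indices play the role of times; after discretisation the identity \(|\gamma_{i}(0)-\gamma_{i}(1)|=\delta\) only holds approximately, and the helper name `split_path` is ours):

```python
import numpy as np

def split_path(points, alpha):
    """Sketch of the splitting procedure (P_alpha) on a path sampled by `points`
    (an (n, d) array); indices stand in for the times of gamma.
    Returns the index pairs (sigma_i, tau_i) of the extracted subpaths."""
    pts = np.asarray(points, dtype=float)
    delta = alpha * np.linalg.norm(pts[-1] - pts[0])
    # tau_1: first index at which the path leaves the closed ball B(gamma(0), delta)
    tau = int(np.argmax(np.linalg.norm(pts - pts[0], axis=1) > delta))
    pairs = [(0, tau)]
    extracted = pts[: tau + 1]                      # union of the extracted subpaths
    while True:
        # distance of every later point to the union of already extracted subpaths
        d_to_union = np.min(
            np.linalg.norm(pts[tau:, None, :] - extracted[None, :, :], axis=2), axis=1)
        far = np.nonzero(d_to_union > (1 + alpha) * delta)[0]
        if far.size == 0:        # every remaining point stays (1+alpha)*delta-close: stop
            break
        tau_next = tau + int(far[0])
        # sigma_{i+1}: last time before tau_{i+1} at which the path lies outside
        # the ball B(gamma(tau_{i+1}), delta)
        outside = np.linalg.norm(pts[tau:tau_next] - pts[tau_next], axis=1) > delta
        sigma_next = tau + int(np.nonzero(outside)[0][-1])
        pairs.append((sigma_next, tau_next))
        extracted = np.vstack([extracted, pts[sigma_next: tau_next + 1]])
        tau = tau_next
    return pairs

# Example use on a random-walk path; distinct subpaths end up roughly alpha*delta apart.
rng = np.random.default_rng(1)
walk = np.cumsum(rng.standard_normal((400, 2)), axis=0) * 0.05
print(split_path(walk, alpha=0.1))
```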
**Definition 2**.: We say that a continuous path \(\gamma:[0,1]\to\mathbb{R}^{d}\), with \(\gamma(1)\neq\gamma(0)\), _deviates by a factor \(\rho>0\) from being a straight line_ when there exists \(t\in[0,1]\) such that
\[\gamma(t)\notin S(\gamma(0),\gamma(1);\rho|\gamma(0)-\gamma(1)|),\]
where \(S(x,y;r)=\left\{z\in\mathbb{R}^{d}:d(z;[x,y])\leqslant r\right\}\) denotes the sausage of radius \(r\) around the line segment \([x,y]\).
Intuitively (recall that \(\alpha\) is small), the number \(\kappa\) of subpaths produced by the procedure \((\mathsf{P}_{\alpha})\) must be at least of order \(\Delta/\delta=1/\alpha\) (a lower bound which can be attained by a straight line). However, when the input path deviates from being a straight line, one would expect the procedure to produce additional subpaths. This is the meaning of the next proposition.
**Proposition 3**.: Let \(\gamma:[0,1]\to\mathbbm{R}^{d}\) be a path with \(\gamma(1)\neq\gamma(0)\).
1. The number of subpaths produced by the procedure \((\mathsf{P}_{\alpha})\) always satisfies \[\kappa\geqslant\frac{1-\alpha}{(1+\alpha)\alpha}.\]
2. If \(\gamma\) deviates by a factor \(\rho>0\) from being a straight line, the number of subpaths produced by the procedure \((\mathsf{P}_{\alpha})\) satisfies \[\kappa\geqslant\frac{\frac{1}{2}\left(1+\sqrt{1+2\rho^{2}}\right)-(4+\alpha) \alpha}{(1+\alpha)\alpha}.\]
Proof.: Recall that \(\Delta=|\gamma(0)-\gamma(1)|>0\) and \(\delta=\alpha\Delta\).
1. By the definition of \((\mathsf{P}_{\alpha})\), there exists \(i_{1}\in\llbracket 1,\kappa\rrbracket\) such that \(d(\gamma(1);\gamma[\sigma_{i_{1}},\tau_{i_{1}}])\leqslant(1+\alpha)\delta\), and therefore there exists \(t_{1}\in[\sigma_{i_{1}},\tau_{i_{1}}]\) such that \(|\gamma(1)-\gamma(t_{1})|\leqslant(1+\alpha)\delta\). Then, by induction, for \(k\in\mathbbm{N}^{*}\) such that \(i_{1},t_{1};\ldots;i_{k},t_{k}\) have been constructed, proceed as follows. If \(i_{k}=1\), then set \(i_{k+1}=1\) and let \(t_{k+1}=0\). Otherwise, by the definition of \((\mathsf{P}_{\alpha})\), there exists \(i_{k+1}\in\llbracket 1,i_{k}\llbracket\) such that \(d\left(\gamma(t_{k});\gamma\left[\sigma_{i_{k+1}},\tau_{i_{k+1}}\right]\right)\leqslant(1+\alpha)\delta\), and \(t_{k+1}\in\left[\sigma_{i_{k+1}},\tau_{i_{k+1}}\right]\) such that \(|\gamma(t_{k})-\gamma(t_{k+1})|\leqslant(1+\alpha)\delta\). Finally, let \(m\) be the smallest integer \(k\in\mathbbm{N}^{*}\) such that \(i_{k}=1\). We have \[|\gamma(0)-\gamma(1)|\leqslant|\gamma(0)-\gamma(t_{m})|+\sum_{k=1}^{m-1}|\gamma(t_{k+1})-\gamma(t_{k})|+|\gamma(t_{1})-\gamma(1)|\leqslant\delta+m\cdot(1+\alpha)\delta,\] hence \[m\geqslant\frac{\Delta-\delta}{(1+\alpha)\delta}=\frac{1-\alpha}{(1+\alpha)\alpha}.\] The result follows, since \(i_{1},\ldots,i_{m}\) are distinct elements of \(\llbracket 1,\kappa\rrbracket\).
Figure 7: Illustration of the procedure \((\mathsf{P}_{\alpha})\). The times \(\sigma_{i}\) are in blue, and the \(\tau_{i}\) in red. Several subpaths (in thick line), spanning a distance \(\delta\) and being \(\alpha\delta\) apart, are created from the initial path. The balls have radius \(\delta\).
2. Suppose that there exists \(t\in[0,1]\) such that \(\gamma(t)\notin S(\gamma(0),\gamma(1);\rho\Delta)\). We still denote by \(i_{1},t_{1};\ldots;i_{m},t_{m}\) the sequence of indices and times defined above. We construct another sequence \(j_{1},u_{1};\ldots;j_{p},u_{p}\) in the exact same manner but now obtained by backtracking from time \(t\) instead of time \(1\). By construction, we have \(|\gamma(t)-\gamma(u_{1})|\leqslant(1+\alpha)\delta\) and \(|\gamma(u_{k})-\gamma(u_{k+1})|\leqslant(1+\alpha)\delta\) for all \(k\leqslant p-1\). Finally, let \(n\) be the smallest integer such that \(j_{n}\in\{i_{1},\ldots,i_{m}\}\), and denote by \(l\) the index such that \(j_{n}=i_{l}\). The indices \(i_{1},\ldots,i_{m}\) and \(j_{1},\ldots,j_{n-1}\) are all distinct, hence \(\kappa\geqslant m+n-1\). Now, on the one hand, with the same argument as above, we have: \[|\gamma(0)-\gamma(t_{l})|+|\gamma(t_{l})-\gamma(1)|\leqslant\delta+m\cdot(1+ \alpha)\delta.\] On the other hand, \[|\gamma(t_{l})-\gamma(t)|\leqslant|\gamma(t_{l})-\gamma(u_{n})|+\sum_{k=1}^{n -1}|\gamma(u_{k+1})-\gamma(u_{k})|+|\gamma(u_{1})-\gamma(t)|\leqslant 2 \delta+n\cdot(1+\alpha)\delta.\] Summing these inequalities, we get \[|\gamma(0)-\gamma(t_{l})|+|\gamma(1)-\gamma(t_{l})|+|\gamma(t)-\gamma(t_{l})| \leqslant(m+n)(1+\alpha)\delta+3\delta,\] hence \[m+n-1\geqslant\frac{\inf_{x\in\mathbb{R}^{d}}\{|\gamma(0)-x|+|\gamma(1)-x|+| \gamma(t)-x|\}-(4+\alpha)\delta}{(1+\alpha)\delta}.\] It remains to lower bound the infimum in the right hand side. First, using the triangle inequality, we get \[|\gamma(0)-x|+|\gamma(1)-x|+|\gamma(t)-x|\geqslant\frac{|\gamma(0)-\gamma(1) |+|\gamma(0)-\gamma(t)|+|\gamma(t)-\gamma(1)|}{2}.\] Then, we make use of the fact that \(\gamma(t)\notin S(\gamma(0),\gamma(1);\rho\Delta)\), to get \[|\gamma(0)-\gamma(t)|+|\gamma(t)-\gamma(1)|\geqslant\sqrt{1+2\rho^{2}}\cdot\Delta.\] Altogether, we obtain \[\inf_{x\in\mathbb{R}^{2}}\{|\gamma(0)-x|+|\gamma(1)-x|+|\gamma(t)-x|\} \geqslant\frac{1+\sqrt{1+2\rho^{2}}}{2}\cdot\Delta,\] and the proof is complete.
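The last geometric inequality can be verified as follows (an elementary check; here \(h\) denotes the distance from \(\gamma(t)\) to the segment \([\gamma(0),\gamma(1)]\), so that \(h>\rho\Delta\)). If the orthogonal projection of \(\gamma(t)\) onto the line through \(\gamma(0)\) and \(\gamma(1)\) falls inside the segment, then by convexity

\[|\gamma(0)-\gamma(t)|+|\gamma(t)-\gamma(1)|\geqslant 2\sqrt{(\Delta/2)^{2}+h^{2}}=\sqrt{\Delta^{2}+4h^{2}}\geqslant\sqrt{1+4\rho^{2}}\cdot\Delta\geqslant\sqrt{1+2\rho^{2}}\cdot\Delta;\]

if it falls outside the segment, then one of the two distances is at least \(\Delta\) while the other one, being the distance from \(\gamma(t)\) to the segment, exceeds \(\rho\Delta\), so that their sum exceeds \((1+\rho)\Delta\geqslant\sqrt{1+2\rho^{2}}\cdot\Delta\) (using \(\rho\leqslant 2\)).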
For the rest of the proof of Theorem 2, we set \(\rho=\sqrt{18\alpha}\), and denote by \(\beta=\beta(\alpha)\) the inverse of the geometric mean of the two lower bounds in Proposition 3:
\[\frac{1}{\beta}=\sqrt{\frac{1-\alpha}{(1+\alpha)\alpha}\cdot\frac{\frac{1}{2} \left(1+\sqrt{1+2\rho^{2}}\right)-(4+\alpha)\alpha}{(1+\alpha)\alpha}}. \tag{2}\]
With that choice for \(\rho\), we have \(\beta=\alpha-\alpha^{2}+o(\alpha^{2})\) as \(\alpha\to 0^{+}\), and therefore \(\beta<\alpha\) for all sufficiently small \(\alpha\).
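A quick numerical check of this expansion (illustrative only; the function name is ours):

```python
import numpy as np

def beta(a):
    """beta(alpha) from (2), with rho = sqrt(18*alpha)."""
    rho2 = 18.0 * a
    lb1 = (1.0 - a) / ((1.0 + a) * a)
    lb2 = (0.5 * (1.0 + np.sqrt(1.0 + 2.0 * rho2)) - (4.0 + a) * a) / ((1.0 + a) * a)
    return 1.0 / np.sqrt(lb1 * lb2)

for a in (1e-2, 1e-3, 1e-4, 1e-5):
    b = beta(a)
    # the last column tends to 1, consistent with beta = alpha - alpha^2 + o(alpha^2)
    print(f"alpha={a:.0e}  beta={b:.6e}  beta<alpha: {b < a}  (alpha-beta)/alpha^2={(a - b) / a**2:.4f}")
```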
#### 2.1.2 Core of the proof
Proof of Theorem 2.: Let \(\mathcal{F}\) be a random closed subset of \([0,1]^{d}\). Assume that \(\mathcal{F}\) satisfies Property (\(\varnothing\)) with constants \(\zeta>1\), \(Q>0\) and \(q\in\left]0,1\right[\), and that almost surely \(\mathcal{F}\) has a non-trivial connected component. We prove that there exists a constant \(s>1\) such that \(\dim_{H}\mathcal{F}\geqslant s\) almost surely. To this end, by the "energy method" (see, e.g., [3, Theorem 6.4.6]), it suffices to construct a Borel probability measure \(\nu\) supported on \(\mathcal{F}\) such that
\[\int\int\frac{\mathrm{d}\nu(y)}{|x-y|^{s}}\mathrm{d}\nu(x)<\infty. \tag{3}\]
(We note here, in view of Remark 2, that \(\mathcal{F}\) could be replaced with any closed subset \(\mathcal{G}\subset\mathcal{F}\) having a non-trivial connected component without affecting the deterministic part of the reasoning. Then, the definition of the events \((E_{m})_{m\in\mathbb{N}^{*}}\) below would not change, and the same constant \(s\) would work for all subsets \(\mathcal{G}\).)
Fix a realization of \(\mathcal{F}\). We claim that it is possible to find a sequence \(\big{(}\gamma_{n}:[0,1]\to\mathbb{R}^{d}\big{)}_{n\in\mathbb{N}}\) of paths, with \(\Delta:=\inf_{n\geqslant 0}|\gamma_{n}[0,1]|>0\), such that:
\[\text{for each $n\in\mathbb{N}$, we have $\gamma_{n}[0,1]\subset(\mathcal{F})_{1/(n+1)}$}, \tag{4}\]
where \((\mathcal{F})_{\varepsilon}=\big{\{}x\in\mathbb{R}^{d}:d(x,\mathcal{F}) \leqslant\varepsilon\big{\}}\) denotes the \(\varepsilon\)-neighborhood of \(\mathcal{F}\). Indeed, denote by \(\mathcal{C}\) a non-trivial connected component of \(\mathcal{F}\), and let \(a\neq b\) be two distinct elements of \(\mathcal{C}\). For each \(n\in\mathbb{N}\), the points \(a\) and \(b\) belong to the same connected component of \(\mathcal{O}_{n}=\big{\{}x\in\mathbb{R}^{d}:d(x,\mathcal{F})<1/(n+1)\big{\}}\). Since \(\mathcal{O}_{n}\) is open, any connected component of \(\mathcal{O}_{n}\) is path connected, hence there exists a continuous path \(\gamma_{n}:[0,1]\to\mathcal{O}_{n}\) that connects \(a\) to \(b\). In particular, we have \(\gamma_{n}[0,1]\subset(\mathcal{F})_{1/(n+1)}\), and the diameter \(|\gamma_{n}[0,1]|\) of \(\gamma_{n}[0,1]\) is at least \(|a-b|>0\).
We now use the Aizenman & Burchard splitting procedure \((\mathrm{P}_{\alpha})\) recursively on the path \(\gamma_{n}\), and derive a collection \(\big{(}\mu_{l}^{n}\big{)}_{l\in\mathbb{N}}\) of Borel probability measures supported on \(\gamma_{n}[0,1]\). Making use of the fact that \(\mathcal{F}\) satisfies Property (\(\varnothing\)), we will then show that for almost every realization of \(\mathcal{F}\), it is possible to extract a sequence \(\big{(}\nu_{n}=\mu_{L_{n}}^{n}\big{)}_{n\in\mathbb{N}}\), of which any subsequential weak limit \(\nu\) is a Borel probability measure supported on \(\mathcal{F}\) and such that (3) holds.
Fix \(\alpha\in\left]0,d^{-1/2}\right[\) small enough so that the parameter \(\beta=\beta(\alpha)\) defined in (2) satisfies \(\beta<\alpha\). Set \(\delta_{k}=\alpha^{k}\) for all \(k\in\mathbb{N}\), and denote by \(k_{0}=k_{0}(\omega)\) the smallest integer \(k\) such that \(\delta_{k}\leqslant\Delta\). Let us note that, since \(\alpha<d^{-1/2}\) and \(\Delta\leqslant\sqrt{d}\), we have \(\delta_{k_{0}}>\alpha\Delta\) even when \(k_{0}=0\). For each \(n\in\mathbb{N}\), we split the path \(\gamma_{n}\) into a collection \((\gamma_{u}^{n},\ u\in\mathbb{T}^{n})\) of subpaths, indexed by a plane tree \(\mathbb{T}^{n}\) with root denoted by \(o\), as follows. First, by the definition of \(\Delta\) and \(k_{0}\), we have \(|\gamma_{n}[0,1]|\geqslant\delta_{k_{0}}\). Thus, there exist \(s<t\in[0,1]\) such that \(|\gamma_{n}(s)-\gamma_{n}(t)|=\delta_{k_{0}}\), and we let \(\gamma_{o}^{n}\) be the path \(\theta\in[0,1]\mapsto\gamma_{n}((1-\theta)s+\theta t)\). Then, by induction, having constructed the paths indexed by \(\partial\mathbb{T}_{l}^{n}=\{u\in\mathbb{T}^{n}:|u|=l\}\), we apply for each \(u\in\partial\mathbb{T}_{l}^{n}\) the procedure \((\mathrm{P}_{\alpha})\) to the path \(\gamma_{u}^{n}\), and denote by \(\gamma_{u1}^{n},\dots,\gamma_{u\kappa_{n}(u)}^{n}\) the subpaths generated. The children of \(u\) in \(\mathbb{T}^{n}\) are the nodes \(u1,\dots,u\kappa_{n}(u)\). By construction, the following holds:
* for every \(u\in\mathbb{T}^{n}\), we have \(|\gamma_{u}^{n}(0)-\gamma_{u}^{n}(1)|=\delta_{k_{0}+|u|}\),
* for any nodes \(u,v\) that are not descendants of one another in \(\mathbb{T}^{n}\), we have \[d(\gamma_{u}^{n}[0,1];\gamma_{v}^{n}[0,1])\geqslant\alpha\delta_{k_{0}+|u\wedge v|+1},\] where \(u\wedge v\) denotes the lowest common ancestor of \(u\) and \(v\).
Now, set \(\pi_{n}(u)=\prod_{v\prec u}\kappa_{n}(v)^{-1}\) for all \(u\in\mathbb{T}^{n}\), and let \(\mu_{l}^{n}=\sum_{u\in\partial\mathbb{T}^{n}_{l}}\pi_{n}(u)\cdot(\gamma^{n}_{u})_{*}\lambda\) for all \(l\in\mathbb{N}\), where \((\gamma^{n}_{u})_{*}\lambda\) denotes the push-forward by \(\gamma^{n}_{u}\) of the Lebesgue measure on \([0,1]\). By construction, the measure \(\mu_{l}^{n}\) is a probability measure supported on \(\gamma_{n}[0,1]\), since \(\sum_{u\in\partial\mathbb{T}^{n}_{l}}\pi_{n}(u)=1\) (this is easily checked by induction).
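Spelling out this induction on \(l\) (a one-line verification): grouping the nodes of \(\partial\mathbb{T}^{n}_{l+1}\) according to their parent,

\[\sum_{u\in\partial\mathbb{T}^{n}_{l+1}}\pi_{n}(u)=\sum_{v\in\partial\mathbb{T}^{n}_{l}}\kappa_{n}(v)\cdot\frac{\pi_{n}(v)}{\kappa_{n}(v)}=\sum_{v\in\partial\mathbb{T}^{n}_{l}}\pi_{n}(v)=\ldots=\pi_{n}(o)=1.\]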
Let \(\varepsilon_{l}:=\alpha\delta_{k_{0}+l+1}=\alpha^{l+2}\cdot\delta_{k_{0}}\) for all \(l\in\mathbb{N}\), and note that \(\varepsilon_{l}\geqslant\alpha^{l+3}\Delta\). For every \(L\in\mathbb{N}\), we have
\[\begin{aligned}\int\int\frac{\mathrm{d}\mu_{L}^{n}(y)}{(\varepsilon_{L}\vee|x-y|)^{s}}\mathrm{d}\mu_{L}^{n}(x)&=\sum_{u,v\in\partial\mathbb{T}^{n}_{L}}\int_{\gamma^{n}_{u}[0,1]}\int_{\gamma^{n}_{v}[0,1]}\frac{\mathrm{d}\mu_{L}^{n}(y)}{(\varepsilon_{L}\vee|x-y|)^{s}}\mathrm{d}\mu_{L}^{n}(x)\\ &\leqslant\sum_{u,v\in\partial\mathbb{T}^{n}_{L}}\varepsilon_{|u\wedge v|}^{-s}\cdot\pi_{n}(u)\cdot\pi_{n}(v)\\ &=\sum_{l=0}^{L}\varepsilon_{l}^{-s}\cdot\sum_{\begin{subarray}{c}u,v\in\partial\mathbb{T}^{n}_{L}\\ |u\wedge v|=l\end{subarray}}\pi_{n}(u)\cdot\pi_{n}(v)\\ &\leqslant\sum_{l=0}^{L}\varepsilon_{l}^{-s}\cdot\sum_{t\in\partial\mathbb{T}^{n}_{l}}\sum_{\begin{subarray}{c}u,v\in\partial\mathbb{T}^{n}_{L}\\ u,v\succcurlyeq t\end{subarray}}\pi_{n}(u)\cdot\pi_{n}(v)\\ &=\sum_{l=0}^{L}\varepsilon_{l}^{-s}\cdot\sum_{t\in\partial\mathbb{T}^{n}_{l}}\pi_{n}(t)^{2}\\ &\leqslant\sum_{l=0}^{L}\varepsilon_{l}^{-s}\cdot\max_{t\in\partial\mathbb{T}^{n}_{l}}\pi_{n}(t)\cdot\sum_{t\in\partial\mathbb{T}^{n}_{l}}\pi_{n}(t)\\ &\leqslant\left(\alpha^{3}\Delta\right)^{-s}\cdot\sum_{l=0}^{L}\max_{t\in\partial\mathbb{T}^{n}_{l}}\pi_{n}(t)\cdot\alpha^{-sl}.\end{aligned}\tag{5}\]
Recall that we chose \(\alpha\) small enough so as to have \(\beta<\alpha\). Thus, we can fix a constant \(s>1\) such that \(\beta<\alpha^{s}\). We claim now that for almost every realization of \(\mathcal{F}\), it is possible to choose \(L=L_{n}(\omega)\), with \(L_{n}\to\infty\) as \(n\to\infty\), so that
\[\varlimsup_{n\to\infty}\sum_{l=0}^{L_{n}}\max_{t\in\partial\mathbb{T}^{n}_{l} }\pi_{n}(t)\cdot\alpha^{-sl}<\infty. \tag{6}\]
It is here that the probabilistic machinery comes into play, through the fact that \(\mathcal{F}\) satisfies Property (\(\varnothing\)). We will introduce a family \((E_{m})_{m\in\mathbb{N}^{*}}\) of events, with \(\sum_{m\geqslant 1}\mathbb{P}(E_{m})<\infty\), such that the event \(E_{k_{0}+l}\) holds whenever there exists a node \(u\in\partial\mathbb{T}^{n}_{l}\) with \(\pi_{n}(u)>\beta^{l}\). The Borel-Cantelli Lemma will imply that almost surely this cannot happen for \(l\) large enough, and in turn prove (6).
To get there, let \(l\in\mathbb{N}^{*}\) and suppose that there exists \(u\in\partial\mathbb{T}^{n}_{l}\) such that \(\pi_{n}(u)>\beta^{l}\). Denoting by \(u_{0},\ldots,u_{l}\) the geodesic from the root to \(u\) in \(\mathbb{T}^{n}\), this can be reformulated as \(\prod_{0\leqslant k<l}\kappa_{n}(u_{k})<(1/\beta)^{l}\). By the definition of \(\beta\) given in (2) as the inverse of the geometric mean of the two lower bounds for \(\kappa_{n}(\cdot)\) obtained in Proposition 3, we see that there must exist a number \(j>l/2\) of indices \(l_{1}<\ldots<l_{j}\in\llbracket 0,l\llbracket\) such that, for each \(i\in\llbracket 1,j\rrbracket\), the path \(\gamma^{n}_{u_{l_{i}}}\) does not deviate by a factor \(\rho=\sqrt{18\alpha}\) from being a straight line. In particular, there exist \(\sigma_{1}\leqslant\ldots\leqslant\sigma_{j}<\tau_{j}\leqslant\ldots\leqslant\tau_{1}\in[0,1]\) such that, for every \(i\in\llbracket 1,j\rrbracket\):
\[|\gamma_{n}(\sigma_{i})-\gamma_{n}(\tau_{i})|=\delta_{k_{0}+l_{i}}\quad\text {and}\quad\gamma_{n}[\sigma_{i},\tau_{i}]\subset S(\gamma_{n}(\sigma_{i}), \gamma_{n}(\tau_{i});\rho\delta_{k_{0}+l_{i}}).\]
Now, writing \(m_{i}=k_{0}+l_{i}\) for all \(i\in\llbracket 1,j\rrbracket\), let us discretise this information.
Discretisation step. For each \(m\in\mathbb{N}\), let \(\left(\overline{B}(z,\rho\delta_{m})\right)_{z\in\mathcal{Z}_{m}}\) be a covering of \([0,1]^{d}\) by balls of radius \(\rho\delta_{m}\), with centres \(z\in[0,1]^{d}\) more than \(\rho\delta_{m}\) apart so that the \(\left(\overline{B}(z,\rho\delta_{m}/2)\right)_{z\in\mathcal{Z}_{m}}\) are disjoint. For each \(i\in\llbracket 1,j\rrbracket\), we can find \(x_{i},y_{i}\in\mathcal{Z}_{m_{i}}\) such that \(\gamma_{n}(\sigma_{i})\in\overline{B}(x_{i},\rho\delta_{m_{i}})\) and \(\gamma_{n}(\tau_{i})\in\overline{B}(y_{i},\rho\delta_{m_{i}})\), and we have
\(\gamma_{n}[\sigma_{i},\tau_{i}]\subset S(x_{i},y_{i};2\rho\delta_{m_{i}})\). Discretising further, let us place a number \(H\in\mathbb{N}^{*}\) (to be adjusted soon) of points
\[\left(z_{h}^{i}=\left(1-\frac{h}{H}\right)\cdot x_{i}+\frac{h}{H}\cdot y_{i},\ h\in[\![0,H]\!]\right),\]
spread evenly on the line segment \([x_{i},y_{i}]\). By construction, the path \(\gamma_{n}\) must meet each one of the balls \(\left(\overline{B}\left(z_{h}^{i},2\rho\delta_{m_{i}}\right);\ h\in[\![0,H]\!]\right)\). Now, since \(\gamma_{n}[0,1]\subset(\mathcal{F})_{1/(n+1)}\), a similar statement holds for \(\mathcal{F}\): namely, for all \(n\) sufficiently large so that \(1/(n+1)\leqslant\rho\delta_{m_{i}}\), the set \(\mathcal{F}\) must meet each one of the balls \(\left(\overline{B}\left(z_{h}^{i},3\rho\delta_{m_{i}}\right);\ i\in[\![1,j]\!],h\in[\![0,H]\!]\right)\); i.e., the intersection event
\[A_{x_{1},y_{1};\ldots;x_{j},y_{j}}^{m_{1},\ldots,m_{j}}\text{: ``for each }i\in[\![1,j]\!]\text{ and every }h\in[\![0,H]\!]\text{, the set }\mathcal{F}\text{ meets }\overline{B}\left(z_{h}^{i},3\rho\delta_{m_{i}}\right)\text{''}\]
must be realized. Here, the sequence \(x_{1},y_{1};\ldots;x_{j},y_{j}\) has the following properties:
* for every \(i\in[\![1,j]\!]\), we have \(x_{i},y_{i}\in\mathcal{Z}_{m_{i}}\), with \((1-2\rho)\delta_{m_{i}}\leqslant|x_{i}-y_{i}|\leqslant(1+2\rho)\delta_{m_{i}}\),
* for every \(i\in[\![1,j[\![\), we have \(x_{i+1},y_{i+1}\in S(x_{i},y_{i};2\rho\delta_{m_{i}})\).
We shall call any sequence satisfying those two properties _admissible with respect to \(m_{1},\ldots,m_{j}\)_.
Summing up the previous reasoning, we have shown that, if there exists a node \(u\in\partial\mathbb{T}_{l}^{n}\) such that \(\pi_{n}(u)>\beta^{l}\), then for all \(n\) sufficiently large so that \(1/(n+1)\leqslant\rho\delta_{k_{0}+l-1}\), there must exist a number \(j>l/2\) of indices \(l_{1}<\ldots<l_{j}\in[\![0,l[\), and a sequence \(x_{1},y_{1};\ldots;x_{j},y_{j}\) which is admissible with respect to \(k_{0}+l_{1},\ldots,k_{0}+l_{j}\), such that the event \(A_{x_{1},y_{1};\ldots;x_{j},y_{j}}^{k_{0}+l_{1},\ldots,k_{0}+l_{j}}\) is realized. Let us now define, for all \(m\in\mathbb{N}^{*}\), the event:
\[E_{m}=\bigcup_{m/3\leqslant j\leqslant m}\ \bigcup_{m_{1}<\ldots<m_{j}\in[\![0,m[\![}\ \bigcup_{\begin{subarray}{c}x_{1},y_{1};\ldots;x_{j},y_{j}\\ \text{admissible w.r.t. }m_{1},\ldots,m_{j}\end{subarray}}A_{x_{1},y_{1};\ldots;x_{j},y_{j}}^{m_{1},\ldots,m_{j}}.\]

In particular, for every \(l\geqslant 2k_{0}\) and all \(n\) sufficiently large, the event \(E_{k_{0}+l}\) is realized as soon as some node \(u\in\partial\mathbb{T}^{n}_{l}\) satisfies \(\pi_{n}(u)>\beta^{l}\) (indeed, in that case \(j>l/2\geqslant(k_{0}+l)/3\)). It remains to check that \(\sum_{m\geqslant 1}\mathbb{P}(E_{m})<\infty\). By a union bound,

\[\mathbb{P}(E_{m})\leqslant\sum_{m/3\leqslant j\leqslant m}\ \sum_{m_{1}<\ldots<m_{j}\in[\![0,m[\![}\ \sum_{x_{1},y_{1};\ldots;x_{j},y_{j}\ \text{admissible}}\mathbb{P}\left(A_{x_{1},y_{1};\ldots;x_{j},y_{j}}^{m_{1},\ldots,m_{j}}\right). \tag{7}\]

Fix \(j\), indices \(m_{1}<\ldots<m_{j}\), and an admissible sequence \(x_{1},y_{1};\ldots;x_{j},y_{j}\), and let us bound the probability of the corresponding intersection event by means of Property (\(\varnothing\)): to do so, we extract from the balls \(\left(\overline{B}\left(z_{h}^{i},3\rho\delta_{m_{i}}\right);\ i\in[\![1,j]\!],h\in[\![0,H]\!]\right)\) a large \(\zeta\)-separated sub-collection. Set \(H=\lfloor(1-2\rho)/\left((6\zeta+1)\rho\right)\rfloor\), so that for each \(i\in[\![1,j]\!]\) and every \(h\in[\![0,H[\![\),

\[\big{|}z_{h+1}^{i}-z_{h}^{i}\big{|}=\frac{|x_{i}-y_{i}|}{H}\geqslant\frac{(1-2\rho)\delta_{m_{i}}}{H}\geqslant(6\zeta+1)\rho\delta_{m_{i}}. \tag{8}\]

In particular, the balls \(\left(\overline{B}\left(z_{h}^{j},3\rho\delta_{m_{j}}\right);\ h\in[\![0,H]\!]\right)\) attached to the last index are \(\zeta\)-separated, and we add them all to our collection. We then proceed backwards along the indices, assuming without loss of generality that \(\alpha\) is small enough
so as to have \((1+(6\zeta+2)\rho)<\rho/\alpha\): by (8), the sausage \(S\left(x_{j},y_{j};3\zeta\rho\delta_{m_{j}}\right)\) meets the dilated ball \(\overline{B}\left(z_{h}^{j-1},\zeta\cdot 3\rho\delta_{m_{j-1}}\right)\) for at most one \(h_{0}\in[\![0,H]\!]\). We add all the balls
\[\left(\overline{B}\left(z_{h}^{j-1},3\rho\delta_{m_{j-1}}\right);\ h\in[\![0,H ]\!]\setminus\{h_{0}\}\right)\]
to our collection. We iterate this argument, noticing that, as the sausages \(\left(S(x_{i},y_{i};3\zeta\rho\delta_{m_{i}});\ i\in[\![1,j]\!]\right)\) are nested (without loss of generality, we may assume that \(\alpha\) is small enough so as to have \(2+3\zeta\alpha\leqslant 3\zeta\)), we only have to worry about intersections with the previous sausage at each step. At the end of the construction, we obtain a collection of \(\zeta\)-separated balls that \(\mathcal{F}\) must meet on the event \(A_{x_{1},y_{1};\ldots;x_{j},y_{j}}^{m_{1},\ldots,m_{j}}\), which has cardinality at least \((H+1)+(j-1)\cdot H\geqslant Hj\). Since \(\mathcal{F}\) satisfies Property \((\varnothing)\), we deduce that
\[\mathbb{P}\left(A_{x_{1},y_{1};\ldots;x_{j},y_{j}}^{m_{1},\ldots,m_{j}}\right)\leqslant Q\cdot q^{Hj}.\]
Going back to the union bound (7), we get
\[\begin{aligned}\mathbb{P}(E_{m})&\leqslant\sum_{m/3\leqslant j\leqslant m}\ \sum_{m_{1}<\ldots<m_{j}\in[\![0,m[\![}\ \sum_{x_{1},y_{1};\ldots;x_{j},y_{j}\ \text{admissible}}Q\cdot q^{Hj}\\ &=\sum_{m/3\leqslant j\leqslant m}\ \sum_{m_{1}<\ldots<m_{j}\in[\![0,m[\![}\#\{\text{admissible sequences with respect to }m_{1},\ldots,m_{j}\}\cdot Q\cdot q^{Hj}.\end{aligned}\]
Now, given an integer \(j\) such that \(m/3\leqslant j\leqslant m\), and indices \(m_{1}<\ldots<m_{j}\in[\![0,m[\![\), let us bound the number of admissible sequences with respect to \(m_{1},\ldots,m_{j}\). First, there exists a constant \(C=C(d,\alpha)>0\) such that \(\#\mathcal{Z}_{m_{1}}\leqslant C\cdot\delta_{m_{1}}^{-d}\); this is because the balls \(\left(\overline{B}\left(z,\rho\delta_{m_{1}}/2\right);\ z\in\mathcal{Z}_{m_{1}}\right)\) are disjoint and included in the \(\rho\delta_{m_{1}}/2\)-neighborhood of \([0,1]^{d}\). Next, we claim that there exists a constant \(c=c(d)\) such that for each \(i\in[\![1,j[\![\),
\[\#\mathcal{Z}_{m_{i+1}}\cap S(x,y;2\rho\delta_{m_{i}})\leqslant\frac{c}{\rho} \cdot\left(\frac{\delta_{m_{i}}}{\delta_{m_{i+1}}}\right)^{d}\quad\text{for all }x,y\in\mathcal{Z}_{m_{i}}\ \text{such that}\ |x-y|\leqslant(1+2\rho)\delta_{m_{i}}.\]
This is because the balls \(\left(\overline{B}\left(z,\rho\delta_{m_{i+1}}/2\right);\ z\in\mathcal{Z}_{m_{ i+1}}\cap S(x,y;2\rho\delta_{m_{i}})\right)\) are disjoint and included in the sausage \(S\left(x,y;2\rho\delta_{m_{i}}+\rho\delta_{m_{i+1}}/2\right)\). Thus, we obtain that the number of admissible sequences with respect to \(m_{1},\ldots,m_{j}\) is bounded from above by
\[\left(C\cdot\delta_{m_{1}}^{-d}\right)^{2}\cdot\prod_{i=1}^{j-1}\left(\frac{c}{\rho}\cdot\left(\frac{\delta_{m_{i}}}{\delta_{m_{i+1}}}\right)^{d}\right)^{2}=C^{2}\cdot\frac{(c/\rho)^{2(j-1)}}{\delta_{m_{j}}^{2d}}=\frac{C^{2}}{(c/\rho)^{2}}\cdot\frac{(c/\rho)^{2j}}{\alpha^{2dm_{j}}}=:C^{\prime}\cdot\frac{(c/\rho)^{2j}}{\alpha^{2dm_{j}}},\]
where the constant \(C^{\prime}\) depends only on \(d\) and \(\alpha\). Plugging this inequality into the union bound, we find
\[\mathbb{P}(E_{m}) \leqslant\sum_{m/3\leqslant j\leqslant m}\ \sum_{m_{1}<\ldots<m_{j}\in[\![0,m]\!]}C^{\prime} \cdot\frac{(c/\rho)^{2j}}{\alpha^{2dm_{j}}}\cdot Q\cdot q^{Hj}\] \[\leqslant\sum_{m/3\leqslant j\leqslant m}\ \sum_{m_{1}<\ldots<m_{j}\in[\![0,m]\!]}C^{\prime} \cdot\frac{(c/\rho)^{2m}\lor 1}{\alpha^{2dm}}\cdot Q\cdot q^{Hm/3}\] \[=C^{\prime}\cdot Q\cdot\sum_{m/3\leqslant j\leqslant m}\binom{m}{j }\cdot\left(\frac{(c/\rho)^{2}\lor 1}{\alpha^{2d}}\cdot q^{H/3}\right)^{m}\] \[\leqslant C^{\prime}\cdot Q\cdot 2^{m}\cdot\left(\frac{(c/\rho)^{2}\lor 1 }{\alpha^{2d}}\cdot q^{H/3}\right)^{m}=C^{\prime}\cdot Q\cdot\left(2\cdot \frac{(c/\rho)^{2}\lor 1}{\alpha^{2d}}\cdot q^{H/3}\right)^{m}.\]
Recalling that \(\rho=\sqrt{18\alpha}\) and \(H=\lfloor(1-2\rho)/\left((6\zeta+1)\rho\right)\rfloor\), a straightforward analysis shows that the \(2\cdot\left((c/\rho)^{2}\lor 1\right)\cdot\alpha^{-2d}\cdot q^{H/3}\) term can be made strictly smaller than \(1\) by choosing \(\alpha\) small enough. For such \(\alpha\), we get \(\sum_{m\geqslant 1}\mathbb{P}(E_{m})<\infty\).
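To spell out this analysis (only the existence of a suitable \(\alpha\) matters): since \(H\asymp\rho^{-1}\asymp\alpha^{-1/2}\) as \(\alpha\to 0^{+}\), we have, for constants \(C_{d},c^{\prime}>0\) depending only on \(d\), \(c\), \(\zeta\) and \(q\),

\[2\cdot\frac{(c/\rho)^{2}\lor 1}{\alpha^{2d}}\cdot q^{H/3}\leqslant\frac{C_{d}}{\alpha^{2d+1}}\cdot\exp\left(-c^{\prime}\alpha^{-1/2}\right)\xrightarrow[\alpha\to 0^{+}]{}0,\]

because the exponential factor beats any polynomial blow-up in \(1/\alpha\).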
Concluding the proof. By the Borel-Cantelli lemma, almost surely, the event \(E_{m}\) fails to be realized for all sufficiently large \(m\). Therefore, for almost every realization of \(\mathcal{F}\) there exists some \(l_{0}\geqslant 2k_{0}\) such that \(E_{m}\) fails to be realized for all \(m\geqslant k_{0}+l_{0}\). Now, define \(L_{n}=L_{n}(\omega)\) as the largest integer \(l\in\mathbb{N}^{*}\) such that \(\rho\delta_{k_{0}+l-1}\geqslant 1/(n+1)\) (note that \(L_{n}\) is well defined for all sufficiently large \(n\), and that \(L_{n}\to\infty\) as \(n\to\infty\)), and let \(\nu_{n}=\mu_{L_{n}}^{n}\). Recalling (5), we have
\[\int\int\frac{\mathrm{d}\nu_{n}(y)}{(\varepsilon_{L_{n}}\vee|x-y|)^{s}}\mathrm{ d}\nu_{n}(x)\leqslant\left(\alpha^{3}\Delta\right)^{-s}\cdot\sum_{l=0}^{L_{n}} \max_{u\in\partial\mathbf{T}_{l}^{n}}\pi_{n}(u)\cdot\alpha^{-sl}.\]
By all the above work, if for some \(l\in[\![2k_{0},L_{n}]\!]\), there exists a node \(u\in\partial\mathbf{T}_{l}^{n}\) such that \(\pi_{n}(u)>\beta^{l}\), then the event \(E_{k_{0}+l}\) must be realized. Now we can write, recalling that \(s\) was chosen such that \(\beta<\alpha^{s}\):
\[\sum_{l=0}^{L_{n}}\max_{u\in\partial\mathbf{T}_{l}^{n}}\pi_{n}(u)\cdot\alpha^{ -sl}\leqslant\sum_{0\leqslant l<l_{0}}\alpha^{-sl}+\sum_{l=l_{0}}^{L_{n}}\beta ^{l}\cdot\alpha^{-sl}\leqslant\sum_{0\leqslant l<l_{0}}\alpha^{-sl}+\sum_{l \geqslant l_{0}}\left(\frac{\beta}{\alpha^{s}}\right)^{l}<\infty.\]
This proves that
\[\varlimsup_{n\to\infty}\int\int\frac{\mathrm{d}\nu_{n}(y)}{(\varepsilon_{L_{n }}\vee|x-y|)^{s}}\mathrm{d}\nu_{n}(x)<\infty.\]
Since we are working on the compact space \([0,1]^{d}\), the sequence of probability measures \((\nu_{n})_{n\in\mathbb{N}}\) is automatically tight: let \(\nu\) be any subsequential weak limit of \((\nu_{n})_{n\in\mathbb{N}}\). For each \(\varepsilon>0\), by the Portmanteau theorem, we have \(\nu((\mathcal{F})_{\varepsilon})\geqslant\varlimsup_{n\to\infty}\nu_{n}((\mathcal{F})_{\varepsilon})=1\) (the last equality holds because the support of \(\nu_{n}\) is included in \((\mathcal{F})_{\varepsilon}\) for all sufficiently large \(n\) thanks to (4)). Since \(\mathcal{F}\) is closed, we deduce that the probability measure \(\nu\) is supported on \(\mathcal{F}\). Furthermore, since \(\nu\) is the weak limit of some subsequence \((\nu_{n_{k}})_{k\in\mathbb{N}}\), we have
\[\int\int\frac{\mathrm{d}\nu(y)}{(\varepsilon\vee|x-y|)^{s}} \mathrm{d}\nu(x) =\lim_{k\to\infty}\int\int\frac{\mathrm{d}\nu_{n_{k}}(y)}{( \varepsilon\vee|x-y|)^{s}}\mathrm{d}\nu_{n_{k}}(x)\] \[\leqslant\varlimsup_{n\to\infty}\int\int\frac{\mathrm{d}\nu_{n}(y )}{(\varepsilon\vee|x-y|)^{s}}\mathrm{d}\nu_{n}(x)\] \[\leqslant\varlimsup_{n\to\infty}\int\int\frac{\mathrm{d}\nu_{n}(y )}{(\varepsilon_{L_{n}}\vee|x-y|)^{s}}\mathrm{d}\nu_{n}(x)<\infty.\]
This upper bound does not depend on \(\varepsilon\), and thus letting \(\varepsilon\to 0^{+}\), we conclude by the monotone convergence theorem that the integrability condition (3) holds, completing the proof of Theorem 2.
### 2.2 Lower bound for the Hausdorff dimension of \(\mathcal{F}_{\infty}\)
The upper bound for the Hausdorff dimension of \(\mathcal{F}_{\infty}\) stated in Theorem 1 was established in Proposition 2; we now come to the lower bound. First, we check that \(\mathcal{F}_{\infty}\) satisfies Property \((\varnothing)\) and almost surely has a non-trivial connected component. With Theorem 2, this directly yields the result in dimension \(d=2\), which then extends to any dimension \(d\geqslant 2\) via a slicing lemma.
**Proposition 4**.: The frontier \(\mathcal{F}_{\infty}\) satisfies Property \((\varnothing)\): there exist constants \(Q>0\) and \(q\in\left]0,1\right[\) such that, for every collection \(\left(\overline{B}(x_{i},r_{i});\ i\in[\![1,n]\!]\right)\) of \(7\)-separated balls, we have
\[\mathbb{P}\left(\text{for each $i\in[\![1,n]\!]$, the frontier $\mathcal{F}_{\infty}$ meets $\overline{B}(x_{i},r_{i})$}\right)\leqslant Q\cdot q^{n}.\]
Proof.: This result is a consequence of Lemma 1. Let \(\left(\overline{B}(x_{i},r_{i});\ i\in[\![1,n]\!]\right)\) be a collection of \(7\)-separated balls, and fix a realization of the intersection event: "for each \(i\in[\![1,n]\!]\), the frontier \(\mathcal{F}_{\infty}\) meets \(\overline{B}(x_{i},r_{i})\)".
Notice that the initial points \(R_{0}\) and \(B_{0}\) belong to at most two of the enlarged balls \(\left(\overline{B}(x_{i},6r_{i});\ i\in[\![1,n]\!]\right)\), with indices say \(i_{R}\) and \(i_{B}\). For every other \(i\neq i_{R},i_{B}\), both \(R_{0}\) and \(B_{0}\) lie outside \(\overline{B}(x_{i},6r_{i})\), hence the event \(G_{i}:=\mathcal{G}^{x_{i}}_{r_{i},6r_{i}}\) of Lemma 1 fails to be realized. Indeed, if \(G_{i}\) were realized, then the ball \(\overline{B}(x_{i},r_{i})\) would be monochromatic at the end of the coloring, hence \(\mathcal{F}_{\infty}\) would not meet \(\overline{B}(x_{i},r_{i})\). Therefore, we have
\[\big{(}\text{for each }i\in\llbracket 1,n\rrbracket,\text{ the frontier }\mathcal{F}_{\infty}\text{ meets }\overline{B}(x_{i},r_{i})\big{)}\\ \subset(\text{for each }i\in\llbracket 1,n\rrbracket\setminus\{i_{R},i_{B}\}, \text{ the event }G_{i}\text{ fails to be realized})\,.\]
The events \((G_{i})_{i\in\llbracket 1,n\rrbracket}\) are independent and have probability at least \(p>0\), so we conclude that
\[\mathbb{P}\left(\text{for each }i\in\llbracket 1,n\rrbracket,\text{ the frontier }\mathcal{F}_{\infty}\text{ meets }\overline{B}(x_{i},r_{i})\right)\leqslant(1-p)^{n-2}.\]
Before proving that \(\mathcal{F}_{\infty}\) contains a non-trivial connected component, we first consider the following proposition.
**Proposition 5**.: Almost surely, as \(n\to\infty\), the discrete frontier \(\mathcal{F}_{n}=\big{\{}x\in[0,1]^{d}:d(x,\mathcal{R}_{n})=d(x,\mathcal{B}_{n}) \big{\}}\) converges towards \(\mathcal{F}_{\infty}\) for the Hausdorff distance.
Proof.: By the definition of \(\mathcal{R}_{\infty}\) and \(\mathcal{B}_{\infty}\), as \(n\to\infty\) we have \(\mathcal{R}_{n}\to\mathcal{R}_{\infty}\) and \(\mathcal{B}_{n}\to\mathcal{B}_{\infty}\) for the Hausdorff distance. Now, fix \(\varepsilon>0\), and let us prove that the inclusions \(\mathcal{F}_{n}\subset(\mathcal{F}_{\infty})_{\varepsilon}\) and \(\mathcal{F}_{\infty}\subset(\mathcal{F}_{n})_{\varepsilon}\) hold for all sufficiently large \(n\).
* For all sufficiently large \(n\), the following holds: for each \(x\in[0,1]^{d}\), the ball \(\overline{B}(x,\varepsilon)\) contains an element of \(\{R_{0},B_{0},X_{1},\ldots,X_{n}\}=\mathcal{R}_{n}\cup\mathcal{B}_{n}\). Then, for each \(x\in\mathcal{F}_{n}\), as \(d(x,\mathcal{R}_{n})=d(x,\mathcal{B}_{n})\), the ball \(\overline{B}(x,\varepsilon)\) must contain an element of \(\mathcal{R}_{n}\) and an element of \(\mathcal{B}_{n}\). Since \(\mathcal{R}_{\infty}\cap\overline{B}(x,\varepsilon)\) and \(\mathcal{B}_{\infty}\cap\overline{B}(x,\varepsilon)\) are two non-empty closed subsets whose union forms the connected set \(\overline{B}(x,\varepsilon)\), they cannot be disjoint; hence \(d(x,\mathcal{F}_{\infty})\leqslant\varepsilon\), i.e. \(\mathcal{F}_{n}\subset(\mathcal{F}_{\infty})_{\varepsilon}\).
* Conversely, by the convergence of \(\mathcal{R}_{n}\) and \(\mathcal{B}_{n}\) towards \(\mathcal{R}_{\infty}\) and \(\mathcal{B}_{\infty}\), we have \(\sup_{x\in\mathcal{R}_{\infty}}d(x,\mathcal{R}_{n})\leqslant\varepsilon\) and \(\sup_{x\in\mathcal{B}_{\infty}}d(x,\mathcal{B}_{n})\leqslant\varepsilon\) for all sufficiently large \(n\). Thus, for every \(x\in\mathcal{F}_{\infty}=\mathcal{R}_{\infty}\cap\mathcal{B}_{\infty}\), the ball \(\overline{B}(x,\varepsilon)\) contains an element of \(\mathcal{R}_{n}\) and an element of \(\mathcal{B}_{n}\). Since \(\big{\{}y\in\overline{B}(x,\varepsilon):d(y,\mathcal{R}_{n})\leqslant d(y, \mathcal{B}_{n})\big{\}}\) and \(\big{\{}y\in\overline{B}(x,\varepsilon):d(y,\mathcal{R}_{n})\geqslant d(y, \mathcal{B}_{n})\big{\}}\) are two non-empty closed subsets whose union forms the connected set \(\overline{B}(x,\varepsilon)\), they cannot be disjoint; hence \(d(x,\mathcal{F}_{n})\leqslant\varepsilon\), i.e. \(\mathcal{F}_{\infty}\subset(\mathcal{F}_{n})_{\varepsilon}\).
**Corollary 1**.: _In dimension \(d=2\), almost surely the frontier \(\mathcal{F}_{\infty}\) contains a non-trivial connected component._
Proof.: Note that in dimension \(2\), for each \(n\in\mathbb{N}\), the discrete frontier \(\mathcal{F}_{n}\) is a finite union of curves, where each curve is composed of line segments belonging to the boundary of the Voronoi cells of \(R_{0},B_{0},X_{1},\ldots,X_{n}\) (see Figure 8 below). By virtue of Lemma 1, almost surely there exists some \(r>0\) such that the balls \(\overline{B}(R_{0},r)\) and \(\overline{B}(B_{0},r)\) are monochromatic at the end of the coloring (see Remark 1). In particular, this implies that the union of the red (resp. blue) cells in the Voronoi diagram of the points \(R_{0},B_{0},X_{1},\ldots,X_{n}\) contains the ball \(\overline{B}(R_{0},r/2)\) (resp. \(\overline{B}(B_{0},r/2)\)). Therefore, the discrete frontier \(\mathcal{F}_{n}\) contains a curve, i.e. the image of a continuous path \(\gamma_{n}:[0,1]\to[0,1]^{d}\), of diameter at least \(r\) (Figure 8 does not lie). Finally, recall that for the Hausdorff distance:
1. the set of closed subsets of \([0,1]^{d}\) is compact,
2. any limit of a sequence of connected closed subsets is connected.
Therefore, we can extract from \((\gamma_{n}[0,1])_{n\in\mathbb{N}}\) a subsequence which converges to some limit \(\mathcal{C}\), that is necessarily connected and has diameter at least \(r\). Finally, using that \(\mathcal{F}_{n}\) converges to \(\mathcal{F}_{\infty}\), the fact that \(\gamma_{n}[0,1]\subset\mathcal{F}_{n}\) implies that \(\mathcal{C}\) is included in \(\mathcal{F}_{\infty}\).
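To visualise the objects at play (the coloring, the red and blue Voronoi cells and the discrete frontier \(\mathcal{F}_{n}\) of Figure 8), one can run a small simulation in the spirit of the process described above: points arrive one at a time, uniformly on \([0,1]^{2}\), and each new point takes the color of the nearest point already present. The sketch below is purely illustrative (the seeds are placed at arbitrary positions, and the frontier is only approximated on a pixel grid):

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)

# Nearest-neighbour coloring: the seeds R0 (red) and B0 (blue) come first,
# then each new uniform point inherits the color of its nearest predecessor.
pts, cols = [rng.random(2), rng.random(2)], [0, 1]        # 0 = red, 1 = blue
for _ in range(2000):
    x = rng.random(2)
    nearest = int(np.argmin(np.linalg.norm(np.array(pts) - x, axis=1)))
    pts.append(x)
    cols.append(cols[nearest])
pts, cols = np.array(pts), np.array(cols)

# Discrete frontier F_n: pixels whose nearest red and nearest blue points are
# (almost) equally close; plotting them reproduces the thick curve of Figure 8.
red_tree, blue_tree = cKDTree(pts[cols == 0]), cKDTree(pts[cols == 1])
g = np.linspace(0.0, 1.0, 500)
grid = np.stack(np.meshgrid(g, g), axis=-1).reshape(-1, 2)
d_red, _ = red_tree.query(grid)
d_blue, _ = blue_tree.query(grid)
frontier_pixels = grid[np.abs(d_red - d_blue) < 2e-3]     # approximate F_n
```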
Proposition 4 and Corollary 1 show that in dimension \(d=2\), the frontier \(\mathcal{F}_{\infty}\) fulfils the hypotheses of Theorem 2. Therefore, there exists a constant \(s>1\) such that, almost surely,
\[\dim_{H}\mathcal{F}_{\infty}\geqslant s>1.\]
This gives the lower bound stated in Theorem 1 when \(d=2\). Finally, the lower bound in higher dimensions is deduced from the two-dimensional case using a slicing argument, as explained in the next proposition.
**Proposition 6**.: For any dimension \(d\geqslant 2\), there exists \(\varepsilon>0\) such that, almost surely,
\[\dim_{H}\mathcal{F}_{\infty}\geqslant d-1+\varepsilon>d-1.\]
Proof.: Fix a plane \(\mathcal{P}\subset\mathbb{R}^{d}\) that contains the red and blue seeds \(R_{0}\) and \(B_{0}\), and denote by \(\mathcal{P}^{\perp}\) its orthogonal complement. By the slicing theorem [3, Theorem 1.6.2], for \(\varepsilon>0\), we have
\[H^{d-1+\varepsilon}(\mathcal{F}_{\infty})\geqslant\varepsilon_{d}\int_{ \mathcal{P}^{\perp}}H^{1+\varepsilon}(\mathcal{F}_{\infty}\cap(z+\mathcal{P} ))\mathrm{d}z,\]
Figure 8: Colored Voronoi diagram of the points \(R_{0},B_{0},X_{1},\ldots,X_{n}\). Balls with radius \(r/2\) are represented around \(R_{0}\) and \(B_{0}\). The path \(\gamma_{n}\) is also represented, and it has diameter at least \(r\).
where \(H^{s}(A)\) denotes the \(s\)-dimensional Hausdorff measure of a subset \(A\subset\mathbbm{R}^{d}\). By Proposition 4, the random closed subset \(\mathcal{F}_{\infty}\) satisfies Property (\(\varnothing\)). By Theorem 2, there exists a constant \(s>1\) such that almost surely, for any closed subset \(\mathcal{G}\subset\mathcal{F}_{\infty}\) with a non-trivial connected component, we have \(\dim_{H}\mathcal{G}\geqslant s\) (see Remark 2). Moreover, by Lemma 1, almost surely there exists some \(r>0\) such that the balls \(\overline{B}(R_{0},r)\) and \(\overline{B}(B_{0},r)\) are monochromatic at the end of the coloring (see Remark 1). As in the proof of Corollary 1, this implies that for all \(n\in\mathbbm{N}\), the union of the red (resp. blue) cells in the Voronoi diagram of the points \(R_{0},B_{0},X_{1},\ldots,X_{n}\) contains the ball \(\overline{B}(R_{0},r/2)\) (resp. \(\overline{B}(B_{0},r/2)\)). Hence, for every \(|z|\leqslant r/4\), the intersection between this union of red (resp. blue) cells and the affine plane \((z+\mathcal{P})\) contains a two-dimensional ball of radius \(r/4\). Now, using the exact same arguments as in the proof of Corollary 1, we deduce that for every \(|z|\leqslant r/4\), the closed subset \(\mathcal{F}_{\infty}\cap(z+\mathcal{P})\subset\mathcal{F}_{\infty}\) has a non-trivial connected component. Therefore, almost surely we have \(\dim_{H}\left(\mathcal{F}_{\infty}\cap(z+\mathcal{P})\right)\geqslant s\) for every \(|z|\leqslant r/4\). In particular, choosing \(\varepsilon>0\) such that \(1+\varepsilon<s\), by the definition of the Hausdorff dimension we have \(H^{1+\varepsilon}\left(\mathcal{F}_{\infty}\cap(z+\mathcal{P})\right)=\infty\) for all \(|z|\leqslant r/4\), and it follows that
\[H^{d-1+\varepsilon}(\mathcal{F}_{\infty})\geqslant\int_{\mathcal{P}^{\perp} \cap\overline{B}(0,r/4)}H^{1+\varepsilon}\left(\mathcal{F}_{\infty}\cap(z+ \mathcal{P})\right)\mathrm{d}z=\infty.\]
This proves that \(\dim_{H}\mathcal{F}_{\infty}\geqslant d-1+\varepsilon\), which completes the proof.
|
2310.08376 | Wigner transport in linear electromagnetic fields | Applying a Weyl-Stratonovich transform to the evolution equation of the
Wigner function in an electromagnetic field yields a multidimensional
gauge-invariant equation which is numerically very challenging to solve. In
this work, we apply simplifying assumptions for linear electromagnetic fields
and the evolution of an electron in a plane (two-dimensional transport), which
reduces the complexity and enables to gain first experiences with a
gauge-invariant Wigner equation. We present an equation analysis and show that
a finite difference approach for solving the high-order derivatives allows for
reformulation into a Fredholm integral equation. The resolvent expansion of the
latter contains consecutive integrals, which is favorable for Monte Carlo
solution approaches. To that end, we present two stochastic (Monte Carlo)
algorithms that evaluate averages of generic physical quantities or directly
the Wigner function. The algorithms give rise to a quantum particle model,
which interprets quantum transport in heuristic terms. | Clemens Etl, Mauro Ballicchia, Mihail Nedjalkov, Josef Weinbub | 2023-10-12T14:46:43Z | http://arxiv.org/abs/2310.08376v2 | # Wigner transport in linear electromagnetic fields
###### Abstract
Applying a Weyl-Stratonovich transform to the evolution equation of the Wigner function in an electromagnetic field yields a multidimensional gauge-invariant equation which is numerically very challenging to solve. In this work, we apply simplifying assumptions for linear electromagnetic fields and the evolution of an electron in a plane (two-dimensional transport), which reduces the complexity and enables to gain first experiences with a gauge-invariant Wigner equation. We present an equation analysis and show that a finite difference approach for solving the high-order derivatives allows for reformulation into a Fredholm integral equation. The resolvent expansion of the latter contains consecutive integrals, which is favorable for Monte Carlo solution approaches. To that end, we present two stochastic (Monte Carlo) algorithms that evaluate averages of generic physical quantities or directly the Wigner function. The algorithms give rise to a quantum particle model, which interprets quantum transport in heuristic terms.
## 1 Introduction
The analysis of charged quantum particles in electromagnetic fields is, among others, particularly important to nanoelectronics [1, 2, 3, 4, 5, 6, 7, 8]. The established Wigner formulation of quantum mechanics [9] (see recent reviews [10, 11] and book [12]) defines the Wigner function by applying the Weyl transform to the density matrix [13]:
\[f_{\rm w}(\mathbf{p},\mathbf{x})=\int\frac{\rmd\mathbf{s}}{(2\pi\hbar)^{3}} \rme^{-\frac{\rmi}{\hbar}\mathbf{s}\cdot\mathbf{p}}\rho(\mathbf{x}+\frac{ \mathbf{s}}{2},\mathbf{x}-\frac{\mathbf{s}}{2}) \tag{1}\]
The density matrix \(\rho\) of a pure state is defined from the solution \(\psi\) of the Schrodinger equation as \(\rho(\mathbf{x},\mathbf{y})=\psi(\mathbf{x})\psi^{*}(\mathbf{y})\) and depends on two position variables. (1) is a transformation from the position space to the phase space, i.e., \(f_{\rm w}\) is a function of the momentum \(\mathbf{p}\) and the position \(\mathbf{x}\). The evolution equation for the Wigner function is obtained by applying the Weyl transform to the von Neumann equation \(\rmi\hbar\frac{\partial}{\partial t}\hat{\rho}=[\hat{H},\hat{\rho}]_{-}:=\hat{ H}\hat{\rho}-\hat{\rho}\hat{H}\), with the Hamiltonian \(\hat{H}=\frac{1}{2m}\left(-\rmi\hbar\nabla\right)^{2}+V(\mathbf{r})\)[14]. The potential energy \(V\) defines a central quantity of the standard theory, namely the Wigner potential:
\[V_{\rm w}(\mathbf{p},\mathbf{x})=\frac{1}{(2\pi\hbar)^{3}}\int\frac{\rmd \mathbf{s}}{\rmi\hbar}\rme^{-\frac{\rmi}{\hbar}\mathbf{s}\cdot\mathbf{p}}\Big{[} V\Big{(}\mathbf{x}+\frac{\mathbf{s}}{2}\Big{)}-V\Big{(}\mathbf{x}-\frac{ \mathbf{s}}{2}\Big{)}\Big{]} \tag{2}\]
The scalar potential \(\phi=V/e\), with the electron charge \(e\), and the canonical momentum operator \(-\rmi\hbar\nabla\), are fundamental for this picture. The choice of the gauge is implicitly assumed, i.e., the vector potential \(\mathbf{A}\) is chosen to be zero. However, any other couple \(\mathbf{A}^{\prime},\phi^{\prime}\) satisfying \(\mathbf{A}^{\prime}=\mathbf{A}+\nabla\chi,\quad\phi^{\prime}=\phi-\partial \chi/\partial t\) for a given function \(\chi\) modifies the Hamiltonian and may lead to a very different physical picture, despite that the electromagnetic environment \(\mathbf{B}=\nabla\times\mathbf{A}\), \(\mathbf{E}=-\nabla\phi-\partial\mathbf{A}/\partial t\) remains independent on \(\chi\)[15]. An example is related to electrons governed by an electric field \(\mathbf{E}\)[16] in a periodic potential. If Wannier-Stark localized states [17] are used for the description, the picture involves a discrete energy spectrum accounting for the translational crystal symmetry. If accelerated Bloch states (Houston states) [18] are used, the picture of continuous acceleration of the wave vector in the crystal band structure gives rise to a periodic electron motion, called Bloch oscillations. It has been shown that the two pictures are equivalent and related to the choice of a vector (\(\mathbf{A}=-\mathbf{E}t;\ \phi=0\)), or a scalar potential gauge (\(\mathbf{A}=0,\phi=-\mathbf{E}\mathbf{x}\)), linked by \(\chi=-\mathbf{E}\mathbf{x}t\)[19, 20]. For the standard Wigner picture, the zero vector potential is a convenient choice, because then the canonical momentum \(\mathbf{p}\) and the kinetic momentum \(\mathbf{P}\) coincide. This is not true anymore in the case of a magnetic field when \(\mathbf{P}=\mathbf{p}-e\mathbf{A}(\mathbf{x})\). In this case, using the kinetic momentum as a phase space variable offers the advantage that the latter is a physical quantity and thus gauge-invariant [21, 22, 23, 24, 25, 26]. Inspired by this fact, Stratonovich [27] generalized the Weyl transform to
\[f_{\rm w}(\mathbf{P},\mathbf{x})=\int\frac{\rmd\mathbf{s}}{(2\pi\hbar)^{3}} \rme^{-\frac{\rmi}{\hbar}\mathbf{s}\cdot[\mathbf{P}+\frac{\mathbf{s}}{2}\int_ {-1}^{1}\rmd\tau\mathbf{A}(\mathbf{x}+\frac{\mathbf{s}}{2})]}\rho(\mathbf{x} +\frac{\mathbf{s}}{2},\mathbf{x}-\frac{\mathbf{s}}{2}). \tag{3}\]
Now the transform depends on the vector potential, however, the evolution equation for the Wigner function regarding the position and the kinetic momentum depends only on the electromagnetic field \(\mathbf{E}\), \(\mathbf{B}\)[28]. Thus, the Weyl-Stratonovich transform lifts the gauge dependence, offering more physical transparency to the quantum evolution. In the case \(\mathbf{A}=0\), the Weyl-Stratonovich transform equals the Weyl transform and can thus be seen as an extension. For the sake of convenience, we use \(\mathbf{p}\) instead of \(\mathbf{P}\) to refer to the kinetic momentum for the remainder of this work.
There are two ways to formulate the evolution equation depending on the physical settings. If the physical system is bounded in space, in a domain enclosed in \((-\mathbf{L}/2,\mathbf{L}/2)\), where \(\mathbf{L}\) is called coherence length, the momentum space becomes discrete, involving the integer variable \(\mathbf{m}\): \(\mathbf{P}_{\mathbf{m}}=\mathbf{m}\Delta\mathbf{P},\quad\mathbf{m}\in\mathbb{ Z}\times\mathbb{Z},\quad\Delta\mathbf{P}=2\pi\hbar/\mathbf{L}\). In the limit \(\mathbf{L}\to\infty\), called long coherence length limit, the momentum becomes continuous [29]. For electromagnetic fields with general spatiotemporal dependence, both formulations are very challenging from a numerical point of view. A computational experience with the treatment of multidimensional sums and integrals is missing. To gain first experiences, we look for simplified physical conditions to reduce the equation's complexity, allowing in particular for the application of analytical approaches. The fact that for a homogeneous magnetic field certain integrals vanish is helpful for choosing such conditions, while the field appears as the magnetic component of the Lorentz force in the Liouville operator of the reduced equation. This prompts considering the next term in the Taylor expansion of the magnetic field \(\mathbf{B}(\mathbf{x})\), namely linearly dependent magnetic fields. Furthermore, in the case of linear electric fields, they complete the force term in the Liouville operator to a full Lorentz force. We can thus formulate the physical settings under consideration: We consider a transport in a two-dimensional (\(2D\)) plane with coordinates \(\mathbf{x}=(x,y,0)^{T}\). A magnetic field \(\mathbf{B}(y)=(0,0,B_{0}+B_{1}y)^{T}\) points perpendicular to the plane and depends linearly on \(y\). The electric field \(\mathbf{E}(x,y)=(E_{x}x,E_{y}y,0)^{T}\) accelerates the electron in the plane. The obtained equation using the long coherence length limit [29] is given by
\[\left(\frac{\partial}{\partial t}+\frac{\mathbf{p}}{m}\cdot\frac{\partial}{ \partial\mathbf{x}}+\mathbf{F}\cdot\frac{\partial}{\partial\mathbf{p}}\right) f_{\mathrm{w}}\big{(}\mathbf{p},\mathbf{x}\big{)}=\frac{B_{1}\hbar^{2}}{m} \frac{e}{12}\left(\frac{\partial^{2}}{\partial p_{y}^{2}}\frac{\partial}{ \partial x}-\frac{\partial}{\partial p_{x}}\frac{\partial}{\partial p_{y}} \frac{\partial}{\partial y}\right)f_{\mathrm{w}}\big{(}\mathbf{p},\mathbf{x} \big{)}. \tag{4}\]
We note that the Lorentz force \(\mathbf{F}=e[\mathbf{E}(x,y)+\mathbf{p}\times\mathbf{B}(y)/m]\) in the Liouville operator on the left depends on the electromagnetic field. The operator corresponds to a classical motion over Newtonian trajectories, accelerated by the Lorentz force, linearly dependent on the position coordinates. The term on the right-hand side depends only on the magnetic field gradient \(B_{1}\) and consistently vanishes if \(B_{1}\to 0\). This term is responsible for the quantum character of the evolution process. Indeed, the structure of (4) resembles the standard Wigner equation. The latter consists of the forceless Liouville operator, whose interplay with the Wigner potential term gives rise to a fully quantum-coherent evolution. Indeed, the equation is equivalent to the von Neumann equation and in a pure state to the Schrodinger equation [30, 13]. However, this term is given by the convolution of the Wigner function with \(V_{\mathrm{w}}\) in (2) and thus depends linearly on \(f_{\mathrm{w}}\). The corresponding term in (4) introduces high-order mixed derivatives and hence has different numerical aspects. The numerical experience with the former equation has matured for more than three decades [31, 32, 33, 34, 35, 36]. Furthermore, a peculiarity of phase space formulations of quantum mechanics is the ability to use them for further development of heuristic, physics-based models, associated with quantum phenomena and processes. Good examples are quantum particle models where particles are provided with additional attributes, such as sign or affinity, while the action of the electric potential is interpreted as scattering or as particle generation [37]. In contrast, alternative quantum theories associate physical quantities and quantum processes with formal mathematical expressions, which offer little physical insight (e.g., operator mechanics).
This work provides a numerical analysis of (4) and a particle picture with the corresponding quantum evolution. These quantum particles have a numerical origin, however, they bear the basic properties of the physical models of particles in classical
mechanics. The additional particle properties carry the quantum information of the evolution.
In Section 2, an iterative solution to (4) is presented. The strategy is based on transforming the equation to a Fredholm integral equation, which can be solved by a resolvent expansion. In Section 3, we derive two different Monte Carlo algorithms for the evaluation of the terms in the resolvent expansion. In Section 4, the key findings of this work are discussed.
## 2 Iterative solution of the gauge-invariant Wigner equation
We introduce two new time-dependent functions of the Newtonian trajectory, which replace the phase space variables. We use two parameterizations (backward and forward), which yield different representations of the same solution. This is followed by transforming the gauge-invariant Wigner equation (4) into an integral form, i.e., the Fredholm integral equation, by using a finite difference scheme and a resolvent expansion of the Wigner function. We first present the solution for the backward parameterization and afterward for the forward parameterization. For the latter, we define and solve the adjoint formulation of the Fredholm equation. Finally, both solutions are used to evaluate the expectation value of a physical quantity \(A\) iteratively.
### Newtonian trajectories with backward and forward parameterization
The two new time-dependent functions of the Newtonian trajectory are based on the actual physical behavior of an electron governed by the Lorentz force \(\mathbf{F}\). The parameterization can be done backward and forward in time.
#### 2.1.1 Backward parameterization
Consider a particle at a time \(t\), the position \(\mathbf{x}\), and the momentum \(\mathbf{p}\) as initial values in a force field \(\mathbf{F}\). From there, one can determine the position and momentum at an earlier time \(t^{\prime}<t\). They are given by the two integral equations
\[\mathbf{x}(t^{\prime};\mathbf{p},\mathbf{x},t) :=\mathbf{x}-\int_{t^{\prime}}^{t}\frac{\mathbf{p}(\tau;\mathbf{p },\mathbf{x},t)}{m}\mathrm{d}\tau, \tag{5}\] \[\mathbf{p}(t^{\prime};\mathbf{p},\mathbf{x},t) :=\mathbf{p}-\int_{t^{\prime}}^{t}\mathbf{F}\big{(}\mathbf{p}( \tau;\mathbf{p},\mathbf{x},t),\mathbf{x}(\tau;\mathbf{p},\mathbf{x},t)\big{)} \mathrm{d}\tau. \tag{6}\]
#### 2.1.2 Forward parameterization
In this case, the particle is initialized at \(t^{\prime},\mathbf{p}^{\prime},\mathbf{x}^{\prime}\). \(\mathbf{p}\) and \(\mathbf{x}\) are then evaluated at a later time \(t>t^{\prime}\) as
\[\mathbf{x}^{\prime}(t;\mathbf{p}^{\prime},\mathbf{x}^{\prime},t^{ \prime}) :=\mathbf{x}^{\prime}+\int_{t^{\prime}}^{t}\frac{\mathbf{p}^{\prime }(\tau;\mathbf{p}^{\prime},\mathbf{x}^{\prime},t^{\prime})}{m}\mathrm{d}\tau, \tag{7}\] \[\mathbf{p}^{\prime}(t;\mathbf{p}^{\prime},\mathbf{x}^{\prime},t^{ \prime}) :=\mathbf{p}^{\prime}+\int_{t^{\prime}}^{t}\mathbf{F}\big{(} \mathbf{p}^{\prime}(\tau;\mathbf{p}^{\prime},\mathbf{x}^{\prime},t^{\prime}), \mathbf{x}^{\prime}(\tau;\mathbf{p}^{\prime},\mathbf{x}^{\prime},t^{\prime}) \big{)}\mathrm{d}\tau. \tag{8}\]
For convenience, we will write \(\mathbf{x}(t^{\prime}),\mathbf{p}(t^{\prime})\) and \(\mathbf{x}^{\prime}(t),\mathbf{p}^{\prime}(t)\) respectively. We also will use the Liouville theorem, stating that the phase space volume remains constant along the trajectories of the system, i.e., \(\int\mathrm{d}\mathbf{p}\mathrm{d}\mathbf{x}=\int\mathrm{d}\mathbf{p}(t^{ \prime})\mathrm{d}\mathbf{x}(t^{\prime})=\int\mathrm{d}\mathbf{p}^{\prime}(t )\mathrm{d}\mathbf{x}^{\prime}(t)\).
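The trajectory functions above are defined implicitly, but for the linear fields considered in this work they are straightforward to generate numerically. The following sketch (not part of the paper; Python, with purely illustrative field strengths and initial data) integrates the Lorentz-force trajectories with a Runge-Kutta scheme; running a trajectory backward and then forward recovers the initial phase-space point, which illustrates how the two parameterizations relate.

```python
import numpy as np

# Illustrative (assumed) field parameters: E = (Ex*x, Ey*y), B = B0 + B1*y out of the plane.
E_X, E_Y, B0, B1 = 1e5, -5e4, 1.0, 2.0e6       # arbitrary units for the sketch
E_CHARGE, MASS = -1.602e-19, 9.109e-31

def lorentz_force(p, x):
    """F = e [ E(x) + (p/m) x B(y) ] restricted to the transport plane."""
    bz = B0 + B1 * x[1]
    # (p/m) x B with B = (0, 0, bz) gives (p_y*bz/m, -p_x*bz/m)
    return E_CHARGE * np.array([E_X * x[0] + p[1] * bz / MASS,
                                E_Y * x[1] - p[0] * bz / MASS])

def rk4_step(p, x, dt):
    """One Runge-Kutta step of dx/dt = p/m, dp/dt = F(p, x); dt < 0 runs backward in time."""
    def deriv(p, x):
        return lorentz_force(p, x), p / MASS
    k1p, k1x = deriv(p, x)
    k2p, k2x = deriv(p + 0.5 * dt * k1p, x + 0.5 * dt * k1x)
    k3p, k3x = deriv(p + 0.5 * dt * k2p, x + 0.5 * dt * k2x)
    k4p, k4x = deriv(p + dt * k3p, x + dt * k3x)
    p_new = p + dt / 6.0 * (k1p + 2 * k2p + 2 * k3p + k4p)
    x_new = x + dt / 6.0 * (k1x + 2 * k2x + 2 * k3x + k4x)
    return p_new, x_new

def flight(p, x, t_from, t_to, n_steps=200):
    """Trajectory value at t_to given the phase-space point (p, x) at t_from."""
    dt = (t_to - t_from) / n_steps
    for _ in range(n_steps):
        p, x = rk4_step(p, x, dt)
    return p, x

if __name__ == "__main__":
    p0, x0 = np.array([1e-25, 0.0]), np.array([0.0, 1e-8])
    pb, xb = flight(p0, x0, t_from=1e-12, t_to=0.0)    # backward parameterization, Eqs. (5)-(6)
    pf, xf = flight(pb, xb, t_from=0.0, t_to=1e-12)    # forward parameterization, Eqs. (7)-(8)
    print(np.allclose(pf, p0), np.allclose(xf, x0))    # reversibility check
```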
### Fredholm integral representation of the gauge-invariant Wigner equation
Next, we show how the gauge-invariant Wigner equation is transformed into an integral form, i.e., the Fredholm integral equation. For this purpose, a finite difference scheme is used to replace the derivatives.
#### 2.2.1 Integral form
For the transformation, the variables \(\mathbf{x}\) and \(\mathbf{p}\) in (4) are replaced by the functions (5) and (6), respectively. That way, the Liouville operator on the left-hand side can be replaced by a total derivative of time and integrated on \(t^{\prime}\) in the limits \((t_{0},t)\). By setting \(t_{0}=0\) (i.e., the time when the initial condition \(f_{\mathrm{w}_{0}}\) is known) it is obtained
\[\begin{split} f_{\mathrm{w}}\big{(}\mathbf{p},\mathbf{x},t\big{)} =&\mathrm{e}^{-\int\limits_{0}^{t}\gamma(\mathbf{p}(\tau), \mathbf{x}(\tau))\mathrm{d}\tau}f_{\mathrm{w}_{0}}\big{(}\mathbf{p}(0), \mathbf{x}(0)\big{)}+\int_{0}^{t}\mathrm{d}t^{\prime}\mathrm{e}^{-\int \limits_{t^{\prime}}^{t}\gamma(\mathbf{p}(\tau),\mathbf{x}(\tau))\mathrm{d} \tau}\\ &\cdot\left[\frac{B_{1}\hbar^{2}}{m}\frac{e}{12}\left(\frac{ \partial^{3}}{\partial p_{y}^{2}\partial x}-\frac{\partial^{3}}{\partial p_{x }\partial p_{y}\partial y}\right)+\gamma(\mathbf{p}(t^{\prime}),\mathbf{x}(t^ {\prime}))\right]f_{\mathrm{w}}\big{(}\mathbf{p}(t^{\prime}),\mathbf{x}(t^{ \prime}),t^{\prime}\big{)}.\end{split} \tag{9}\]
Here, \(\gamma\) is an auxiliary function, which is not present in the differential form of the equation. Indeed, after taking the derivative with respect to \(t_{0}\), the terms containing \(\gamma\) cancel exactly. Later, we show that the introduction of \(\gamma\) is convenient from a numerical point of view and that \(\gamma\) also has a physical meaning in the quantum particle model under development.
By taking a closer look at (9) we can gain insights into the physical background. The linear coefficient \(B_{1}\) of the magnetic field determines the quantum character of the evolution. Consider the case where \(B_{1}=0\) and \(\gamma=0\). The equation then simplifies to \(f_{\mathrm{w}}\big{(}\mathbf{p},\mathbf{x},t\big{)}=f_{\mathrm{w}_{0}}\big{(} \mathbf{p}(0),\mathbf{x}(0)\big{)}\). This means that the Wigner function is constant along the trajectories of the system and one can evaluate \(f_{\mathrm{w}}\) at any time \(t\) by tracing the trajectory back to \(t=0\), which is in accordance with Liouville's theorem. Indeed, an initial classical particle density in \(\mathrm{d}\mathbf{x}(0)\mathrm{d}\mathbf{p}(0)\) evolves along the trajectories until time \(t\) without any change.
#### 2.2.2 Finite difference scheme
The integral equation (9) is not yet of Fredholm type as it contains derivatives of the integrand function \(f_{\mathrm{w}}\). However, they can be approximated by a finite difference scheme, which replaces them with linear combinations of \(f_{\mathrm{w}}\) defined in adjacent phase space points. Here, we apply a central finite difference scheme. This leads to fifteen terms represented by the indices \(\mathbf{i}=(i_{x},i_{y}),\mathbf{j}=(j_{x},j_{y})\) and coefficients \(\alpha_{\mathbf{ij}}\), where \(i_{x},i_{y},j_{x},j_{y}\in\{-1,0,1\}\). We also choose \(\gamma\) to be a constant:
\[\gamma=\gamma(\mathbf{p}(t^{\prime}),\mathbf{x}(t^{\prime})):=\frac{B_{1} \hbar^{2}}{m}\frac{e}{96(\Delta P)^{2}\Delta X}=\mathrm{constant} \tag{10}\]
The convenience of this choice will be discussed below. With the help of integrals over \(\mathbf{p}\) and \(\mathbf{x}\), and the use of \(\delta\) functions the equation obtains a mathematically formal appearance:
\[\begin{split}& f_{\mathrm{w}}(\mathbf{p},\mathbf{x},t)=f_{\mathrm{i}}(\mathbf{p},\mathbf{x},t)+\int_{0}^{\infty}\mathrm{d}t^{\prime}\int\mathrm{d}\mathbf{p}^{\prime}\int\mathrm{d}\mathbf{x}^{\prime}\mathcal{K}(\mathbf{p},\mathbf{x},t,\mathbf{p}^{\prime},\mathbf{x}^{\prime},t^{\prime})f_{\mathrm{w}}(\mathbf{p}^{\prime},\mathbf{x}^{\prime},t^{\prime}),\\ & f_{\mathrm{i}}(\mathbf{p},\mathbf{x},t)=\mathrm{e}^{-t\gamma}f_{\mathrm{w}_{0}}\big{(}\mathbf{p}(0),\mathbf{x}(0)\big{)},\\ &\mathcal{K}(\mathbf{p},\mathbf{x},t,\mathbf{p}^{\prime},\mathbf{x}^{\prime},t^{\prime})=\theta(t-t^{\prime})\gamma\mathrm{e}^{-(t-t^{\prime})\gamma}\sum_{\mathbf{i},\mathbf{j}}\alpha_{\mathbf{ij}}\,\delta\big{(}\mathbf{p}(t^{\prime})+\mathbf{i}\Delta P-\mathbf{p}^{\prime},\mathbf{x}(t^{\prime})+\mathbf{j}\Delta X-\mathbf{x}^{\prime}\big{)}.\end{split} \tag{11}\]
The Heaviside function in time takes care of the proper upper limit \(t\). The detailed form of the kernel \(\mathcal{K}\) can be found in Appendix A.
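To make the structure of the kernel more concrete, the sketch below reconstructs a central-difference stencil for the two third-order derivatives in (9) and expresses the coefficients in units of the \(\gamma\) of (10). This is our own illustrative reconstruction, not the authors' table; the authoritative coefficients \(\alpha_{\mathbf{ij}}\) are those listed in Appendix A. The reconstruction is, however, consistent with the fifteen terms mentioned above and with the value \(|\alpha|=\sum_{\mathbf{ij}}|\alpha_{\mathbf{ij}}|=41\) quoted in Section 3.

```python
from collections import defaultdict
from fractions import Fraction

# Offsets are keyed as (i_x, i_y, j_x, j_y): momentum shifts i*DeltaP, position shifts j*DeltaX.
# Coefficients are given in units of gamma = B1*hbar^2*e/(96*m*DeltaP^2*DeltaX), Eq. (10),
# which absorbs the prefactor B1*hbar^2*e/(12*m) of Eq. (9): 1/(DeltaP^2*DeltaX) = 8 gamma-units.
alpha = defaultdict(Fraction)

# d^3/(dp_y^2 dx): second-order central difference in p_y times first-order central difference in x
for iy, cy in [(+1, 1), (0, -2), (-1, 1)]:
    for jx, cx in [(+1, Fraction(1, 2)), (-1, Fraction(-1, 2))]:
        alpha[(0, iy, jx, 0)] += 8 * cy * cx

# -d^3/(dp_x dp_y dy): product of three first-order central differences
half = [(+1, Fraction(1, 2)), (-1, Fraction(-1, 2))]
for ix, cx in half:
    for iy, cy in half:
        for jy, cz in half:
            alpha[(ix, iy, 0, jy)] -= 8 * cx * cy * cz

alpha[(0, 0, 0, 0)] += 1        # the +gamma*f_w term added on the right-hand side of Eq. (9)

nonzero = {k: v for k, v in alpha.items() if v != 0}
print(len(nonzero), sum(abs(v) for v in nonzero.values()))   # expected: 15 terms, |alpha| = 41
```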
### Solution of the Fredholm integral equation
In this section, we present a solution for (11) and how it can be used to evaluate the expectation value of a physical quantity \(A\) of a particle. The weak formulation of this task is given as a series of integrals. This series arises from the resolvent expansion of the Wigner function. Consequently, the solution for the physical quantity is done iteratively.
#### 2.3.1 Weak formulation of the task
The Wigner function is a quasi-distribution function and can be used as a probability density for quantum particles [13]. Consider an arbitrary physical quantity \(A\), which depends on position, momentum, and time. The expectation value of \(A\) at a time \(T\) can be evaluated by
\[\langle A\rangle(T)=\int_{0}^{\infty}\mathrm{d}t\int\mathrm{d}\mathbf{p}\int \mathrm{d}\mathbf{x}f_{\mathrm{w}}(\mathbf{p},\mathbf{x},t)A(\mathbf{p}, \mathbf{x},t)\delta(T-t). \tag{12}\]
For convenience, we set \(A_{T}(\mathbf{p},\mathbf{x},t):=A(\mathbf{p},\mathbf{x},t)\delta(T-t)\). The solution of a Fredholm integral equation is given by its resolvent expansion [38], as presented in Appendix B. This allows us to represent \(\langle A\rangle(T)\) as a series
\[\langle A\rangle(T)=\sum_{n=0}^{\infty}\int_{0}^{\infty}\mathrm{d}t\int \mathrm{d}\mathbf{p}\int\mathrm{d}\mathbf{x}f_{n}(\mathbf{p},\mathbf{x},t)A_{ T}(\mathbf{p},\mathbf{x},t)=\sum_{n=0}^{\infty}\langle A\rangle_{n}(T). \tag{13}\]
In particular, if \(A\) is chosen to be a delta function, the series yields the expansion of the Wigner function.
#### 2.3.2 Resolvent expansion of the Wigner function
Given the scattering indices \((\mathbf{i}_{k},\mathbf{j}_{k})_{1\leq k\leq n}\) and the scattering times \(t_{1}>t_{2}>\ldots>t_{n}\), we introduce the trajectory with scattering events for backward parameterization as
\[\begin{split}\mathbf{p}_{n}\big{(}t^{\prime}\big{)}&:=\begin{cases}\mathbf{p}_{n-1}(t^{\prime})&\text{for }t_{n}<t^{\prime}\leq T,\\ \mathbf{p}\big{(}t^{\prime};\mathbf{p}_{n-1}(t_{n})+\mathbf{i}_{n}\Delta P,\mathbf{x}_{n-1}(t_{n})+\mathbf{j}_{n}\Delta X,t_{n}\big{)}&\text{for }0\leq t^{\prime}\leq t_{n},\end{cases}\\ \mathbf{x}_{n}\big{(}t^{\prime}\big{)}&:=\begin{cases}\mathbf{x}_{n-1}(t^{\prime})&\text{for }t_{n}<t^{\prime}\leq T,\\ \mathbf{x}\big{(}t^{\prime};\mathbf{p}_{n-1}(t_{n})+\mathbf{i}_{n}\Delta P,\mathbf{x}_{n-1}(t_{n})+\mathbf{j}_{n}\Delta X,t_{n}\big{)}&\text{for }0\leq t^{\prime}\leq t_{n},\end{cases}\end{split} \tag{14}\]
Figure 1: Trajectory of the 2nd iteration with backward parameterization
where we use the convention \(\mathbf{p}_{0}(t^{\prime}):=\mathbf{p}(t^{\prime};\mathbf{p},\mathbf{x},T)\), \(\mathbf{x}_{0}(t^{\prime}):=\mathbf{x}(t^{\prime};\mathbf{p},\mathbf{x},T)\).
In accordance with (14) we obtain
\[\begin{split} f_{n}(\mathbf{p},\mathbf{x},t)=&\gamma^ {n}\rme^{-\gamma t}\int_{0}^{t}\rmd t_{1}\int_{0}^{t_{1}}\rmd t_{2}\ldots\int_{ 0}^{t_{n-1}}\rmd t_{n}\\ &\sum_{\mathbf{i}_{1},\mathbf{j}_{1}}\ldots\sum_{\mathbf{i}_{n}, \mathbf{j}_{n}}\prod_{k=1}^{n}(\alpha_{\mathbf{i}_{k},\mathbf{j}_{k}})f_{ \mathrm{w}_{0}}\big{(}\mathbf{p}_{n}(0),\mathbf{x}_{n}(0)\big{)},\end{split} \tag{15}\]
where \(f_{0}(\mathbf{p},\mathbf{x},t)=\rme^{-t\gamma}f_{\mathrm{w}_{0}}\big{(}\mathbf{p}(0),\mathbf{x}(0)\big{)}\), see (13).
The existence of the backward Newtonian trajectories invokes a picture of a pointlike particle that evolves back in time. The delta functions, which give rise to offsets of the phase space positions, can be interpreted as scattering factors. Figure 1 schematically presents the second term in the iterative expansion of the Wigner function. The particle starts at \((\mathbf{p},\mathbf{x},T)\) and moves back in time in the phase space according to the Lorentz force \(\mathbf{F}\). When the particle reaches \(t_{1}\) it is scattered, i.e., a factor \((\mathbf{i}_{1}\Delta P,\mathbf{j}_{1}\Delta X)\) is added. Next, it follows the trajectory again until it reaches \(t_{2}\). This process is repeated until \(t=0\) is reached.
#### 2.3.3 Iterative representation of physical quantities
To evaluate the solution of \(\langle A\rangle_{n}(T)\), we insert the solution of \(f_{n},n\in\mathbb{N}\) in (15) into (13). This yields
\[\begin{split}\langle A\rangle_{n}(T)=\gamma^{n}\rme^{-T\gamma} \int\rmd\mathbf{p}\int\rmd\mathbf{x}\int_{0}^{T}\rmd t_{1}\int_{0}^{t_{1}}\rmd t _{2}\ldots\int_{0}^{t_{n-1}}\rmd t_{n}A(\mathbf{p},\mathbf{x},T)\\ \cdot\sum_{\mathbf{i}_{1},\mathbf{j}_{1}}\ldots\sum_{\mathbf{i}_{ n},\mathbf{j}_{n}}\prod_{k=1}^{n}(\alpha_{\mathbf{i}_{k},\mathbf{j}_{k}})f_{ \mathrm{w}_{0}}\big{(}\mathbf{p}_{n}(0),\mathbf{x}_{n}(0)\big{)}.\end{split} \tag{16}\]
This shows us how each element \(\langle A\rangle_{n}(T)\) is generated. In the backward parameterization case, the trajectory of \(\mathbf{p}\) and \(\mathbf{x}\) starts at \(T\) and goes back in time, according to (14). The particle is scattered at each \((t_{i})_{i\in\{1,2,\ldots,n\}}\), where \(T>t_{1}>t_{2}>\ldots>t_{n}>0\). The indices \(\mathbf{i}_{k}\) and \(\mathbf{j}_{k}\) are implicitly included in the functions \(\mathbf{p}_{n}\) and \(\mathbf{x}_{n}\). Reaching the final momentum and position at \(t=0\), they are used as the arguments of the initial condition of the Wigner function \(f_{\mathrm{w}_{0}}\). The integration limits of the \(t_{i}\)'s and consequently their orders are determined by the \(\theta\) functions of the kernel.
### Solution of the adjoint integral equation
In this section, a solution of the Fredholm integral equation (11) is presented where forward parameterization is used. The weak formulation of this task is given by the adjoint formulation of the Fredholm integral equation. Finally, the solution for the adjoint equation is used to derive the expectation value of a physical quantity iteratively.
#### 2.4.1 Weak formulation of the task
The adjoint of a Fredholm integral equation has the same kernel, but the integration is over the other set of variables:
\[g(\mathbf{p}^{\prime},\mathbf{x}^{\prime},t^{\prime})=g_{\rm i}(\mathbf{p}^{\prime},\mathbf{x}^{\prime},t^{\prime})+\int_{0}^{\infty}\rmd t\int_{-\infty}^{\infty}\rmd\mathbf{p}\int_{-\infty}^{\infty}\rmd\mathbf{x}\mathcal{K}(\mathbf{p},\mathbf{x},t,\mathbf{p}^{\prime},\mathbf{x}^{\prime},t^{\prime})g(\mathbf{p},\mathbf{x},t) \tag{17}\]
The free term \(g_{\rm i}\) can be determined from the weak formulation of the task, namely to find the expectation value of a physical quantity \(A\). The following relation follows
from the exchange Lemma in Appendix B.2 and the Liouville theorem. By choosing \(g_{\rm i}(\mathbf{p}^{\prime},\mathbf{x}^{\prime},t^{\prime}):=A_{T}(\mathbf{p}^{\prime},\mathbf{x}^{\prime},t^{\prime})\) we can show
\[\langle A\rangle(T)=\int\limits_{0}^{\infty}\mathrm{d}t\int\mathrm{ d}\mathbf{p}\int\mathrm{d}\mathbf{x}f_{\mathrm{w}}(\mathbf{p},\mathbf{x},t)A_{T}( \mathbf{p},\mathbf{x},t)=\int\limits_{0}^{\infty}\mathrm{d}t\int\mathrm{d} \mathbf{p}\int\mathrm{d}\mathbf{x}f_{\mathrm{w}}(\mathbf{p},\mathbf{x},t)g_{i}( \mathbf{p},\mathbf{x},t) \tag{18}\] \[=\int\limits_{0}^{\infty}\mathrm{d}t\int\mathrm{d}\mathbf{p}\int \mathrm{d}\mathbf{x}f_{i}(\mathbf{p},\mathbf{x},t)g(\mathbf{p},\mathbf{x},t)= \int\limits_{0}^{\infty}\mathrm{d}t\int\mathrm{d}\mathbf{p}\int\mathrm{d} \mathbf{x}\mathrm{e}^{-t\gamma}f_{\mathrm{w}_{0}}\big{(}\mathbf{p},\mathbf{x} \big{)}g(\mathbf{p}^{\prime}(t),\mathbf{x}^{\prime}(t),t).\]
Like before, we consider the resolvent expansion to evaluate \(\langle A\rangle(T)\), which yields
\[\langle A\rangle(T)=\sum\limits_{n=0}^{\infty}\int_{0}^{\infty}\mathrm{d}t \int\mathrm{d}\mathbf{p}\int\mathrm{d}\mathbf{x}\mathrm{e}^{-t\gamma}f_{ \mathrm{w}_{0}}(\mathbf{p},\mathbf{x})g_{n}(\mathbf{p}^{\prime}(t),\mathbf{x} ^{\prime}(t),t)=\sum\limits_{n=0}^{\infty}\langle A\rangle_{n}(T). \tag{19}\]
The integration over the other set of variables \(\mathbf{p},\mathbf{x},t\) gives rise to a transition to a forward parametrization of the arguments in the \(\delta\) functions in the kernel:
\[\delta\big{(}\mathbf{p}(t^{\prime})+\mathbf{i}\Delta P-\mathbf{p}^{\prime},\mathbf{x}(t^{\prime})+\mathbf{j}\Delta X-\mathbf{x}^{\prime}\big{)} \tag{20}\] \[\qquad=\delta\big{(}\mathbf{p}-\mathbf{p}^{\prime}(t;\mathbf{p}^{\prime}-\mathbf{i}\Delta P,\mathbf{x}^{\prime}-\mathbf{j}\Delta X,t^{\prime}),\mathbf{x}-\mathbf{x}^{\prime}(t;\mathbf{p}^{\prime}-\mathbf{i}\Delta P,\mathbf{x}^{\prime}-\mathbf{j}\Delta X,t^{\prime})\big{)}\]
For \(\mathcal{K}\) this yields
\[\mathcal{K}(\mathbf{p},\mathbf{x},t,\mathbf{p}^{\prime},\mathbf{x}^{\prime},t^{\prime})=\theta(t-t^{\prime})\gamma\mathrm{e}^{-(t-t^{\prime})\gamma}\sum\limits_{\mathbf{i},\mathbf{j}}\alpha_{\mathbf{ij}} \tag{21}\] \[\qquad\cdot\delta\big{(}\mathbf{p}-\mathbf{p}^{\prime}(t;\mathbf{p}^{\prime}-\mathbf{i}\Delta P,\mathbf{x}^{\prime}-\mathbf{j}\Delta X,t^{\prime}),\mathbf{x}-\mathbf{x}^{\prime}(t;\mathbf{p}^{\prime}-\mathbf{i}\Delta P,\mathbf{x}^{\prime}-\mathbf{j}\Delta X,t^{\prime})\big{)}.\]
#### 2.4.2 Solution for the adjoint equation
We introduce the trajectory with scattering events for forward parameterization. Given the scattering indices \((\mathbf{i}_{k},\mathbf{j}_{k})_{1\leq k\leq n}\) and the scattering times \(t_{1}<t_{2}<\ldots<t_{n}\), we use the convention \(\mathbf{p}^{\prime}_{0}(t):=\mathbf{p}(t;\mathbf{p},\mathbf{x},0)\), \(\mathbf{x}^{\prime}_{0}(t):=\mathbf{x}(t;\mathbf{p},\mathbf{x},0)\) to define
\[\begin{split}\mathbf{p}^{\prime}_{n}\big{(}t\big{)}&:=\begin{cases}\mathbf{p}^{\prime}_{n-1}(t)&\text{for }0\leq t\leq t_{n},\\ \mathbf{p}^{\prime}\big{(}t;\mathbf{p}^{\prime}_{n-1}(t_{n})-\mathbf{i}_{n}\Delta P,\mathbf{x}^{\prime}_{n-1}(t_{n})-\mathbf{j}_{n}\Delta X,t_{n}\big{)}&\text{for }t_{n}<t\leq T,\end{cases}\\ \mathbf{x}^{\prime}_{n}\big{(}t\big{)}&:=\begin{cases}\mathbf{x}^{\prime}_{n-1}(t)&\text{for }0\leq t\leq t_{n},\\ \mathbf{x}^{\prime}\big{(}t;\mathbf{p}^{\prime}_{n-1}(t_{n})-\mathbf{i}_{n}\Delta P,\mathbf{x}^{\prime}_{n-1}(t_{n})-\mathbf{j}_{n}\Delta X,t_{n}\big{)}&\text{for }t_{n}<t\leq T.\end{cases}\end{split} \tag{22}\]
A depiction of these functions can be seen in Figure 2. The terms of the resolvent series for the solution then read
\[g_{n}(\mathbf{p}^{\prime}(t_{1}),\mathbf{x}^{\prime}(t_{1}),t_{1})=\gamma^{n}\mathrm{e}^{-(T-t_{1})\gamma}\int\limits_{t_{1}}^{T}\mathrm{d}t_{2}\ldots\int\limits_{t_{n-1}}^{T}\mathrm{d}t_{n}\sum\limits_{\mathbf{i}_{1},\mathbf{j}_{1}\ldots\mathbf{i}_{n},\mathbf{j}_{n}}\prod\limits_{k=1}^{n}(\alpha_{\mathbf{i}_{k},\mathbf{j}_{k}})A_{T}\big{(}\mathbf{p}^{\prime}_{n}(T),\mathbf{x}^{\prime}_{n}(T),T\big{)}, \tag{23}\]
with \(g_{0}(\mathbf{p}^{\prime}(t),\mathbf{x}^{\prime}(t),t)=A_{T}(\mathbf{p}^{\prime}_{0}(t),\mathbf{x}^{\prime}_{0}(t),t)\).
#### 2.4.3 Iterative representation of physical quantities
The series for the expectation value of a physical quantity is obtained by inserting (23) into (19). The general term is then
\[\begin{split}\langle A\rangle_{n}(T)=&\gamma^{n} \mathrm{e}^{-T\gamma}\!\int\!\mathrm{d}\mathbf{p}\!\int\!\mathrm{d}\mathbf{x}f_{ \mathrm{w}_{0}}(\mathbf{p},\mathbf{x})\!\int\limits_{0}^{T}\!\mathrm{d}t_{1} \ldots\!\int\limits_{t_{n-1}}^{T}\mathrm{d}t_{n}\\ &\sum_{\mathbf{i}_{1},\mathbf{j}_{1}\ldots\mathbf{i}_{n},\mathbf{ j}_{n}}\prod_{k=1}^{n}(\alpha_{\mathbf{i}_{k}\mathbf{j}_{k}})A\big{(}\mathbf{p}_{n}^{ \prime}(T),\mathbf{x}_{n}^{\prime}(T),T\big{)}.\end{split} \tag{24}\]
Since both (24) and (16) are transformations of the general solution (12), they are indeed equivalent. Equation (24) remarkably resembles the corresponding expression for the Monte Carlo averages of an ensemble of \(M\) classical (Boltzmann) electrons, which move under the action of the Lorentz force and are scattered by, e.g., lattice vibrations (phonons) [39]. They are point-like particles with an initial distribution \(f_{\mathrm{w}_{0}}\), which initializes the starting phase space points \(\mathbf{p},\mathbf{x}\). They determine Newtonian trajectories followed by the force particles during their free flight. The free flight is interrupted by scattering events, which, at a time \(t_{1}\), update the phase-space coordinates. The latter initialize a novel piece of Newtonian trajectory for the next free flight. The evolution continues until the time \(T\) is reached and then each particle \(l\) contributes with its current value \(A_{l}\) (e.g., velocity, energy) to the statistical sum \(\sum_{l}^{M}A_{l}\), which evaluates \(\langle A\rangle\). The process corresponds to the scheme depicted in Figure 2, which suggests a picture where pointlike quantum particles follow the same sequence of events. However, several problems need to be addressed to associate (24) with a quantum particle model. The classical initial distribution is non-negative, \(f_{\mathrm{w}_{0}}\geq 0\), while in the quantum case, \(f_{\mathrm{w}_{0}}\) could be any legitimate Wigner function and thus allows for negative values. This affects the evaluation of the physical averages, as can be seen already from the zeroth order term, which dominates if the evolution time is much smaller than the mean scattering time: In order to account for the sign, the statistical sum for the envisaged quantum particle model must be generalized to \(\sum_{l}^{M}w_{l}A_{l}\) where the quantity \(w_{l}\), called weight, should carry the sign of \(f_{\mathrm{w}_{0}}\) in the point of initialization of the \(l\)-th particle. Next, in the classical evolution, the scattering time (e.g., \(t_{1}\)) exponentially depends on the frequency of interaction with phonons, while in the quantum counterpart the sequence \(t_{1}<t_{2}<\cdots\) is predetermined. This
Figure 2: Trajectory of the 2nd iteration with forward parameterization.
suggests looking for an analogous physical interpretation of the prefactor in (24). Finally, both classical and quantum counterparts rely on Newtonian trajectories, and hence the difference between the two kinds of evolution is due to the scattering: a fundamental difference between classical and quantum scattering is expected. These problems, formulated by heuristic considerations, are rigorously addressed next by the rules of Monte Carlo integration theory.
## 3 Monte Carlo algorithms
The two algorithms presented in this section differ in both their parameterization and the distribution of the scattering times. The first one is more formal and evaluates \(f_{\mathrm{w}}\) pointwise using backward parameterization and a uniform distribution for the scattering times. The other one uses the physically more transparent forward parameterization and introduces an exponential distribution of the scattering times, which is characteristic of the evolution of classical particles in the presence of scattering events. This gives rise to a quantum particle model, where the evolution of pointlike particles consists of consecutive free flights along Lorentz-force-governed Newtonian trajectories, interrupted by scattering events.
### Backward algorithm
The Monte Carlo algorithm introduced in this section makes it possible to evaluate the terms \(\langle A\rangle_{n}(T)\) of the resolvent expansion in (16). For this purpose, the integrals and sums are expressed as an expectation value \(E[X_{n}]\) with a probability density \(P_{X_{n}}\) and a random variable \(X_{n}\). The terms \(\langle A\rangle_{n}(T)\) are written as
\[\begin{split}\langle A\rangle_{n}(T)=E[X_{n}]=&\int \mathrm{d}\mathbf{p}\int\mathrm{d}\mathbf{x}\int_{0}^{T}\mathrm{d}t_{1}\int_{0 }^{t_{1}}\mathrm{d}t_{2}\ldots\int_{0}^{t_{n-1}}\mathrm{d}t_{n}\\ &\sum_{\mathbf{i}_{1},\mathbf{j}_{1}}\sum_{\mathbf{i}_{2}, \mathbf{j}_{2}}\ldots\sum_{\mathbf{i}_{n},\mathbf{j}_{n}}P_{X_{n}}X_{n}.\end{split} \tag{25}\]
\(P_{X_{n}}\) acts as a selector for the scattering indices \((\mathbf{i}_{k},\mathbf{j}_{k})_{1\leq k\leq n}\), the scattering times \(t_{1}<t_{2}<\ldots<t_{n}\), and the initial points \(\mathbf{p},\mathbf{x}\) of the trajectory. Thus, it is split into a product of three probability functions:
* For the coefficients \(\alpha_{\mathbf{ij}}\) of the kernel (11), we introduce a discrete transition probability \(P_{\mathbf{ij}}:=\frac{|\alpha_{\mathbf{ij}}|}{|\alpha|}\), where \(|\alpha|:=\sum_{\mathbf{ij}}|\alpha_{\mathbf{ij}}|=41\), see (16). This means that the direction in which the trajectory scatters is chosen randomly, distributed proportionally to \(|\alpha_{\mathbf{ij}}|\).
* For the initial points \(\mathbf{p},\mathbf{x}\) of the trajectory, a density function \(P\) is chosen. Both \(A\) and \(f_{\mathrm{w}_{0}}\) depend on \(\mathbf{p}\) and \(\mathbf{x}\), thus a possible choice could be \(P(\mathbf{p},\mathbf{x})\propto|A(\mathbf{p},\mathbf{x})f_{\mathrm{w}_{0}}( \mathbf{p},\mathbf{x})|\).
* The scattering times \(t_{1},\ldots,t_{n}\) are uniformly distributed on the intervals \((0,T)\) for \(t_{1}\) and \((0,t_{i-1})\) for \(t_{i},i\in\{2,\ldots,n\}\). The density function of a uniform distribution is normalized by the inverse of the length of the interval, which has to be compensated in \(X_{n}\) by the product \(T\prod_{i=1}^{n-1}t_{i}\).
In combination they yield \(P_{X_{n}}=\prod_{k=1}^{n}\big{(}|\alpha_{\mathbf{i}_{k}\mathbf{j}_{k}}|/|\alpha|\big{)}P(\mathbf{p},\mathbf{x})\big{(}T\prod_{i=1}^{n-1}t_{i}\big{)}^{-1}\). Since the corresponding random variable \(X_{n}\) is the estimator of \(\langle A\rangle_{n}(T)\), it is evaluated and
averaged for several arguments randomly selected according to \(P_{X_{n}}\). To satisfy (25) it is given as
\[X_{n}=\gamma^{n}|\alpha|^{n}\rme^{-T\gamma}\frac{A(\mathbf{p},\mathbf{x},T)}{P( \mathbf{p},\mathbf{x})}T\prod_{i=1}^{n-1}(t_{i})\prod_{k=1}^{n}\big{(}\sign( \alpha_{\mathbf{i}_{k}\mathbf{j}_{k}})\big{)}f_{\mathrm{w}_{0}}\big{(}\mathbf{ p}_{n}(0),\mathbf{x}_{n}(0)\big{)}. \tag{26}\]
The expectation value of a physical quantity can be obtained by Algorithm 1 (see also Figure 3).
```
1. Initialization of \(N\), \((N_{n})_{n\in\{0,1,2,\ldots,N\}}\), \(n\gets 0\) and a variable, say \((A_{n})_{n\in\{0,1,2,\ldots,N\}}\leftarrow\vec{0}\). \(N\) sets the total number of terms in the iterative expansion (13). \(N_{n}\) determines the number of independent numerical trajectories with \(n\) scattering events. \(j\gets 1\) is a counter for \(N_{n}\). \(A_{n}\) represents the value of the \(n\)-th term in the resolvent expansion. \(t_{0}\) is initialized by \(t_{0}\gets T\).
2. If \(n\neq 0\), the scattering times \((t_{i})_{i\in\{1,2,\ldots,n\}}\) are chosen in order because the upper limit of every \(t_{i}\) depends on \(t_{i-1}\). Each \(t_{i}\sim\mathrm{U}(0,t_{i-1})\) is generated randomly, with the uniform distribution \(\mathrm{U}\) on the interval \((0,t_{i-1})\). \(s\gets 1\), which represents all factors in \(A_{n}\) that are updated at each scattering event, and \(i\gets 0\). \((\mathbf{p},\mathbf{x})\sim P(\mathbf{p},\mathbf{x})\) are chosen randomly, and distributed according to the chosen probability function \(P(\mathbf{p},\mathbf{x})\). The initial values \(\mathbf{p}_{T}\leftarrow\mathbf{p}\) and \(\mathbf{x}_{T}\leftarrow\mathbf{x}\) are stored separately. If \(n=0\), jump to step 5.
3. Starting from the current \(\mathbf{p}\) and \(\mathbf{x}\) the trajectory is followed until it reaches the next scattering event at \(t_{i+1}\), i.e., \(\mathbf{p}\leftarrow\mathbf{p}(t_{i+1};\mathbf{p},\mathbf{x},t_{i})\) and \(\mathbf{x}\leftarrow\mathbf{x}(t_{i+1};\mathbf{p},\mathbf{x},t_{i})\), and then \(i\gets i+1\).
4. In the event of scattering: Values for \((\mathbf{i},\mathbf{j})\sim P_{\mathbf{ij}}\) are chosen randomly, distributed according to the values of the transition probability \(P_{\mathbf{ij}}\). Then \(s\) is updated to \(s\gets s\cdot\gamma t_{i-1}|\alpha|\sign(\alpha_{\mathbf{ij}})\). The factor \(t_{i-1}\) comes from the length of the time integral. Finally, \(\mathbf{p}\leftarrow\mathbf{p}+\mathbf{i}\Delta P,\mathbf{x}\leftarrow\mathbf{x}+\mathbf{j}\Delta X\). If \(i<n\), jump to step 3.
5. The trajectory is followed backward in the time interval \((0,t_{n})\), i.e., \(\mathbf{p}\leftarrow\mathbf{p}(0;\mathbf{p},\mathbf{x},t_{n})\) and \(\mathbf{x}\leftarrow\mathbf{x}(0;\mathbf{p},\mathbf{x},t_{n})\), where \((\mathbf{p},\mathbf{x})\) is equal to the phase space point \((\mathbf{p}_{n}(0),\mathbf{x}_{n}(0))\), see Figure 1.
6. \(f_{\mathrm{w}_{0}}(\mathbf{p},\mathbf{x})\) is evaluated at the final position \((\mathbf{p},\mathbf{x})=(\mathbf{p}_{n}(0),\mathbf{x}_{n}(0))\) and \(A_{n}\gets A_{n}+s\rme^{-T\gamma}f_{\mathrm{w}_{0}}(\mathbf{p},\mathbf{x})A(\mathbf{p}_{T},\mathbf{x}_{T},T)/P(\mathbf{p}_{T},\mathbf{x}_{T})\). If \(j<N_{n}\), set \(j\gets j+1\) and jump to step 2.
7. \(n\gets n+1\), \(j\gets 1\), and the algorithm jumps to step 2, unless \(n=N\). In this case, the next step is executed.
8. Finally, return \(\sum_{n=0}^{N}A_{n}/N_{n}\).
```
**Algorithm 1** Backward algorithm
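A minimal sketch of the bookkeeping in Algorithm 1 is given below (Python). It is not the authors' implementation: the free flight is replaced by a field-free straight line, the fifteen-term kernel by a three-entry toy stencil, and \(\gamma\), \(\Delta P\), \(\Delta X\), the initial Wigner function and the sampling density \(P\) are all assumed for illustration. What the sketch does show is the backward time sampling, the channel selection proportional to \(|\alpha_{\mathbf{ij}}|\), and the weight update of step 4.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative (assumed) constants; in an actual implementation dP, dX and gamma follow
# from Eq. (10) and the stencil from Appendix A.
GAMMA, DP, DX, MASS, T_END = 1.0e12, 1e-26, 1e-9, 9.109e-31, 1e-12
SIG_P, SIG_X = 5e-26, 2e-8

# Toy three-entry stand-in for the fifteen-term kernel: (momentum offset, position offset, alpha).
STENCIL = [((0, 0), (0, 0), 1.0), ((0, 1), (1, 0), 4.0), ((0, 1), (-1, 0), -4.0)]
ABS_ALPHA = sum(abs(a) for *_, a in STENCIL)
PROBS = [abs(a) / ABS_ALPHA for *_, a in STENCIL]

def flight_back(p, x, t_from, t_to):
    """Backward free flight; a field-free straight line is used purely to keep the sketch short."""
    return p, x - (t_from - t_to) * p / MASS

def f_w0(p, x):
    """Assumed initial Wigner function: a Gaussian wave packet (illustration only)."""
    return np.exp(-np.sum(p**2) / (2 * SIG_P**2) - np.sum(x**2) / (2 * SIG_X**2))

def sample_initial():
    """Draw (p, x) from the assumed density P (here a Gaussian proportional to f_w0)."""
    p, x = rng.normal(scale=SIG_P, size=2), rng.normal(scale=SIG_X, size=2)
    return p, x, f_w0(p, x) / ((2 * np.pi) ** 2 * SIG_P**2 * SIG_X**2)

def backward_term(n, observable, samples=20_000):
    """Monte Carlo estimate of <A>_n(T), Eq. (16), following the steps of Algorithm 1."""
    acc = 0.0
    for _ in range(samples):
        p, x, dens = sample_initial()                  # step 2: starting point (p, x, T)
        a_val, s, t_prev = observable(p, x), 1.0, T_END
        for _ in range(n):
            t_next = rng.uniform(0.0, t_prev)          # step 2: uniform scattering time
            p, x = flight_back(p, x, t_prev, t_next)   # step 3: free flight to the event
            k = rng.choice(len(STENCIL), p=PROBS)      # step 4: channel ~ |alpha_ij|/|alpha|
            (ix, iy), (jx, jy), alpha = STENCIL[k]
            s *= GAMMA * t_prev * ABS_ALPHA * np.sign(alpha)   # step 4: weight update
            p, x = p + np.array([ix, iy]) * DP, x + np.array([jx, jy]) * DX
            t_prev = t_next
        p, x = flight_back(p, x, t_prev, 0.0)          # step 5: remaining flight to t = 0
        acc += s * np.exp(-GAMMA * T_END) * f_w0(p, x) * a_val / dens   # step 6
    return acc / samples

if __name__ == "__main__":
    kinetic = lambda p, x: np.sum(p**2) / (2 * MASS)
    print([backward_term(n, kinetic) for n in range(3)])   # first three terms of Eq. (13)
```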
### Forward algorithm
Finally, a Monte Carlo algorithm is presented, where the number of scattering events is not predetermined and the scattering times are exponentially distributed. We will use forward parameterization in this case. Again, \(X_{n}\) and \(P_{X_{n}}\) have to satisfy the condition \(\langle A\rangle_{n}(T)=E[X_{n}]\). The arguments that are randomly chosen are the same
as before. The transition probability \(P_{\mathbf{ij}}\) and the density function \(P\) remain the same. For the scattering times \(t_{1},\ldots,t_{n}\), we evaluate the joint density of the number of scattering events \(n\) happening in the interval \([0,T]\), and the consecutive scattering times \((t_{i})_{i\in\{1,\ldots,n\}}\). Considering an exponential distribution, the density for a single scattering event is given by \(\gamma\mathrm{e}^{-\gamma t}\). The joint density is equal to the density of the first \(n\) events multiplied by the probability that the next event happens after T, which yields
\[\begin{split} p\big{(}(t_{i})_{i\in\{1,\ldots,n\}},n\big{)}& =\gamma^{n}\prod_{i=1}^{n}\left(\mathrm{e}^{-\gamma(t_{i}-t_{i-1}) }\right)\int_{T}^{\infty}\gamma\mathrm{e}^{-\gamma(t_{n+1}-t_{n})}\mathrm{d}t_ {n+1}\\ &=\gamma^{n}\mathrm{e}^{-\gamma t_{n}}\mathrm{e}^{\gamma t_{n}} \int_{T}^{\infty}\gamma\mathrm{e}^{-\gamma t_{n+1}}\mathrm{d}t_{n+1}\\ &=\gamma^{n}\mathrm{e}^{-\gamma T},\end{split} \tag{27}\]
assuming \(t_{0}=0\). This conveniently coincides with the prefactor in (24).
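Integrating (27) over the ordered times \(0<t_{1}<\ldots<t_{n}<T\), whose simplex has volume \(T^{n}/n!\), gives the Poisson probability \((\gamma T)^{n}\mathrm{e}^{-\gamma T}/n!\) for observing exactly \(n\) scattering events. A few lines of Python (with illustrative values for \(\gamma\) and \(T\)) confirm this numerically:

```python
import math
import numpy as np

rng = np.random.default_rng(1)
GAMMA, T_END, RUNS = 3.0, 1.0, 200_000     # illustrative values

counts = np.zeros(20)
for _ in range(RUNS):
    t, n = 0.0, 0
    while True:
        t += rng.exponential(1.0 / GAMMA)  # inter-event time ~ Exp(gamma)
        if t > T_END:
            break
        n += 1
    counts[min(n, 19)] += 1

for n in range(6):
    empirical = counts[n] / RUNS
    poisson = (GAMMA * T_END) ** n * math.exp(-GAMMA * T_END) / math.factorial(n)
    print(f"n={n}: simulated {empirical:.4f}  vs  (gamma*T)^n e^(-gamma*T)/n! = {poisson:.4f}")
```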
Combining all probability functions gives \(P_{X_{n}}=\prod_{k=1}^{n}\big{(}|\alpha_{\mathbf{i}_{k}\mathbf{j}_{k}}|\big{)}|\alpha|^{-n}P(\mathbf{p},\mathbf{x})\gamma^{n}\mathrm{e}^{-\gamma T}\). By using the condition \(\langle A\rangle_{n}(T)=E[X_{n}]\) and the result for \(\langle A\rangle_{n}(T)\) in (24), we can evaluate the random variable as
\[X_{n}=|\alpha|^{n}\frac{f_{\mathrm{w}_{0}}(\mathbf{p},\mathbf{x})}{P(\mathbf{p},\mathbf{x})}\prod_{k=1}^{n}\big{(}\mathrm{sign}(\alpha_{\mathbf{i}_{k}\mathbf{j}_{k}})\big{)}A\big{(}\mathbf{p}_{n}^{\prime}(T),\mathbf{x}_{n}^{\prime}(T),T\big{)}. \tag{28}\]
The expectation value of a physical quantity can be obtained by the following forward algorithm (see also Figure 4).
Figure 3: Flow chart of the backward algorithm

```
1. Initialization of \(M\) and a variable, say \(A\gets 0\). \(M\) sets the total number of the independent numerical trajectories and \(A\) represents the expectation value of the physical quantity. \(j\gets 1\) is a counter for \(M\).
2. \(t_{i}\) is initialized as \(t_{i}\gets 0\). \(s\gets 1\) represents all factors in \(X_{n}\) that are updated at each scattering event. \((\mathbf{p},\mathbf{x})\sim P(\mathbf{p},\mathbf{x})\) are chosen randomly, distributed according to the chosen probability function \(P(\mathbf{p},\mathbf{x})\). Since \((\mathbf{p},\mathbf{x})\) will change in the following steps, the initial values \(\mathbf{p}_{0}\leftarrow\mathbf{p},\mathbf{x}_{0}\leftarrow\mathbf{x}\) are also saved as they are needed at a later step.
3. An exponentially distributed variable \(t^{\prime}\sim\mathrm{Exp}(\gamma)\) with the constant \(\gamma\) is chosen by generating a uniformly distributed variable \(r\sim\mathrm{U}(0,1)\) and setting \(t^{\prime}\leftarrow-\ln(r)/\gamma\). If \(t_{i}+t^{\prime}>T\), then we jump to step 6.
4. Starting from the current \(\mathbf{p}\) and \(\mathbf{x}\) the trajectory is followed until it reaches the next scattering event at \(t_{i}+t^{\prime}\), i.e., \(\mathbf{p}\leftarrow\mathbf{p}^{\prime}(t_{i}+t^{\prime};\mathbf{p},\mathbf{x},t_{i})\) and \(\mathbf{x}\leftarrow\mathbf{x}^{\prime}(t_{i}+t^{\prime};\mathbf{p},\mathbf{x},t_{i})\).
5. In the event of scattering: Values for \((\mathbf{i},\mathbf{j})\sim P_{\mathbf{ij}}\) are chosen randomly, distributed according to the values of the transition probability \(P_{\mathbf{ij}}\), defined in Section 3.1. Then, \(s\) is updated to \(s\gets s\cdot|\alpha|\mathrm{sign}(\alpha_{\mathbf{ij}})\). Finally, \(\mathbf{p}\leftarrow\mathbf{p}-\mathbf{i}\Delta P,\mathbf{x}\leftarrow\mathbf{ x}-\mathbf{j}\Delta X\) and \(t_{i}\gets t_{i}+t^{\prime}\). Then jump to step 3.
6. The trajectory is followed in the time interval \((t_{i},T)\), i.e., \(\mathbf{p}\leftarrow\mathbf{p}^{\prime}(T;\mathbf{p},\mathbf{x},t_{i})\) and \(\mathbf{x}\leftarrow\mathbf{x}^{\prime}(T;\mathbf{p},\mathbf{x},t_{i})\), where \((\mathbf{p},\mathbf{x})\) is equal to the phase space point \((\mathbf{p}^{\prime}_{n}(T),\mathbf{x}^{\prime}_{n}(T))\), see Figure 2.
7. \(A(\mathbf{p},\mathbf{x},T)\) is evaluated at the final position \((\mathbf{p},\mathbf{x})=(\mathbf{p}^{\prime}_{n}(T),\mathbf{x}^{\prime}_{n}(T))\) and \(A\gets A+sA(\mathbf{p},\mathbf{x},T)f_{\mathrm{w}_{0}}(\mathbf{p}_{0}, \mathbf{x}_{0})/P(\mathbf{p}_{0},\mathbf{x}_{0})\). \(j\gets j+1\).
8. Jump to step 2, unless \(j=M\). In this case, the next step is executed.
9. Finally, return \(A/M\).
```
Figure 4: Flow chart of the forward algorithm
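The following self-contained sketch (Python) mimics the forward algorithm for a simple observable. It is an illustration rather than a reference implementation: the field strengths, \(\gamma\), \(\Delta P\), \(\Delta X\), the initial Wigner function and the sampling density are assumed, and the fifteen-term kernel is replaced by a toy three-entry stencil; in an actual implementation these would come from (10) and Appendix A. The structure of steps 1-9, however, is followed literally: exponential free-flight durations, Runge-Kutta integration of the Lorentz-force trajectory, channel selection proportional to \(|\alpha_{\mathbf{ij}}|\), and sign-carrying weights.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative (assumed) parameters.
E_X, E_Y, B0, B1 = 1e5, -5e4, 1.0, 2.0e6
E_CH, MASS = -1.602e-19, 9.109e-31
GAMMA, DP, DX, T_END = 1.0e12, 1e-26, 1e-9, 1e-12
SIG_P, SIG_X = 5e-26, 2e-8

STENCIL = [((0, 0), (0, 0), 1.0), ((0, 1), (1, 0), 4.0), ((0, 1), (-1, 0), -4.0)]  # toy stand-in
ABS_ALPHA = sum(abs(a) for *_, a in STENCIL)
PROBS = [abs(a) / ABS_ALPHA for *_, a in STENCIL]

def force(p, x):
    """Lorentz force e[E(x) + (p/m) x B(y)] in the transport plane."""
    bz = B0 + B1 * x[1]
    return E_CH * np.array([E_X * x[0] + p[1] * bz / MASS, E_Y * x[1] - p[0] * bz / MASS])

def flight(p, x, dt, n_steps=50):
    """Forward free flight over the Lorentz-force trajectory, Eqs. (7)-(8), via RK4."""
    h = dt / n_steps
    for _ in range(n_steps):
        k1p, k1x = force(p, x), p / MASS
        k2p, k2x = force(p + 0.5 * h * k1p, x + 0.5 * h * k1x), (p + 0.5 * h * k1p) / MASS
        k3p, k3x = force(p + 0.5 * h * k2p, x + 0.5 * h * k2x), (p + 0.5 * h * k2p) / MASS
        k4p, k4x = force(p + h * k3p, x + h * k3x), (p + h * k3p) / MASS
        p = p + h / 6 * (k1p + 2 * k2p + 2 * k3p + k4p)
        x = x + h / 6 * (k1x + 2 * k2x + 2 * k3x + k4x)
    return p, x

def f_w0(p, x):
    """Assumed initial Wigner function (Gaussian, for illustration)."""
    return np.exp(-np.sum(p**2) / (2 * SIG_P**2) - np.sum(x**2) / (2 * SIG_X**2))

def sample_initial():
    p, x = rng.normal(scale=SIG_P, size=2), rng.normal(scale=SIG_X, size=2)
    return p, x, f_w0(p, x) / ((2 * np.pi) ** 2 * SIG_P**2 * SIG_X**2)

def forward_estimate(observable, trajectories=20_000):
    """Monte Carlo estimate of <A>(T) following the forward algorithm (steps 1-9)."""
    acc = 0.0
    for _ in range(trajectories):
        p, x, dens = sample_initial()                   # step 2
        weight = f_w0(p, x) / dens                      # f_w0/P factor of step 7, applied up front
        t = 0.0
        while True:
            dt = rng.exponential(1.0 / GAMMA)           # step 3: next scattering time
            if t + dt > T_END:
                p, x = flight(p, x, T_END - t)          # step 6: final free flight
                break
            p, x = flight(p, x, dt)                     # step 4: free flight to the event
            k = rng.choice(len(STENCIL), p=PROBS)       # step 5: scattering channel
            (ix, iy), (jx, jy), alpha = STENCIL[k]
            weight *= ABS_ALPHA * np.sign(alpha)        # step 5: weight update
            p = p - np.array([ix, iy]) * DP
            x = x - np.array([jx, jy]) * DX
            t += dt
        acc += weight * observable(p, x)                # step 7: contribution at time T
    return acc / trajectories

if __name__ == "__main__":
    print(forward_estimate(lambda p, x: np.sum(p**2) / (2 * MASS)))  # mean kinetic energy at T
```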
## 4 Discussion
The two introduced Monte Carlo algorithms constitute an important step in understanding gauge-invariant Wigner theory using classical Boltzmann concepts. The choice of linear electromagnetic fields ensures the appearance of the same Liouville operator in both transport descriptions and thus provides a convenient reference frame for insights into the quantum evolution in terms of particles. The two algorithms are derived by the application of established Monte Carlo approaches for integrating the backward or forward form of the gauge-invariant Wigner equation. In the former case, the algorithm is more formal as the evolution proceeds backward in time. It offers computational advantages when the solution is needed locally in the phase space. Furthermore, it allows us to gradually introduce concepts used in the forward algorithm, which completes the particle picture conjectured at the end of the previous section. The quantum evolution resembles to a large extent the evolution of classical Boltzmann particles. An ensemble of particles is initialized in both cases according to the initial condition. Particles are accelerated by the Lorentz force over Newtonian trajectories and interrupted by scattering events. Comparing both algorithms reveals the proper interpretation of the distribution of the scattering times. In the backward algorithm, the scattering times were chosen uniformly distributed on the interval between the beginning of the evaluation and the previous scattering event. As a result, the scattering events tend to be unevenly distributed throughout the evolution time. The distribution density of the scattering events is inversely proportional to the length of the time intervals \((0,t_{i})_{i\in\{1,\ldots,n-1\}}\), and is thus higher at \(0\) and lower toward \(T\). In the forward algorithm, the scattering events are evenly distributed on the interval \([0,T]\), due to the exponential distribution. This manifests in the joint probability density, which corresponds exactly to the prefactor of the terms in the resolvent expansion. As for the weights of the statistical sum of the physical quantity, their absolute value is multiplied by \(|\alpha|\) for every scattering event. This factor corresponds to the weighted amount of possible directions the particle could scatter. Also, the sign of the weights can change during the scattering, depending on the sign of the corresponding coefficient \(\alpha_{\mathbf{ij}}\) in the kernel.
These considerations can be summarized as follows: The distribution of scattering times is given by the formally introduced quantity \(\gamma\), (10), which now has been provided with a physical meaning of a total out-scattering rate in a striking analogy with the classical counterpart. Similarly to the latter, \(\gamma\) is given by the sum of the quantities \(|\alpha_{\mathbf{ij}}|\), which corresponds to the probability for scattering from different classical mechanisms such as phonons and impurities. The difference is that the terms \(\alpha_{\mathbf{ij}}\) carry a sign, so that each scattering event can change both the absolute value of the weight and the sign, which are the main attributes of a quantum particle. Indeed, in this way scattering determines the difference between classical and quantum evolution, as discussed before. Furthermore, while in the former case, scattering is local in space, causing only a shift in momentum, quantum scattering leads to spatial shifts. These shifts depend on the finite difference scheme, however, this is irrelevant to the conceptual understanding: Similarly, considering computational approaches, different numerical schemes can be applied to find the numerical solution.
The introduction of the Newtonian trajectory enables us to transform the gauge-invariant Wigner equation to a Fredholm integral equation, where a resolvent expansion gives an iterative solution. However, this involves the approximation of the high-order derivative term leading to many terms in the kernel. This consequently
increases the number of possible paths of the trajectory, giving rise to an accumulation of the trajectory weight during the evolution. Large positive and negative weight values need to cancel each other in the statistical estimators for the physical averages. Thus, the maximum simulation time \(T\) is limited, because the larger \(T\), the higher the impact of the terms with a higher number of scattering events. From a computational point of view, this leads to the well-known 'sign problem' of quantum mechanics. A good example is the Taylor series of \(e^{-x}\) for large positive \(x\), where large terms compensate each other to give a value smaller than unity. The problem can be addressed by using the Markovian character of the evolution of the particle ensemble, which, in particular, provides the Wigner solution \(f_{\mathrm{w}}\) in the entire phase space: \(T\) can be decomposed into shorter time intervals \(\Delta t\), so that the solution at the end of the \(n\)-th interval, \(f_{\mathrm{w}}(n\Delta t)\), becomes the initial condition for the \((n+1)\)-th interval.
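The severity of this cancellation is easy to demonstrate with the Taylor-series example mentioned above: in double precision, the naively summed series for \(e^{-x}\) at \(x=30\) is wrong by many orders of magnitude, because individual terms of size \(\sim 10^{11}\) must cancel down to a result of size \(\sim 10^{-13}\).

```python
import math

def exp_taylor(x, n_terms=200):
    """Partial sum of the Taylor series of e^x, accumulated naively in double precision."""
    total, term = 0.0, 1.0
    for n in range(n_terms):
        total += term
        term *= x / (n + 1)
    return total

x = -30.0
naive = exp_taylor(x)                                  # large alternating terms cancel catastrophically
print(f"naive series : {naive: .3e}")                  # wrong by many orders of magnitude
print(f"math.exp     : {math.exp(x): .3e}")
print(f"1/series(+30): {1.0 / exp_taylor(-x): .3e}")   # reorganizing the sum avoids the cancellation
```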
This research was funded by the Austrian Science Fund (FWF): P33609-N and P37080-N.
|
2309.00891 | On semi-classical limit of spatially homogeneous quantum Boltzmann
equation: asymptotic expansion | We continue our previous work [Ling-Bing He, Xuguang Lu and Mario Pulvirenti,
Comm. Math. Phys., 386(2021), no. 1, 143223.] on the limit of the spatially
homogeneous quantum Boltzmann equation as the Planck constant $\epsilon$ tends
to zero, also known as the semi-classical limit. For general interaction
potential, we prove the following: (i). The spatially homogeneous quantum
Boltzmann equations are locally well-posed in some weighted Sobolev spaces with
quantitative estimates uniformly in $\epsilon$. (ii). The semi-classical limit
can be further described by the following asymptotic expansion formula: $$
f^\epsilon(t,v)=f_L(t,v)+O(\epsilon^{\vartheta}).$$ This holds locally in time
in Sobolev spaces. Here $f^\epsilon$ and $f_L$ are solutions to the quantum
Boltzmann equation and the Fokker-Planck-Landau equation with the same initial
data.The convergent rate $0<\vartheta \leq 1$ depends on the integrability of
the Fourier transform of the particle interaction potential. Our new
ingredients lie in a detailed analysis of the Uehling-Uhlenbeck operator from
both angular cutoff and non-cutoff perspectives. | Ling-Bing He, Xuguang Lu, Mario Pulvirenti, Yu-Long Zhou | 2023-09-02T10:10:10Z | http://arxiv.org/abs/2309.00891v1 | # On semi-classical limit of spatially homogeneous quantum Boltzmann equation: asymptotic expansion
###### Abstract.
We continue our previous work [16] on the limit of the spatially homogeneous quantum Boltzmann equation as the Planck constant \(\epsilon\) tends to zero, also known as the semi-classical limit. For general interaction potential, we prove the following: (i). The spatially homogeneous quantum Boltzmann equations are locally well-posed in some weighted Sobolev spaces with quantitative estimates uniformly in \(\epsilon\). (ii). The semi-classical limit can be further described by the following asymptotic expansion formula:
\[f^{\epsilon}(t,v)=f_{L}(t,v)+O(\epsilon^{\vartheta}).\]
This holds locally in time in Sobolev spaces. Here \(f^{\epsilon}\) and \(f_{L}\) are solutions to the quantum Boltzmann equation and the Fokker-Planck-Landau equation with the same initial data. The convergent rate \(0<\vartheta\leq 1\) depends on the integrability of the Fourier transform of the particle interaction potential. Our new ingredients lie in a detailed analysis of the Uehling-Uhlenbeck operator from both angular cutoff and non-cutoff perspectives.
###### Contents
* 1 Introduction
* 2 Analysis of Uehling-Uhlenbeck operator
* 3 Uniform upper bounds in weighted Sobolev space
* 4 Well-posedness and propagation of regularity
* 5 Asymptotic formula
## 1. Introduction
The quantum Boltzmann equations for Fermi-Dirac and Bose-Einstein statistics proposed by Uehling and Uhlenbeck in [26] (after Nordheim [23]) should be derived from the evolution of real Fermions and Bosons in the so called weak-coupling limit (see [3] and [5]). Since Fokker-Planck-Landau equation is the effective equation associated with a dense and weakly interacting gas of classical particles (see [25], [8]), it is not surprising that the semi-classical limits of the solutions to quantum Boltzmann equations are expected to be solutions to the Fokker-Planck-Landau equation.
The weak convergence of the limit is justified in the paper [16]. The main purpose of this article is to provide a detailed asymptotic expansion formula to describe the semi-classical limit in the classical sense.
### Setting of the problem
The Cauchy problem of the spatially homogeneous quantum Boltzmann equation reads
\[\partial_{t}f=Q^{\epsilon}_{UU}(f),\quad f|_{t=0}=f_{0}, \tag{1.1}\]
which describes time evolution of the gas given initial datum \(f_{0}\). Here the solution \(f=f(t,v)\geq 0\) is the density of the gas. The Uehling-Uhlenbeck operator \(Q^{\epsilon}_{UU}\) in the weakly coupled regime is defined by
\[Q^{\epsilon}_{UU}(f)=\int_{\mathbb{R}^{3}\times\mathbb{S}^{2}}B^{\epsilon}(|v-v_{*}|,\cos\theta)\big{(}f^{\prime}_{*}f^{\prime}(1\pm\epsilon^{3}f_{*})(1\pm\epsilon^{3}f)-f_{*}f(1\pm\epsilon^{3}f^{\prime})(1\pm\epsilon^{3}f^{\prime}_{*})\big{)}\mathrm{d}\sigma\mathrm{d}v_{*}, \tag{1.2}\]
where
\[B^{\epsilon}(|v-v_{*}|,\cos\theta):=\epsilon^{-4}|v-v_{*}|\left(\hat{\phi} \left(\epsilon^{-1}|v-v_{*}|\sin(\theta/2)\right)\pm\hat{\phi}\left(\epsilon^ {-1}|v-v_{*}|\cos(\theta/2)\right)\right)^{2}. \tag{1.3}\]
\(\bullet\)_Some explanation on the model_. On the derivation of (1.2) and (1.3) in the weak-coupling limit, we refer to [5, 6, 7, 12]. To make (1.2) and (1.3) clear, we have the following remarks:
1. The parameter \(\epsilon\) is the Planck constant \(\hbar\). Note that for simplicity, we have already dropped the factor \(2\pi\) that appeared in [16]. As our goal is to study the semi-classical limit, we always assume \(0<\epsilon<1\).
2. The sign \("+"\) and the sign \("-"\) correspond to Bose-Einstein particles and Fermi-Dirac particles respectively.
3. The real-valued function \(\hat{\phi}\) is the Fourier transform of the particle interaction potential \(\phi(|x|)\).
4. The deviation angle \(\theta\) is defined through \(\cos\theta:=\frac{v-v_{*}}{|v-v_{*}|}\cdot\sigma\). Thanks to the symmetric property of the collision kernel, we can assume that \(\theta\in[0,\pi/2]\).
5. We use the standard shorthand \(h=h(v)\), \(g_{*}=g(v_{*})\), \(h^{\prime}=h(v^{\prime})\), \(g_{*}^{\prime}=g(v_{*}^{\prime})\) where \(v^{\prime}\), \(v_{*}^{\prime}\) are given by \[v^{\prime}=\frac{v+v_{*}}{2}+\frac{|v-v_{*}|}{2}\sigma,\quad v_{*}^{\prime}= \frac{v+v_{*}}{2}-\frac{|v-v_{*}|}{2}\sigma,\quad\sigma\in\mathbb{S}^{2}\,.\]
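For orientation, the snippet below evaluates the kernel (1.3) for an assumed Gaussian profile \(\hat{\phi}(r)=e^{-r^{2}}\) (the paper does not fix a specific \(\hat{\phi}\); this choice is purely illustrative) and shows how the angular weight concentrates at small deviation angles as \(\epsilon\to 0\), i.e., the grazing-collision behaviour behind the Landau limit.

```python
import numpy as np

def phi_hat(r):
    """Assumed radial profile of the Fourier transform of the potential (illustration only)."""
    return np.exp(-r**2)

def kernel(rel_speed, theta, eps, sign=+1.0):
    """B^eps(|v - v_*|, cos(theta)) from Eq. (1.3); sign = +1 (Bose) or -1 (Fermi)."""
    a = phi_hat(rel_speed * np.sin(theta / 2.0) / eps)
    b = phi_hat(rel_speed * np.cos(theta / 2.0) / eps)
    return rel_speed * (a + sign * b) ** 2 / eps**4

theta = np.linspace(1e-3, np.pi / 2, 2000)             # uniform grid on [0, pi/2]
for eps in (1.0, 0.5, 0.1):
    w = kernel(1.0, theta, eps) * np.sin(theta)         # solid-angle weight ~ sin(theta)
    mean_theta = np.sum(theta * w) / np.sum(w)          # weighted mean deviation angle
    print(f"eps = {eps:4.2f}: mean deviation angle ~ {mean_theta:.3f} rad")
```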
\(\bullet\) _Basic assumptions on the potential function \(\phi\)._ For \(a\geq 0\), let
\[I_{a}:=\int_{0}^{\infty}\hat{\phi}^{2}(r)r^{a}\mathrm{d}r,\quad I_{a}^{\prime }:=\int_{0}^{\infty}|r(\hat{\phi})^{\prime}(r)|^{2}r^{a}\mathrm{d}r. \tag{1.4}\]
Our basic assumptions on \(\hat{\phi}\) are
* **(A1)** \(I_{0}+I_{3}+I_{3}^{\prime}<\infty\).
* **(A2)** \(I_{3+\vartheta}+I_{3+\vartheta}^{\prime}<\infty\) for some \(\vartheta\in(0,1]\).
Several remarks are in order:
1. The condition \(I_{0}<\infty\) in **(A1)** is used to bound \(\int_{\mathbb{S}^{2}}B^{\epsilon}(|v-v_{*}|,\cos\theta)\mathrm{d}\sigma<\epsilon^{-3}I_{0}\), see (2.18) for details. This is the key point in proving the _global existence_ of the mild solution for the Fermi-Dirac particles. In the weak coupling regime (1.3), finiteness of the \(\sigma\)-integral holds even for some inverse power law potentials. Indeed, taking \(\phi(|x|)=|x|^{-p}(0<p<3)\), one can check that \(\int_{\mathbb{S}^{2}}B^{\epsilon}(|v-v_{*}|,\cos\theta)\mathrm{d}\sigma\) is finite for \(p>2\) and infinite for \(p\leq 2\), see [27] for more details on this. For the infinite case, one may need some angular cutoff. The condition \(I_{0}<\infty\) is reminiscent of Grad's angular cutoff assumption for inverse power law potentials in the low density regime, which allows one to separate the Boltzmann operator into gain and loss terms. From now on, we will refer to such mathematical treatment as the "angular cutoff" view. However, such treatment is not enough, since the upper bound blows up as \(\epsilon\to 0\).
2. \(I_{3}\) is derived by computing the momentum transfer, which is defined as follows: (1.5) \[M_{o}^{\epsilon}(|v-v_{*}|):=\int_{\mathbb{S}^{2}}B^{\epsilon}(|v-v_{*}|,\cos \theta)(1-\cos\theta)\mathrm{d}\sigma.\] We will show later that \(M_{o}^{\epsilon}\sim I_{3}|v-v_{*}|^{-3}\) when \(\epsilon\) is sufficiently small. This is also related to the diffusion coefficient of the Fokker-Planck-Landau collision operator (see (1.7) and (1.8)). The condition \(I_{3}<\infty\) is reminiscent of angular non-cutoff kernels for inverse power law potentials, as one always relies on an additional factor of order \(\theta^{2}\) to deal with the angular singularity. From now on, we will refer to such a mathematical treatment as the "angular non-cutoff" view.
3. The condition \(I_{3}+I_{3}^{\prime}<\infty\) in **(A1)** allows us to derive the cancellation lemma from the angular non-cutoff point of view. It plays an essential role in obtaining the uniform-in-\(\epsilon\) estimates.
4. Assumption **(A1)** is used to prove uniform-in-\(\epsilon\) local well-posedness and propagation of regularity. To get the asymptotic expansion for the semi-classical limit, technically we need assumption **(A2)**.
5. We do not impose any point-wise condition on \(\hat{\phi}\). All the constants in this article depend on \(\phi\) only through the quantities in **(A1)** and **(A2)**.
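To illustrate the claim in item (1), here is a quick heuristic check (a sketch, using the standard fact that the Fourier transform of \(\phi(|x|)=|x|^{-p}\) with \(0<p<3\) behaves like \(\hat{\phi}(r)\sim r^{p-3}\)): at fixed \(|v-v_{*}|\), the possible divergence of the \(\sigma\)-integral comes from small deviation angles, where
\[B^{\epsilon}(|v-v_{*}|,\cos\theta)\sim\epsilon^{-4}|v-v_{*}|\hat{\phi}^{2}\left(\epsilon^{-1}|v-v_{*}|\sin(\theta/2)\right)\sim C_{\epsilon,|v-v_{*}|}\,\theta^{2(p-3)},\quad\mathrm{d}\sigma\sim 2\pi\theta\mathrm{d}\theta,\]
so that \(\int_{\mathbb{S}^{2}}B^{\epsilon}\mathrm{d}\sigma\) converges near \(\theta=0\) if and only if \(2(p-3)+1>-1\), that is, \(p>2\).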
By a formal computation (see [4]), the solution of the Cauchy problem (1.1) converges to that of the Fokker-Planck-Landau equation
\[\partial_{t}f=Q_{L}(f,f),\quad f|_{t=0}=f_{0}. \tag{1.6}\]
Here the Fokker-Planck-Landau operator \(Q_{L}\) reads
\[Q_{L}(g,h)(v):=\nabla\cdot\int_{\mathbb{R}^{3}}a(v-v_{*})\{g(v_{*})\nabla h(v) -\nabla g(v_{*})h(v)\}\,\mathrm{d}v_{*}, \tag{1.7}\]
where \(a\) is a symmetric, positive semi-definite matrix-valued function. It depends on the interaction potential between particles, and is defined by (for \(i,j=1,2,3\))
\[a_{ij}(z)=2\pi I_{3}|z|^{-1}\,\Pi_{ij}(z),\quad\Pi_{ij}(z)=\delta_{ij}-\frac{z _{i}z_{j}}{|z|^{2}}, \tag{1.8}\]
where \(I_{3}\) is defined in (1.4).
Our goal is to study the semi-classical limit from (1.1) to (1.6) in some weighted Sobolev space. To do that, we separate our proof into two parts: well-posedness results for (1.1) with uniform-in-\(\epsilon\) estimates and the asymptotic expansion formula with explicit error estimates.
### Main results
Before introducing the main results, we list some facts on the notations.
\(\bullet\) As usual, \(a\lesssim b\) is used to denote that there is a universal constant \(C\) such that \(a\leq Cb\). The notation \(a\sim b\) means \(a\lesssim b\) and \(b\lesssim a\). We denote by \(C(\lambda_{1},\lambda_{2},\cdots,\lambda_{n})\) or \(C_{\lambda_{1},\lambda_{2},\cdots,\lambda_{n}}\) some constant depending on parameters \(\lambda_{1},\lambda_{2},\cdots,\lambda_{n}\). The notation \(a\lesssim_{\lambda_{1},\lambda_{2},\cdots,\lambda_{n}}b\) is interpreted as \(a\leq C_{\lambda_{1},\lambda_{2},\cdots,\lambda_{n}}b\).
\(\bullet\) We recall the \(L^{p}\) space for \(1\leq p\leq\infty\) through the norm
\[\|f\|_{L^{p}}:=\left(\int_{\mathbb{R}^{3}}|f(v)|^{p}\mathrm{d}v\right)^{1/p} \text{ for }1\leq p<\infty;\quad\|f\|_{L^{\infty}}:=\operatorname*{ess\,sup}_{v\in \mathbb{R}^{3}}|f(v)|.\]
Denote the weight function by \(W_{l}(v):=(1+|v|^{2})^{\frac{l}{2}}\) for \(l\in\mathbb{R}\) and write \(W=W_{1}\) for simplicity. Then the weighted \(L^{p}_{l}\) space is defined through the norm \(\|f\|_{L^{p}_{l}}:=\|W_{l}f\|_{L^{p}}\). We denote the multi-index \(\alpha=(\alpha_{1},\alpha_{2},\alpha_{3})\in\mathbb{N}^{3}\) with \(|\alpha|=\alpha_{1}+\alpha_{2}+\alpha_{3}\). For derivatives up to order \(N\in\mathbb{N}\), the weighted Sobolev space \(W^{N,p}_{l}\) on \(\mathbb{R}^{3}\) with \(p\in[1,\infty],l\in\mathbb{R}\) is defined through the following norm
\[\|f\|_{W^{N,p}_{l}}:=\sum_{|\alpha|\leq N}\|\partial^{\alpha}f\|_{L^{p}_{l}}.\]
If \(p=2\), denote by \(H^{N}_{l}\) the Hilbert space with \(\|f\|_{H^{N}_{l}}=\|f\|_{W^{N,2}_{l}}\).
\(\bullet\) For simplicity, for \(T>0\), let \(\mathcal{A}_{T}:=L^{\infty}([0,T];L^{1}(\mathbb{R}^{3}))\) associated with the norm \(\|f\|_{T}:=\sup_{0\leq t\leq T}\|f(t)\|_{L^{1}}\). Let \(\mathcal{A}_{\infty}:=L^{\infty}([0,\infty);L^{1}(\mathbb{R}^{3}))\) associated with the norm \(\|f\|_{\infty}:=\sup_{t\geq 0}\|f(t)\|_{L^{1}}\).
\(\bullet\) Given a non-negative initial datum \(f_{0}\in L^{1}(\mathbb{R}^{3})\cap L^{\infty}(\mathbb{R}^{3})\), we consider the initial value problem (1.1) for Bose-Einstein particles with \(0<\epsilon<1\) and for Fermi-Dirac particles with \(0<\epsilon\leq\min\{1,\|f_{0}\|_{L^{\infty}}^{-1/3}\}\). We set \(\|f_{0}\|_{L^{\infty}}^{-1/3}=\infty\) in the case \(\|f_{0}\|_{L^{\infty}}=0\), where the problem trivially has the zero solution.
\(\bullet\) For simplicity, we will use the shorthand \(\int(\cdots)\mathrm{d}V=\int_{v,v_{*}\in\mathbb{R}^{3},\sigma\in\mathbb{S}^{2}, (v-v_{*})\cdot\sigma\geq 0}(\cdots)\mathrm{d}v\mathrm{d}v_{*}\mathrm{d}\sigma\). We drop integration domain in most of the integrals if there is no confusion.
Next we introduce the definition of mild solution to (1.1).
**Definition 1.1**.: _For \(T>0\), set \(\mathcal{A}_{T}:=L^{\infty}([0,T];L^{1}(\mathbb{R}^{3})\cap L^{\infty}(\mathbb{R}^{3}))\). A measurable non-negative function \(f\in\mathcal{A}_{T}\) on \([0,T]\times\mathbb{R}^{3}\) is called a local (if \(T<\infty\)) or global (if \(T=\infty\)) mild solution of the initial value problem (1.1) if there is a null set \(Z\subset\mathbb{R}^{3}\) such that, for all \(t\in[0,T]\) and \(v\in\mathbb{R}^{3}\setminus Z\),_
\[f(t,v)=f_{0}(v)+\int_{0}^{t}Q^{\epsilon}_{UU}(f)(\tau,v)\mathrm{d}\tau,\]
_and additionally for Fermi-Dirac particles, it holds that_
\[\|f(t)\|_{L^{\infty}}\leq\epsilon^{-3}. \tag{1.9}\]
Our first result is the global well-posedness and propagation of regularity of the Cauchy problem (1.1) for Fermi-Dirac particles.
**Theorem 1.1** (Fermi-Dirac particles).: _Let \(\hat{\phi}\) verify **(A1)** and \(0\leq f_{0}\in L^{1}\cap L^{\infty}\). Suppose that \(0<\epsilon\leq\min\{1,\|f_{0}\|_{L^{\infty}}^{-1/3}\}\)._
1. **(Global well-posedness)** _The Cauchy problem (_1.1_) for Fermi-Dirac particles admits a unique global mild solution_ \(f^{\epsilon}\)_._
2. **(Propagation of regularity uniformly in_ \(\epsilon\)_)** _If_ \(f_{0}\in H^{N}_{l}\) _for_ \(N,l\geq 2\)_, there exists a lifespan_ \(T^{*}=T^{*}(N,l,\phi,\|f_{0}\|_{H^{N}_{l}})>0\) _independent of_ \(\epsilon\)_, such that the family of solution_ \(\{f^{\epsilon}\}_{\epsilon}\) _is uniformly bounded in_ \(L^{\infty}([0,T^{*}];H^{N}_{l})\cap C([0,T^{*}];H^{N-2}_{l})\)_. More precisely, uniformly in_ \(\epsilon\)_,_ (1.10) \[\sup_{t\in[0,T^{*}]}\|f^{\epsilon}(t)\|_{H^{N}_{l}}\leq 2\|f_{0}\|_{H^{N}_{l}},\] _and for_ \(0\leq t_{1}\leq t_{2}\leq T^{*}\)_,_ (1.11) \[\|f^{\epsilon}(t_{2})-f^{\epsilon}(t_{1})\|_{H^{N-2}_{l}}\leq C(N,l,\phi,\|f_{0} \|_{H^{N}_{l}})(\|f_{0}\|_{H^{N}_{l}}^{2}+\|f_{0}\|_{H^{N}_{l}}^{3})(t_{2}-t_{1}).\]
Our second result is the local well-posedness and propagation of regularity \(H^{N}_{l}\) of the Cauchy problem (1.1) for Bose-Einstein particles.
**Theorem 1.2** (Bose-Einstein particles).: _Let \(\hat{\phi}\) verify **(A1)**. Let \(0\leq f_{0}\in H^{N}_{l}\) for \(N,l\geq 2\). Then for any \(0<\epsilon<1\), the Cauchy problem (1.1) for Bose-Einstein particles admits a unique local mild solution \(f^{\epsilon}\in L^{\infty}([0,T^{*}];H^{N}_{l})\cap C([0,T^{*}];H^{N-2}_{l})\), where \(T^{*}=T^{*}(N,l,\phi,\|f_{0}\|_{H^{N}_{l}})>0\) is independent of \(0<\epsilon<1\). Moreover, the family of solutions \(\{f^{\epsilon}\}_{0<\epsilon<1}\) satisfies (1.10) and (1.11) uniformly in \(\epsilon\)._
**Remark 1.1**.: _Note that (1.11) implies (1.1) holds in the space \(H^{N-2}_{l}\) for almost all \(t\in[0,T]\). Thanks to the weak convergence result in [16], similar local well-posedness also holds for the Landau equation (1.6) with estimates (1.10) and (1.11)._
Our last result is on the asymptotic expansion for the semi-classical limit.
**Theorem 1.3** (Semi-classical limit with convergence rate).: _Let \(0\leq N\in\mathbb{N},2\leq l\in\mathbb{R}\). Suppose that (i). \(\hat{\phi}\) satisfies **(A1)** and **(A2)**; (ii). \(0\leq f_{0}\in H^{N+3}_{l+5}\); (iii). For Fermi-Dirac particles, \(0<\epsilon\leq\min\{1,\|f_{0}\|_{L^{\infty}}^{-1/3}\}\). Let \(f^{\epsilon}\) and \(f_{L}\) be the solutions to (1.1) and (1.6) respectively with the initial datum \(f_{0}\) on \([0,T^{*}]\) where \(T^{*}=T^{*}(N,l,\phi,\|f_{0}\|_{H^{N+3}_{l+5}})\) given in Theorem 1.1 and 1.2. Then for \(t\in[0,T^{*}]\), it holds that_
\[f^{\epsilon}(t,v)=f_{L}(t,v)+\epsilon^{\vartheta}R^{\epsilon}(t,v), \tag{1.12}\]
_where_
\[\sup_{t\in[0,T^{*}]}\|R^{\epsilon}\|_{H^{N}_{l}}\leq C(\|f_{0}\|_{H^{N+3}_{l+ 5}};N,l,\phi). \tag{1.13}\]
Some comments on these results are in order:
**Remark 1.2**.: _Owing to the fact that Fermi-Dirac particles enjoy the \(L^{\infty}\) upper bound (1.9), indeed, we can prove the global propagation of regularity with the quantitative estimates as follows:_
* **(Global propagation of regularity)** _If_ \(f_{0}\in L^{2}_{l}\) _for_ \(l\geq 2\)_, then for any_ \(t\geq 0\)_,_ (1.14) \[\|f^{\epsilon}(t)\|_{L^{2}_{l}}\leq\|f_{0}\|_{L^{2}_{l}}\exp\left(tC_{\epsilon, \phi,l}(\|f_{0}\|_{L^{1}}+\epsilon^{-3})\right).\] _If_ \(f_{0}\in L^{1}_{l}\cap L^{\infty}_{l}\cap H^{1}_{l}\) _for_ \(l\geq 2\)_, then for any_ \(t\geq 0\)_,_ (1.15) \[\|f^{\epsilon}(t)\|_{L^{1}_{l}\cap L^{\infty}_{l}\cap H^{1}_{l}}\leq C(\|f_{0} \|_{L^{1}_{l}\cap L^{\infty}_{l}\cap H^{1}_{l}},t;\epsilon,\phi,l).\] _If_ \(f_{0}\in W^{1,1}_{l}\cap W^{1,\infty}_{l}\cap H^{N}_{l}\) _for_ \(N,l\geq 2\)_, then for any_ \(t\geq 0\)_,_ (1.16) \[\|f^{\epsilon}(t)\|_{W^{1,1}_{l}\cap W^{1,\infty}_{l}\cap H^{N}_{l}}\leq C(\| f_{0}\|_{W^{1,1}_{l}\cap W^{1,\infty}_{l}\cap H^{N}_{l}},t;\epsilon,\phi,l,N).\]
_We cannot expect similar results for Bose-Einstein particles because of the Bose-Einstein condensation phenomenon. The proofs of these propagation results are based on the fact that derivatives and weights can be suitably distributed across the non-linear terms such that the targeted high-order norm grows at most linearly, under the premise that the lower-order norms are already propagated globally. Since these results deviate somewhat from the main purpose of this article, their proofs are omitted for brevity._
**Remark 1.3**.: _The propagation of regularity uniformly in \(\epsilon\) in equation (1.10) and the temporal continuity described in equation (1.11) are applicable to both Fermi-Dirac and Bose-Einstein particles. This enables us to delve deeper into exploring the semi-classical limit as presented in Theorem 1.3._
**Remark 1.4**.: _To get the upper bound of the error term \(R^{\epsilon}\) uniformly in \(\epsilon\), it is compulsory to impose high regularity on the solutions \(f^{\epsilon}\) and \(f_{L}\) since we need to estimate the error between \(Q^{\epsilon}_{UU}(f^{\epsilon})\) and \(Q_{L}(f_{L})\), see (5.1) for the error equation and Lemma 5.1 for the main estimate._
**Remark 1.5**.: _We emphasize that the asymptotic expansion (1.12) is sharp. This can be easily seen from the proof of Theorem 1.3. Roughly speaking, to get the factor \(\epsilon^{\vartheta}\), we need to kill the singularity which behaves like the Riesz potential \(|x|^{-2-\vartheta}\). Obviously this singularity can be removed in the case \(\vartheta<1\). For the borderline case \(\vartheta=1\), we further check that the corresponding part in fact behaves as \(\frac{K(x)}{|x|^{3}}\), which is the kernel of a typical Calderon-Zygmund operator. From this point of view, the expansion \(f^{\epsilon}=f_{L}+O(\epsilon)\) is sharp for any smooth potential function \(\phi\)._
**Remark 1.6**.: _The asymptotic formula (1.12) holds locally in time for any initial data in \(H^{N+3}_{l+5}\). We may expect large or even global-in-time result for some special initial data, considering the recent progress on global well-posedness in the homogeneous [18] and inhomogeneous [2, 17, 24, 27] case._
### Short review
The quantum Boltzmann equation has been widely investigated by many authors. In this subsection, we first give a short review of the existing results. Then we explain the main difficulty in the problem of the semi-classical limit.
In most of the literature on the quantum Boltzmann equation, the authors usually take \(\epsilon=1\) in the definition of Uehling-Uhlenbeck operator (1.2). In this situation, for Fermi-Dirac particles, we have the a priori bound for the solution \(f\), that is, \(f\leq 1\). We refer readers to [19, 21] for the existence result. For Bose-Einstein particles, we refer readers to [20, 9] for the existence of measure solution and the local well-posedness in weighted \(L^{\infty}\) spaces. For the Bose-Einstein condensation at low temperature, we refer to [13, 14] and also [10, 18, 22] for the recent progress.
As for semi-classical limit of (1.1), in [4], Benedetto and Pulvirenti proved the convergence of the operator. More precisely, for a suitable class of integrable functions \(f\) and any Schwartz function \(\varphi\),
\[\lim_{\epsilon\to 0}\langle Q^{\epsilon}_{UU}(f),\varphi\rangle=\langle Q_{L}(f,f),\varphi\rangle. \tag{1.17}\]
The notation \(\langle f,g\rangle:=\int_{\mathbb{R}^{3}}f(v)g(v)\mathrm{d}v\) is used to denote the inner product for \(v\) variable. In [16], under some assumptions on \(\hat{\phi}\), the following results are proved: (1). Starting from the Eq.(Fermi-Dirac), up to subsequences, the _isotropic weak solution_ to Eq.(Fermi-Dirac) will converge to the _isotropic weak solution_ to Eq.(Fokker-Planck-Landau); (2). Starting from the Eq.(Bose-Einstein), up to subsequences, the _measure-valued isotropic weak solution_ to Eq.(Bose-Einstein) will converge to the _measure-valued isotropic weak solution_ to Eq.(Fokker-Planck-Landau). Here _isotropic solution_ means that the solution \(f(t,v)\) is a radial function with respect to \(v\), that is, \(f(t,v)=f(t,|v|)\). To achieve these results, the main idea is to reformulate the equations in the isotropic sense and then make full use of the cancellation hidden in the cubic terms.
### Difficulties, strategies and new ideas
The main difficulty is induced by the singular scaling factor in the Uehling-Uhlenbeck operator (1.2). One may attempt to use some normalization technique to deal with the parameter \(\epsilon\). For instance, if \(\tilde{f}(t,v):=\epsilon^{3}f(\epsilon^{6}t,\epsilon v)\), one can easily verify that \(\tilde{f}\) is a solution to the following equation:
\[\partial_{t}\tilde{f}=\int_{\mathbb{R}^{3}\times\,\mathbb{S}^{2}}B^{1}(|v-v_ {*}|,\cos\theta)\big{(}\tilde{f}^{\prime}_{*}\tilde{f}^{\prime}(1\pm\tilde{f} _{*})(1\pm\tilde{f})-\tilde{f}_{*}\tilde{f}(1\pm\tilde{f}^{\prime}_{*})(1\pm \tilde{f}^{\prime})\big{)}\mathrm{d}\sigma\mathrm{d}v_{*}. \tag{1.18}\]
Now (1.1) is reduced to (1.18) with the initial datum \(\tilde{f}|_{t=0}:=\epsilon^{3}f_{0}(\epsilon v)\). The good side is that the equation (1.18) itself contains no \(\epsilon\). However, the bad side is that the initial datum \(\tilde{f}|_{t=0}\) becomes arbitrarily large in weighted Sobolev spaces when \(\epsilon\) is sufficiently small. It is challenging to establish a uniform lifespan for nonlinear equations like (1.18) starting from arbitrarily large initial data. Usually, the lifespan vanishes as the initial data blow up. Therefore, we will directly consider (1.1).
Let us explain our strategy from the analysis of the collision operator. It is easy to see that we can decompose \(Q^{\epsilon}_{UU}\) into two parts:
\[Q^{\epsilon}_{UU}(f)=Q(f,f)+R(f,f,f), \tag{1.19}\]
where \(Q(f,f)\) contains the quadratic terms and \(R(f,f,f)\) contains the cubic terms. More precisely,
\[Q(g,h):=\int_{\mathbb{R}^{3}\times\,\mathbb{S}^{2}}B^{\epsilon}(|v-v_{*}|, \cos\theta)(g^{\prime}_{*}h^{\prime}-g_{*}h)\mathrm{d}\sigma\mathrm{d}v_{*}= \sum_{i=1}^{3}Q_{i}(g,h); \tag{1.20}\]
\[R(g,h,\rho):=\pm\epsilon^{3}\int_{\mathbb{R}^{3}\times\,\mathbb{S}^{2}}B^{ \epsilon}(|v-v_{*}|,\cos\theta)(g^{\prime}_{*}h^{\prime}(\rho+\rho_{*})-g_{*} h(\rho^{\prime}+\rho^{\prime}_{*}))\mathrm{d}\sigma\mathrm{d}v_{*}. \tag{1.21}\]
Here in (1.20) for \(i=1,2,3\), \(Q_{i}\) is defined by
\[Q_{i}(g,h):=\int_{\mathbb{R}^{3}\times\,\mathbb{S}^{2}}B^{\epsilon}_{i}(|v-v_{ *}|,\cos\theta)(g^{\prime}_{*}h^{\prime}-g_{*}h)\mathrm{d}\sigma\mathrm{d}v_{*},\]
where \(B^{\epsilon}_{i}\) is defined by
\[B^{\epsilon}_{1}(|v-v_{*}|,\cos\theta) := \epsilon^{-4}|v-v_{*}|\hat{\phi}^{2}\left(\epsilon^{-1}|v-v_{*}| \sin(\theta/2)\right), \tag{1.22}\] \[B^{\epsilon}_{2}(|v-v_{*}|,\cos\theta) := \pm 2\epsilon^{-4}|v-v_{*}|\hat{\phi}\left(\epsilon^{-1}|v-v_{*}| \sin(\theta/2)\right)\hat{\phi}\left(\epsilon^{-1}|v-v_{*}|\cos(\theta/2) \right), \tag{1.23}\] \[B^{\epsilon}_{3}(|v-v_{*}|,\cos\theta) := \epsilon^{-4}|v-v_{*}|\hat{\phi}^{2}\left(\epsilon^{-1}|v-v_{*}| \cos(\theta/2)\right). \tag{1.24}\]
**Remark 1.7**.: _Note that \(B^{\epsilon}=B^{\epsilon}_{1}+B^{\epsilon}_{2}+B^{\epsilon}_{3}\) and \(|B^{\epsilon}_{2}|=2\sqrt{B^{\epsilon}_{1}B^{\epsilon}_{3}}\). The divergence in \(B^{\epsilon}_{1}\) arises from the following physical reason: the intensity of the collisions increases together with the effective domain of \(\hat{\phi}^{2}(\cdot)\) as the scattering angle vanishes. More precisely, \(\epsilon^{-1}|v-v_{*}|\sin(\theta/2)\in[0,\epsilon^{-1}|v-v_{*}|\sqrt{2}/2]\) contains the dominant part of \(\hat{\phi}^{2}(\cdot)\) as \(\epsilon\to 0\). As for \(B_{3}^{\epsilon}\), the argument \(\epsilon^{-1}|v-v_{*}|\cos(\theta/2)\in[\epsilon^{-1}|v-v_{*}|\sqrt{2}/2,\epsilon ^{-1}|v-v_{*}|]\) goes to infinity and plays a minor role as \(\epsilon\to 0\) because of the integrability conditions **(A1)** and **(A2)** on \(\hat{\phi}^{2}\). In a word, in the limiting process \(\epsilon\to 0\), \(B_{1}^{\epsilon}\) is the dominant part. If \(B^{\epsilon}\) were replaced by \(B_{1}^{\epsilon}\) and the cubic terms were ignored, we would expect the same limiting behavior. See also Remark 1.1 in [16] for another treatment of the kernel and relevant discussions._
In what follows, we will show that \(B_{1}^{\epsilon}\) and \(B_{3}^{\epsilon}\) should be treated in a different manner. Roughly speaking, we will treat \(B_{1}^{\epsilon}\) from the non-cutoff view and \(B_{3}^{\epsilon}\) from the cutoff view. This can be seen easily by the following computations.
(i). It is not difficult to derive that
\[\int_{\mathbb{S}^{2}}B_{3}^{\epsilon}(|v-v_{*}|,\cos\theta)\mathrm{d}\sigma \sim I_{3}|v-v_{*}|^{-3},\]
where we use the facts that \(\cos(\theta/2)\sim 1\) and the change of variables from \(\cos(\theta/2)\) to \(r:=\epsilon^{-1}|v-v_{*}|\cos(\theta/2)\). The strong singularity in the relative velocity, \(|v-v_{*}|^{-3}\), is consistent with the Landau collision operator (1.7). The singularity can be removed by sacrificing a small amount of regularity of the solution.
(ii). Again by the similar calculation, we have
\[\int_{\mathbb{S}^{2}}B_{1}^{\epsilon}(|v-v_{*}|,\cos\theta)\mathrm{d}\sigma \sim\epsilon^{-2}I_{1}|v-v_{*}|^{-1}.\]
To kill the singular factor \(\epsilon^{-2}\), we resort to the momentum transfer (1.5). Technically, if we expand \(f(v^{\prime})-f(v)\) by Taylor's formula up to second order, then we arrive at (see (2.8) for details)
\[\langle Q_{1}(g,h),f\rangle \sim \int B_{1}^{\epsilon}(|v-v_{*}|,\cos\theta)g_{*}h\bigg{(}(v^{ \prime}-v)\cdot\nabla_{v}f+(v^{\prime}-v)\otimes(v^{\prime}-v):\nabla_{v}^{2} f\bigg{)}\mathrm{d}\sigma dv_{*}dv\] \[\sim I_{3}\int|g_{*}h|(|\nabla f||v-v_{*}|^{-2}+|\nabla^{2}f||v-v_{* }|^{-1})dv_{*}dv.\]
On the one hand, this suggests that if we aim to obtain a uniform-in-\(\epsilon\) estimate, we should deal with \(Q_{1}\) from the non-cutoff view. However, on the other hand, this approach results in a loss of derivatives, particularly for the solution.
Now we are in a position to state our main strategy and the key ideas to overcome the difficulties. The strategy can be outlined in three steps.
_Step 1._ We construct a local mild solution in \(L^{1}\cap L^{\infty}\) space via the contraction mapping theorem. Here the lifespan \(T_{*}\) depends heavily on the parameter \(\epsilon\) since we deal with (1.1) from the angular cutoff view. For Fermi-Dirac particles, we get the propagation of the \(L^{\infty}\) upper bound \(f(t)\leq\epsilon^{-3}\) for any \(t\in[0,T_{*}]\).
_Step 2._ We prove the propagation of the regularity uniformly in \(\epsilon\) in weighted Sobolev spaces. This is motivated by the fact (1.17). We expect that the Uehling-Uhlenbeck operator (1.2) will behave like a diffusive operator when \(\epsilon\) is sufficiently small. Thus the \(L^{2}\) framework to prove the propagation of regularity is reasonable. To implement the main idea, we develop some tools as follows.
\(\bullet\)Explicit formula for the change of variable. As stated before, to kill the singularity, we will use the Taylor expansion of \(f(v^{\prime})-f(v)\) up to second order. Technically we will meet the intermediate points \(\kappa(v)=\kappa v^{\prime}+(1-\kappa)v,\iota(v_{*})=\iota v_{*}^{\prime}+(1-\iota)v_{*}\) where \(\kappa,\iota\in[0,1]\). As a result, the changes of variables \(v\to\kappa(v)\) and \(v_{*}\to\iota(v_{*})\) are compulsory. Since our kernel is not of the factorized form \(\Phi(|v-v_{*}|)b(\cos\theta)\) in the relative velocity and the deviation angle, a rough treatment is not enough. For this reason, we explicitly compute the changes of variables \(v\to\kappa(v)\) and \(v_{*}\to\iota(v_{*})\) in Lemma 2.2 and carefully use them in Lemma 2.4 for \(B_{3}^{\epsilon}\) and Lemma 2.7 for \(B_{1}^{\epsilon}\).
\(\bullet\)Coercivity estimate and the cancellation lemma. As explained before, the operator \(Q_{1}\) is supposed to produce dissipation. Indeed, if \(f\geq 0\),
\[\langle Q_{1}(f,\partial^{\alpha}f),\partial^{\alpha}f\rangle \tag{1.25}\] \[= -\frac{1}{2}\int B_{1}^{\epsilon}f_{*}((\partial^{\alpha}f)^{ \prime}-(\partial^{\alpha}f))^{2}\mathrm{d}v\mathrm{d}v_{*}\mathrm{d}\sigma+ \frac{1}{2}\int B_{1}^{\epsilon}f_{*}(((\partial^{\alpha}f)^{2})^{\prime}-( \partial^{\alpha}f)^{2})\mathrm{d}v\mathrm{d}v_{*}\mathrm{d}\sigma\] (1.26) \[\leq \frac{1}{2}\int B_{1}^{\epsilon}f_{*}(((\partial^{\alpha}f)^{2})^{ \prime}-(\partial^{\alpha}f)^{2})\mathrm{d}v\mathrm{d}v_{*}\mathrm{d}\sigma.\]
The first term in (1.25) is non-positive and corresponds to the coercivity of the operator. Unfortunately, since we consider a general interaction potential, we are unable to obtain an explicit description of the coercivity, which is closely related to the one from the Landau collision operator. As a result, we only make full use of the sign. To treat the second term in (1.25), we establish the cancellation lemma (see Lemma 2.6 for details) to balance the loss of the derivative.
\(\bullet\)Estimate the collision operators from the cutoff and non-cutoff perspectives. To estimate the collision operator \(Q_{UU}\) in the weighted Sobolev spaces \(H^{N}_{l}\), we first point out that \(Q_{1},Q_{2},Q_{3}\) and \(R\) behave quite differently and each of them has its own difficulty. Moreover, since the kernels \(B^{\epsilon}_{i}\) (where \(i=1,2,3\)) cannot be expressed in the product form \(\Phi(|v-v_{*}|)b(\cos\theta)\), numerous technical difficulties arise in the analysis. Our main approach is based on the integration of two distinct perspectives: the cutoff view and the non-cutoff view. These enable us to balance the regularity to get the uniform-in-\(\epsilon\) estimates. We refer readers to Sect. 2 and Sect. 3 for details.
\(\bullet\)Integration by parts formulas for the penultimate order terms. Since we have no explicit description of the dissipation mechanism, the main obstruction to proving the propagation of regularity uniformly in \(\epsilon\) lies in the estimates for the penultimate order terms. To bound the penultimate order terms, we have to sacrifice regularity to kill the singular factor. To balance the regularity, we borrow the idea from [11] to establish the integration by parts formulas (see Lemma 3.4 for details).
_Step 3._ According to the Sobolev embedding theorem, the uniform-in-\(\epsilon\) estimate indicates that the \(L^{\infty}\) upper bound of the solution can be controlled by the Sobolev norm of the initial data. This in particular invokes the continuity argument to extend the lifespan \(T_{*}\) to be \(O(1)\), independent of \(\epsilon\).
### Organization of the paper
Section 2 and Section 3 aim to obtain a precise energy estimate of \(Q^{\epsilon}_{UU}\) in the space \(H^{N}_{l}\) through a comprehensive analysis of the bi-linear operators \(Q_{1},Q_{2},Q_{3}\), and the tri-linear operator \(R\). In Section 4, we prove the results in Theorems 1.1 and 1.2 that hold uniformly in \(\epsilon\). Finally, Section 5 contains the proof of Theorem 1.3.
## 2. Analysis of Uehling-Uhlenbeck operator
In this section, we will examine the upper bounds of \(Q\) and \(R\), and investigate the commutator estimates between these operators and the weight function \(W_{l}\). The operators will be considered from both the angular cutoff and the angular non-cutoff perspectives.
### Some elementary facts
In this subsection, we will introduce some fundamental formulas that are commonly employed in the analysis of the Boltzmann operator. These formulas are particularly useful for studying the Boltzmann operator from the angular non-cutoff perspective.
#### 2.1.1. Taylor expansion
When evaluating the difference \(f^{\prime}-f\) (or \(f^{\prime}_{*}-f_{*}\)) before and after collision, various Taylor expansions are often used. We first introduce the order-1 expansion as
\[f^{\prime}-f=\int_{0}^{1}(\nabla f)(\kappa(v))\cdot(v^{\prime}-v)\mathrm{d} \kappa,\quad f^{\prime}_{*}-f_{*}=\int_{0}^{1}(\nabla f)(\iota(v_{*}))\cdot(v ^{\prime}_{*}-v_{*})\mathrm{d}\iota, \tag{2.1}\]
where for \(\kappa,\iota\in[0,1]\), the intermediate points are defined as
\[\kappa(v)=\kappa v^{\prime}+(1-\kappa)v,\quad\iota(v_{*})=\iota v^{\prime}_{ *}+(1-\iota)v_{*}. \tag{2.2}\]
Observing that \(|v^{\prime}-v|=|v^{\prime}_{*}-v_{*}|=|v-v_{*}|\sin\frac{\theta}{2}\), we have
\[f^{\prime}-f\sim C(\nabla f)\theta;\quad f^{\prime}_{*}-f_{*}\sim C(\nabla f)\theta.\]
As we emphasize in the introduction, expansion of \(f^{\prime}-f\) up to the second order is compulsory. We have
\[f^{\prime}-f =(\nabla f)(v)\cdot(v^{\prime}-v)+\int_{0}^{1}(1-\kappa)(\nabla^ {2}f)(\kappa(v)):(v^{\prime}-v)\otimes(v^{\prime}-v)\mathrm{d}\kappa; \tag{2.3}\] \[f^{\prime}-f =(\nabla f)(v^{\prime})\cdot(v^{\prime}-v)-\int_{0}^{1}\kappa( \nabla^{2}f)(\kappa(v)):(v^{\prime}-v)\otimes(v^{\prime}-v)\mathrm{d}\kappa. \tag{2.4}\]
Thanks to the symmetry property of the \(\sigma\)-integral, the first terms in the formulas can be computed as follows
\[\int B(|v-v_{*}|,\frac{v-v_{*}}{|v-v_{*}|}\cdot\sigma)(v^{\prime} -v)\mathrm{d}\sigma=\int B(|v-v_{*}|,\frac{v-v_{*}}{|v-v_{*}|}\cdot\sigma)\sin^ {2}\frac{\theta}{2}(v_{*}-v)\mathrm{d}\sigma, \tag{2.5}\] \[\int B(|v-v_{*}|,\frac{v-v_{*}}{|v-v_{*}|}\cdot\sigma)(v^{\prime} -v)h(v^{\prime})\mathrm{d}\sigma\mathrm{d}v=0. \tag{2.6}\]
We remark that the formula (2.5) holds for fixed \(v,v_{*}\) and (2.6) holds for fixed \(v_{*}\). Therefore, (2.3) and (2.5) lead to \(O(\theta^{2})\) for the quantity \(\int Bg_{*}h(f^{\prime}-f)\mathrm{d}\sigma\mathrm{d}v_{*}\mathrm{d}v\); so do (2.4) and (2.6) for \(\int Bg_{*}h^{\prime}(f^{\prime}-f)\mathrm{d}\sigma\mathrm{d}v_{*}\mathrm{d}v\).
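For the reader's convenience, here is a short way to see the first identity (2.5): writing \(\hat{k}:=(v-v_{*})/|v-v_{*}|\), one has \(v^{\prime}-v=\frac{|v-v_{*}|}{2}(\sigma-\hat{k})\). Since the kernel depends on \(\sigma\) only through \(\cos\theta=\hat{k}\cdot\sigma\), the component of \(\int B\sigma\mathrm{d}\sigma\) orthogonal to \(\hat{k}\) vanishes by rotational symmetry, so that \(\int B\sigma\mathrm{d}\sigma=(\int B\cos\theta\mathrm{d}\sigma)\hat{k}\). Hence
\[\int B(v^{\prime}-v)\mathrm{d}\sigma=\frac{|v-v_{*}|}{2}\Big{(}\int B(\cos\theta-1)\mathrm{d}\sigma\Big{)}\hat{k}=\int B\sin^{2}\frac{\theta}{2}\,(v_{*}-v)\mathrm{d}\sigma,\]
using \(1-\cos\theta=2\sin^{2}\frac{\theta}{2}\).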
#### 2.1.2. Momentum transfer
We claim that the kernels \(B_{i}^{\epsilon}\) defined in (1.22)-(1.24) satisfy the estimate:
\[\int B^{\epsilon}\sin^{2}\frac{\theta}{2}\mathrm{d}\sigma\leq\int(B_{1}^{ \epsilon}+|B_{2}^{\epsilon}|+B_{3}^{\epsilon})\sin^{2}\frac{\theta}{2}\mathrm{ d}\sigma\lesssim I_{3}|v-v_{*}|^{-3}. \tag{2.7}\]
Indeed, for \(B_{1}^{\epsilon}\), using the change of variable \(r=\epsilon^{-1}|v-v_{*}|\sin(\theta/2)\), we have
\[\int B_{1}^{\epsilon}\sin^{2}\frac{\theta}{2}\mathrm{d}\sigma=8\pi\int_{0}^{ \pi/2}\epsilon^{-4}|v-v_{*}|\sin^{3}(\theta/2)\hat{\phi}^{2}\left(\epsilon^{- 1}|v-v_{*}|\sin(\theta/2)\right)\mathrm{d}\sin(\theta/2) \tag{2.8}\]
\[=8\pi\int_{0}^{2^{-1/2}}\epsilon^{-4}\hat{\phi}^{2}\left(\epsilon^{-1}|v-v_{* }|t\right)t^{3}\mathrm{d}t=8\pi|v-v_{*}|^{-3}\int_{0}^{2^{-1/2}\epsilon^{-1}|v -v_{*}|}\hat{\phi}^{2}(r)r^{3}\mathrm{d}r\leq 8\pi I_{3}|v-v_{*}|^{-3}.\]
For \(B_{3}^{\epsilon}\), using the fact that \(\sqrt{2}/2\leq\cos\frac{\theta}{2}\) for \(0\leq\theta\leq\pi/2\) and the change of variable \(r=\epsilon^{-1}|v-v_{*}|\cos(\theta/2)\), we can similarly get that \(\int B_{3}^{\epsilon}\sin^{2}\frac{\theta}{2}\mathrm{d}\sigma\leq 8\pi I_{3}|v-v_ {*}|^{-3}\). For \(B_{2}^{\epsilon}\), the desired result follows from the fact that \(|B_{2}^{\epsilon}|\leq B_{1}^{\epsilon}+B_{3}^{\epsilon}\).
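For completeness, the \(B_{3}^{\epsilon}\) computation can be written out in the same style as (2.8): with \(u=\cos(\theta/2)\) and using \(\sin^{2}(\theta/2)=1-u^{2}\leq u^{2}\) for \(u\geq\sqrt{2}/2\),
\[\int B_{3}^{\epsilon}\sin^{2}\frac{\theta}{2}\mathrm{d}\sigma=8\pi\epsilon^{-4}|v-v_{*}|\int_{\frac{\sqrt{2}}{2}}^{1}(1-u^{2})u\,\hat{\phi}^{2}(\epsilon^{-1}|v-v_{*}|u)\mathrm{d}u\leq 8\pi|v-v_{*}|^{-3}\int_{\frac{\epsilon^{-1}|v-v_{*}|}{\sqrt{2}}}^{\epsilon^{-1}|v-v_{*}|}\hat{\phi}^{2}(r)r^{3}\mathrm{d}r\leq 8\pi I_{3}|v-v_{*}|^{-3}.\]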
#### 2.1.3. Estimates for the Riesz potentials
We list the following lemma without proof.
**Lemma 2.1**.: _It holds that_
\[\int|g_{*}hf||v-v_{*}|^{-1}\mathrm{d}v\mathrm{d}v_{*}\lesssim\|g\|_{L^{1} \cap L^{2}}\|h\|_{L^{2}}\|f\|_{L^{2}}. \tag{2.9}\]
_Let \(\delta>0,s_{1},s_{2},s_{3}\geq 0,s_{1}+s_{2}+s_{3}=\frac{1}{2}+\delta\), then_
\[\int|g_{*}hf||v-v_{*}|^{-2}\mathrm{d}v\mathrm{d}v_{*}\lesssim_{\delta}\|g\|_{ H^{s_{1}}}\|h\|_{H^{s_{2}}}\|f\|_{H^{s_{3}}}. \tag{2.10}\]
_Let \(\delta>0,s_{1},s_{2}\geq 0,s_{1}+s_{2}=\frac{1}{2}+\delta\), then_
\[\int|v-v_{*}|^{-1}|g_{*}|^{2}|h|^{2}\mathrm{d}v\mathrm{d}v_{*}\lesssim_{\delta }\|g\|_{H^{s_{1}}}^{2}\|h\|_{H^{s_{2}}}^{2}. \tag{2.11}\]
### A change of variable
In order to deal with the intermediate variables \(\kappa(v)\) and \(\iota(v_{*})\) defined in (2.2), we derive a useful formula involving the change of variable \(v\to\kappa(v)\) and \(v_{*}\to\iota(v_{*})\). It is quite important for the estimates of the integrals involving the kernels \(B_{i}^{\epsilon}(i=1,2,3)\).
**Lemma 2.2**.: _For \(\kappa\in[0,2]\), let us define_
\[\psi_{\kappa}(\theta):=(\cos^{2}\frac{\theta}{2}+(1-\kappa)^{2}\sin^{2}\frac {\theta}{2})^{-1/2}. \tag{2.12}\]
_For any \(0\leq\kappa\leq 1,v_{*}\in\mathbb{R}^{3}\), it holds that_
\[\int_{\mathbb{R}^{3}}\int_{\mathbb{S}^{2}_{+}}B(|v-v_{*}|,\cos\theta)f(\kappa (v))\mathrm{d}v\mathrm{d}\sigma=\int_{\mathbb{R}^{3}}\int_{\mathbb{S}^{2}_{+} }B(|v-v_{*}|\psi_{\kappa}(\theta),\cos\theta)f(v)\psi_{\kappa}^{3}(\theta) \mathrm{d}v\mathrm{d}\sigma. \tag{2.13}\]
_Here \(\mathbb{S}^{2}_{+}:=\{\sigma\in\mathbb{S}^{2}\,|\,(v-v_{*})\cdot\sigma\geq 0\}\). For any \(0\leq\kappa,\iota\leq 1\), it holds that_
\[\int_{\mathbb{R}^{3}}\int_{\mathbb{R}^{3}}\int_{\mathbb{S}^{2}_{+ }}B(|v-v_{*}|,\cos\theta)g(\iota(v_{*}))f(\kappa(v))\mathrm{d}v\mathrm{d}v_{*} \mathrm{d}\sigma\] \[= \int_{\mathbb{R}^{3}}\int_{\mathbb{R}^{3}}\int_{\mathbb{S}^{2}_{+ }}B(|v-v_{*}|\psi_{\kappa+\iota}(\theta),\cos\theta)g(v_{*})f(v)\psi_{\kappa+ \iota}^{3}(\theta)\mathrm{d}v\mathrm{d}v_{*}\mathrm{d}\sigma. \tag{2.14}\]
Proof.: Recalling (2.2), we set \(\cos\beta_{\kappa}:=\sigma\cdot(\kappa(v)-v_{*})/|\kappa(v)-v_{*}|\). To express \(\beta_{\kappa}\) in terms of \(\theta\), we notice that \(\kappa(v)-v_{*}=v^{\prime}-v_{*}+(\kappa-1)(v^{\prime}-v)\), which implies that
\[|\kappa(v)-v_{*}|^{2}=|v^{\prime}-v_{*}|^{2}+(\kappa-1)^{2}|v^{\prime}-v|^ {2}=(\cos^{2}\frac{\theta}{2}+(1-\kappa)^{2}\sin^{2}\frac{\theta}{2})|v-v_{*}|^ {2}=\psi_{\kappa}^{-2}(\theta)|v-v_{*}|^{2},\]
where we used that \(v^{\prime}-v_{*}\) and \(v^{\prime}-v\) are orthogonal, \(|v^{\prime}-v_{*}|=|v-v_{*}|\cos\frac{\theta}{2}\) and \(|v^{\prime}-v|=|v-v_{*}|\sin\frac{\theta}{2}\).
From this, together with the fact that \((\kappa(v)-v_{*})\cdot\sigma=\left(\cos^{2}\frac{\theta}{2}+(\kappa-1)\sin^{2} \frac{\theta}{2}\right)|v-v_{*}|\), we have
\[\cos\beta_{\kappa}=\frac{\cos^{2}\frac{\theta}{2}+(\kappa-1)\sin^{2}\frac{ \theta}{2}}{\left(\cos^{2}\frac{\theta}{2}+(1-\kappa)^{2}\sin^{2}\frac{\theta}{2 }\right)^{1/2}}=\varphi_{\kappa}(\sin\frac{\theta}{2}),\]
where \(\varphi_{\kappa}(x)=\frac{1-x^{2}+(\kappa-1)x^{2}}{(1-x^{2}+(1-\kappa)^{2}x^{2 })^{1/2}}.\) The above relation yields that the map \(\theta\in[0,\frac{\pi}{2}]\mapsto\beta_{\kappa}\in[0,\delta_{\kappa}]\) is a bijection, where \(\delta_{\kappa}:=\arccos(\frac{\sqrt{2}}{2}\frac{\kappa}{\sqrt{1+(1-\kappa)^{2}}})\).
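A short verification of this claim: \(\varphi_{\kappa}\) is non-increasing on \([0,\sqrt{2}/2]\), and evaluating at the endpoints gives \(\cos\beta_{\kappa}|_{\theta=0}=\varphi_{\kappa}(0)=1\) and
\[\cos\beta_{\kappa}\big{|}_{\theta=\pi/2}=\varphi_{\kappa}\Big{(}\frac{\sqrt{2}}{2}\Big{)}=\frac{\kappa/2}{\big{(}(1+(1-\kappa)^{2})/2\big{)}^{1/2}}=\frac{\sqrt{2}}{2}\frac{\kappa}{\sqrt{1+(1-\kappa)^{2}}},\]
which is precisely the value defining \(\delta_{\kappa}\).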
Now we are in a position to prove (2.13). By the fact that
\[\det(\frac{\partial\kappa(v)}{\partial v})=(1-\frac{\kappa}{2})^{2}\left((1-\frac{\kappa }{2})+\frac{\kappa}{2}\cos\theta\right):=\alpha_{\kappa}(\theta), \tag{2.15}\]
we get that
\[\int_{\mathbb{R}^{3}}\int_{\mathbb{S}^{2}_{+}}B(|v-v_{*}|,\cos\theta)f(\kappa(v ))\mathrm{d}v\mathrm{d}\sigma=2\pi\int_{\mathbb{R}^{3}}\int_{0}^{\delta_{ \kappa}}B(|v-v_{*}|\psi_{\kappa}(\theta),\cos\theta)f(v)\alpha_{\kappa}^{-1}( \theta)\sin\beta_{\kappa}\mathrm{d}v\mathrm{d}\beta_{\kappa}.\]
Then the desired result follows from the computation
\[\sin\beta_{\kappa}\mathrm{d}\beta_{\kappa}=-\mathrm{d}\cos\beta_{\kappa}=- \frac{1}{4}\varphi_{\kappa}^{\prime}(\sin\frac{\theta}{2})\sin^{-1}\frac{ \theta}{2}\sin\theta\mathrm{d}\theta,\quad-\frac{1}{4}\varphi_{\kappa}^{ \prime}(\sin\frac{\theta}{2})\sin^{-1}\frac{\theta}{2}\alpha_{\kappa}^{-1}( \theta)=\psi_{\kappa}^{3}(\theta).\]
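For completeness, the second identity above can be checked directly. Writing \(x=\sin\frac{\theta}{2}\), a direct computation from the formula for \(\varphi_{\kappa}\) gives
\[\varphi_{\kappa}^{\prime}(x)=-(2-\kappa)^{2}x(1-\kappa x^{2})(1-\kappa(2-\kappa)x^{2})^{-3/2},\]
while \(\alpha_{\kappa}(\theta)=\frac{(2-\kappa)^{2}}{4}(1-\kappa x^{2})\) by (2.15) and \(\psi_{\kappa}(\theta)=(1-\kappa(2-\kappa)x^{2})^{-1/2}\) by (2.12). Hence
\[-\frac{1}{4}\varphi_{\kappa}^{\prime}(x)x^{-1}\alpha_{\kappa}^{-1}(\theta)=(1-\kappa(2-\kappa)x^{2})^{-3/2}=\psi_{\kappa}^{3}(\theta).\]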
As for (2.14), the case \(\kappa=\iota=1\) is obviously given by the natural change of variable \((v,v_{*},\sigma)\to(v^{\prime},v_{*}^{\prime},\sigma^{\prime})\) where \(\sigma^{\prime}=(v-v_{*})/|v-v_{*}|\). If \(\kappa+\iota<2\), we can similarly repeat the above derivation with \(\kappa\) replaced by \(\kappa+\iota\). Indeed, one can derive
\[\det(\frac{\partial(\kappa(v),\iota(v_{*}))}{\partial(v,v_{*})})=\alpha_{ \kappa+\iota}(\theta),\quad|v-v_{*}|=|\kappa(v)-\iota(v_{*})|\psi_{\kappa+ \iota}(\theta).\]
Let \(\beta_{\kappa+\iota}\) be the angle between \(\kappa(v)-\iota(v_{*})\) and \(\sigma\), then \(\cos\beta_{\kappa+\iota}=\varphi_{\kappa+\iota}(\sin\frac{\theta}{2})\). If \(\kappa+\iota<2\), then \(\delta_{\kappa+\iota}>0\) and the function: \(\theta\in[0,\frac{\pi}{2}]\to\beta_{\kappa+\iota}\in[0,\delta_{\kappa+\iota}]\) is a bijection. These facts are enough to obtain (2.14) for \(\kappa+\iota<2\).
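As a quick sanity check of (2.13): when \(\kappa=0\) one has \(\kappa(v)=v\) and \(\psi_{0}(\theta)\equiv 1\) by (2.12), so both sides of (2.13) coincide trivially; when \(\kappa=1\) one has \(\kappa(v)=v^{\prime}\) and \(\psi_{1}(\theta)=\cos^{-1}(\theta/2)\), and (2.13) reduces to the classical change of variables \(v\to v^{\prime}\) (for fixed \(v_{*}\)) used, for instance, in the proof of the cancellation lemma below.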
### Integrals involving \(B_{3}^{\epsilon}\)
We first derive the upper bound of the integrals involving \(B_{3}^{\epsilon}\) from the cutoff perspective. We remark that in this situation the estimates depend on \(\epsilon\).
**Lemma 2.3**.: _Let \(a\geq 0\), then_
\[\int B_{3}^{\epsilon}|v-v_{*}|^{a}|g_{*}h|\mathrm{d}V\leq 8\pi(\sqrt{2})^{(a-1) _{+}}\epsilon^{a-3}I_{a}\|g\|_{L^{1}}\|h\|_{L^{1}}. \tag{2.16}\]
Proof.: As \(\sqrt{2}/2\leq\cos(\theta/2)\leq 1\), we have
\[\int|z|^{a}B_{3}^{\epsilon}(z,\sigma)\mathrm{d}\sigma=8\pi\epsilon ^{-4}|z|^{a+1}\int_{0}^{\pi/2}\hat{\phi}^{2}(\epsilon^{-1}|z|\cos\frac{\theta} {2})\cos\frac{\theta}{2}\mathrm{d}\cos\frac{\theta}{2}\] \[\leq 8\pi(\sqrt{2})^{(a-1)_{+}}\epsilon^{-4}|z|^{a+1}\int_{0}^{ \pi/2}\hat{\phi}^{2}(\epsilon^{-1}|z|\cos\frac{\theta}{2})\cos^{a}\frac{ \theta}{2}\mathrm{d}\cos\frac{\theta}{2}=8\pi(\sqrt{2})^{(a-1)_{+}}\epsilon^ {a-3}\int_{\epsilon^{-1}|z|/\sqrt{2}}^{\epsilon^{-1}|z|}\hat{\phi}^{2}(t)t^{a} \mathrm{d}t\] \[\leq 8\pi(\sqrt{2})^{(a-1)_{+}}\epsilon^{a-3}I_{a}\lesssim \epsilon^{a-3}\int_{0}^{\infty}\hat{\phi}^{2}(r)r^{a}\mathrm{d}r, \tag{2.17}\]
which implies (2.16).
By taking \(a=0\) and replacing \(\cos\frac{\theta}{2}\) by \(\sin\frac{\theta}{2}\) in (2.17), we have
\[A^{\epsilon}:=\sup_{z\in\mathbb{R}^{3}}\int B^{\epsilon}(z,\sigma)\mathrm{d} \sigma\leq 2\sup_{z\in\mathbb{R}^{3}}\int B_{1}^{\epsilon}(z,\sigma)\mathrm{d} \sigma+2\sup_{z\in\mathbb{R}^{3}}\int B_{3}^{\epsilon}(z,\sigma)\mathrm{d} \sigma\lesssim\epsilon^{-3}I_{0}. \tag{2.18}\]
The above inequality shows that the \(L^{\infty}\)-norms of \(\int B_{1}^{\epsilon}(\cdot,\sigma)\mathrm{d}\sigma\) and \(\int B_{3}^{\epsilon}(\cdot,\sigma)\mathrm{d}\sigma\) are bounded by \(\epsilon^{-3}I_{0}\), which tends to \(\infty\) as \(\epsilon\to 0\). Considering the \(L^{1}\)-norm of \(\int B_{3}^{\epsilon}(\cdot,\sigma)\mathrm{d}\sigma\), we find that it is bounded uniformly in \(\epsilon\) by the following computation:
\[\iint B_{3}^{\epsilon}(z,\sigma)\mathrm{d}\sigma\mathrm{d}z=4\pi\int_{0}^{ \infty}\int_{\mathbb{S}^{2}_{+}}\epsilon^{-4}r^{3}\hat{\phi}^{2}(\epsilon^{-1} r\cos(\theta/2))\mathrm{d}r\mathrm{d}\sigma=4\pi I_{3}\int_{\mathbb{S}^{2}_{+}} \cos^{-4}(\theta/2)\mathrm{d}\sigma=16\pi^{2}I_{3}. \tag{2.19}\]
Here \(\mathbb{S}^{2}_{+}\) stands for \(0\leq\theta\leq\pi/2\).
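For the reader's convenience, the \(\sigma\)-integral appearing in (2.19) can be evaluated explicitly: integrating out the azimuthal angle gives \(\int_{\mathbb{S}^{2}_{+}}F(\theta)\mathrm{d}\sigma=2\pi\int_{0}^{\pi/2}F(\theta)\sin\theta\mathrm{d}\theta\), and since \(\sin\theta\mathrm{d}\theta=-4\cos(\theta/2)\mathrm{d}\cos(\theta/2)\),
\[\int_{\mathbb{S}^{2}_{+}}\cos^{-4}(\theta/2)\mathrm{d}\sigma=8\pi\int_{\frac{\sqrt{2}}{2}}^{1}u^{-3}\mathrm{d}u=4\pi,\]
which accounts for the constant \(16\pi^{2}I_{3}\).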
Based on the above uniform \(L^{1}\) upper bound, we can easily obtain uniform-in-\(\epsilon\) estimates for various integrals involving \(B_{3}^{\epsilon}\) with the change of variables in (2.14).
**Lemma 2.4**.: _Fix \(\kappa\in[0,1]\) and let either \(u=\kappa(v_{*})\) or \(u=\kappa(v)\). Then_
\[\int B_{3}^{\epsilon}|g(u)|\mathrm{d}V\lesssim I_{3}\|g\|_{L^{1}}. \tag{2.20}\]
_As a direct result, fix an integer \(k\geq 2\) and \(\iota_{i},\kappa_{i}\in[0,1]\) for \(1\leq i\leq k\), let \(u_{i}\in\{\iota_{i}(v_{*}),\kappa_{i}(v):1\leq i\leq k\}\) for \(1\leq i\leq k\), then_
\[\int B_{3}^{\epsilon}\prod_{i=1}^{k}|f_{i}(u_{i})|\mathrm{d}V\lesssim I_{3} \prod_{i=1}^{k}\|f_{i}\|_{X_{i}}, \tag{2.21}\]
_where two of the \(X_{i}\) are taken to be the \(L^{2}\)-norm and the others the \(L^{\infty}\)-norm. Let \(0\leq s_{i}<\frac{3}{2}\) for \(1\leq i\leq k\) and \(\sum_{i=1}^{k}s_{i}=\frac{3k}{2}-3\), then_
\[\int B_{3}^{\epsilon}\prod_{i=1}^{k}|f_{i}(u_{i})|\mathrm{d}V\lesssim_{s_{1}, \cdots,s_{k}}I_{3}\prod_{i=1}^{k}\|f_{i}\|_{H^{s_{i}}}, \tag{2.22}\]
Proof.: Applying (2.14), we have \(\int B_{3}^{\epsilon}|g(u)|\mathrm{d}V=\int J_{\kappa,\epsilon}(v-v_{*})|g(v)| \mathrm{d}v\mathrm{d}v_{*}\), where
\[J_{\kappa,\epsilon}(z):=\int_{\mathbb{S}_{+}^{2}}\epsilon^{-4}|z|\psi_{\kappa }^{4}(\theta)\hat{\phi}^{2}(\epsilon^{-1}|z|\psi_{\kappa}(\theta)\cos(\theta/ 2))\mathrm{d}\sigma. \tag{2.23}\]
Similarly to (2.19), it is easy to see that the \(L^{1}\)-norm of \(J_{\kappa,\epsilon}(z)\) is bounded (uniformly in \(\kappa,\epsilon\)) as follows:
\[\|J_{\kappa,\epsilon}\|_{L^{1}} = 4\pi\int_{0}^{\infty}\int_{\mathbb{S}_{+}^{2}}\epsilon^{-4}r^{3 }\psi_{\kappa}^{4}(\theta)\hat{\phi}^{2}(\epsilon^{-1}r\psi_{\kappa}(\theta) \cos(\theta/2))\mathrm{d}r\mathrm{d}\sigma\] \[= 4\pi I_{3}\int_{\mathbb{S}_{+}^{2}}\cos^{-4}(\theta/2)\mathrm{d} \sigma=16\pi^{2}I_{3}, \tag{2.24}\]
which yields (2.20). As a direct result, (2.21) follows easily from Holder's inequality.
To prove (2.22), for \(2\leq p_{i}<\infty\) and \(\sum_{i=1}^{k}p_{i}^{-1}=1\), we have
\[\int B_{3}^{\epsilon}\prod_{i=1}^{k}|f_{i}(u_{i})|\mathrm{d}V\lesssim\prod_{i =1}^{k}\left(\int B_{3}^{\epsilon}|f_{i}(u_{i})|^{p_{i}}\mathrm{d}V\right)^{1/ p_{i}}\lesssim I_{3}\prod_{i=1}^{k}\|f_{i}\|_{L^{p_{i}}}\lesssim_{s_{1}, \cdots,s_{k}}I_{3}\prod_{i=1}^{k}\|f_{i}\|_{H^{s_{i}}},\]
where \(\frac{s_{i}}{3}=\frac{1}{2}-\frac{1}{p_{i}}\) thanks to the Sobolev embedding theorem.
With the estimates in Lemma 2.4, we derive upper bounds of \(Q_{3}\) in weighted Sobolev spaces.
**Proposition 2.1**.: _Let \(l\geq 0,\delta>0\). For \(0\leq s_{1},s_{2},s_{3}\) with \(s_{1}+s_{2}+s_{3}=\frac{3}{2}+\delta\),_
\[|\langle Q_{3}(g,h),W_{l}f\rangle|\lesssim_{l,\delta}I_{3}\|W_{l}g\|_{H^{s_{1} }}\|W_{l}h\|_{H^{s_{2}}}\|f\|_{H^{s_{3}}}. \tag{2.25}\]
Proof.: For \(l\geq 0\) and \(0\leq\iota_{1},\kappa_{1},\iota_{2},\kappa_{2}\leq 1\), it is easy to check that
\[W_{l}(\kappa_{1}(v))+W_{l}(\iota_{1}(v_{*}))\lesssim_{l}W_{l}(\kappa_{2}(v))+ W_{l}(\iota_{2}(v_{*})). \tag{2.26}\]
As a result, we get that
\[|\langle Q_{3}(g,h),W_{l}f\rangle|\lesssim_{l}\int B_{3}^{\epsilon}(|(W_{l}g)_{*}^{ \prime}(W_{l}h)^{\prime}|+|(W_{l}g)_{*}W_{l}h|)|f|\mathrm{d}V. \tag{2.27}\]
Then the desired result follows from (2.22).
By taking \(\delta=\frac{1}{2}\) in Proposition 2.1, we easily close the energy estimate for \(Q_{3}\) in \(H_{l}^{N}\).
**Lemma 2.5**.: _Let \(l\geq 0,N\geq 2\) and \(m=|\alpha|\leq N\). Then_
\[\sum_{\alpha_{1}+\alpha_{2}=\alpha}|\langle Q_{3}(\partial^{\alpha_{1}}g, \partial^{\alpha_{2}}f)W_{l},W_{l}\partial^{\alpha}f\rangle|\lesssim_{N,l}I_{ 3}\|g\|_{H_{l}^{N}}\|f\|_{H_{l}^{N}}^{2}.\]
Proof.: If \(|\alpha_{1}|\geq 2\), then \(|\alpha_{2}|\leq m-2\). Take \(\delta=\frac{1}{2}\) in Proposition 2.1. We take \(s_{1}=s_{3}=0\) and \(s_{2}=2\) in (2.25) to get
\[|\langle Q_{3}(\partial^{\alpha_{1}}g,\partial^{\alpha_{2}}f)W_{l},W_{l} \partial^{\alpha}f\rangle|\lesssim_{l}I_{3}\|\partial^{\alpha_{1}}g\|_{L_{l}^{ 2}}\|\partial^{\alpha_{2}}f\|_{H_{l}^{2}}\|\partial^{\alpha}f\|_{L_{l}^{2}} \lesssim_{l}I_{3}\|g\|_{H_{l}^{m}}\|f\|_{H_{l}^{m}}^{2}.\]
If \(|\alpha_{1}|=1\), then \(|\alpha_{2}|\leq m-1\). Then the desired result follows by taking \(s_{1}=s_{2}=1\) and \(s_{3}=0\) in (2.25). A similar argument can be applied to the case \(|\alpha_{1}|=0\). We complete the proof of the lemma.
Relying on more regularity, we can get the weighted upper bound of \(Q_{3}\) with a small factor \(\epsilon^{\vartheta}\). Such estimates will be used in the last section to derive the asymptotic formula in Theorem 1.3.
**Proposition 2.2**.: _Let \(\vartheta\in[0,1]\), then_
\[|\langle Q_{3}(g,h),W_{l}f\rangle|\lesssim\epsilon^{\vartheta}I_{3+\vartheta}\|W_{l }g\|_{H_{l}^{\frac{3}{2}+\frac{\vartheta}{2}}}\|W_{l}h\|_{H_{l}^{\frac{3}{2}+\frac{ \vartheta}{2}}}\|f\|_{L^{2}}. \tag{2.28}\]
Proof.: Recalling (2.27), by Holder's inequality and the change of variable (2.14) (with \(\kappa=\iota=1\)), we have
\[|\langle Q_{3}(g,h),W_{l}f\rangle| \lesssim\left(\int B_{3}^{\epsilon}|v-v_{*}|^{-\vartheta}|(W_{l}g)_ {*}|^{4}\mathrm{d}V\right)^{1/4}\left(\int B_{3}^{\epsilon}|v-v_{*}|^{- \vartheta}|W_{l}h|^{4}\mathrm{d}V\right)^{1/4}\left(\int B_{3}^{\epsilon}|v-v _{*}|^{\vartheta}|f|^{2}\mathrm{d}V\right)^{1/2}\] \[\leq\|(J_{0,\epsilon}|\cdot|^{-\vartheta})*(W_{l}^{4}g^{4})\|_{L^ {1}}^{1/4}\|(J_{0,\epsilon}|\cdot|^{-\vartheta})*(W_{l}^{4}h^{4})\|_{L^{1}}^{1/4}\|( J_{0,\epsilon}|\cdot|^{\vartheta})*f^{2}\|_{L^{1}}^{1/2},\]
where we use the notation (2.23). Similarly to (2.24), we derive that
\[\|J_{0,\epsilon}|\cdot|^{\vartheta}\|_{L^{1}}=4\pi\int_{0}^{\infty}\int_{ \mathbb{S}_{+}^{2}}\epsilon^{-4}r^{3+\vartheta}\hat{\phi}^{2}(\epsilon^{-1}r \cos(\theta/2))\mathrm{d}r\mathrm{d}\sigma\lesssim\epsilon^{\vartheta}I_{3+ \vartheta}, \tag{2.29}\]
from which, together with Hardy's inequality \(\int|v-v_{*}|^{-2\vartheta}|F(v)|^{4}\mathrm{d}v\lesssim\|F^{2}\|_{H^{ \vartheta}}^{2}\), we get
\[|\langle Q_{3}(g,h),W_{l}f\rangle|\lesssim\epsilon^{\vartheta}I_{3+\vartheta} \|W_{l}^{2}g^{2}\|_{H^{\vartheta}}^{1/2}\|W_{l}^{2}h^{2}\|_{H^{\vartheta}}^{1/ 2}\|f\|_{L^{2}}.\]
Using the fact that \(\|F^{2}\|_{H^{\vartheta}}\lesssim\|F\|_{H^{3/4+\vartheta/2}}^{2}\), we conclude the desired result (2.28).
### Cancellation Lemma
In this subsection, we prove the cancellation lemma for \(Q_{1}\) and \(Q_{2}\) which is used to transfer the regularity from one function to the other.
**Lemma 2.6** (Cancellation Lemma).: _Let \(\delta>0\) and \(a,b,c\geq 0\) verifying that \(a+b+c=\frac{3}{2}+\delta\). For \(i=1,2\) and functions \(g,h,f\), we set \(\mathscr{B}_{i}:=\int B_{i}^{\epsilon}(|v-v_{*}|,\cos\theta)g_{*}((hf)^{\prime}-hf) \mathrm{d}V.\) Then_
_(i). For \(\mathscr{B}_{1}\), it holds that_
\[\mathscr{B}_{1}=\int(J_{\epsilon}*g)(v)h(v)f(v)\mathrm{d}v. \tag{2.30}\]
_where \(J_{\epsilon}(u)=8\pi\int_{\frac{\sqrt{2}}{2}}^{1}\epsilon^{-4}|u|\hat{\phi}^ {2}(\epsilon^{-1}|u|r)r\mathrm{d}r\) and \(\|J_{\epsilon}\|_{L^{1}}=16\pi^{2}I_{3}\). As a result, we have_
\[|\mathscr{B}_{1}|\lesssim_{\delta}I_{3}\|g\|_{H^{a}}\|h\|_{H^{b}}\|f\|_{H^{c}}. \tag{2.31}\]
_(ii). For \(\mathscr{B}_{2}\), it holds that_
\[\mathscr{B}_{2}=\int(K_{\epsilon}*g)(v)h(v)f(v)\mathrm{d}v, \tag{2.32}\]
_where \(K_{\epsilon}(u)=K_{\epsilon,1}(u)+K_{\epsilon,2}(u)\) with_
\[K_{\epsilon,1}(u) = 16\pi\int_{0}^{\frac{\sqrt{2}}{2}}\epsilon^{-4}|u|\hat{\phi}( \epsilon^{-1}|u|r)\left(\hat{\phi}(\epsilon^{-1}|u|)-\hat{\phi}(\epsilon^{-1}| u|\sqrt{1-r^{2}})\right)r\mathrm{d}r, \tag{2.33}\] \[K_{\epsilon,2}(u) = 16\pi\int_{\frac{\sqrt{2}}{2}}^{1}\epsilon^{-4}|u|\hat{\phi}( \epsilon^{-1}|u|r)\hat{\phi}(\epsilon^{-1}|u|)r\mathrm{d}r. \tag{2.34}\]
_Moreover, \(\|K_{\epsilon}\|_{L^{1}}\leq 64\pi^{2}(I_{3}+I_{3}^{\prime})\) which implies that_
\[|\mathscr{B}_{2}|\lesssim_{\delta}(I_{3}+I_{3}^{\prime})\|g\|_{H^{a}}\|h\|_{H^{b}}\|f\|_{H^{c}}. \tag{2.35}\]
_In general, for \(0\leq\vartheta\leq 1\), \(\|K_{\epsilon}|\cdot|^{\vartheta}\|_{L^{1}}\lesssim\epsilon^{\vartheta}(I_{3+ \vartheta}+I_{3+\vartheta}^{\prime})\) and_
\[|\mathscr{B}_{2}|\lesssim_{\delta}\epsilon^{\vartheta}(I_{3+\vartheta}+I_{3+\vartheta}^{ \prime})\|g\|_{H^{\frac{3}{2}+\frac{\vartheta}{2}}}\|h\|_{H^{\frac{3}{2}+\frac{\vartheta}{2}}}\|f\|_{L^{2}}. \tag{2.36}\]
Proof.: We first prove the estimate of \(\mathscr{B}_{1}\). By (2.13), we have
\[\mathscr{B}_{1} = 2\pi\int\big{(}B_{1}^{\epsilon}(\frac{|v-v_{*}|}{\cos(\theta/2)}, \cos\theta)(\cos(\theta/2))^{-3}-B_{1}^{\epsilon}(|v-v_{*}|,\cos\theta)\big{)} g_{*}hf\sin\theta\mathrm{d}\theta\mathrm{d}v_{*}\mathrm{d}v\] \[= 8\pi\int\int_{0}^{\frac{\sqrt{2}}{2}}\epsilon^{-4}|v-v_{*}|\big{[} \hat{\phi}^{2}(\epsilon^{-1}|v-v_{*}|\frac{r}{\sqrt{1-r^{2}}})(1-r^{2})^{-2}- \hat{\phi}^{2}(\epsilon^{-1}|v-v_{*}|r)\big{]}g_{*}hfr\mathrm{d}r\mathrm{d}v_{*} \mathrm{d}v.\]
By the change of variable \(\mathfrak{r}:=\frac{r}{\sqrt{1-r^{2}}}\), which implies \((1-r^{2})^{-2}r\mathrm{d}r=\mathfrak{r}\mathrm{d}\mathfrak{r}\), we get
\[\mathscr{B}_{1}=8\pi\int\int_{\frac{\sqrt{2}}{2}}^{1}\epsilon^{-4}|v-v_{*}|\hat{\phi}^{2}( \epsilon^{-1}|v-v_{*}|r)g_{*}hfr\mathrm{d}r\mathrm{d}v_{*}\mathrm{d}v,\]
which is exactly (2.30). Since \(J_{\epsilon}\) is radial, let \(\mathfrak{r}:=|u|\),
\[\|J_{\epsilon}\|_{L^{1}}=32\pi^{2}\int_{0}^{\infty}\int_{\frac{\sqrt{2}}{2}}^{1} \epsilon^{-4}\mathfrak{r}^{3}\hat{\phi}^{2}(\epsilon^{-1}\mathfrak{r}r)r\mathrm{d}r \mathrm{d}\mathfrak{r}=32\pi^{2}\left(\int_{0}^{\infty}s^{3}\hat{\phi}^{2}(s)\mathrm{d}s \right)\left(\int_{\frac{\sqrt{2}}{2}}^{1}r^{-3}\mathrm{d}r\right)=16\pi^{2}I_{3}.\]
We turn to the estimate of \(\mathscr{B}_{2}\). Following the same argument as above, we can get the formula (2.32) with \(K_{\epsilon}\) being the sum of (2.33) and (2.34). Let us compute the \(L^{1}\)-norms of \(K_{\epsilon,1}\) and \(K_{\epsilon,2}\). Let \(\imath:=|u|\), then
\[\|K_{\epsilon,2}\|_{L^{1}}=64\pi^{2}\int_{0}^{\infty}\left|\int_{ \frac{\sqrt{2}}{2}}^{1}\epsilon^{-4}\imath^{3}\hat{\phi}(\epsilon^{-1}r\imath) \hat{\phi}(\epsilon^{-1}\imath)r\mathrm{d}r\right|\mathrm{d}\imath\] \[\leq 64\pi^{2}\int_{\frac{\sqrt{2}}{2}}^{1}\left(\int_{0}^{ \infty}\epsilon^{-4}\imath^{3}\hat{\phi}^{2}(\epsilon^{-1}r\imath)\mathrm{d}\imath \right)^{1/2}\left(\int_{0}^{\infty}\epsilon^{-4}\imath^{3}\hat{\phi}^{2}( \epsilon^{-1}\imath)\mathrm{d}\imath\right)^{1/2}r\mathrm{d}r\leq 32\pi^{2}I_{3},\]
where for fixed \(r\), we used the change of variables \(\imath\to\epsilon^{-1}r\imath\) and \(\imath\to\epsilon^{-1}\imath\).
To kill the singularity at \(r=0\) in \(K_{\epsilon,1}\), by Taylor expansion, we have
\[K_{\epsilon,1}(u)=16\pi\int_{0}^{\frac{\sqrt{2}}{2}}\int_{0}^{1} \epsilon^{-5}|u|^{2}\hat{\phi}(\epsilon^{-1}|u|r)(\hat{\phi})^{\prime}( \epsilon^{-1}|u|(\tau+(1-\tau)\sqrt{1-r^{2}}))(1-\sqrt{1-r^{2}})r\mathrm{d}r \mathrm{d}\tau,\]
which implies that
\[\|K_{\epsilon,1}\|_{L^{1}}=64\pi^{2}\int_{0}^{\infty}\left|\int_{ 0}^{\frac{\sqrt{2}}{2}}\int_{0}^{1}\epsilon^{-5}\imath^{4}\hat{\phi}( \epsilon^{-1}\imath r)(\hat{\phi})^{\prime}(\epsilon^{-1}\imath(\tau+(1-\tau) \sqrt{1-r^{2}}))(1-\sqrt{1-r^{2}})r\mathrm{d}r\mathrm{d}\tau\right|\mathrm{d}\imath\] \[\leq 64\pi^{2}\int_{0}^{\frac{\sqrt{2}}{2}}\int_{0}^{1}\left(\int_ {0}^{\infty}\epsilon^{-4}\imath^{3}\hat{\phi}^{2}(\epsilon^{-1}r\imath) \mathrm{d}\imath\right)^{1/2}\left(\int_{0}^{\infty}\epsilon^{-6}\imath^{5}|(\hat{ \phi})^{\prime}(\epsilon^{-1}\imath(\tau+(1-\tau)\sqrt{1-r^{2}}))|^{2} \mathrm{d}\imath\right)^{1/2}\] \[\times(1-\sqrt{1-r^{2}})r\mathrm{d}r\mathrm{d}\tau\leq 64\pi^{2} \left(\int_{0}^{\infty}s^{3}\hat{\phi}^{2}(s)\mathrm{d}s\right)^{1/2}\left(\int _{0}^{\infty}s^{5}|(\hat{\phi})^{\prime}(s)|^{2}\mathrm{d}s\right)^{1/2}\int_ {0}^{\frac{\sqrt{2}}{2}}\int_{0}^{1}(1-\sqrt{1-r^{2}})r^{-1}\] \[\times(\tau+(1-\tau)\sqrt{1-r^{2}})^{-3}\mathrm{d}r\mathrm{d}\tau\leq 8 \sqrt{2}\pi^{2}(I_{3}+I_{3}^{\prime}),\]
where the estimates \(1-\sqrt{1-r^{2}}\leq r^{2}/2\), \(\sqrt{2}/2\leq\tau+(1-\tau)\sqrt{1-r^{2}}\leq 1\) are used. Now we have
\[\|K_{\epsilon}\|_{L^{1}}\leq\|K_{\epsilon,1}\|_{L^{1}}+\|K_{\epsilon,2}\|_{L^{ 1}}\leq 8\sqrt{2}\pi^{2}(I_{3}+I_{3}^{\prime})+32\pi^{2}I_{3}\leq 64\pi^{2}(I_{3}+ I_{3}^{\prime}),\]
which gives (2.35). The same argument can be applied to get that \(\|K_{\epsilon}|\cdot|^{\vartheta}\|_{L^{1}}\lesssim\epsilon^{\vartheta}(I_{3+ \vartheta}+I_{3+\vartheta}^{\prime})\), which gives (2.36). We end the proof.
### Integrals involving \(B_{1}^{\epsilon}\)
We shall use (2.14) to give the estimate of the integrals involving \(B_{1}^{\epsilon}\).
**Lemma 2.7**.: _Let \(a,b\in\mathbb{R}\) with \(b\geq 0\). If \(0\leq\kappa,\iota\leq 1\), then_
\[\int B_{1}^{\epsilon}|g(\iota(v_{*}))h(\kappa(v))||v-v_{*}|^{a+b}\sin^{b-1}( \theta/2)\mathrm{d}V\leq(\sqrt{2})^{(a+1)_{+}}8\pi\epsilon^{b-3}I_{b}\int|v-v_{* }|^{a}|g_{*}h|\mathrm{d}v_{*}\mathrm{d}v. \tag{2.37}\]
Proof.: Applying (2.14), we have
\[\int B_{1}^{\epsilon}|g(\iota(v_{*}))h(\kappa(v))||v-v_{*}|^{a+b }\sin^{b-1}(\theta/2)\mathrm{d}V\] \[= \int\epsilon^{-4}|v-v_{*}|^{a+b+1}\psi_{\kappa+\iota}^{a+b+4}( \theta)\sin^{b-1}(\theta/2)\hat{\phi}^{2}\bigg{(}\epsilon^{-1}|v-v_{*}|\psi_{ \kappa+\iota}(\theta)\sin(\theta/2)\bigg{)}|g_{*}h|\mathrm{d}V.\]
Recalling (2.12), if we set \(\imath:=\psi_{\kappa+\iota}(\theta)\sin(\theta/2)\) then \(\mathrm{d}\imath=\psi_{\kappa+\iota}^{3}(\theta)\mathrm{d}\sin(\theta/2)\). Since \(1\leq\psi_{\kappa+\iota}\leq\sqrt{2}\), we have
\[\int\epsilon^{-4}\psi_{\kappa+\iota}^{a+b+4}(\theta)\sin^{b-1}( \theta/2)\hat{\phi}^{2}\bigg{(}\epsilon^{-1}|v-v_{*}|\psi_{\kappa+\iota}( \theta)\sin(\theta/2)\bigg{)}\mathrm{d}\sigma\] \[= 8\pi\int_{0}^{\pi/2}\epsilon^{-4}\psi_{\kappa+\iota}^{a+b+4}( \theta)\sin^{b}(\theta/2)\hat{\phi}^{2}\bigg{(}\epsilon^{-1}|v-v_{*}|\psi_{ \kappa+\iota}(\theta)\sin(\theta/2)\bigg{)}\mathrm{d}\sin(\theta/2)\] \[= 8\pi\int_{0}^{(2-2(\kappa+\iota)+(\kappa+\iota)^{2})^{-1/2}} \epsilon^{-4}\psi_{\kappa+\iota}^{a+1}(\theta)\hat{\phi}^{2}\bigg{(}\epsilon^{-1}|v -v_{*}|\imath\bigg{)}\imath^{b}\mathrm{d}\imath\] \[\leq (\sqrt{2})^{(a+1)_{+}}8\pi\int_{0}^{1}\epsilon^{-4}\hat{\phi}^{2} \bigg{(}\epsilon^{-1}|v-v_{*}|\imath\bigg{)}\imath^{b}\mathrm{d}\imath\leq(\sqrt{2 })^{(a+1)_{+}}8\pi\epsilon^{b-3}I_{b}|v-v_{*}|^{-b-1},\]
which yields (2.37).
**Remark 2.1**.: _If we borrow the same idea used here for the estimate of integrals involving \(B_{3}^{\epsilon}\), we will use the change of variable \(\imath=\psi_{\kappa+\iota}(\theta)\cos(\theta/2)\), which indicates that \(\mathrm{d}\imath=(1-\kappa-\iota)^{2}\psi_{\kappa+\iota}^{3}(\theta)\mathrm{d} \cos(\theta/2)\). Consequently, the resulting estimate will have a singular factor \((1-\kappa-\iota)^{-2}\) as \(\kappa+\iota\to 1\). For this reason, we always avoid the change of variable \(v\to\kappa(v)\) or \(v_{*}\to\iota(v_{*})\) for integrals involving \(B_{3}^{\epsilon}\)._
**Remark 2.2**.: _Lemma 2.7 and its proof are highly versatile, making them applicable to the majority of integrals involving \(B_{1}^{\epsilon}\) that will arise in this article. To facilitate future reference, we provide several examples of their use below._
\(\bullet\) _Using (2.13) and the computation in (2.38), we have_
\[\int B_{1}^{\epsilon}(|v-v_{*}|,\cos\theta)f_{*}^{\prime}\mathrm{d}\sigma \mathrm{d}v_{*}=\int B_{1}^{\epsilon}(|v-v_{*}|\psi_{1}(\theta),\cos\theta)f_ {*}\psi_{1}^{3}(\theta)\mathrm{d}\sigma\mathrm{d}v_{*}\lesssim\epsilon^{-3}I_ {0}\|f\|_{L^{1}}. \tag{2.39}\]
\(\bullet\) _By taking \(a=0\) in (2.37), for \(b\geq 0\), we have_
\[\int B_{1}^{\epsilon}|g(\iota(v_{*}))h(\kappa(v))||v-v_{*}|^{b}\sin^{b-1} \frac{\theta}{2}\mathrm{d}V\leq 8\sqrt{2}\pi\epsilon^{b-3}I_{b}\|g\|_{L^{1}} \|h\|_{L^{1}}. \tag{2.40}\]
\(\bullet\) _Let \(0\leq\vartheta\leq 1\). Let \(c,d\in\mathbb{R}\) with \(d\geq 2+\vartheta\). Thanks to the fact that \(|v-v_{*}|^{c}\sin^{d}(\theta/2)\leq|v-v_{*}|^{c-3-\vartheta}|v-v_{*}|^{3+ \vartheta}\sin^{2+\vartheta}(\theta/2)\), if we take \(a=c-3-\vartheta\) and \(b=3+\vartheta\) in (2.37), then_
\[\!\!\int B_{1}^{\epsilon}|g(\iota(v_{*}))h(\kappa(v))||v-v_{*}|^{c}\sin^{d}( \theta/2)\mathrm{d}V\leq 8\pi(\sqrt{2})^{(c-2-\vartheta)_{+}}\epsilon^{ \vartheta}I_{3+\vartheta}\int|v-v_{*}|^{c-3-\vartheta}|g_{*}h|\mathrm{d}v_{*} \mathrm{d}v. \tag{2.41}\]
_In particular, if \(\vartheta=0,d=2\) in (2.41), then we get that_
\[\int B_{1}^{\epsilon}|g(\iota(v_{*}))h(\kappa(v))||v-v_{*}|^{c}\sin^{2}( \theta/2)\mathrm{d}V\leq 8\pi(\sqrt{2})^{(c-2)_{+}}I_{3}\int|v-v_{*}|^{-3+c}|g_{*}h| \mathrm{d}v_{*}\mathrm{d}v. \tag{2.42}\]
Using (2.42) properly, we derive the following flexible estimate allowing some balance of weight.
**Lemma 2.8**.: _For \(0\leq\iota,\kappa_{1},\kappa_{2}\leq 1\), it holds that_
\[\int B_{1}^{\epsilon}|g(\iota(v_{*}))h(\kappa_{1}(v))f(\kappa_{2}(v))||v^{ \prime}-v|^{2}\mathrm{d}V\lesssim I_{3}(\|g\|_{L^{1}_{a_{1}}}+\|g\|_{L^{2}}) \|h\|_{L^{2}_{a_{2}}}\|f\|_{L^{2}_{a_{3}}}, \tag{2.43}\]
_where \(-1\leq a_{1}\leq 0\leq a_{2},a_{3}\) and \(a_{1}+a_{2}+a_{3}=0\)._
Proof.: We divide the integration domain into two parts: \(\mathcal{U}:=\{(v,v_{*},\sigma):|v-v_{*}|\leq 1,(v-v_{*})\cdot\sigma\geq 0\}\) and \(\mathcal{G}:=\{(v,v_{*},\sigma):|v-v_{*}|\geq 1,(v-v_{*})\cdot\sigma\geq 0\}\), and denote the associated integrals by \(\mathscr{I}_{\leq}\) and \(\mathscr{I}_{\geq}\) accordingly.
\(\bullet\) In the domain \(\mathcal{U}\), one has \(|\kappa(v)-\iota(v_{*})|\leq 1\) for all \(0\leq\iota,\kappa\leq 1\). Using (2.42), we will have
\[\mathscr{I}_{\leq} \leq \left(\int 1_{\mathcal{U}}B_{1}^{\epsilon}|g(\iota(v_{*}))h^{2}( \kappa_{1}(v))||v^{\prime}-v|^{2}\mathrm{d}V\right)^{1/2}\left(\int 1_{\mathcal{U}}B_{1}^{\epsilon}|g(\iota(v_{*}))f^{2}( \kappa_{2}(v))||v^{\prime}-v|^{2}\mathrm{d}V\right)^{1/2}\] \[\lesssim I_{3}\left(\int 1_{|v-v_{*}|\leq 1}|g_{*}h^{2}||v-v_{*}|^{-1} \mathrm{d}v\mathrm{d}v_{*}\right)^{1/2}\left(\int 1_{|v-v_{*}|\leq 1}|g_{*}f^{2}||v-v_{*}|^{-1} \mathrm{d}v\mathrm{d}v_{*}\right)^{1/2}\] \[\lesssim I_{3}\|g\|_{L^{2}}\|h\|_{L^{2}}\|f\|_{L^{2}}.\]
\(\bullet\) In the domain \(\mathcal{G}\), one has \(|\kappa(v)-\iota(v_{*})|\geq\sqrt{2}/2\) and \(W_{l}(v-v_{*})\sim|v-v_{*}|^{l}\). Then we get that
\[|v-v_{*}|^{-1} \lesssim |v-v_{*}|^{a_{1}}=|v-v_{*}|^{-a_{2}-a_{3}}\sim|\kappa_{1}(v)- \iota(v_{*})|^{-a_{2}}|\kappa_{2}(v)-\iota(v_{*})|^{-a_{3}}\] \[\lesssim W_{a_{1}}(\iota(v_{*}))W_{a_{2}}(\kappa_{1}(v))W_{a_{3}}(\kappa_ {2}(v)),\]
which gives
\[\mathscr{I}_{\geq}\lesssim\int 1_{\mathcal{G}}B_{1}^{\epsilon}|(W_{a_{1}}g)( \iota(v_{*}))(W_{a_{2}}h)(\kappa_{1}(v))(W_{a_{3}}f)(\kappa_{2}(v))||v-v_{*}|^{ 3}\sin^{2}\frac{\theta}{2}\mathrm{d}V\] \[\lesssim\left(\int B_{1}^{\epsilon}|(W_{a_{1}}g)(\iota(v_{*}))(W_ {a_{2}}h)^{2}(\kappa_{1}(v))||v-v_{*}|^{3}\sin^{2}\frac{\theta}{2}\mathrm{d}V \right)^{1/2}\left(\int B_{1}^{\epsilon}|(W_{a_{1}}g)(\iota(v_{*}))(W_{a_{3}}f )^{2}(\kappa_{2}(v))|\] \[\times|v-v_{*}|^{3}\sin^{2}\frac{\theta}{2}\mathrm{d}V\right)^{1/ 2}\lesssim I_{3}\left(\int|(W_{a_{1}}g)_{*}(W_{a_{2}}h)^{2}|\mathrm{d}v\mathrm{d} v_{*}\right)^{1/2}\left(\int|(W_{a_{1}}g)_{*}(W_{a_{3}}f)^{2}|\mathrm{d}v \mathrm{d}v_{*}\right)^{1/2}\] \[\lesssim I_{3}\|g\|_{L^{1}_{a_{1}}}\|h\|_{L^{2}_{a_{2}}}\|f\|_{L^{2}_{a_{3}}}.\]
We complete the proof of the lemma by patching together these two estimates.
### Upper bounds from the cutoff perspective
We will prove several upper bounds of the operators \(Q\) and \(R\). All the estimates will depend heavily on the parameter \(\epsilon\).
**Lemma 2.9**.: _It holds that_
\[|\langle Q(g,h),f\rangle|\lesssim(\epsilon^{-3}I_{0}+I_{3})\|g\|_{L^ {1}\cap L^{2}}\|h\|_{L^{2}}\|f\|_{L^{2}}, \tag{2.44}\] \[|\langle R(g,h,\rho),f\rangle|\lesssim(I_{0}+\epsilon^{3}I_{3}) \|g\|_{L^{1}\cap L^{2}}\|h\|_{L^{2}}\|\rho\|_{L^{\infty}}\|f\|_{L^{2}}, \tag{2.45}\] \[|\langle R(g,h,\rho),f\rangle|\lesssim(I_{0}+\epsilon^{3}I_{3}) \|g\|_{L^{1}\cap L^{\infty}}\|h\|_{L^{2}\cap L^{\infty}}\|\rho\|_{L^{2}}\|f\|_ {L^{2}}. \tag{2.46}\]
Proof.: Observing that \(B^{\epsilon}\leq 2B_{1}^{\epsilon}+2B_{3}^{\epsilon}\), we get
\[|\langle Q(g,h),f\rangle|=|\int B^{\epsilon}g_{*}h(f^{\prime}-f)\mathrm{d}V| \leq 2\int B_{1}^{\epsilon}|g_{*}h(|f^{\prime}|+|f|)|\mathrm{d}V+2\int B_{3}^{ \epsilon}|g_{*}h(|f^{\prime}|+|f|)|\mathrm{d}V. \tag{2.47}\]
Applying (2.40) with \(\iota=\kappa=b=0\) and noting \(0\leq\sin\frac{\theta}{2}\leq 1\), we have
\[\int B_{1}^{\epsilon}|g_{*}hf|\mathrm{d}V\lesssim\epsilon^{-3}I_{0}\|g\|_{L^ {1}}\|hf\|_{L^{1}}\lesssim\epsilon^{-3}I_{0}\|g\|_{L^{1}}\|h\|_{L^{2}}\|f\|_{L ^{2}}. \tag{2.48}\]
Again by (2.40) with \(\iota=b=0\), \(\kappa=0\) or \(1\), we get the same bound for \(\int B_{1}^{\epsilon}|g_{*}hf^{\prime}|\mathrm{d}V\).
Next, by taking \(a=0\) in (2.16), we have
\[\int B_{3}^{\epsilon}|g_{*}hf|\mathrm{d}V\lesssim\epsilon^{-3}I_{0}\|g\|_{L^ {1}}\|hf\|_{L^{1}}\lesssim\epsilon^{-3}I_{0}\|g\|_{L^{1}}\|h\|_{L^{2}}\|f\|_{L ^{2}}. \tag{2.49}\]
It is not difficult to check that
\[\int B_{3}^{\epsilon}|g_{*}hf^{\prime}|\mathrm{d}V\leq(I_{0}\epsilon^{-3}I_{3 })^{\frac{1}{2}}\|g\|_{L^{2}}\|h\|_{L^{2}}\|f\|_{L^{2}}. \tag{2.50}\]
We conclude (2.44) by patching together the above estimates.
For the cubic term, we first observe that
\[|\langle R(g,h,\rho),f\rangle|=\epsilon^{3}|\int B^{\epsilon}g_{*}h(\rho^{ \prime}+\rho_{*}^{\prime})(f^{\prime}-f)\mathrm{d}V|. \tag{2.51}\]
Then (2.45) follows from (2.44) by bounding \(\rho\) in the \(L^{\infty}\) norm.
To prove (2.46), by using \(B^{\epsilon}\leq 2B_{1}^{\epsilon}+2B_{3}^{\epsilon}\), we get that
\[|\langle R(g,h,\rho),f\rangle|=\epsilon^{3}|\int B^{\epsilon}g_{* }h(\rho^{\prime}+\rho_{*}^{\prime})(f^{\prime}-f)\mathrm{d}V|\leq 2\epsilon^{3} \|h\|_{L^{\infty}}\int B_{1}^{\epsilon}|g_{*}\rho^{\prime}(|f^{\prime}|+|f|)| \mathrm{d}V\] \[+2\epsilon^{3}\int B_{1}^{\epsilon}|g_{*}h\rho_{*}^{\prime}(|f^{ \prime}|+|f|)|\mathrm{d}V+2\epsilon^{3}\|g\|_{L^{\infty}}\|h\|_{L^{\infty}}\int B _{3}^{\epsilon}(|\rho^{\prime}|+|\rho_{*}^{\prime}|)(|f^{\prime}|+|f|)| \mathrm{d}V.\]
By the Cauchy-Schwarz inequality, using (2.40) and (2.21), we conclude the desired result.
Next, we prove the commutator estimates from the cutoff perspective.
**Lemma 2.10**.: _Let \(l\geq 2\), then_
\[|\langle Q(g,h)W_{l}-Q(g,hW_{l}),f\rangle|\leq C_{l}(\epsilon^{-3}I_{0}+I_{3})\|g\|_{L^{2}_{l}}\|h\|_{L^{2}_{l}}\|f\|_{L^{2}}, \tag{2.52}\] \[|\langle R(g,h,\rho)W_{l}-R(g,hW_{l},\rho),f\rangle|\leq C_{l}(I_{0}+\epsilon^{3}I_{3})\|g\|_{L^{2}_{l}}\|h\|_{L^{2}_{l}}\|\rho\|_{L^{\infty}}\|f\|_{L^{2}}, \tag{2.53}\] \[|\langle R(g,h,\rho)W_{l}-R(g,hW_{l},\rho),f\rangle|\leq C_{l}(I_{0}+\epsilon^{3}I_{3})\|g\|_{L^{2}_{l}\cap L^{\infty}_{l}}\|h\|_{L^{2}_{l}\cap L^{\infty}_{l}}\|\rho\|_{L^{2}}\|f\|_{L^{2}}. \tag{2.54}\]
Proof.: It is easy to compute that
\[\mathscr{I}:=\langle Q(g,h)W_{l}-Q(g,hW_{l}),f\rangle=\int B^{\epsilon}g_{*}hf^{\prime}(W_{l}^{\prime}-W_{l})\mathrm{d}V. \tag{2.55}\]
From the fact that \(|\nabla^{k}W_{l}|\lesssim_{l,k}W_{l-k}\) with \(l\in\mathbb{R},k\in\mathbb{N}\), (2.26) yields that
\[|W_{l}^{\prime}-W_{l}|\lesssim_{l}1_{|v-v_{*}|\leq 1}W_{l}+1_{|v-v_{*}|\geq 1}(W_{l- 1}+(W_{l-1})_{*})|v-v_{*}|\sin\frac{\theta}{2}. \tag{2.56}\]
Then we have
\[|\mathscr{I}|\lesssim_{l}\int B^{\epsilon}|g_{*}W_{l}hf^{\prime}|\mathrm{d}V+\int B^{\epsilon}|g_{*}W_{l-1}hf^{\prime}||v-v_{*}|\mathrm{d}V+\int B^{\epsilon}1_{|v-v_{*}|\geq 1}|v-v_{*}|\sin\frac{\theta}{2}|(W_{l-1}g)_{*}hf^{\prime}|\mathrm{d}V.\]
Thanks to (2.50), the first term can be bounded as \(\int B^{\epsilon}|g_{*}W_{l}hf^{\prime}|\mathrm{d}V\lesssim(\epsilon^{-3}I_{0}+I_{3 })\|g\|_{L^{1}\cap L^{2}}\|h\|_{L^{2}_{1}}\|f\|_{L^{2}}\). For the second term, using \(B^{\epsilon}\leq 2B^{\epsilon}_{1}+2B^{\epsilon}_{3}\), it suffices to estimate \(\int B^{\epsilon}_{1}|g_{*}W_{l-1}hf^{\prime}||v-v_{*}|\mathrm{d}V\) and \(\int B^{\epsilon}_{3}|g_{*}W_{l-1}hf^{\prime}||v-v_{*}|\mathrm{d}V\). By the Cauchy-Schwartz inequality, (2.40), (2.16) and (2.20) imply that
\[\int B^{\epsilon}|g_{*}W_{l-1}hf^{\prime}||v-v_{*}|\mathrm{d}V\lesssim( \epsilon^{-2}I_{1}+(\epsilon^{-1}I_{2}I_{3})^{1/2})\|g\|_{L^{1}\cap L^{2}}\|h \|_{L^{2}_{1}}\|f\|_{L^{2}}\lesssim(\epsilon^{-3}I_{0}+I_{3})\|g\|_{L^{1}\cap L ^{2}}\|h\|_{L^{2}_{1}}\|f\|_{L^{2}},\]
where we use the fact that \(\epsilon^{a-3}I_{a}\leq\epsilon^{-3}I_{0}+I_{3}\) for \(0\leq a\leq 3\), thanks to interpolation. A similar argument applies to the third term, and we conclude the desired result (2.52).
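For the reader's convenience, here is a minimal sketch of the interpolation used above, under the assumption (consistent with how the constants \(I_{a}\) enter the estimates) that \(I_{a}\) behaves like an \(a\)-th moment of a fixed non-negative profile, so that Hölder's inequality gives \(I_{a}\leq I_{0}^{1-a/3}I_{3}^{a/3}\):
\[\epsilon^{a-3}I_{a}\leq\bigl(\epsilon^{-3}I_{0}\bigr)^{1-\frac{a}{3}}I_{3}^{\frac{a}{3}}\leq\Bigl(1-\frac{a}{3}\Bigr)\epsilon^{-3}I_{0}+\frac{a}{3}I_{3}\leq\epsilon^{-3}I_{0}+I_{3},\qquad 0\leq a\leq 3,\]
where the second inequality is Young's inequality for products.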
For the cubic term, we first have
\[\mathscr{K}:=\langle R(g,h,\rho)W_{l}-R(g,hW_{l},\rho),f\rangle=\pm\epsilon^{ 3}\int B^{\epsilon}g_{*}h(\rho^{\prime}+\rho^{\prime}_{*})f^{\prime}(W^{\prime }_{l}-W_{l})\mathrm{d}V. \tag{2.57}\]
By comparing the structure of (2.55) with that of (2.57), (2.53) follows easily from (2.45).
It remains to derive (2.54). Note that
\[\epsilon^{-3}|\mathscr{K}|\lesssim\int B^{\epsilon}_{1}|g_{*}h|(|\rho^{\prime }|+|\rho^{\prime}_{*}|)|f^{\prime}||W^{\prime}_{l}-W_{l}|\mathrm{d}V+\int B^{ \epsilon}_{3}|g_{*}h|(|\rho^{\prime}|+|\rho^{\prime}_{*}|)|f^{\prime}||W^{ \prime}_{l}-W_{l}|\mathrm{d}V.\]
Since \(|W^{\prime}_{l}-W_{l}|\lesssim W_{l}+(W_{l})_{*}\), (2.21) implies that \(\int B^{\epsilon}_{3}|g_{*}h|(|\rho^{\prime}|+|\rho^{\prime}_{*}|)|f^{\prime}| W^{\prime}_{l}-W_{l}|\mathrm{d}V\lesssim I_{3}\|g\|_{L^{\infty}_{l}}\|h\|_{L^{ \infty}_{l}}\|\rho\|_{L^{2}}\|f\|_{L^{2}}\).
We use (2.56) to expand the first term by
\[\int B^{\epsilon}_{1}|g_{*}h|(|\rho^{\prime}|+|\rho^{\prime}_{*}|)| f^{\prime}||W^{\prime}_{l}-W_{l}|\mathrm{d}V\lesssim\int B^{\epsilon}_{1}|g_{*} W_{l}h|(|\rho^{\prime}|+|\rho^{\prime}_{*}|)|f^{\prime}|\mathrm{d}V+\int B^{ \epsilon}_{1}|g_{*}W_{l-1}h|(|\rho^{\prime}|+|\rho^{\prime}_{*}|)\] \[\times|f^{\prime}||v-v_{*}|\mathrm{d}V+\int B^{\epsilon}_{1}|(W_{ l-1}g)_{*}h\rho^{\prime}_{*}f^{\prime}||v-v_{*}|\mathrm{d}V+\int B^{\epsilon}_{1} 1|v-v_{*}|\sin\frac{\theta}{2}|(W_{l-1}g)_{*}h\rho^{\prime}f^{\prime}|\mathrm{d }V.\]
By repeatedly using (2.40) and the Cauchy-Schwarz inequality, we are led to the desired result (2.54), which completes the proof of the lemma.
Lemma 2.9 and 2.10 together yield the following upper bounds in weighted \(L^{p}\) spaces.
**Proposition 2.3**.: _Let \(l\geq 2\),_
\[|\langle W_{l}Q(g,h),f\rangle|\leq C_{l}(\epsilon^{-3}I_{0}+I_{3})\|g\|_{L^{2}_{l}}\|h\|_{L^{2}_{l}}\|f\|_{L^{2}}, \tag{2.58}\] \[|\langle W_{l}R(g,h,\rho),f\rangle|\leq C_{l}(I_{0}+\epsilon^{3}I_{3})\|g\|_{L^{2}_{l}}\|h\|_{L^{2}_{l}}\|\rho\|_{L^{\infty}_{l}}\|f\|_{L^{2}}, \tag{2.59}\] \[|\langle R(g,h,\rho),W_{l}f\rangle|\leq C_{l}(I_{0}+\epsilon^{3}I_{3})\|g\|_{L^{2}_{l}\cap L^{\infty}_{l}}\|h\|_{L^{2}_{l}\cap L^{\infty}_{l}}\|\rho\|_{L^{2}}\|f\|_{L^{2}}. \tag{2.60}\]
In the next subsection, we will carry out the energy estimate in the weighted Sobolev space \(H^{N}_{l}\). In particular, we have to estimate the typical term \(\langle R(\partial^{\alpha_{1}}f,\partial^{\alpha_{2}}f,\partial^{\alpha_{3}}f)W_{l},W_{l}\partial^{\alpha}f\rangle\) for \(\alpha_{1}+\alpha_{2}+\alpha_{3}=\alpha\). It can be divided into four cases: Case 1: \(\alpha_{1}=\alpha\); Case 2: \(\alpha_{2}=\alpha\); Case 3: \(\alpha_{3}=\alpha\); Case 4: \(|\alpha|\geq 1\) and \(|\alpha_{1}|,|\alpha_{2}|,|\alpha_{3}|\leq|\alpha|-1\). We will apply (2.59) to Cases 1 and 2 and apply (2.60) to Case 3. For Case 4, to balance the regularity of \(g,h\) and \(\rho\), we need an additional upper bound for \(R\), which can be stated as follows.
**Proposition 2.4**.: _Let \(l\geq 0\), then_
\[|\langle R(g,h,\rho),W_{l}f\rangle|\leq C_{l}(I_{0}+\epsilon^{3}I_{3})\|g\|_{H^ {2}_{1}}\|h\|_{H^{2}_{1}}\|\rho\|_{H^{1}_{l}}\|f\|_{L^{2}}. \tag{2.61}\]
Proof.: We recall that \(\langle\epsilon^{-3}R(g,h,\rho),W_{l}f\rangle=\int Bg_{*}h\left(\rho^{\prime}+ \rho^{\prime}_{*}\right)((W_{l}f)^{\prime}-W_{l}f)\mathrm{d}V.\) Obviously, using (2.26) and \(B^{\epsilon}\leq 2B^{\epsilon}_{1}+2B^{\epsilon}_{3}\), we may get that
\[|\langle\epsilon^{-3}R(g,h,\rho),W_{l}f\rangle| \lesssim_{l} \int B^{\epsilon}_{1}|g_{*}W_{l}h|\left(|(W_{l}\rho)^{\prime}|+|(W_{l}\rho)^{\prime}_{*}|\right)(|f^{\prime}|+|f|)\mathrm{d}V\] \[+\int B^{\epsilon}_{3}|g_{*}W_{l}h|\left(|(W_{l}\rho)^{\prime}|+|(W_{l}\rho)^{\prime}_{*}|\right)(|f^{\prime}|+|f|)\mathrm{d}V. \tag{2.62}\]
By (2.22), we first have
\[\int B^{\epsilon}_{3}|g_{*}W_{l}h|\left(|(W_{l}\rho)^{\prime}|+|(W_{l}\rho)^{ \prime}_{*}|\right)(|f^{\prime}|+|f|)\mathrm{d}V\lesssim I_{3}\|g\|_{H^{1}}\|h \|_{H^{1}_{l}}\|\rho\|_{H^{1}_{l}}\|f\|_{L^{2}}. \tag{2.63}\]
By Hölder's inequality and (2.40) with \(b=0\), the integral containing \(B^{\epsilon}_{1}\) in (2.62) is bounded by
\[\left(\int B^{\epsilon}_{1}|g_{*}||W_{l}h|^{4}\mathrm{d}V\right)^{1/4}\left(\int B^{\epsilon}_{1}|g_{*}||(W_{l}\rho)^{\prime}|^{4}\mathrm{d}V\right)^{1/4}\left(\int B^{\epsilon}_{1}|g_{*}|(|f^{\prime}|+|f|)^{2}\mathrm{d}V\right)^{1/2}\] \[+\left(\int B^{\epsilon}_{1}|W_{l}h|^{2}|(W_{l}\rho)^{\prime}_{*}|^{2}\mathrm{d}V\right)^{1/2}\left(\int B^{\epsilon}_{1}|g_{*}|^{2}(|f^{\prime}|+|f|)^{2}\mathrm{d}V\right)^{1/2}\lesssim\epsilon^{-3}I_{0}\|g\|_{L^{2}_{2}}\|W_{l}h\|_{H^{3/4}}\|W_{l}\rho\|_{H^{3/4}}\|f\|_{L^{2}},\]
where we use the Sobolev embedding theorem. Patching together all the estimates, we get (2.61).
We now apply Propositions 2.3 and 2.4 to get the following energy estimate of \(R\) in \(H^{N}_{l}\) spaces.
**Lemma 2.11**.: _Let \(N,l\geq 2\). Then_
\[\big{|}\sum_{|\alpha|\leq N}\langle W_{l}\partial^{\alpha}R(f,f,f),W_{l} \partial^{\alpha}f\rangle\big{|}\lesssim_{N,l}(I_{0}+\epsilon^{3}I_{3})\|f\|_{ H^{N}_{l}}^{4}.\]
Proof.: It is reduced to the consideration of \(\langle R(\partial^{\alpha_{1}}f,\partial^{\alpha_{2}}f,\partial^{\alpha_{3}} f)W_{l},W_{l}\partial^{\alpha}f\rangle\) for \(\alpha_{1}+\alpha_{2}+\alpha_{3}=\alpha\). There are four cases: Case 1: \(\alpha_{1}=\alpha\); Case 2: \(\alpha_{2}=\alpha\); Case 3: \(\alpha_{3}=\alpha\); Case 4: \(|\alpha|\geq 1\) and \(|\alpha_{1}|,|\alpha_{2}|,|\alpha_{3}|\leq|\alpha|-1\). The desired result follows by applying (2.59) to Case 1 and Case 2, (2.60) to Case 3 and (2.61) to Case 4.
### Upper bounds from the non-cutoff perspective
In this subsection, we will give the upper bounds of the operators \(Q,R\) from the non-cutoff perspective. Roughly speaking, we use Taylor expansions up to second order to kill the singular factor \(\epsilon^{-3}\). We start with the uniform-in-\(\epsilon\) estimate of \(Q_{1}\) in the following proposition.
**Proposition 2.5**.: _It holds that_
\[|\langle Q_{1}(g,h),f\rangle|\lesssim I_{3}(\|g\|_{L^{1}}+\|g\|_{L^{2}})\|h\|_{L^{2}}\|f\|_{H^{2}}, \tag{2.64}\] \[|\langle Q_{1}(g,h),f\rangle|\lesssim I_{3}(\|g\|_{L^{1}}+\|g\|_{L^{2}})\|h\|_{H^{2}}\|f\|_{L^{2}}. \tag{2.65}\]
Proof.: It is easy to see \(\langle Q_{1}(g,h),f\rangle=\int B_{1}^{\epsilon}g_{*}h(f^{\prime}-f)\mathrm{ d}V.\) By (2.3) for \(f^{\prime}-f\), (2.64) follows by applying (2.5), (2.7) and (2.10) for the first order and applying (2.43) for the second order. For (2.65), we use Cancellation Lemma to transfer the regularity from \(f\) to \(h\) through
\[\langle Q_{1}(g,h),f\rangle=\int B_{1}^{\epsilon}g_{*}\big{(}h-h^{\prime}\big{)} f^{\prime}\mathrm{d}V+\int B_{1}^{\epsilon}g_{*}\big{(}(hf)^{\prime}-hf\big{)} \mathrm{d}V.\]
The former term is dealt with by (2.4), (2.6), and (2.43). The latter term is estimated by (2.31). Then we conclude (2.65).
We next derive upper bounds of \(Q_{2}\).
**Proposition 2.6**.: _Let \(\delta>0,a\geq 0,b\geq 1,a+b=\frac{3}{2}+\delta\). Then_
\[|\langle Q_{2}(g,h),f\rangle|\lesssim_{\delta}I_{3}\|g\|_{L^{2}}\|h\|_{L^{2}}\|f\|_{H^{\frac{3}{2}+\delta}}, \tag{2.66}\] \[|\langle Q_{2}(g,h),f\rangle|\lesssim_{\delta}(I_{3}+I_{3}^{\prime})\|g\|_{H^{a}}\|h\|_{H^{b}}\|f\|_{L^{2}}, \tag{2.67}\] \[|\langle Q_{2}(g,h),W_{l}f\rangle|\lesssim\epsilon^{\vartheta}(I_{3+\vartheta}+I_{3+\vartheta}^{\prime})\|g\|_{H^{2}_{l}}\|h\|_{H^{2}_{l}}\|f\|_{L^{2}}. \tag{2.68}\]
Proof.: We begin with the estimate of (2.66). We first observe that
\[|\int B_{2}^{\epsilon}g_{*}h(f^{\prime}-f)\mathrm{d}V|\leq 2\left(\int B_{1}^{ \epsilon}g_{*}^{2}(f^{\prime}-f)^{2}\mathrm{d}V\right)^{1/2}\left(\int B_{3}^{ \epsilon}h^{2}\mathrm{d}V\right)^{1/2}.\]
By the order-1 Taylor expansion (2.1), (2.42) and (2.11) imply that
\[\int B_{1}^{\epsilon}g_{*}^{2}(f^{\prime}-f)^{2}\mathrm{d}V \lesssim I_{3}\int\int_{0}^{1}B_{1}^{\epsilon}|g_{*}|^{2}|(\nabla f)(\kappa(v))|^{2}|v^{\prime}-v|^{2}\mathrm{d}\kappa\mathrm{d}V\lesssim_{\delta}I_{3}\|g\|_{H^{a}}^{2}\|f\|_{H^{b}}^{2}. \tag{2.69}\]
By (2.20), the integral containing \(B_{3}^{\epsilon}\) is bounded by \(I_{3}\|h\|_{L^{2}}^{2}\). Then (2.66) follows.
As for (2.67), we first have \(\langle Q_{2}(g,h),f\rangle=\int B_{2}^{\epsilon}g_{*}\big{(}h-h^{\prime}\big{)}f^{\prime}\mathrm{d}V+\int B_{2}^{\epsilon}g_{*}\big{(}(hf)^{\prime}-hf\big{)}\mathrm{d}V.\) Following the above argument for (2.66), the term involving \(h-h^{\prime}\) is bounded by \(I_{3}\|g\|_{H^{a}}\|h\|_{H^{b}}\|f\|_{L^{2}}\). The term involving \((hf)^{\prime}-hf\) is estimated by (2.35). These lead to (2.67).
We turn to the estimate of (2.68). We observe that \(\langle Q_{2}(g,h),W_{l}f\rangle=\int B_{2}^{\epsilon}g_{*}\big{(}h-h^{\prime} \big{)}(W_{l}f)^{\prime}\mathrm{d}V+\int B_{2}^{\epsilon}g_{*}\big{(}(W_{l}hf)^ {\prime}-W_{l}hf\big{)}\mathrm{d}V.\) We only focus on the first term since (2.36) implies that \(|\int B_{2}^{\epsilon}g_{*}\big{(}(W_{l}hf)^{\prime}-W_{l}hf\big{)}\mathrm{d}V| \lesssim\epsilon^{\vartheta}(I_{3+\vartheta}+I_{3+\vartheta}^{\prime})\|g\|_{H^ {2}}\|h\|_{H^{2}_{l}}\|f\|_{L^{2}}.\) We split it into two cases.
\(\bullet\) If \(\vartheta\geq 1/2\), by (2.4) and (2.6), (2.26) together with the Cauchy-Schwartz inequality will lead to that
\[|\int B_{2}^{\epsilon}g_{*}\big{(}h-h^{\prime}\big{)}(W_{l}f)^{ \prime}\mathrm{d}V|\lesssim\int|B_{2}^{\epsilon}(W_{l}g)_{*}W_{l}(\kappa(v))( \nabla^{2}h)(\kappa(v))f^{\prime}||v^{\prime}-v|^{2}\mathrm{d}V\mathrm{d}\kappa\] \[\lesssim \left(\int B_{1}^{\epsilon}|(W_{l}g)_{*}W_{l}(\kappa(v))(\nabla^{ 2}h)(\kappa(v))|^{2}|v^{\prime}-v|^{4}|v-v_{*}|^{-\vartheta}\mathrm{d}V \mathrm{d}\kappa\right)^{1/2}\left(\int B_{3}^{\epsilon}|f^{\prime}|^{2}|v-v_ {*}|^{\vartheta}\mathrm{d}V\right)^{1/2}.\]
Using the change of variable (2.14) with \(\kappa=\iota=1\), (2.29) yields that
\[\int B_{3}^{\epsilon}|f^{\prime}|^{2}|v-v_{*}|^{\theta}\mathrm{d}V=\int B_{3}^{ \epsilon}|f|^{2}|v-v_{*}|^{\theta}\mathrm{d}V=\|(J_{0,\epsilon}|\cdot|^{\theta })*f^{2}\|_{L^{1}}\lesssim\epsilon^{\theta}I_{3+\vartheta}\|f\|_{L^{2}}^{2}.\]
Then we can conclude (2.68) since (2.41) and the Hardy's inequality yield that
\[\int B_{1}^{\epsilon}|(W_{l}g)_{*}W_{l}(\kappa(v))(\nabla^{2}h)( \kappa(v))|^{2}|v^{\prime}-v|^{4}|v-v_{*}|^{-\theta}\mathrm{d}V\mathrm{d}\kappa\] \[\lesssim\epsilon^{\theta}I_{3+\vartheta}\int|v-v_{*}|^{1-2 \vartheta}|(W_{l}g)_{*}W_{l}\nabla^{2}h|^{2}\mathrm{d}v_{*}\mathrm{d}v \lesssim\epsilon^{\theta}I_{3+\vartheta}\|g\|_{H^{\vartheta-1/2}_{l}}^{2}\|h \|_{H^{2}_{l}}^{2}\lesssim\epsilon^{\theta}I_{3+\vartheta}\|g\|_{H^{1/2}_{l} }^{2}\|h\|_{H^{2}_{l}}^{2}.\]
\(\bullet\) If \(\vartheta\leq 1/2\), thanks to the Taylor expansion, (2.26) implies that
\[|\int B_{2}^{\epsilon}g_{*}\big{(}h-h^{\prime}\big{)}(W_{l}f)^{ \prime}\mathrm{d}V|\lesssim\int|B_{2}^{\epsilon}(W_{l}g)_{*}W_{l}(\kappa(v))( \nabla h)(\kappa(v))f^{\prime}||v^{\prime}-v|\mathrm{d}V\mathrm{d}\kappa\] \[\lesssim\left(\int B_{1}^{\epsilon}|(W_{l}g)_{*}W_{l}(\kappa(v)) (\nabla h)(\kappa(v))|^{2}|v^{\prime}-v|^{2}\sin\frac{\theta}{2}|v-v_{*}|^{- \theta}\mathrm{d}V\mathrm{d}\kappa\right)^{1/2}\left(\int B_{3}^{\epsilon}\sin ^{-1}\frac{\theta}{2}|f^{\prime}|^{2}|v-v_{*}|^{\theta}\mathrm{d}V\right)^{1/2}.\]
We first use (2.14) and (2.29) to derive that \(\int B_{3}^{\epsilon}|f^{\prime}|^{2}|v-v_{*}|^{\theta}\mathrm{d}V=\int B_{3}^ {\epsilon}\sin^{-1}\frac{\theta}{2}|f|^{2}|v-v_{*}|^{\theta}\mathrm{d}V=\|(K_ {\epsilon}|\cdot|^{\theta})*f^{2}\|_{L^{1}}\), where \(K_{\epsilon}(z):=\int_{\mathbb{S}^{2}_{+}}\epsilon^{-4}|z|\hat{\phi}^{2}( \epsilon^{-1}|z|\cos(\theta/2))\sin^{-1}\frac{\theta}{2}\mathrm{d}\sigma\). It is easy to see that
\[\|K_{\epsilon}|\cdot|^{\vartheta}\|_{L^{1}}\lesssim\epsilon^{\theta}I_{3+ \vartheta}\int_{\mathbb{S}^{2}_{+}}\cos^{-4-\vartheta}(\theta/2)\sin^{-1} \frac{\theta}{2}\mathrm{d}\sigma\lesssim\epsilon^{\theta}I_{3+\vartheta},\]
which yields \(\|(K_{\epsilon}|\cdot|^{\vartheta})*f^{2}\|_{L^{1}}\lesssim\epsilon^{\theta}I _{3+\vartheta}\|f\|_{L^{2}}^{2}.\) Following the similar argument used in the before, we have
\[\int B_{1}^{\epsilon}|(W_{l}g)_{*}W_{l}(\kappa(v))(\nabla h)(\kappa(v))|^{2}|v ^{\prime}-v|^{2}\sin\frac{\theta}{2}|v-v_{*}|^{-\theta}\mathrm{d}V\mathrm{d} \kappa\lesssim\epsilon^{\theta}I_{3+\vartheta}\|g\|_{H^{1}_{l}}^{2}\|h\|_{H^{ 2}_{l}}^{2}.\]
Patching together the above estimates, we arrive at (2.68).
Thanks to Propositions 2.5, 2.6 and 2.1, we get the following upper bounds of \(Q\) uniformly in \(\epsilon\).
**Theorem 2.1**.: _It holds that_
\[|\langle Q(g,h),f\rangle|\lesssim I_{3}(\|g\|_{L^{1}}+\|g\|_{L^{2}})\|h\|_{L^{2}}\|f\|_{H^{2}}, \tag{2.70}\] \[|\langle Q(g,h),f\rangle|\lesssim(I_{3}+I_{3}^{\prime})(\|g\|_{L^{1}}+\|g\|_{L^{2}})\|h\|_{H^{2}}\|f\|_{L^{2}}. \tag{2.71}\]
As a direct application, we get the following upper bounds of \(R\) with the small factor \(\epsilon^{3}\).
**Theorem 2.2**.: _It holds that_
\[|\langle R(g,h,\rho),f\rangle|\lesssim\epsilon^{3}(I_{3}+I_{3}^{\prime})\|g\|_ {H^{2}_{2}}\|h\|_{H^{2}}\|\rho\|_{H^{2}}\|f\|_{L^{2}}. \tag{2.72}\]
Proof.: We observe that \(|\langle R(g,h,\rho),f\rangle|=\epsilon^{3}|\langle Q(g,h),\rho f\rangle+\langle Q(g,hf),\rho\rangle+\langle Q(g\rho,h),f\rangle+\langle Q(hf,\rho),g\rangle+\langle Q(hf,g),\rho\rangle+\mathscr{I}_{1}+\mathscr{I}_{2}|\), where \(\mathscr{I}_{1}=\int Bg_{*}(h-h^{\prime})(\rho_{*}^{\prime}-\rho_{*})f^{\prime}\mathrm{d}V\) and \(\mathscr{I}_{2}=\int B(hf)_{*}(g\rho-(g\rho)^{\prime})\mathrm{d}V.\)
For the terms containing \(Q\), we appropriately use (2.70) and (2.71) to get that
\[|\langle Q(g,h),\rho f\rangle|+|\langle Q(g,hf),\rho\rangle|+| \langle Q(g\rho,h),f\rangle|+|\langle Q(hf,\rho),g\rangle|+|\langle Q(hf,g), \rho\rangle|\] \[\lesssim (I_{3}+I_{3}^{\prime})\|g\|_{H^{2}_{2}}\|h\|_{H^{2}}\|\rho\|_{H^{2} }\|f\|_{L^{2}}.\]
We first separate \(\mathscr{I}_{1}\) into two parts:
\[|\mathscr{I}_{1}|\leq 2\int B_{1}^{\epsilon}|g_{*}(h-h^{\prime})(\rho_{*}^{\prime}-\rho_{*})f^{\prime}|\mathrm{d}V+2\int B_{3}^{\epsilon}|g_{*}(h-h^{\prime})(\rho_{*}^{\prime}-\rho_{*})f^{\prime}|\mathrm{d}V.\]
By (2.22), the integral containing \(B_{3}^{\epsilon}\) is bounded by \(I_{3}\|g\|_{H^{1}}\|h\|_{H^{1}}\|\rho\|_{H^{1}}\|f\|_{L^{2}}\). Thanks to (2.69), the integral containing \(B_{1}^{\epsilon}\) is bounded by
\[\left(\int B_{1}^{\epsilon}g_{*}^{2}(h-h^{\prime})^{2}\mathrm{d}V\right)^{1/2} \left(\int B_{1}^{\epsilon}(\rho^{\prime}-\rho)^{2}f_{*}^{2}\mathrm{d}V\right)^{1/2 }\lesssim I_{3}\|g\|_{L^{2}}\|h\|_{H^{2}}\|\rho\|_{H^{2}}\|f\|_{L^{2}}.\]
As for \(\mathscr{I}_{2}\), we separate it into three parts according to the decomposition \(B^{\epsilon}=B_{1}^{\epsilon}+B_{2}^{\epsilon}+B_{3}^{\epsilon}\). Then applying (2.31), (2.35) and (2.22) to the integrals containing \(B_{1}^{\epsilon},B_{2}^{\epsilon}\) and \(B_{3}^{\epsilon}\) respectively, we arrive at
\[|\mathscr{I}_{2}|\lesssim(I_{3}+I_{3}^{\prime})\|g\|_{H^{2}}\|h\|_{H^{2}}\|\rho\|_{H^{2}}\|f\|_{L^{2}}.\]
We complete the proof by patching together the above estimates.
In the next lemma, we give another estimate of the commutator between the operator \(R\) and the weight function \(W_{l}\). Compared with (2.53) and (2.54), here we can get rid of \(I_{0}\) and keep the small factor \(\epsilon^{3}I_{3}\) by imposing more regularity on the involved functions.
**Lemma 2.12**.: _Let \(l\geq 2\). Then_
\[\left|\langle R(g,h,\rho)W_{l}-R(g,W_{l}h,\rho),f\rangle\right|\leq\epsilon^{3 }C_{l}I_{3}\|g\|_{H^{2}_{l}}\|h\|_{H^{2}_{l}}\|\rho\|_{H^{2}}\|f\|_{L^{2}}. \tag{2.73}\]
Proof.: Recalling (2.57), it is easy to check that
\[|\mathcal{K}| = \epsilon^{3}|\int B^{\epsilon}g_{*}h(\rho^{\prime}+\rho_{*}^{ \prime})f^{\prime}(W_{l}^{\prime}-W_{l})\mathrm{d}V|=\epsilon^{3}|\int Bg_{*}^ {\prime}h^{\prime}(\rho+\rho_{*})f(W_{l}^{\prime}-W_{l})\mathrm{d}V|\] \[\leq \epsilon^{3}|\int B^{\epsilon}g_{*}h(\rho+\rho_{*})f(W_{l}^{ \prime}-W_{l})\mathrm{d}V|+\epsilon^{3}|\int B^{\epsilon}(g_{*}^{\prime}-g_{* })h^{\prime}(\rho+\rho_{*})f(W_{l}^{\prime}-W_{l})\mathrm{d}V|\] \[+\epsilon^{3}|\int B^{\epsilon}g_{*}(h^{\prime}-h)(\rho+\rho_{*} )f(W_{l}^{\prime}-W_{l})\mathrm{d}V|:=\epsilon^{3}\mathcal{K}_{1}+\epsilon^{3 }\mathcal{K}_{2}+\epsilon^{3}\mathcal{K}_{3}.\]
For \(\mathcal{K}_{1}\), by (2.3), (2.5) and (2.7), (2.10) and (2.9) imply that \(|\mathcal{K}_{1}|\lesssim I_{3}\|g\|_{H^{1}_{l}}\|h\|_{L^{2}_{l}}\|\rho\|_{L^{\infty}}\|f\|_{L^{2}}.\) For \(\mathcal{K}_{2}\), we use \(\mathcal{K}_{2}\leq 2\mathcal{K}_{2}^{1}+2\mathcal{K}_{2}^{3}\) where \(\mathcal{K}_{2}^{1}:=\int B_{1}^{\epsilon}|(g_{*}^{\prime}-g_{*})h^{\prime}(\rho+\rho_{*})f(W_{l}^{\prime}-W_{l})|\mathrm{d}V\) and \(\mathcal{K}_{2}^{3}:=\int B_{3}^{\epsilon}|(g_{*}^{\prime}-g_{*})h^{\prime}(\rho+\rho_{*})f(W_{l}^{\prime}-W_{l})|\mathrm{d}V\). By order-1 Taylor expansion (2.1), (2.43) yields that
\[|\mathcal{K}_{2}^{1}|\lesssim\|\rho\|_{L^{\infty}}\int B_{1}^{\epsilon}|( \nabla g)(\iota(v_{*}))h^{\prime}f|W_{l-1}(\iota(v_{*}))W_{l-1}(v^{\prime})||v ^{\prime}-v|^{2}\mathrm{d}V\lesssim I_{3}\|g\|_{H^{1}_{l}}\|h\|_{L^{2}_{l}}\| \rho\|_{L^{\infty}}\|f\|_{L^{2}}.\]
Using (2.26) and (2.21), we have \(|\mathcal{K}_{2}^{3}|\lesssim I_{3}\|g\|_{L^{\infty}}\|h\|_{L^{2}_{l}}\|\rho\| _{L^{\infty}}\|f\|_{L^{2}}.\) As a result, by Sobolev embedding theorem, we get that \(|\mathcal{K}_{2}|\lesssim I_{3}\|g\|_{H^{2}_{l}}\|h\|_{L^{2}_{l}}\|\rho\|_{H^{ 2}}\|f\|_{L^{2}}.\) Similarly, we can also get \(|\mathcal{K}_{3}|\lesssim I_{3}\|g\|_{L^{2}_{l}}\|h\|_{H^{2}_{l}}\|\rho\|_{H^{ 2}}\|f\|_{L^{2}}.\) These are enough to conclude (2.73).
By the upper bound estimate (2.72) and the commutator estimate (2.73), we have the following upper bound estimate for \(R\) in weighted Sobolev spaces.
**Proposition 2.7**.: _Let \(l\geq 2\), then_
\[|\langle R(g,h,\rho),W_{l}f\rangle|\leq\epsilon^{3}C_{l}(I_{3}+I_{3}^{\prime}) \|g\|_{H^{2}_{l}}\|h\|_{H^{2}_{l}}\|\rho\|_{H^{2}}\|f\|_{L^{2}}. \tag{2.74}\]
Note that the small factor \(\epsilon^{3}\) is kept in (2.74), which shows that \(R\) is a smaller term if the involved functions are regular enough. Proposition 2.7 will be used in the last section to derive the asymptotic formula in Theorem 1.3.
## 3. Uniform upper bounds in weighted Sobolev space
In this section, we will close the energy estimate for the Uehling-Uhlenbeck operator \(Q^{\epsilon}_{UU}\) uniformly in \(\epsilon\) in the weighted Sobolev space \(H^{N}_{l}\). More precisely, we will derive
**Theorem 3.1**.: _Let \(N,l\geq 2\). Let \(f\geq 0\), then_
\[\sum_{|\alpha|\leq N}\langle W_{l}\partial^{\alpha}Q^{\epsilon}_{UU}(f),W_{l}\partial^{\alpha}f\rangle\lesssim_{N,l}(I_{0}+I_{3}+I_{3}^{\prime})\|f\|^{2}_{H^{N}_{l}}(\|f\|_{H^{N}_{l}}+\|f\|^{2}_{H^{N}_{l}}). \tag{3.1}\]
Proof.: Recalling (1.19) and (1.20), \(Q^{\epsilon}_{UU}(f)=Q(f,f)+R(f,f,f)\) and \(Q(f,f)=Q_{1}(f,f)+Q_{2}(f,f)+Q_{3}(f,f)\), we write
\[\partial^{\alpha}Q^{\epsilon}_{UU}(f)=\partial^{\alpha}Q(f,f)+\partial^{\alpha}R(f,f,f),\]
and
\[\partial^{\alpha}Q(f,f) = Q(f,\partial^{\alpha}f) \tag{3.2}\] \[+\sum_{\alpha_{1}+\alpha_{2}=\alpha,|\alpha_{1}|=1}C^{\alpha_{1}}_{\alpha}Q_{1}(\partial^{\alpha_{1}}f,\partial^{\alpha_{2}}f) \tag{3.3}\] \[+\sum_{\alpha_{1}+\alpha_{2}=\alpha,|\alpha_{1}|\geq 2}C^{\alpha_{1}}_{\alpha}Q_{1}(\partial^{\alpha_{1}}f,\partial^{\alpha_{2}}f) \tag{3.4}\] \[+\sum_{\alpha_{1}+\alpha_{2}=\alpha,|\alpha_{1}|\geq 1}C^{\alpha_{1}}_{\alpha}Q_{2}(\partial^{\alpha_{1}}f,\partial^{\alpha_{2}}f) \tag{3.5}\] \[+\sum_{\alpha_{1}+\alpha_{2}=\alpha,|\alpha_{1}|\geq 1}C^{\alpha_{1}}_{\alpha}Q_{3}(\partial^{\alpha_{1}}f,\partial^{\alpha_{2}}f). \tag{3.6}\]
Let us give the estimates term by term. We first observe that the term \(Q(f,\partial^{\alpha}f)\) in (3.2) can be dealt with by the forthcoming coercivity estimate (3.9). More precisely, using \(f\geq 0\),
\[\langle Q(f,\partial^{\alpha}f)W_{l},W_{l}\partial^{\alpha}f\rangle\leq C_{l}(I _{3}+I_{3}^{\prime})\|f\|_{H_{l}^{2}}\|\partial^{\alpha}f\|_{L_{l}^{2}}^{2}. \tag{3.7}\]
The term in (3.3) is handled by Lemma 3.5; this is referred to as the penultimate-order term. The terms in (3.4) and (3.5) are treated in Lemma 3.3 and Lemma 3.2, respectively. Then we conclude (3.1), since the terms \(Q_{3}(\partial^{\alpha_{1}}f,\partial^{\alpha_{2}}f)\) in (3.6) and \(\partial^{\alpha}R(f,f,f)\) have been estimated by Lemma 2.5 and Lemma 2.11.
In the following theorem, we derive two types of coercivity estimates of \(Q\).
**Theorem 3.2** (Coercivity estimate of \(Q\)).: _Let \(l\geq 2\). Let \(g\geq 0\). Then_
\[-\langle Q(g,f)W_{l},fW_{l}\rangle \geq \frac{1}{8}\int(B^{\epsilon}+B_{1}^{\epsilon})g_{*}((fW_{l})^{\prime}-fW_{l})^{2}\mathrm{d}V\] \[-C_{l}(I_{3}+I_{3}^{\prime})\|g\|_{L^{1}\cap L^{\infty}}\|f\|_{L_{l}^{2}}^{2}-C_{l}(I_{3}+I_{3}^{\prime})\|g\|_{L_{l}^{2}}\|f\|_{L^{1}\cap L^{\infty}}\|f\|_{L_{l}^{2}}, \tag{3.8}\] \[-\langle Q(g,f)W_{l},fW_{l}\rangle \geq \frac{1}{8}\int(B^{\epsilon}+B_{1}^{\epsilon})g_{*}((fW_{l})^{\prime}-fW_{l})^{2}\mathrm{d}V-C_{l}(I_{3}+I_{3}^{\prime})\|g\|_{H_{l}^{2}}\|f\|_{L_{l}^{2}}^{2}. \tag{3.9}\]
Proof.: We observe that \(-\langle Q(g,f)W_{l},fW_{l}\rangle=-\langle Q(g,fW_{l}),fW_{l}\rangle-\langle Q (g,f)W_{l}-Q(g,fW_{l}),fW_{l}\rangle\) and
\(-\langle Q(g,fW_{l}),fW_{l}\rangle=\frac{1}{2}\int B^{\epsilon}g_{*}((fW_{l})^{\prime}-fW_{l})^{2}\mathrm{d}V+\frac{1}{2}\int B^{\epsilon}g_{*}((f^{2}W_{l}^{2})-(f^{2}W_{l}^{2})^{\prime})\mathrm{d}V.\) We first notice that by (3.18) the commutator can be estimated as follows: for \(0<\eta<1\),
\[|\langle Q(g,f)W_{l}-Q(g,fW_{l}),fW_{l}\rangle|\] \[\lesssim_{l} \eta\int B_{1}^{\epsilon}g_{*}((fW_{l})^{\prime}-fW_{l})^{2} \mathrm{d}V+\eta^{-1}I_{3}\|g\|_{L^{1}\cap L^{\infty}}\|f\|_{L_{l}^{2}}^{2}+I _{3}\|g\|_{L_{l}^{2}}\|f\|_{L^{1}\cap L^{\infty}}\|f\|_{L_{l}^{2}}.\]
Then Lemma 2.6 and (2.21) imply that
\[\big{|}\int B^{\epsilon}g_{*}((f^{2}W_{l}^{2})-(f^{2}W_{l}^{2})^{\prime}) \mathrm{d}V\big{|}\lesssim(I_{3}+I_{3}^{\prime})\|g\|_{L^{\infty}}\|f\|_{L_{l }^{2}}^{2}.\]
By taking \(\eta\) small enough, we derive that
\[-\langle Q(g,f)W_{l},fW_{l}\rangle \geq \frac{3}{8}\bigg{(}\int B^{\epsilon}g_{*}((fW_{l})^{\prime}-fW_{ l})^{2}\mathrm{d}V\bigg{)}\] \[-C_{l}(I_{3}+I_{3}^{\prime})\|g\|_{L^{1}\cap L^{\infty}}\|f\|_{L_ {l}^{2}}^{2}-C_{l}(I_{3}+I_{3}^{\prime})\|g\|_{L_{l}^{2}}\|f\|_{L^{1}\cap L^{ \infty}}\|f\|_{L_{l}^{2}}.\]
Using the fact \(B^{\epsilon}\geq\frac{1}{2}B_{1}^{\epsilon}-B_{3}^{\epsilon}\), we get
\[\int B^{\epsilon}g_{*}((fW_{l})^{\prime}-fW_{l})^{2}\mathrm{d}V\geq\frac{1}{2} \int B_{1}^{\epsilon}g_{*}((fW_{l})^{\prime}-fW_{l})^{2}\mathrm{d}V-\int B_{3} ^{\epsilon}g_{*}((fW_{l})^{\prime}-fW_{l})^{2}\mathrm{d}V.\]
By (2.21), we have \(\int B_{3}^{\epsilon}g_{*}((fW_{l})^{\prime}-fW_{l})^{2}\mathrm{d}V\lesssim I_ {3}\|g\|_{L^{\infty}}\|f\|_{L_{l}^{2}}^{2}\). Then (3.8) follows. If (3.17) is applied to \(\langle Q(g,f)W_{l}-Q(g,fW_{l}),fW_{l}\rangle\), we will get (3.9).
**Remark 3.1**.: _Noting that \(B_{1}^{\epsilon}\geq\frac{1}{2}B^{\epsilon}-B_{3}^{\epsilon}\), the estimates in Theorem 3.2 still hold if \(Q\) is replaced by \(Q_{1}\)._
### Commutator estimates and weighted upper bounds of \(Q\) from the angular non-cutoff perspective
We have the following commutator estimates that are uniform in \(\epsilon\).
**Lemma 3.1**.: _For \(i=1,2,3\), we define \(\mathpzc{J}_{i}:=\langle Q_{i}(g,h)W_{l}-Q_{i}(g,hW_{l}),f\rangle\) and \(\mathpzc{J}:=\langle Q(g,h)W_{l}-Q(g,hW_{l}),f\rangle\)._
1. _If_ \(g\geq 0\)_, then_ (3.10) \[|\mathpzc{J}_{1}|\lesssim_{l}\eta\int B_{1}^{\epsilon}g_{*}(f^{\prime}-f)^{2}\mathrm{d}V+\eta^{-1}I_{3}\|g\|_{L^{1}\cap L^{2}}\|h\|_{L^{2}_{l-1}}^{2}+I_{3}(\|g\|_{L^{2}}\|h\|_{L^{2}}+\|g\|_{L^{1}\cap H^{1}}\|h\|_{L^{2}_{l-1}})\|f\|_{L^{2}},\] (3.11) \[|\mathpzc{J}_{1}|\lesssim_{l}\eta\int B_{1}^{\epsilon}g_{*}(f^{\prime}-f)^{2}\mathrm{d}V+\eta^{-1}I_{3}\|g\|_{L^{1}\cap L^{2}}\|h\|_{L^{2}_{l-1}}^{2}+I_{3}(\|g\|_{L^{2}_{l}}\|h\|_{L^{2}}+\|g\|_{L^{1}\cap L^{\infty}}\|h\|_{L^{2}_{l-1}})\|f\|_{L^{2}}.\] _In general, it holds that_ (3.12) \[|\mathpzc{J}_{1}|\lesssim_{l}I_{3}(\|g\|_{L^{2}_{0}}\|h\|_{H^{1}_{l-1}}+\|g\|_{L^{2}_{l}}\|h\|_{H^{1}_{l}})\|f\|_{L^{2}}.\]
2. _For_ \((a,b)=(1,0)\) _or_ \((0,1)\)_, then_ (3.13) \[|\mathpzc{J}_{2}|\lesssim_{l}I_{3}(\|g\|_{H^{a}}\|h\|_{H^{b}_{l-1}}+\|g\|_{H^{a}_{l-1}}\|h\|_{H^{b}})\|f\|_{L^{2}},\] (3.14) \[|\mathpzc{J}_{2}|\lesssim_{l}I_{3}(\|g\|_{L^{1}\cap L^{\infty}}\|h\|_{L^{2}_{l-1}}+\|g\|_{L^{2}_{l-1}}\|h\|_{L^{1}\cap L^{\infty}})\|f\|_{L^{2}}.\]
3. _For_ \(\delta>0\) _and_ \(c,d\geq 0\) _with_ \(c+d=\frac{3}{2}+\delta\)_, then_ (3.15) \[|\mathpzc{J}_{3}|\lesssim_{l,\delta}I_{3}(\|g\|_{H^{c}}\|h\|_{H^{d}_{1}}+\|g\|_{H^{c}_{1}}\|h\|_{H^{d}})\|f\|_{L^{2}},\] (3.16) \[|\mathpzc{J}_{3}|\lesssim_{l,\delta}I_{3}(\|g\|_{L^{\infty}}\|h\|_{L^{2}_{1}}+\|g\|_{L^{2}_{1}}\|h\|_{L^{\infty}})\|f\|_{L^{2}}.\]
4. _If_ \(g\geq 0\)_, then for_ \(\eta>0\)_,_ (3.17) \[|\mathpzc{J}|\lesssim_{l}\eta\int B^{\epsilon}g_{*}(f^{\prime}-f)^{2}\mathrm{d}V+\eta^{-1}I_{3}\|g\|_{L^{2}_{1}}\|h\|_{L^{2}_{l-1}}^{2}+I_{3}\|g\|_{H^{2}_{1}}\|h\|_{L^{2}_{1}}\|f\|_{L^{2}}.\] _In general,_ (3.18) \[|\mathpzc{J}| \lesssim_{l} \eta\int B^{\epsilon}_{1}g_{*}(f^{\prime}-f)^{2}\mathrm{d}V+\eta^{-1}I_{3}\|g\|_{L^{1}\cap L^{2}}\|h\|_{L^{2}_{l-1}}^{2}\] \[+I_{3}(\|g\|_{L^{1}\cap L^{\infty}}\|h\|_{L^{2}_{1}}+I_{3}\|g\|_{L^{2}_{1}}\|h\|_{L^{1}\cap L^{\infty}})\|f\|_{L^{2}}.\]
Proof.: It is easy to compute that
\[\mathpzc{J}_{i}=\int B^{\epsilon}_{i}g_{*}hf^{\prime}(W^{\prime}_{l}-W_{l})\mathrm{d}V. \tag{3.19}\]
_Step 1: Estimate of \(\mathpzc{J}_{1}\)._ We first consider the case that \(g\geq 0\). By the Taylor expansion (2.3) for \(W^{\prime}_{l}-W_{l}\), we have \(\mathpzc{J}_{1}=\mathpzc{J}_{1}^{1}+\mathpzc{J}_{1}^{2}\) where \(\mathpzc{J}_{1}^{1}:=\int B^{\epsilon}_{1}g_{*}hf^{\prime}(\nabla W_{l})(v)\cdot(v^{\prime}-v)\mathrm{d}V\) and \(\mathpzc{J}_{1}^{2}:=\int B^{\epsilon}_{1}g_{*}hf^{\prime}(1-\kappa)(\nabla^{2}W_{l})(\kappa(v)):(v^{\prime}-v)\otimes(v^{\prime}-v)\mathrm{d}\kappa\mathrm{d}V.\) For \(\mathpzc{J}_{1}^{1}\), we have
\[\mathpzc{J}_{1}^{1}=\int B^{\epsilon}_{1}g_{*}h(f^{\prime}-f)(\nabla W_{l})(v)\cdot(v^{\prime}-v)\mathrm{d}V+\int B^{\epsilon}_{1}g_{*}hf(\nabla W_{l})(v)\cdot(v^{\prime}-v)\mathrm{d}V=:\mathpzc{J}_{1}^{1,1}+\mathpzc{J}_{1}^{1,2}.\]
(2.43) implies that
\[|\mathpzc{J}_{1}^{1,1}| \lesssim \bigg{(}\int B^{\epsilon}_{1}g_{*}(f^{\prime}-f)^{2}\mathrm{d}V\bigg{)}^{\frac{1}{2}}\bigg{(}\int B^{\epsilon}_{1}g_{*}|hW_{l-1}|^{2}|v^{\prime}-v|^{2}\mathrm{d}V\bigg{)}^{\frac{1}{2}}\] \[\lesssim \eta\int B^{\epsilon}_{1}g_{*}(f^{\prime}-f)^{2}\mathrm{d}V+\eta^{-1}I_{3}\|g\|_{L^{1}\cap L^{2}}\|h\|_{L^{2}_{l-1}}^{2}.\]
Using (2.5), (2.7) and (2.10), we have
\[|\mathpzc{J}_{1}^{1,2}| = \big{|}\int B^{\epsilon}_{1}g_{*}hf(\nabla W_{l})(v)\cdot(v-v_{*})\sin^{2}(\theta/2)\mathrm{d}V\big{|}\] \[\lesssim I_{3}\int|v-v_{*}|^{-2}g_{*}|hW_{l-1}|f\mathrm{d}v_{*}\mathrm{d}v\lesssim I_{3}\|g\|_{H^{1}}\|h\|_{L^{2}_{l-1}}\|f\|_{L^{2}}\text{ or }I_{3}\|g\|_{L^{2}\cap L^{\infty}}\|h\|_{L^{2}_{l-1}}\|f\|_{L^{2}}.\]
For \(\mathpzc{J}_{1}^{2}\), by applying (2.43), one has
\[|\mathpzc{J}_{1}^{2}|\lesssim\int B^{\epsilon}_{1}|g_{*}||h||f^{\prime}||v^{\prime}-v|^{2}(W_{l-2}+(W_{l-2})_{*})\mathrm{d}V\lesssim I_{3}(\|g\|_{L^{2}_{l}}\|h\|_{L^{2}}+\|g\|_{L^{1}\cap L^{2}}\|h\|_{L^{2}_{l-2}})\|f\|_{L^{2}}. \tag{3.20}\]
Patching together the above estimates, we get (3.10) and (3.11) by using \(\|g\|_{L^{2}}\lesssim\|g\|_{L^{1}}+\|g\|_{L^{\infty}}\).
We next turn to the general case. Using (2.4) for \(W^{\prime}_{l}-W_{l}\), we have \(\mathpzc{J}_{1}=\mathpzc{J}_{1}^{3}+\mathpzc{J}_{1}^{4}\) where \(\mathpzc{J}_{1}^{3}:=\int B^{\epsilon}_{1}g_{*}hf^{\prime}(\nabla W_{l})(v^{\prime})\cdot(v^{\prime}-v)\mathrm{d}V\) and \(\mathpzc{J}_{1}^{4}:=-\int B^{\epsilon}_{1}g_{*}hf^{\prime}\kappa(\nabla^{2}W_{l})(\kappa(v)):(v^{\prime}-v)\otimes(v^{\prime}-v)\mathrm{d}\kappa\mathrm{d}V.\) For \(\mathpzc{J}_{1}^{3}\), thanks to (2.6), we observe that
\[|\mathpzc{J}_{1}^{3}|=\big{|}\int B^{\epsilon}_{1}g_{*}(h-h^{\prime})f^{\prime}(\nabla W_{l})(v^{\prime})\cdot(v^{\prime}-v)\mathrm{d}V\big{|}.\]
Then by order-1 Taylor expansion for \(h-h^{\prime}\), suitably using (2.43), we get
\[|\mathpzc{J}_{1}^{3}| \lesssim \int B^{\epsilon}_{1}|g_{*}||(\nabla h)(\kappa(v))||f^{\prime}||v^{\prime}-v|^{2}(W_{l-1}(v_{*})+W_{l-1}(\kappa(v)))\mathrm{d}\kappa\mathrm{d}V\] \[\lesssim I_{3}\|g\|_{L^{2}_{l}}\|h\|_{H^{1}_{1}}\|f\|_{L^{2}}+I_{3}\|g\|_{L^{2}_{2}}\|h\|_{H^{1}_{l-1}}\|f\|_{L^{2}}.\]
By comparing \(\mathpzc{J}_{1}^{2}\) and \(\mathpzc{J}_{1}^{4}\), it is not difficult to see that \(\mathpzc{J}_{1}^{4}\) is also bounded by (3.20). We get (3.12).
_Step 2: Estimate of \(\mathpzc{J}_{2}\)._ Thanks to the Taylor expansion (2.3) for \(W^{\prime}_{l}-W_{l}\), we deduce that
\[|\mathpzc{J}_{2}|\lesssim\int|B^{\epsilon}_{2}||g_{*}||h||f^{\prime}||v-v^{\prime}|\big{(}W_{l-1}+(W_{l-1})_{*}\big{)}\mathrm{d}V.\]
Then the estimates (2.42) and (2.20) yield that
\[|\mathpzc{J}_{2}| \lesssim \bigg{(}\int|B_{1}^{\epsilon}|(|g_{*}|^{2}|W_{l-1}h|^{2}+|(W_{l-1}g)_{*}|^{2}|h|^{2})|v-v^{\prime}|^{2}\mathrm{d}V\bigg{)}^{\frac{1}{2}}\bigg{(}\int|B_{3}^{\epsilon}||f^{\prime}|^{2}\mathrm{d}V\bigg{)}^{\frac{1}{2}}\] \[\lesssim I_{3}\bigg{(}\int(|g_{*}|^{2}|W_{l-1}h|^{2}+|(W_{l-1}g)_{*}|^{2}|h|^{2})|v-v_{*}|^{-1}\mathrm{d}v\mathrm{d}v_{*}\bigg{)}^{\frac{1}{2}}\|f\|_{L^{2}}.\]
Now we can use (2.11) to get (3.13) and use (2.9) to get (3.14).
_Step 3: Estimate of \(\mathpzc{J}_{3}\)._ Since \(W_{l}^{\prime}+W_{l}\lesssim W_{l}+(W_{l})_{*}\), (2.21) implies (3.16). Similarly, using (2.21) and (2.22), one can easily get (3.15).
Obviously, (3.17) and (3.18) follow from (3.10), (3.13), (3.15) and from (3.11), (3.14), (3.16), respectively.
With the upper bounds in the previous section and the commutator estimates in Lemma 3.1, we are ready to state the following weighted upper bounds for \(Q_{1}\) and \(Q_{2}\).
**Proposition 3.1**.: _Let \(l\geq 2\), \((a,b)=(1,0)\) or \((0,1)\), then_
\[|\langle Q_{1}(g,h),W_{l}f\rangle|\lesssim_{l}I_{3}\|g\|_{L^{2}_{l}}\|h\|_{H_{l}^{2}}\|f\|_{L^{2}}, \tag{3.21}\] \[|\langle Q_{2}(g,h),W_{l}f\rangle|\lesssim_{l}(I_{3}+I_{3}^{\prime})\|g\|_{H_{l}^{a}}\|h\|_{H_{l}^{b+1}}\|f\|_{L^{2}}. \tag{3.22}\]
Proof.: It is not difficult to conclude that (3.21) and (3.22) follow from (2.65), (3.12) and from (2.67), (3.13), respectively.
Now we are ready to state the weighted upper bound of the operator \(Q_{UU}^{\epsilon}\).
**Theorem 3.3**.: _Let \(N,l\geq 2\), then_
\[\|Q_{UU}^{\epsilon}(f)\|_{L^{2}_{l}}\leq C_{l}(I_{3}+I_{3}^{\prime})\|f\|_{H_{l}^{2}}^{2}+\epsilon^{3}C_{l}(I_{3}+I_{3}^{\prime})\|f\|_{H_{l}^{2}}^{3}, \tag{3.23}\] \[\|Q_{UU}^{\epsilon}(f)\|_{H_{l}^{N-2}}\leq C_{l,N}(I_{3}+I_{3}^{\prime})\|f\|_{H_{l}^{N}}^{2}+\epsilon^{3}C_{l,N}(I_{3}+I_{3}^{\prime})\|f\|_{H_{l}^{N}}^{3}. \tag{3.24}\]
Proof.: Recalling \(Q_{UU}^{\epsilon}(f)=Q_{1}(f,f)+Q_{2}(f,f)+Q_{3}(f,f)+R(f,f,f)\), by (3.21), (3.22), (2.25) and (2.74), we derive (3.23). Of course, (3.24) is a direct result of (3.23).
Note that (3.22) allows us to consider \(\langle Q_{2}(\partial^{\alpha_{1}}g,\partial^{\alpha_{2}}f)W_{l},W_{l} \partial^{\alpha}f\rangle\) with \(|\alpha_{1}|\geq 1\).
**Lemma 3.2**.: _Let \(1\leq m=|\alpha|\leq N\), then_
\[\sum_{\alpha_{1}+\alpha_{2}=\alpha,|\alpha_{1}|\geq 1}|\langle Q_{2}(\partial^{ \alpha_{1}}g,\partial^{\alpha_{2}}f)W_{l},W_{l}\partial^{\alpha}f\rangle| \lesssim_{N,l}(I_{3}+I_{3}^{\prime})\|g\|_{H_{l}^{N}}\|f\|_{H_{l}^{N}}^{2}.\]
Proof.: If \(|\alpha_{1}|\geq 2\), then \(|\alpha_{2}|\leq m-2\). By taking \(s_{1}=0,s_{2}=2,s_{3}=0\) in (2.25) and \(a=0,b=1\) in (3.22), we get that
\[|\langle Q_{2}(\partial^{\alpha_{1}}g,\partial^{\alpha_{2}}f)W_{l},W_{l} \partial^{\alpha}f\rangle|\lesssim_{l}(I_{3}+I_{3}^{\prime})\|\partial^{\alpha_ {1}}g\|_{L^{2}_{l}}\|\partial^{\alpha_{2}}f\|_{H_{l}^{2}}\|\partial^{\alpha}f\|_ {L^{2}_{l}}\lesssim_{l}(I_{3}+I_{3}^{\prime})\|g\|_{H_{l}^{m}}\|f\|_{H_{l}^{m}} ^{2}.\]
If \(|\alpha_{1}|=1\), then \(|\alpha_{2}|\leq m-1\). By taking \(s_{1}=1,s_{2}=1,s_{3}=0\) in (2.25) and \(a=1,b=0\) in (3.22), we get
\[|\langle Q_{2}(\partial^{\alpha_{1}}g,\partial^{\alpha_{2}}f)W_{l},W_{l} \partial^{\alpha}f\rangle|\lesssim_{l}(I_{3}+I_{3}^{\prime})\|\partial^{\alpha_ {1}}g\|_{H_{l}^{1}}\|\partial^{\alpha_{2}}f\|_{H_{l}^{1}}\|\partial^{\alpha}f\|_ {L^{2}_{l}}\lesssim_{l}(I_{3}+I_{3}^{\prime})\|g\|_{H_{l}^{2}}\|f\|_{H_{l}^{m}} ^{2}.\]
These are enough to conclude the desired result.
Note that (3.21) allows us to consider \(\langle Q_{1}(\partial^{\alpha_{1}}g,\partial^{\alpha_{2}}f)W_{l},W_{l} \partial^{\alpha}f\rangle\) with \(|\alpha_{1}|\geq 2\).
**Lemma 3.3**.: _Let \(2\leq m=|\alpha|\leq N\), then_
\[\sum_{\alpha_{1}+\alpha_{2}=\alpha,|\alpha_{1}|\geq 2}|\langle Q_{1}(\partial^{ \alpha_{1}}g,\partial^{\alpha_{2}}f)W_{l},W_{l}\partial^{\alpha}f\rangle| \lesssim_{N,l}I_{3}\|g\|_{H_{l}^{m}}\|f\|_{H_{l}^{m}}^{2}.\]
Proof.: Since \(|\alpha_{1}|\geq 2\), \(|\alpha_{2}|\leq m-2\), using (3.21), we have
\[|\langle Q_{1}(\partial^{\alpha_{1}}g,\partial^{\alpha_{2}}f)W_{l},W_{l} \partial^{\alpha}f\rangle|\lesssim_{l}I_{3}\|\partial^{\alpha_{1}}g\|_{L^{2}_{l}} \|\partial^{\alpha_{2}}f\|_{H_{l}^{2}}\|\partial^{\alpha}f\|_{L^{2}_{l}} \lesssim_{l}I_{3}\|g\|_{H_{l}^{m}}\|f\|_{H_{l}^{m}}^{2}.\]
The desired result follows by summation.
### The penultimate order terms
In this subsection, we deal with the penultimate-order terms \(\langle Q_{1}(\partial^{\alpha_{1}}g,\partial^{\alpha_{2}}f)W_{l},W_{l}\partial^{\alpha}f\rangle\) where \(|\alpha_{1}|=1\). The forthcoming lemma is motivated by [11]. For the convenience of interested readers, we reproduce its proof in a clearer way. The proof is based on some basic properties of Boltzmann-type integrals and integration by parts.
**Lemma 3.4**.: _For simplicity, let \(\tilde{\partial}_{i}:=\partial^{e_{i}}+\partial_{*}^{e_{i}}\) where \(\partial^{e_{i}}=\partial_{v_{i}},\partial_{*}^{e_{i}}=\partial_{(v_{i})_{*}}\), for a unit index \(|e_{i}|=1\). For a general function depending on the variables \(g=g(v,v_{*},v^{\prime},v^{\prime}_{*})\), let \(g^{\prime}=g(v^{\prime},v^{\prime}_{*},v,v_{*})\). For a general kernel \(B=B(|v-v_{*}|,\cos\theta)\), and a general function \(G=G(v)\), it holds that_
\[\int Bg(G^{\prime}-G)\tilde{\partial}_{i}G\mathrm{d}V=\frac{1}{4}\int B\tilde {\partial}_{i}g(G^{\prime}-G)^{2}\mathrm{d}V+\frac{1}{2}\int B(g-g^{\prime})( G^{\prime}-G)\tilde{\partial}_{i}G\mathrm{d}V.\]
Proof.: We first observe that
\[\tilde{\partial}_{i}B=0,\quad\tilde{\partial}_{i}G=\partial^{e_{i}}G,\quad \tilde{\partial}_{i}G_{*}=(\partial^{e_{i}}G)_{*},\quad\tilde{\partial}_{i}G^ {\prime}=(\partial^{e_{i}}G)^{\prime},\quad\tilde{\partial}_{i}G^{\prime}_{*} =(\partial^{e_{i}}G)^{\prime}_{*}.\]
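These identities can be checked directly; here is a minimal verification, assuming the standard \(\sigma\)-representation of the post-collisional velocities, \(v^{\prime}=\frac{v+v_{*}}{2}+\frac{|v-v_{*}|}{2}\sigma\) and \(v^{\prime}_{*}=\frac{v+v_{*}}{2}-\frac{|v-v_{*}|}{2}\sigma\). Since \(\tilde{\partial}_{i}(v-v_{*})=0\), both \(|v-v_{*}|\) and \(\cos\theta=\sigma\cdot\frac{v-v_{*}}{|v-v_{*}|}\) are annihilated by \(\tilde{\partial}_{i}\), whence \(\tilde{\partial}_{i}B=0\); moreover,
\[\bigl(\partial_{v_{i}}+\partial_{(v_{*})_{i}}\bigr)v^{\prime}_{j}=\bigl(\partial_{v_{i}}+\partial_{(v_{*})_{i}}\bigr)\frac{v_{j}+(v_{*})_{j}}{2}=\delta_{ij},\]
so the chain rule gives \(\tilde{\partial}_{i}G^{\prime}=(\partial^{e_{i}}G)^{\prime}\), and similarly for \(v^{\prime}_{*}\).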
As a result, when integration by parts is used, no derivative falls on \(B\). Moreover, applying \(\tilde{\partial}_{i}\) commutes with evaluation at \(v_{*},v^{\prime},v^{\prime}_{*}\), as displayed above. Using these facts, we have
\[Bg(G^{\prime}-G)\tilde{\partial}_{i}G=-B\tilde{\partial}_{i}g(G^{\prime}-G)G-Bg(\tilde{\partial}_{i}G^{\prime}-\tilde{\partial}_{i}G)G \tag{3.25}\] \[=B\tilde{\partial}_{i}g(G-G^{\prime})^{2}+B\tilde{\partial}_{i}g(G-G^{\prime})G^{\prime}-Bg(\tilde{\partial}_{i}G^{\prime}-\tilde{\partial}_{i}G)G \tag{3.26}\] \[=B\tilde{\partial}_{i}g(G-G^{\prime})^{2}-Bg(\tilde{\partial}_{i}G-\tilde{\partial}_{i}G^{\prime})G^{\prime}-Bg(G-G^{\prime})\tilde{\partial}_{i}G^{\prime}+Bg(\tilde{\partial}_{i}G-\tilde{\partial}_{i}G^{\prime})G \tag{3.27}\] \[=B\tilde{\partial}_{i}g(G-G^{\prime})^{2}-2Bg(G-G^{\prime})\tilde{\partial}_{i}G^{\prime}+Bg\tilde{\partial}_{i}G(G-G^{\prime}) \tag{3.28}\] \[=B\tilde{\partial}_{i}g(G-G^{\prime})^{2}-2Bg^{\prime}(G^{\prime}-G)\tilde{\partial}_{i}G-Bg(G^{\prime}-G)\tilde{\partial}_{i}G \tag{3.29}\] \[=B\tilde{\partial}_{i}g(G-G^{\prime})^{2}-2B(g^{\prime}-g)(G^{\prime}-G)\tilde{\partial}_{i}G-3Bg(G^{\prime}-G)\tilde{\partial}_{i}G. \tag{3.30}\]
Here, for the equality in (3.25), we use the integration by parts formula to transfer \(\tilde{\partial}_{i}\) to the other functions. For the equality from (3.26) to (3.27), we use the integration by parts formula to deal with the middle term in line (3.26). For the equality from (3.28) to (3.29), we use the change of variable \((v,v_{*},\sigma)\to(v^{\prime},v^{\prime}_{*},\sigma^{\prime})\) to deal with the last two terms in (3.28). The other changes are easy to verify. We conclude the result by moving the last term of (3.30) to the left-hand side of (3.25).
Now we are ready to estimate the penultimate order terms.
**Lemma 3.5**.: _Let \(1\leq m=|\alpha|\leq N\), then_
\[\sum_{\alpha_{1}+\alpha_{2}=\alpha,|\alpha_{1}|=1}|\langle Q_{1}(\partial^{ \alpha_{1}}g,\partial^{\alpha_{2}}f)W_{l},W_{l}\partial^{\alpha}f\rangle| \lesssim_{N,l}I_{3}\|g\|_{H^{2}_{l}}\|f\|_{H^{m}_{l}}^{2}.\]
Proof.: Let \(\mathcal{A}=|\langle Q_{1}(\partial^{\alpha_{1}}g,\partial^{\alpha_{2}}f)W_{l},W_{l}\partial^{\alpha}f\rangle|\). It is easy to see that \(\mathcal{A}\leq|\mathcal{A}_{1}|+|\mathcal{A}_{2}|\) where \(\mathcal{A}_{1}=\langle Q_{1}(\partial^{\alpha_{1}}g,W_{l}\partial^{\alpha_{2}}f),W_{l}\partial^{\alpha}f\rangle\) and \(\mathcal{A}_{2}=\langle Q_{1}(\partial^{\alpha_{1}}g,\partial^{\alpha_{2}}f)W_{l}-Q_{1}(\partial^{\alpha_{1}}g,W_{l}\partial^{\alpha_{2}}f),W_{l}\partial^{\alpha}f\rangle.\) By (3.12) in Lemma 3.1, as \(|\alpha_{2}|=m-1\), we have
\[|\mathcal{A}_{2}|\lesssim_{l}I_{3}(\|W_{l}\partial^{\alpha_{1}}g\|_{L^{2}}\|\partial^{\alpha_{2}}f\|_{H^{1}_{1}}+\|\partial^{\alpha_{1}}g\|_{L^{2}_{2}}\|W_{l}\partial^{\alpha_{2}}f\|_{H^{1}_{-1}})\|W_{l}\partial^{\alpha}f\|_{L^{2}}\lesssim_{l}I_{3}\|g\|_{H^{2}_{l}}\|f\|_{H^{m}_{l}}^{2}.\]
Since \(|\alpha_{1}|=1\), we write \(\alpha_{1}=e_{i}\) for some \(1\leq i\leq 3\). Let \(\partial^{e_{i}}=\partial_{v_{i}}\) and \(F=\partial^{\alpha_{2}}f=\partial^{\alpha-e_{i}}f\), then \(\partial^{\alpha}f=\partial^{e_{i}}F\) and
\[\mathcal{A}_{1}=\langle Q_{1}(\partial^{e_{i}}g,W_{l}F),\partial^{e_{i}}(W_{l}F) \rangle+\langle Q_{1}(\partial^{e_{i}}g,W_{l}F),W_{l}\partial^{e_{i}}F-\partial ^{e_{i}}(W_{l}F)\rangle. \tag{3.31}\]
\(\bullet\) Let \(G=W_{l}F\), then the first term of the r.h.s of (3.31) can be written as
\[\langle Q_{1}(\partial^{e_{i}}g,G),\partial^{e_{i}}G\rangle=\int B_{1}^{*}( \partial^{e_{i}}g)^{\prime}_{*}(G^{\prime}-G)\partial^{e_{i}}G\mathrm{d}V+\int B _{1}^{*}((\partial^{e_{i}}g)^{\prime}_{*}-(\partial^{e_{i}}g)_{*})G\partial^{e _{i}}G\mathrm{d}V. \tag{3.32}\]
For the second term in (3.32), we use (2.31) to get
\[|\int B_{1}^{*}((\partial^{e_{i}}g)^{\prime}_{*}-(\partial^{e_{i}}g)_{*})G \partial^{e_{i}}G\mathrm{d}V|\lesssim I_{3}\|\partial^{e_{i}}g\|_{H^{1}}\|G \|_{H^{1}}\|\partial^{e_{i}}G\|_{L^{2}}\lesssim_{I}I_{3}\|g\|_{H^{2}}\|f\|_{H^{m}_ {l}}^{2}.\]
By Lemma 3.4, the first term in (3.32) is
\[\frac{1}{4}\int B_{1}^{*}(\partial^{2e_{i}}g)_{*}(G^{\prime}-G)^{2}\mathrm{d}V+ \frac{1}{2}\int B((\partial^{e_{i}}g)^{\prime}_{*}-(\partial^{e_{i}}g)_{*})(G^{ \prime}-G)\partial^{e_{i}}G\mathrm{d}V. \tag{3.33}\]
By Taylor expansion to \(G\), (2.43) gives that
\[\int B_{1}^{*}(\partial^{2e_{i}}g)_{*}(G^{\prime}-G)^{2}\mathrm{d}V\lesssim I_{3} \|g\|_{H^{2}_{l}}\|G\|_{H^{1}}\lesssim_{I}I_{3}\|g\|_{H^{2}_{l}}\|f\|_{H^{m}_{l}}^ {2}.\]
Apply order-1 Taylor expansion to \(\partial^{e_{i}}g\) and \(G\) and use (2.43), then the second term in (3.33) is bounded by
\[|\int B_{1}^{\epsilon}|(\nabla\partial^{e_{i}}g)(\iota(v_{*}))||(\nabla G)(\kappa (v))||\partial^{e_{i}}G|\mathrm{d}V\mathrm{d}\kappa\mathrm{d}\iota|\lesssim I_ {3}\|\nabla\partial^{e_{i}}g\|_{L^{2}_{2}}\|\nabla G\|_{L^{2}}\|G\|_{L^{2}} \lesssim I_{3}\|g\|_{H^{2}_{2}}\|f\|^{2}_{H^{m}_{l}}.\]
\(\bullet\) Since \(W_{l}\partial^{e_{i}}F-\partial^{e_{i}}(W_{l}F)=-F\partial^{e_{i}}W_{l}\), the second term in the r.h.s of (3.31) is
\[-\langle Q_{1}(\partial^{e_{i}}g,W_{l}F),F\partial^{e_{i}}W_{l}\rangle = \frac{1}{2}\int B_{1}(\partial^{e_{i}}g)_{*}((W_{l}F)^{\prime}-W_ {l}F)((F\partial^{e_{i}}W_{l})^{\prime}-F\partial^{e_{i}}W_{l})\mathrm{d}V\] \[+\frac{1}{2}\int B_{1}((\partial^{e_{i}}g)^{\prime}_{*}-(\partial ^{e_{i}}g)_{*})W_{l}F((F\partial^{e_{i}}W_{l})^{\prime}-F\partial^{e_{i}}W_{l })\mathrm{d}V.\]
By similar argument, the r.h.s can be bounded by \(I_{3}\|g\|_{H^{2}_{2}}\|f\|^{2}_{H^{m}_{l}}\) and \(I_{3}\|g\|_{H^{2}_{2}}\|f\|^{2}_{H^{m}_{l}}\). We are led to the desired result.
## 4. Well-posedness and propagation of regularity
In this section, we will prove part of Theorem 1.1 for Fermi-Dirac particles as well as Theorem 1.2 for Bose-Einstein particles.
### Fermi-Dirac particles
In this subsection, we will prove the first two results in Theorem 1.1. We start with the mild solution since in this situation the non-negativity can be easily proved. We recall that \(\partial_{t}f=Q^{\epsilon}_{UU}(f)\), where \(Q^{\epsilon}_{UU}\) is defined by
\[Q^{\epsilon}_{UU}(f)=\int_{\mathbb{R}^{3}\times\mathbb{S}^{2}}B^{\epsilon}\Pi^ {\epsilon}(f)\mathrm{d}\sigma\mathrm{d}v_{*}, \tag{4.1}\]
where
\[\Pi^{\epsilon}(f):=f^{\prime}_{*}f^{\prime}(1-\epsilon^{3}f_{*})(1-\epsilon^{ 3}f)-f_{*}f(1-\epsilon^{3}f^{\prime}_{*})(1-\epsilon^{3}f^{\prime}).\]
As usual, we define the gain and loss terms by
\[Q^{\epsilon,+}(f)=\int_{\mathbb{R}^{3}\times\mathbb{S}^{2}}B^{\epsilon}\Pi^{ \epsilon,+}(f)\mathrm{d}\sigma\mathrm{d}v_{*},\quad Q^{\epsilon,-}(f)=\int_{ \mathbb{R}^{3}\times\mathbb{S}^{2}}B^{\epsilon}\Pi^{\epsilon,-}(f)\mathrm{d} \sigma\mathrm{d}v_{*}, \tag{4.2}\]
where
\[\Pi^{\epsilon,+}(f):=f^{\prime}_{*}f^{\prime}(1-\epsilon^{3}f_{*})(1-\epsilon^ {3}f),\quad\Pi^{\epsilon,-}(f):=f_{*}f(1-\epsilon^{3}f^{\prime}_{*})(1- \epsilon^{3}f^{\prime}).\]
We now prove that the initial value problem admits a mild solution if \(f_{0}\in L^{1}\) with \(0\leq f_{0}\leq\epsilon^{-3}\). To do that, we prove the following lemma.
**Lemma 4.1**.: _Let \(f,g\in L^{1}\cap L^{\infty},0\leq f,g\leq\epsilon^{-3}\), then_
\[\|Q^{\epsilon}_{UU}(f)\|_{L^{1}}\lesssim\epsilon^{-3}I_{0}\|f\|^{2}_{L^{1}}, \tag{4.3}\] \[\|Q^{\epsilon}_{UU}(f)\|_{L^{\infty}}\lesssim\epsilon^{-6}I_{0}\|f\|_{L^{1}}+\epsilon^{-6}I_{3}, \tag{4.4}\] \[\|Q^{\epsilon}_{UU}(f)-Q^{\epsilon}_{UU}(g)\|_{L^{1}}\lesssim(\epsilon^{-3}I_{0}+I_{3})(\|f\|_{L^{1}}+\|g\|_{L^{1}}+\epsilon^{-3})\|f-g\|_{L^{1}}. \tag{4.5}\]
Proof.: By (4.1) and (4.2), we have
\[Q^{\epsilon}_{UU}(f)=Q^{\epsilon,+}(f)-Q^{\epsilon,-}(f). \tag{4.6}\]
From (4.6) and the change of variable (2.14) in the special case \(\kappa=\iota=1\), we get that
\[\|Q^{\epsilon}_{UU}(f)\|_{L^{1}}\leq\|Q^{\epsilon,+}(f)\|_{L^{1}}+\|Q^{ \epsilon,-}(f)\|_{L^{1}}=2\|Q^{\epsilon,-}(f)\|_{L^{1}}. \tag{4.7}\]
Since \(0\leq f\leq\epsilon^{-3}\), (2.18) implies that
\[\|Q^{\epsilon,-}(f)\|_{L^{1}}=\int B^{\epsilon}f_{*}f(1-\epsilon^{3}f^{\prime}_{*})(1-\epsilon^{3}f^{\prime})\mathrm{d}\sigma\mathrm{d}v_{*}\mathrm{d}v\leq\int B^{\epsilon}f_{*}f\mathrm{d}\sigma\mathrm{d}v_{*}\mathrm{d}v\lesssim\epsilon^{-3}I_{0}\|f\|^{2}_{L^{1}}. \tag{4.8}\]
This gives (4.3).
From (4.6), it holds that \(\|Q^{\epsilon}_{UU}(f)\|_{L^{\infty}}\leq\|Q^{\epsilon,+}(f)\|_{L^{\infty}}+\| Q^{\epsilon,-}(f)\|_{L^{\infty}}.\) Since \(0\leq f\leq\epsilon^{-3}\), by using (2.18), we obtain that
\[Q^{\epsilon,-}(f)(v)=\int B^{\epsilon}f_{*}f(1-\epsilon^{3}f^{\prime}_{*})(1- \epsilon^{3}f^{\prime})\mathrm{d}\sigma\mathrm{d}v_{*}\leq f\int B^{\epsilon}f _{*}\mathrm{d}\sigma\mathrm{d}v_{*}=A^{\epsilon}f\|f\|_{L^{1}}\lesssim\epsilon^{- 6}I_{0}\|f\|_{L^{1}}. \tag{4.9}\]
For \(Q^{\epsilon,+}(f)(v)\), we use \(B^{\epsilon}\leq 2B^{\epsilon}_{1}+2B^{\epsilon}_{3}\) to get
\[Q^{\epsilon,+}(f)(v)=\int B^{\epsilon}f^{\prime}_{*}f^{\prime}(1-\epsilon^{3}f_{*}) (1-\epsilon^{3}f)\mathrm{d}\sigma\mathrm{d}v_{*}\leq 2\epsilon^{-3}\int B^{\epsilon}_{1}f^{\prime}_{*} \mathrm{d}\sigma\mathrm{d}v_{*}+2\epsilon^{-6}\int B^{\epsilon}_{3}\mathrm{d} \sigma\mathrm{d}v_{*}. \tag{4.10}\]
By (4.10), (2.19) and (2.39), we arrive at
\[Q^{\epsilon,+}(f)(v)\lesssim\epsilon^{-6}I_{0}\|f\|_{L^{1}}+\epsilon^{-6}I_{3}. \tag{4.11}\]
Patching together (4.9) and (4.11) will give (4.4).
From (4.6), we notice that \(Q^{\epsilon}_{UU}(f)-Q^{\epsilon}_{UU}(g)=Q^{\epsilon,+}(f)-Q^{\epsilon,+}(g) -(Q^{\epsilon,-}(f)-Q^{\epsilon,-}(g))\,.\) Recalling (4.2), by the change of variable (2.14) in the special case \(\kappa=\iota=1\), we get that
\[\|Q^{\epsilon}_{UU}(f)-Q^{\epsilon}_{UU}(g)\|_{L^{1}}\leq\|Q^{ \epsilon,+}(f)-Q^{\epsilon,+}(g)\|_{L^{1}}+\|Q^{\epsilon,-}(f)-Q^{\epsilon,-}( g)\|_{L^{1}}\] \[\leq\int B^{\epsilon}|\Pi^{\epsilon,+}(f)-\Pi^{\epsilon,+}(g)| \mathrm{d}V+\int B^{\epsilon}|\Pi^{\epsilon,-}(f)-\Pi^{\epsilon,-}(g)|\mathrm{ d}V=2\int B^{\epsilon}|\Pi^{\epsilon,-}(f)-\Pi^{\epsilon,-}(g)|\mathrm{d}V. \tag{4.12}\]
Observe that
\[\Pi^{\epsilon,-}(f)-\Pi^{\epsilon,-}(g) = (f_{*}-g_{*})f(1-\epsilon^{3}f_{*}^{\prime})(1-\epsilon^{3}f^{ \prime})+g_{*}(f-g)(1-\epsilon^{3}f_{*}^{\prime})(1-\epsilon^{3}f^{\prime})\] \[+\epsilon^{3}g_{*}g(g_{*}^{\prime}-f_{*}^{\prime})(1-\epsilon^{3 }f^{\prime})+\epsilon^{3}g_{*}g(1-\epsilon^{3}g_{*}^{\prime})(g^{\prime}-f^{ \prime}).\]
If \(0\leq f,g\leq\epsilon^{-3}\), then \(|\Pi^{\epsilon,-}(f)-\Pi^{\epsilon,-}(g)|\leq|f_{*}-g_{*}|f+g_{*}|f-g|+g|g_{* }^{\prime}-f_{*}^{\prime}|+g_{*}|g^{\prime}-f^{\prime}|.\) Therefore,
\[\int B^{\epsilon}|\Pi^{\epsilon,-}(f)-\Pi^{\epsilon,-}(g)|\mathrm{d}V\leq\int B ^{\epsilon}(|f_{*}-g_{*}|f+g_{*}|f-g|)\mathrm{d}V+2\int B^{\epsilon}g_{*}|g^ {\prime}-f^{\prime}|\mathrm{d}V.\]
By using (2.18), (2.20) and (2.39), we get that
\[\int B^{\epsilon}|\Pi^{\epsilon,-}(f)-\Pi^{\epsilon,-}(g)|\mathrm{d}V\lesssim (\epsilon^{-3}I_{0}+I_{3})(\|f\|_{L^{1}}+\|g\|_{L^{1}}+\|g\|_{L^{\infty}})\|f -g\|_{L^{1}},\]
which yields (4.5).
Proof of Theorem 1.1: (Well-posedness part).: We recall that \(\mathcal{A}_{T}:=L^{\infty}([0,T];L^{1}(\mathbb{R}^{3}))\) associated with the norm \(\|f\|_{T}:=\sup\limits_{0\leq t\leq T}\|f(t)\|_{L^{1}}\). We define the operator \(J^{\epsilon}(\cdot)\) on \(\mathcal{A}_{T}\) by
\[J^{\epsilon}(f)(t,v):=f_{0}(v)+\int_{0}^{t}Q^{\epsilon}_{UU}(|f|\wedge \epsilon^{-3})(\tau,v)\mathrm{d}\tau.\]
\(\bullet\)\(J^{\epsilon}(\cdot)\) maps \(\mathcal{A}_{T}\) into itself. It is easy to check that \(\|J^{\epsilon}(f)\|_{T}\leq\|f_{0}\|_{L^{1}}+T\|Q^{\epsilon}_{UU}(|f|\wedge\epsilon^{-3})\|_{T}\). By (4.3), we get
\[\|J^{\epsilon}(f)\|_{T}\leq\|f_{0}\|_{L^{1}}+TC_{\epsilon,\phi}\|f\|_{T}^{2}, \tag{4.13}\]
where \(C_{\epsilon,\phi}\lesssim\epsilon^{-3}I_{0}\). This means that \(J^{\epsilon}(\cdot)\) maps \(\mathcal{A}_{T}\) into itself.
\(\bullet\) Let \(\mathscr{B}_{T}:=\{f\in\mathcal{A}_{T},\|f\|_{T}\leq 2\|f_{0}\|_{L^{1}}\}\). We want to show that \(J^{\epsilon}(\cdot)\) is contraction on the complete metric space \((\mathscr{B}_{T},\|\cdot-\cdot\|_{T})\) for small enough \(T>0\). By (4.13), if \(T\) satisfies
\[4TC_{\epsilon,\phi}\|f_{0}\|_{L^{1}}\leq 1, \tag{4.14}\]
then \(J^{\epsilon}(\cdot)\) maps \(\mathscr{B}_{T}\) into itself. Given \(f,g\in\mathscr{B}_{T}\), we have
\[J^{\epsilon}(f)(t,v)-J^{\epsilon}(g)(t,v)=\int_{0}^{t}\left(Q^{\epsilon}_{UU}( |f|\wedge\epsilon^{-3})-Q^{\epsilon}_{UU}(|g|\wedge\epsilon^{-3})(\tau,v) \right)\mathrm{d}\tau.\]
Similarly to the above, we have \(\|J^{\epsilon}(f)-J^{\epsilon}(g)\|_{T}\leq T\|Q^{\epsilon}_{UU}(|f|\wedge \epsilon^{-3})-Q^{\epsilon}_{UU}(|g|\wedge\epsilon^{-3})\|_{T}.\) By (4.5), since the function \(x\in\mathbb{R}\to|x|\wedge\epsilon^{-3}\) is Lipschitz continuous with Lipschitz constant \(1\), we get that
\[\|J^{\epsilon}(f)-J^{\epsilon}(g)\|_{T}\leq TC_{\epsilon,\phi}(\|f\|_{T}+\|g\| _{T}+\epsilon^{-3})\|f-g\|_{T}\leq TC_{\epsilon,\phi}(4\|f_{0}\|_{L^{1}}+ \epsilon^{-3})\|f-g\|_{T}.\]
where \(C_{\epsilon,\phi}\lesssim\epsilon^{-3}I_{0}+I_{3}\). If \(T\) satisfies
\[TC_{\epsilon,\phi}(4\|f_{0}\|_{L^{1}}+\epsilon^{-3})\leq\frac{1}{2}, \tag{4.15}\]
then \(J^{\epsilon}(\cdot)\) is a contraction mapping. By the Fixed Point Theorem, there exists a unique \(f^{\epsilon}\in\mathscr{B}_{T}\) such that \(f^{\epsilon}=J^{\epsilon}(f^{\epsilon})\). After a modification on a \(v\)-null set, there is a null set \(Z\subset\mathbb{R}^{3}\) such that, for all \(t\in[0,T]\) and \(v\in\mathbb{R}^{3}\setminus Z\),
\[f^{\epsilon}(t,v)=J^{\epsilon}(f^{\epsilon})(t,v)=f_{0}(v)+\int_{0}^{t}Q^{ \epsilon}_{UU}(|f^{\epsilon}|\wedge\epsilon^{-3})(\tau,v)\mathrm{d}\tau.\]
\(\bullet\) We now utilize the special definition of \(J^{\epsilon}(\cdot)\) to prove that \(0\leq f^{\epsilon}(t)\leq\epsilon^{-3}\) for all \(t\in[0,T]\) and \(v\in\mathbb{R}^{3}\setminus Z\). By (4.4), we can see that for any \(v\in\mathbb{R}^{3}\setminus Z\) and \(t_{1},t_{2}\in[0,T]\),
\[|f^{\epsilon}(t_{2},v)-f^{\epsilon}(t_{1},v)|\leq C_{\epsilon,\phi}(\|f\|_{L^{1} }+1)|t_{2}-t_{1}|,\]
for some \(C_{\epsilon,\phi}\lesssim\epsilon^{-6}\left(I_{0}+I_{3}\right)\). That is, \(f^{\epsilon}(\cdot,v)\) is uniformly continuous w.r.t. \(t\) on \([0,T]\) for any \(v\in\mathbb{R}^{3}\setminus Z\). Then we have
\[(-f^{\epsilon}(t,v))^{+} = -\int_{0}^{t}Q^{\epsilon}_{UU}(|f^{\epsilon}|\wedge\epsilon^{-3} )(\tau,v)1_{f^{\epsilon}(\tau,v)<0}\mathrm{d}\tau\leq\int_{0}^{t}Q^{\epsilon,- }(|f^{\epsilon}|\wedge\epsilon^{-3})(\tau,v)1_{f^{\epsilon}(\tau,v)<0} \mathrm{d}\tau\] \[\leq A^{\epsilon}\int_{0}^{t}\|f^{\epsilon}(\tau)\|_{L^{1}}|f^{ \epsilon}(\tau,v)|1_{f^{\epsilon}(\tau,v)<0}\mathrm{d}\tau\leq 2A^{\epsilon}\|f_{ 0}\|_{L^{1}}\int_{0}^{t}(-f^{\epsilon}(\tau,v))^{+}\mathrm{d}\tau. \tag{4.16}\]
By Gronwall's inequality, we get \((-f^{\epsilon}(t,v))^{+}=0\) on \([0,T]\) and so \(f^{\epsilon}(t,v)\geq 0\). We also have
\[(f^{\epsilon}(t,v)-\epsilon^{-3})^{+} = \int_{0}^{t}Q^{\epsilon}_{UU}(|f^{\epsilon}|\wedge\epsilon^{-3} )(\tau,v)1_{f^{\epsilon}(\tau,v)>\epsilon^{-3}}\mathrm{d}\tau\leq\int_{0}^{t}Q ^{\epsilon,+}(|f^{\epsilon}|\wedge\epsilon^{-3})(\tau,v)1_{f^{\epsilon}(\tau, v)>\epsilon^{-3}}\mathrm{d}\tau=0,\]
which yields \(f^{\epsilon}(t,v)\leq\epsilon^{-3}\).
Recalling Definition 1.1 for F-D particles, we already get a mild solution on \([0,T]\). By (4.3) and Fubini's Theorem, we have conservation of mass \(\|f^{\epsilon}(t)\|_{L^{1}}:=\|f_{0}\|_{L^{1}}\) for \(t\in[0,T]\). Note that the lifespan \(T\) depends on \(\epsilon,\phi\) and \(\|f_{0}\|_{L^{1}}\). Thanks to the conservation of mass, we can continue to construct the solution on \([T,2T],[2T,3T],\cdots\) and get a global mild solution. That is, there is a unique measurable function \(f^{\epsilon}\in\mathscr{A}_{\infty}\) satisfying: there is a null set \(Z\subset\mathbb{R}^{3}\) s.t., for all \(t\geq 0\) and \(v\in\mathbb{R}^{3}\setminus Z\),
\[f^{\epsilon}(t,v)=f_{0}(v)+\int_{0}^{t}Q^{\epsilon}_{UU}(f^{\epsilon})(\tau, v)\mathrm{d}\tau,\quad 0\leq f^{\epsilon}(t,v)\leq\epsilon^{-3}. \tag{4.17}\]
Moreover, for all \(t\geq 0\), \(\|f^{\epsilon}(t)\|_{L^{1}}=\|f_{0}\|_{L^{1}}\).
Proof of Theorem 1.1: (Propagation of regularity).: We will focus on the propagation of regularity in weighted Sobolev spaces, uniformly in \(\epsilon\), for the mild solution. Define
\[T_{M}:=\sup_{t>0}\bigg{\{}t\big{|}\sup_{0\leq s\leq t}\|f^{\epsilon}(s)\|_{H^{N}_{l}}\leq 2\|f_{0}\|_{H^{N}_{l}}\bigg{\}}. \tag{4.18}\]
The main goal is to prove that \(T_{M}=T_{M}(N,l,\phi,\|f_{0}\|_{H^{N}_{l}})\) is strictly positive and independent of \(\epsilon\).
Let \(f_{0}\in H^{N}_{l}\) with \(N,l\geq 2\). Using (4.17) and Theorem 3.1, together with the conservation of mass and the upper bound \(f^{\epsilon}(t)\leq\epsilon^{-3}\), we obtain
\[\frac{1}{2}\|f^{\epsilon}(t)\|_{H^{N}_{l}}^{2}-\frac{1}{2}\|f^{ \epsilon}(0)\|_{H^{N}_{l}}^{2}=\sum_{|\alpha|\leq N}\int_{0}^{t}\langle W_{l} \partial^{\alpha}Q^{\epsilon}_{UU}(f^{\epsilon}(\tau)),W_{l}\partial^{\alpha }f^{\epsilon}(\tau)\rangle\mathrm{d}\tau\] \[\leq C_{N,l,\phi}\int_{0}^{t}\|f^{\epsilon}(\tau)\|_{H^{N}_{l}}^{ 2}(\|f^{\epsilon}(\tau)\|_{H^{N}_{l}}+\|f^{\epsilon}(\tau)\|_{H^{N}_{l}}^{2})d\tau. \tag{4.19}\]
Let \(T^{*}_{M}:=\frac{21}{4C_{N,l,\phi}(\|f_{0}\|_{H^{N}_{l}}^{2}+1)^{3/2}}.\) Then by Gronwall's inequality, we derive that for \(t\in[0,T^{*}_{M}]\),
\[\sup_{0\leq t\leq T^{*}_{M}}\|f^{\epsilon}(t)\|_{H^{N}_{l}}\leq 2\|f_{0}\|_{H^{N} _{l}},\text{which implies that }T_{M}\geq T^{*}_{M}. \tag{4.20}\]
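For completeness, here is a minimal sketch of the continuity (bootstrap) argument behind (4.20); the numerical constants below are only indicative. Write \(X(t):=\sup_{0\leq s\leq t}\|f^{\epsilon}(s)\|_{H^{N}_{l}}\) and \(C:=C_{N,l,\phi}\). As long as \(X(t)\leq 2\|f_{0}\|_{H^{N}_{l}}\), (4.19) gives
\[X(t)^{2}\leq\|f_{0}\|_{H^{N}_{l}}^{2}+2Ct\,X(t)^{2}\bigl(X(t)+X(t)^{2}\bigr)\leq\|f_{0}\|_{H^{N}_{l}}^{2}\Bigl(1+48\,Ct\,\bigl(\|f_{0}\|_{H^{N}_{l}}^{2}+1\bigr)^{3/2}\Bigr),\]
so that \(X(t)\leq\sqrt{2}\,\|f_{0}\|_{H^{N}_{l}}<2\|f_{0}\|_{H^{N}_{l}}\) whenever \(48\,Ct\,(\|f_{0}\|_{H^{N}_{l}}^{2}+1)^{3/2}\leq 1\). Since \(t\mapsto X(t)\) is continuous, the a priori bound \(X\leq 2\|f_{0}\|_{H^{N}_{l}}\) persists on such a time interval, which yields \(T_{M}\geq T^{*}_{M}\) up to the precise value of the constant.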
We now prove (1.11) based on (4.17) and (4.20). By (4.17) and Minkowski's inequality, we have
\[\|f^{\epsilon}(t_{2},v)-f^{\epsilon}(t_{1},v)\|_{H^{N-2}_{l}}\lesssim\int_{t_{ 1}}^{t_{2}}\|Q^{\epsilon}_{UU}(f^{\epsilon})(\tau,v)\|_{H^{N-2}_{l}}\mathrm{d}\tau.\]
From (3.24) and (4.20), we are led to that
\[\|f^{\epsilon}(t_{2},v)-f^{\epsilon}(t_{1},v)\|_{H^{N-2}_{l}}\lesssim\int_{t_{ 1}}^{t_{2}}(\|f^{\epsilon}(\tau)\|_{H^{N}_{l}}^{2}+\|f^{\epsilon}(\tau)\|_{H^{N }_{l}}^{3})\mathrm{d}\tau\lesssim(\|f_{0}\|_{H^{N}_{l}}^{2}+\|f_{0}\|_{H^{N} _{l}}^{3})(t_{2}-t_{1}),\]
which gives (1.11).
### Bose-Einstein particles
In this subsection, we will prove the local well-posedness and propagation of regularity uniformly in \(\epsilon\) for Bose-Einstein particles. The proof follows the same spirit as in the previous subsection. However, the density of Bose-Einstein particles may blow up, unlike that of Fermi-Dirac particles, which has the natural bound \(f\leq\epsilon^{-3}\). This motivates us to construct the solution in the space \(L^{1}\cap L^{\infty}\).
Recalling (1.1) and (1.2), we consider \(\partial_{t}f=Q^{\epsilon}_{UU}(f)\), where \(Q^{\epsilon}_{UU}\) denotes the Uehling-Uhlenbeck operator for Bose-Einstein particles, i.e., \(Q^{\epsilon}_{UU}(f)=\int_{\mathbb{R}^{3}\times\mathbb{S}^{2}}B^{\epsilon}\Phi^{\epsilon}(f)\mathrm{d}\sigma\mathrm{d}v_{*}\), where
\[\Phi^{\epsilon}(f):=f^{\prime}_{*}f^{\prime}(1+\epsilon^{3}f_{*}+\epsilon^{3}f)-f_{*}f(1+\epsilon^{3}f^{\prime}_{*}+\epsilon^{3}f^{\prime}).\]
As usual, we define the gain and loss terms by
\[Q^{\epsilon,+}(f)=\int_{\mathbb{R}^{3}\times\,\mathbb{S}^{2}}B^{\epsilon}\Phi^{ \epsilon,+}(f)\mathrm{d}\sigma\mathrm{d}v_{*},\quad Q^{\epsilon,-}(f)=\int_{ \mathbb{R}^{3}\times\,\mathbb{S}^{2}}B^{\epsilon}\Phi^{\epsilon,-}(f)\mathrm{d }\sigma\mathrm{d}v_{*},\]
where
\[\Phi^{\epsilon,+}(f):=f^{\prime}_{*}f^{\prime}(1+\epsilon^{3}f_{*}+\epsilon^{ 3}f),\quad\Phi^{\epsilon,-}(f):=f_{*}f(1+\epsilon^{3}f^{\prime}_{*}+\epsilon^ {3}f^{\prime}).\]
We begin with a lemma on the upper bound of collision operator in \(L^{1}\cap L^{\infty}\) space.
**Lemma 4.2**.: _Let \(f,g\in L^{1}\cap L^{\infty},0\leq f,g\), then_
\[\|Q^{\epsilon}_{UU}(f)\|_{L^{1}}\lesssim\epsilon^{-3}I_{0}(1+\|f\|_{L^{\infty}})\|f\|_{L^{1}}^{2}, \tag{4.21}\] \[\|Q^{\epsilon}_{UU}(f)\|_{L^{\infty}}\lesssim(I_{3}+\epsilon^{-3}I_{0})(1+\|f\|_{L^{\infty}})\|f\|_{L^{\infty}}(\|f\|_{L^{1}}+\|f\|_{L^{\infty}}). \tag{4.22}\]
_Moreover,_
\[\|Q^{\epsilon}_{UU}(f)-Q^{\epsilon}_{UU}(g)\|_{L^{1}\cap L^{ \infty}}\] \[\lesssim (\epsilon^{-3}I_{0}+I_{3})(\|f\|_{L^{1}\cap L^{\infty}}+\|g\|_{ L^{1}\cap L^{\infty}}+1)(\|f\|_{L^{1}\cap L^{\infty}}+\|g\|_{L^{1}\cap L^{ \infty}})\|f-g\|_{L^{1}\cap L^{\infty}}. \tag{4.23}\]
Proof.: We give the estimates term by term. Using (4.7) and (4.8), (4.21) follows from the facts that \(\|Q^{\epsilon}_{UU}(f)\|_{L^{1}}\leq 2\|Q^{\epsilon,-}(f)\|_{L^{1}}\) and \(\|Q^{\epsilon,-}(f)\|_{L^{1}}\lesssim\epsilon^{-3}I_{0}(1+\|f\|_{L^{\infty}})\|f\|_{L^{1}}^{2}\).
\(\bullet\) To derive (4.22), we observe that \(Q^{\epsilon,-}(f)(v)\lesssim\epsilon^{-3}I_{0}(1+\|f\|_{L^{\infty}})\|f\|_{L^ {\infty}}\|f\|_{L^{1}}\) and
\[Q^{\epsilon,+}(f)(v) \lesssim (1+\|f\|_{L^{\infty}})\|f\|_{L^{\infty}}\int B^{\epsilon}_{1}f^{\prime}_{*}\mathrm{d}\sigma\mathrm{d}v_{*}+(1+\|f\|_{L^{\infty}})\|f\|_{L^{\infty}}^{2}\int B^{\epsilon}_{3}\mathrm{d}\sigma\mathrm{d}v_{*} \tag{4.24}\] \[\lesssim \epsilon^{-3}I_{0}(1+\|f\|_{L^{\infty}})\|f\|_{L^{\infty}}\|f\|_{L^{1}}+I_{3}(1+\|f\|_{L^{\infty}})\|f\|_{L^{\infty}}^{2}.\]
Then (4.22) follows.
\(\bullet\) To show (4.23), we notice that \(\|Q^{\epsilon}_{UU}(f)-Q^{\epsilon}_{UU}(g)\|_{L^{1}}\leq 2\int B^{\epsilon}|\Phi^{ \epsilon,-}(f)-\Phi^{\epsilon,-}(g)|\mathrm{d}V\) and
\[|\Phi^{\epsilon,-}(f)-\Phi^{\epsilon,-}(g)|\leq(1+2\|f\|_{L^{\infty}})\left(| f_{*}-g_{*}|f+g_{*}|f-g|\right)+2\|f-g\|_{L^{\infty}}gg_{*}, \tag{4.25}\]
for \(0<\epsilon\leq 1\) and \(0\leq f,g\in L^{\infty}\). From these together with (2.18), we get that
\[\int B^{\epsilon}|\Phi^{\epsilon,-}(f)-\Phi^{\epsilon,-}(g)|\mathrm{d}V\lesssim \epsilon^{-3}I_{0}(1+\|f\|_{L^{\infty}})(\|f\|_{L^{1}}+\|g\|_{L^{1}})\|f-g\|_{ L^{1}}+\epsilon^{-3}I_{0}\|f-g\|_{L^{\infty}}\|g\|_{L^{1}}^{2},\]
which gives
\[\|Q^{\epsilon}_{UU}(f)-Q^{\epsilon}_{UU}(g)\|_{L^{1}}\lesssim\epsilon^{-3}I_ {0}(1+\|f\|_{L^{\infty}}+\|g\|_{L^{1}})(\|f\|_{L^{1}}+\|g\|_{L^{1}})\|f-g\|_{L^{ 1}\cap L^{\infty}}. \tag{4.26}\]
\(\bullet\) To get the \(L^{\infty}\) bounds, (4.25) and (2.18) yield that
\[|Q^{\epsilon,-}(f)-Q^{\epsilon,-}(g)|\leq\int B^{\epsilon}|\Phi^{ \epsilon,-}(f)-\Phi^{\epsilon,-}(g)|\mathrm{d}v_{*}\mathrm{d}\sigma\] \[\lesssim \epsilon^{-3}I_{0}(1+\|f\|_{L^{\infty}})(\|f\|_{L^{\infty}}\|f-g \|_{L^{1}}+\|f-g\|_{L^{\infty}}\|g\|_{L^{1}})+\epsilon^{-3}I_{0}\|f-g\|_{L^{ \infty}}\|g\|_{L^{1}}\|g\|_{L^{\infty}}.\]
For the gain term, similar to (4.25), by exchanging \((v,v_{*})\) and \((v^{\prime},v^{\prime}_{*})\), we have
\[|\Phi^{\epsilon,+}(f)-\Phi^{\epsilon,+}(g)| \leq (1+2\|f\|_{L^{\infty}})\left(|f^{\prime}_{*}-g^{\prime}_{*}|f^{ \prime}+g^{\prime}_{*}|f^{\prime}-g^{\prime}|\right)+2\|f-g\|_{L^{\infty}}g^{ \prime}g^{\prime}_{*}\] \[\leq (1+2\|f\|_{L^{\infty}})\left(|f^{\prime}_{*}-g^{\prime}_{*}|\|f\|_{ L^{\infty}}+g^{\prime}_{*}\|f-g\|_{L^{\infty}}\right)+2\|f-g\|_{L^{\infty}}\|g\|_{L^{ \infty}}g^{\prime}_{*}.\]
Now combining it with (4.27), (2.39) and (2.19), we have
\[\|Q^{\epsilon}_{UU}(f)-Q^{\epsilon}_{UU}(g)\|_{L^{\infty}}\] \[\lesssim (\epsilon^{-3}I_{0}+I_{3})(\|f\|_{L^{1}\cap L^{\infty}}+\|g\|_{L^{1 }\cap L^{\infty}}+1)(\|f\|_{L^{1}\cap L^{\infty}}+\|g\|_{L^{1}\cap L^{\infty}}) \|f-g\|_{L^{1}\cap L^{\infty}}. \tag{4.28}\]
We complete the proof of the lemma.
We are now ready to prove Theorem 1.2.
Proof of Theorem 1.2.: We introduce the function space \(\mathscr{E}_{T}:=L^{\infty}([0,T];L^{1}(\mathbb{R}^{3})\cap L^{\infty}(\mathbb{R }^{3}))\) associated with the norm \(\|f\|_{ET}:=\sup_{0\leq t\leq T}\|f(t)\|_{L^{1}\cap L^{\infty}}\). Similarly we define the operator \(J^{\epsilon}(\cdot)\) on \(\mathscr{E}_{T}\) by
\[J^{\epsilon}(f)(t,v):=f_{0}(v)+\int_{0}^{t}Q^{\epsilon}_{UU}(|f|)(\tau,v) \mathrm{d}\tau.\]
\(\bullet\) Given \(f\in\mathscr{E}_{T}\), we now check that \(J^{\epsilon}(f)\in\mathscr{E}_{T}\). By (4.21) and (4.22), we derive that
\[\|J^{\epsilon}(f)\|_{ET}\leq\|f_{0}\|_{L^{1}\cap L^{\infty}}+TC_{\epsilon, \phi}(1+\|f\|_{ET})\|f\|_{ET}^{2}, \tag{4.29}\]
where \(C_{\epsilon,\phi}\lesssim I_{3}+\epsilon^{-3}I_{0}\). This means \(J^{\epsilon}(\cdot)\) is an operator on \(\mathscr{E}_{T}\).
\(\bullet\) Let \(\mathcal{T}_{T}:=\{f\in\mathscr{E}_{T}:\|f\|_{ET}\leq 2\|f_{0}\|_{L^{1}\cap L^{\infty}}\}\). We want to show that \(J^{\epsilon}(\cdot)\) is a contraction mapping on the complete metric space \((\mathcal{T}_{T},\|\cdot-\cdot\|_{ET})\) for a short time \(T>0\). By (4.29), if \(T\) satisfies
\[4TC_{\epsilon,\phi}(1+2\|f_{0}\|_{L^{1}\cap L^{\infty}})\|f_{0}\|_{L^{1}\cap L ^{\infty}}\leq 1, \tag{4.30}\]
then \(J^{\epsilon}(\cdot)\) maps \(\mathcal{T}_{T}\) into itself. Now we prove the contraction property. Since \(||x|-|y||\leq|x-y|\), for \(f,g\in\mathcal{T}_{T}\), we get
\[\|J^{\epsilon}(|f|)-J^{\epsilon}(|g|)\|_{ET} \leq TC_{\epsilon,\phi}(\|f\|_{ET}+\|g\|_{ET}+1)(\|f\|_{ET}+\|g\|_{ET })\|f-g\|_{L^{1}\cap L^{\infty}}\] \[\leq 4\|f_{0}\|_{L^{1}\cap L^{\infty}}(4\|f_{0}\|_{L^{1}\cap L^{ \infty}}+1)TC_{\epsilon,\phi}\|f-g\|_{ET}.\]
If \(T\) satisfies
\[4\|f_{0}\|_{L^{1}\cap L^{\infty}}(4\|f_{0}\|_{L^{1}\cap L^{\infty}}+1)TC_{ \epsilon,\phi}\leq\frac{1}{2}, \tag{4.31}\]
then \(J^{\epsilon}(\cdot)\) is a contraction mapping on the complete metric space \((\mathcal{T}_{T},\|\cdot-\cdot\|_{T})\). By the fixed point theorem, there exists a unique \(f\in\mathcal{T}_{T}\), s.t. \(f^{\epsilon}=J^{\epsilon}(f^{\epsilon})\). After a modification on the \(v\)-null sets, there is a null set \(Z\subset\mathbb{R}^{3}\), for all \(t\in[0,T]\) and \(v\in\mathbb{R}^{3}\setminus Z\),
\[f^{\epsilon}(t,v)=J^{\epsilon}(f^{\epsilon})(t,v)=f_{0}(v)+\int_{0}^{t}Q^{ \epsilon}_{UU}(|f^{\epsilon}|)(\tau,v)\mathrm{d}\tau.\]
Recalling (4.22), we can see that for any \(v\in\mathbb{R}^{3}\setminus Z\) and \(t_{1},t_{2}\in[0,T]\),
\[|f^{\epsilon}(t_{2},v)-f^{\epsilon}(t_{1},v)|\leq C_{\epsilon,\phi}(\|f_{0}\|_{L^{1}\cap L^{\infty}}+1)\|f_{0}\|_{L^{1}\cap L^{\infty}}^{2}|t_{2}-t_{1}|,\]
for some \(C_{\epsilon,\phi}\lesssim I_{3}+\epsilon^{-3}I_{0}\). That is, \(f(\cdot,v)\) is uniformly continuous w.r.t. \(t\) on \([0,T]\) for any \(v\in\mathbb{R}^{3}\setminus Z\). Following the argument in (4.16) we can get that \(f^{\epsilon}(t,v)\geq 0\) on \([0,T]\), i.e.,
\[f^{\epsilon}(t,v)=f_{0}(v)+\int_{0}^{t}Q^{\epsilon}_{UU}(f^{\epsilon})(\tau,v )\mathrm{d}\tau,\quad 0\leq f^{\epsilon}(t,v). \tag{4.32}\]
Conservation of mass is a direct consequence of (4.21) and Fubini's Theorem. Note that the lifespan \(T\) depends on \(\epsilon,\phi\) and \(\|f_{0}\|_{L^{1}\cap L^{\infty}}\), i.e., \(T=T(\epsilon,\phi,f_{0})\). In order to extend the solution to a positive time independent of \(\epsilon\), we will prove the propagation of regularity in the weighted Sobolev space \(H^{N}_{l}\).
Assume that \(f_{0}\in H^{N}_{l}\) with \(N,l\geq 2\). Again by (4.32) and Theorem 3.1, (4.19) still holds. This shows that the solution verifies the _a priori_ estimate: for any \(t\in[0,T^{*}_{M}=\frac{21}{4C_{N,l,\phi}(\|f_{0}\|_{H^{N}_{l}}^{2}+1)^{3/2}}]\),
\[\sup_{0\leq t\leq T^{*}_{M}}\|f^{\epsilon}(t)\|_{H^{N}_{l}}\leq 2\|f_{0}\|_{H^{N}_ {l}},\]
which implies the _a priori_ upper bound:
\[\sup_{0\leq t\leq T^{*}_{M}}\|f^{\epsilon}(t)\|_{L^{\infty}}\leq 2C_{S}\|f_{0}\|_{ H^{N}_{l}}.\]
This enables us to use a continuity argument to extend the lifespan from \(T=T(\epsilon,\phi,f_{0})\) to \(T^{*}_{M}\), which is independent of \(\epsilon\). This ends the proof.
## 5. Asymptotic formula
This section is devoted to the proof of Theorem 1.3. Let \(f^{\epsilon}\) and \(f\) be the solutions to (1.1) and (1.6) respectively with the initial datum \(f_{0}\) on \([0,T^{*}]\), where \(T^{*}=T^{*}(N,l,\phi,\|f_{0}\|_{H^{N+3}_{l+5}})\) is given in Theorems 1.1 and 1.2. The solutions are uniformly bounded in \(H^{N+3}_{l+5}\). More precisely,
\[\sup_{t\leq T^{*}}\|f^{\epsilon}(t)\|_{H^{N+3}_{l+5}}\leq 2\|f_{0}\|_{H^{N+3}_{l+5}}, \quad\sup_{t\leq T^{*}}\|f(t)\|_{H^{N+3}_{l+5}}\leq 2\|f_{0}\|_{H^{N+3}_{l+5}}. \tag{5.1}\]
Let \(R^{\epsilon}:=\epsilon^{-\vartheta}(f^{\epsilon}-f)\); then it is not difficult to see that \(R^{\epsilon}\) verifies the following equation:
\[\partial_{t}R^{\epsilon} = Q_{1}(f^{\epsilon},R^{\epsilon})+Q_{1}(R^{\epsilon},f)+\epsilon^{-\vartheta}(Q_{1}(f,f)-Q_{L}(f,f))+\epsilon^{-\vartheta}(Q_{2}+Q_{3})(f^{\epsilon},f^{\epsilon})+\epsilon^{-\vartheta}R(f^{\epsilon},f^{\epsilon},f^{\epsilon}). \tag{5.2}\]
We will prove \(R^{\epsilon}\) is bounded in \(H^{N}_{l}\) over \([0,T^{*}]\). To this end, we first derive an estimate for the operator difference \(Q_{1}-Q_{L}\). This is the key point for the asymptotic formula.
**Lemma 5.1**.: _It holds that_
\[\|Q_{1}(g,h)-Q_{L}(g,h)\|_{L^{2}_{l}}\lesssim_{l}\epsilon^{\vartheta}I_{3+\vartheta}\left(\|g\|_{H^{3}_{l+5}}\|h\|_{H^{3}}+\|g\|_{H^{3}_{2}}\|h\|_{H^{3}_{l+3}}\right). \tag{5.3}\]
Proof.: The estimate is proved in the spirit of [15]. The proof is divided into several steps.
_Step 1: Reformulation of \(Q_{1}.\)_ Firstly, given \(v,v_{*}\in\mathbb{R}^{3},\) we introduce an orthonormal basis of \(\mathbb{R}^{3}\):
\[\big{(}\frac{v-v_{*}}{|v-v_{*}|},\;h^{1}_{v,v_{*}},\;h^{2}_{v,v_{*}}\big{)}.\]
Then \(\sigma=\frac{v-v_{*}}{|v-v_{*}|}\cos\theta+(\cos\varphi h^{1}_{v,v_{*}}+\sin \varphi h^{2}_{v,v_{*}})\sin\theta,\) which implies \(v^{\prime}=v+\frac{1}{2}A\) and \(v^{\prime}_{*}=v_{*}-\frac{1}{2}A\) with
\[A=-(v-v_{*})(1-\cos\theta)+|v-v_{*}|(\cos\varphi h^{1}_{v,v_{*}}+\sin\varphi h ^{2}_{v,v_{*}})\sin\theta.\]
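Since \(h^{1}_{v,v_{*}}\) and \(h^{2}_{v,v_{*}}\) are orthonormal and orthogonal to \(v-v_{*}\), the length of \(A\) can be computed directly; this elementary identity is behind the bound \(|A|\lesssim\sin(\theta/2)|v-v_{*}|\) used later:
\[|A|^{2}=|v-v_{*}|^{2}(1-\cos\theta)^{2}+|v-v_{*}|^{2}\sin^{2}\theta=2|v-v_{*}|^{2}(1-\cos\theta)=4|v-v_{*}|^{2}\sin^{2}(\theta/2),\quad\text{i.e.}\quad|A|=2|v-v_{*}|\sin(\theta/2).\]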
Now \(Q_{1}\) can be rewritten as
\[Q_{1}(g,h)=\int_{\mathbb{R}^{3}}\int_{0}^{\frac{\pi}{2}}\int_{0}^{2\pi}B^{\epsilon}_{1}\big[g(v_{*}-\frac{1}{2}A)h(v+\frac{1}{2}A)-g(v_{*})h(v)\big]\sin\theta\mathrm{d}\varphi\mathrm{d}\theta\mathrm{d}v_{*}.\]
By the Taylor expansion formula up to order 3:
\[g(v_{*}-\frac{1}{2}A)=g(v_{*})-\frac{1}{2}A\cdot\nabla_{v_{*}}g( v_{*})+\frac{1}{8}A\otimes A:\nabla^{2}g(v_{*})+r_{1}(v,v_{*},\sigma),\] \[h(v+\frac{1}{2}A)=h(v)+\frac{1}{2}A\cdot\nabla_{v}h(v)+\frac{1}{ 8}A\otimes A:\nabla^{2}h(v)+r_{2}(v,v_{*},\sigma),\]
where
\[|r_{1}(v,v_{*},\sigma)|\lesssim|A|^{3}\int_{0}^{1}|\nabla^{3}g( \iota(v_{*}))|\mathrm{d}\iota,\quad|r_{2}(v,v_{*},\sigma)|\lesssim|A|^{3}\int _{0}^{1}|\nabla^{3}h(\kappa(v))|\mathrm{d}\kappa.\]
Then we arrive at
\[Q_{1}(g,h) = \int_{\mathbb{R}^{3}}\int_{0}^{\frac{\pi}{2}}\int_{0}^{2\pi}\big[\frac{1}{2}A\cdot(\nabla_{v}-\nabla_{v_{*}})(g(v_{*})h(v))\] \[\quad+\frac{1}{8}A\otimes A:(\nabla_{v}-\nabla_{v_{*}})^{2}(g(v_{*})h(v))+\mathscr{R}^{1}(v,v_{*},\sigma)\big]B^{\epsilon}_{1}\sin\theta\mathrm{d}\varphi\mathrm{d}\theta\mathrm{d}v_{*},\]
where
\[\mathscr{R}^{1}(v,v_{*},\sigma) = r_{1}(v,v_{*},\sigma)\big{(}h(v)+\frac{1}{2}A\cdot\nabla h(v)+ \frac{1}{8}A\otimes A:\nabla^{2}h(v)+r_{2}(v,v_{*},\sigma)\big{)}\] \[+\frac{1}{8}A\otimes A:\nabla^{2}g(v_{*})\big{(}\frac{1}{2}A \cdot\nabla h(v)+\frac{1}{8}A\otimes A:\nabla^{2}h(v)+r_{2}(v,v_{*},\sigma) \big{)}\] \[-\frac{1}{2}A\cdot\nabla g(v_{*})\big{(}\frac{1}{8}A\otimes A: \nabla^{2}h(v)+r_{2}(v,v_{*},\sigma)\big{)}+g(v_{*})r_{2}(v,v_{*},\sigma).\]
_Step 2: Reduction of \(Q_{1}.\)_ According to (5.4), if we define \(T^{\epsilon}(v-v_{*}):=\int_{0}^{\frac{\pi}{2}}\int_{0}^{2\pi}\frac{1}{2}AB^{\epsilon}_{1}\sin\theta\mathrm{d}\varphi\mathrm{d}\theta\) and \(U^{\epsilon}(v-v_{*}):=\int_{0}^{\frac{\pi}{2}}\int_{0}^{2\pi}\frac{1}{8}A\otimes AB^{\epsilon}_{1}\sin\theta\mathrm{d}\varphi\mathrm{d}\theta\), then
\[Q_{1}(g,h) = \int_{\mathbb{R}^{3}}\Big{[}T^{\epsilon}(v-v_{*})\cdot(\nabla_{v }-\nabla_{v_{*}})(g(v_{*})h(v))+U^{\epsilon}(v-v_{*}):(\nabla_{v}-\nabla_{v_{* }})^{2}(g(v_{*})h(v))\Big{]}\mathrm{d}v_{*}\] \[+\int_{\mathbb{R}^{3}}\int_{0}^{\frac{\pi}{2}}\int_{0}^{2\pi} \mathscr{R}^{1}(v,v_{*},\sigma)B^{\epsilon}_{1}\sin\theta\mathrm{d}\varphi \mathrm{d}\theta\mathrm{d}v_{*}.\]
Computation of \(T^{\epsilon}.\) It is not difficult to compute that
\[T^{\epsilon}(v-v_{*})=-8\pi I_{3}|v-v_{*}|^{-3}(v-v_{*})+\mathscr{R}^{2}(v-v_{* }), \tag{5.6}\]
with
\[\mathscr{R}^{2}(v-v_{*})=8\pi|v-v_{*}|^{-3}(v-v_{*})\int_{\frac{\sqrt{2}|v-v_{*}|}{2\epsilon}}^{\infty}\hat{\phi}^{2}(r)r^{3}\mathrm{d}r. \tag{5.7}\]
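For the reader's convenience, the first reduction behind (5.6)-(5.7) is the azimuthal average of \(A\): assuming, as in the computation of \(U^{\epsilon}\) below, that \(B^{\epsilon}_{1}\) does not depend on \(\varphi\), the terms involving \(\cos\varphi\) and \(\sin\varphi\) integrate to zero and only the component along \(v-v_{*}\) survives,
\[T^{\epsilon}(v-v_{*})=\int_{0}^{\frac{\pi}{2}}\int_{0}^{2\pi}\frac{1}{2}A\,B^{\epsilon}_{1}\sin\theta\,\mathrm{d}\varphi\mathrm{d}\theta=-\pi(v-v_{*})\int_{0}^{\frac{\pi}{2}}(1-\cos\theta)B^{\epsilon}_{1}\sin\theta\,\mathrm{d}\theta;\]
the explicit expressions (5.6)-(5.7) then follow by inserting the definition of \(B^{\epsilon}_{1}\).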
Computation of \(U^{\epsilon}.\) We claim that
\[U^{\epsilon}(v-v_{*})=a(v-v_{*})+\mathscr{R}^{3}(v-v_{*}), \tag{5.8}\]
where \(a\) is the matrix defined in (1.8), and
\[\mathscr{R}^{3}(z) = 2\pi|z|^{-1}\Pi(z)\left(-\int_{\frac{\sqrt{2}|z|}{2\epsilon}}^{\infty}\hat{\phi}^{2}(r)r^{3}\mathrm{d}r-\int_{0}^{\frac{\sqrt{2}|z|}{2\epsilon}}\hat{\phi}^{2}(r)r^{3}(\epsilon r)^{2}|z|^{-2}\mathrm{d}r\right)+\frac{\pi}{4}z\otimes z\int_{0}^{\pi/2}(1-\cos\theta)^{2}B^{\epsilon}_{1}\sin\theta\mathrm{d}\theta, \tag{5.9}\]
where \(\Pi\) is the matrix defined in (1.8). To see this, by definition, we have
\[U^{\epsilon}(v-v_{*}) = \frac{1}{8}\int_{0}^{\frac{\pi}{2}}\int_{0}^{2\pi}\big{(}-(v-v_{*}) (1-\cos\theta)+|v-v_{*}|(\cos\varphi h^{1}_{v,v_{*}}+\sin\varphi h^{2}_{v,v_{*} })\sin\theta\big{)}\] \[\otimes\big{(}-(v-v_{*})(1-\cos\theta)+|v-v_{*}|(\cos\varphi h^{1 }_{v,v_{*}}+\sin\varphi h^{2}_{v,v_{*}})\sin\theta\big{)}B^{\epsilon}_{1}\sin \theta\mathrm{d}\theta\mathrm{d}\varphi\] \[= \frac{\pi}{4}(v-v_{*})\otimes(v-v_{*})\int_{0}^{\pi/2}(1-\cos \theta)^{2}B^{\epsilon}_{1}\sin\theta\mathrm{d}\theta\] \[+\frac{1}{8}|v-v_{*}|^{2}\int_{0}^{\frac{\pi}{2}}\int_{0}^{2\pi }\sin^{2}\theta(\cos^{2}\varphi h^{1}_{v,v_{*}}\otimes h^{1}_{v,v_{*}}+\sin^{ 2}\varphi h^{2}_{v,v_{*}}\otimes h^{2}_{v,v_{*}})B^{\epsilon}_{1}\sin\theta \mathrm{d}\theta\mathrm{d}\varphi.\]
Use the fact that \(\frac{v-v_{*}}{|v-v_{*}|}\otimes\frac{v-v_{*}}{|v-v_{*}|}+h^{1}_{v,v_{*}} \otimes h^{1}_{v,v_{*}}+h^{2}_{v,v_{*}}\otimes h^{2}_{v,v_{*}}=Id\), then
\[U^{\epsilon}(v-v_{*}) = 2\pi|v-v_{*}|^{-1}(Id-\frac{v-v_{*}}{|v-v_{*}|}\otimes\frac{v-v_{*}}{|v-v_{*}|})\int_{0}^{\frac{\sqrt{2}|v-v_{*}|}{2\epsilon}}\hat{\phi}^{2}(r)r^{3}(1-(\epsilon r)^{2}|v-v_{*}|^{-2})\mathrm{d}r\] \[+\frac{\pi}{4}(v-v_{*})\otimes(v-v_{*})\int_{0}^{\pi/2}(1-\cos\theta)^{2}B^{\epsilon}_{1}\sin\theta\mathrm{d}\theta,\]
which is enough to get (5.8).
Since \(\nabla\cdot\big{(}(|x|^{2}Id-x\otimes x)f(|x|^{2})\big{)}=-2xf(|x|^{2})\), recalling (1.8) and (5.6), one has
\[(\nabla_{v}-\nabla_{v_{*}})\cdot a(v,v_{*})=-8\pi I_{3}|v-v_{*}|^{-3}(v-v_{*} )=T^{\epsilon}(v,v_{*})-\mathscr{R}^{2}(v-v_{*}).\]
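The identity \(\nabla\cdot\big((|x|^{2}Id-x\otimes x)f(|x|^{2})\big)=-2xf(|x|^{2})\) invoked above can be verified componentwise (summing over \(j\)); the \(f^{\prime}\) contributions cancel:
\[\sum_{j}\partial_{j}\big[(|x|^{2}\delta_{ij}-x_{i}x_{j})f(|x|^{2})\big]=\big(2x_{i}f+2|x|^{2}x_{i}f^{\prime}\big)-\big(4x_{i}f+2|x|^{2}x_{i}f^{\prime}\big)=-2x_{i}f(|x|^{2}).\]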
Plugging this and (5.8) into (5.5), we get
\[Q_{1}(g,h) = \int_{\mathbb{R}^{3}}(\nabla_{v}-\nabla_{v_{*}})\cdot\big{[}a(v, v_{*})(\nabla_{v}-\nabla_{v_{*}})(g_{*}h)\big{]}\mathrm{d}v_{*}+\int_{\mathbb{R}^{3}} \mathscr{R}^{3}(v-v_{*}):\big{[}(\nabla_{v}-\nabla_{v_{*}})^{2}(g_{*}h)\big{]} \mathrm{d}v_{*}\] \[+\int_{\mathbb{R}^{3}}\mathscr{R}^{2}(v-v_{*})\cdot(\nabla_{v}- \nabla_{v_{*}})(g_{*}h)\mathrm{d}v_{*}+\int_{\mathbb{R}^{3}}\int_{0}^{\frac{ \pi}{2}}\int_{0}^{2\pi}\mathscr{R}^{1}(v,v_{*},\sigma)B^{\epsilon}_{1}\sin \theta\mathrm{d}\theta\mathrm{d}\varphi\mathrm{d}v_{*}.\]
Note that the first integral term is the Landau operator.
_Step 3: Estimate of \(Q_{1}-Q_{L}.\)_ From the above equality, we arrive at
\[Q_{1}(g,h)-Q_{L}(g,h) = \int\mathscr{R}^{1}(v,v_{*},\sigma)B^{\epsilon}_{1}\sin\theta \mathrm{d}\theta\mathrm{d}\varphi\mathrm{d}v_{*}+\int\mathscr{R}^{2}(v-v_{*} )\cdot(\nabla_{v}-\nabla_{v_{*}})(g_{*}h)\mathrm{d}v_{*}\] \[+\int\mathscr{R}^{3}(v-v_{*}):\big{[}(\nabla_{v}-\nabla_{v_{*}}) ^{2}(g_{*}h)\big{]}\mathrm{d}v_{*}:=\sum_{i=1}^{3}\mathscr{Q}_{i}.\]
Estimate of \(\mathscr{Q}_{3}.\) Recalling (5.9) for \(\mathscr{R}^{3},\) it is easy to see
\[|\mathscr{R}^{3}(z)|\lesssim\epsilon^{\vartheta}I_{3+\vartheta}|z|^{-1-\vartheta},\]
which gives
\[\|\mathscr{Q}_{3}\|_{L^{2}_{l}}\lesssim\epsilon^{\vartheta}I_{3+\vartheta}\|g\|_{H^{2}_{2}}\|h\|_{H^{2}_{l}}.\]
Estimate of \(\mathscr{Q}_{2}.\) Recalling (5.7),
\[\epsilon^{-\vartheta}\mathscr{R}^{2}(x)=8\pi|x|^{-3-\vartheta}x\int_{\frac{\sqrt{2}|x|}{2\epsilon}}^{\infty}\hat{\phi}^{2}(r)r^{3}\big(\frac{|x|}{\epsilon}\big)^{\vartheta}\mathrm{d}r,\]
it is obvious that \(\epsilon^{-\vartheta}|\mathscr{R}^{2}(x)|\lesssim I_{3+\vartheta}|x|^{-2-\vartheta}\). If \(\vartheta=1\), we claim that \(\epsilon^{-1}\mathscr{R}^{2}\) is the kernel of a Calderón-Zygmund operator. To see this, we first compute directly that for any \(0<R_{1}<R_{2}<\infty\),
\[\int_{R_{1}<|x|<R_{2}}\epsilon^{-1}\mathscr{R}^{2}(x)dx=0,\quad\sup_{R>0}\int_{R< |x|<2R}\epsilon^{-1}|\mathscr{R}^{2}(x)|dx\lesssim I_{4}.\]
Next we need to check that \(\epsilon^{-1}\mathscr{R}^{2}\) satisfies Hörmander's condition:
\[\int_{|x|\geq 2|y|}|\epsilon^{-1}\mathscr{R}^{2}(x)-\epsilon^{-1}\mathscr{R}^{2}(x-y)| dx\lesssim I_{4}. \tag{5.10}\]
Note that
\[\epsilon^{-1}\mathscr{R}^{2}(x)-\epsilon^{-1}\mathscr{R}^{2}(x-y) = 8\pi|x|^{-3}x\int_{\frac{\sqrt{2}|x|}{2\epsilon}}^{\infty}\hat{\phi}^{2}(r)r^{3}\epsilon^{-1}\mathrm{d}r-8\pi|x-y|^{-3}(x-y)\int_{\frac{\sqrt{2}|x-y|}{2\epsilon}}^{\infty}\hat{\phi}^{2}(r)r^{3}\epsilon^{-1}\mathrm{d}r\] \[= 8\pi\left(|x|^{-3}x-|x-y|^{-3}(x-y)\right)\int_{\frac{\sqrt{2}|x|}{2\epsilon}}^{\infty}\hat{\phi}^{2}(r)r^{3}\epsilon^{-1}\mathrm{d}r\] \[+8\pi|x-y|^{-3}(x-y)\left(\int_{\frac{\sqrt{2}|x|}{2\epsilon}}^{\infty}\hat{\phi}^{2}(r)r^{3}\epsilon^{-1}\mathrm{d}r-\int_{\frac{\sqrt{2}|x-y|}{2\epsilon}}^{\infty}\hat{\phi}^{2}(r)r^{3}\epsilon^{-1}\mathrm{d}r\right).\]
Under the condition \(|x|\geq 2|y|\), it is easy to see that \(|x-y|\sim|x|\) and \(\left|x|x|^{-3}-(x-y)|x-y|^{-3}\right|\lesssim|x|^{-3}|y|\), then
\[|\epsilon^{-1}\mathscr{R}^{2}(x)-\epsilon^{-1}\mathscr{R}^{2}(x-y)|\lesssim|x|^{-3}|y|\int_{\frac{\sqrt{2}|x|}{2\epsilon}}^{\infty}\hat{\phi}^{2}(r)r^{3}\epsilon^{-1}\mathrm{d}r+|x|^{-2}\int_{\frac{\sqrt{2}}{2\epsilon}\min\{|x|,|x-y|\}}^{\frac{\sqrt{2}}{2\epsilon}\max\{|x|,|x-y|\}}\hat{\phi}^{2}(r)r^{3}\epsilon^{-1}\mathrm{d}r.\]
For the first term,
\[\int_{|x|\geq 2|y|}|x|^{-3}|y|\int_{\frac{\sqrt{2}|x|}{2\epsilon}}^{\infty}\hat{\phi}^{2}(r)r^{3}\epsilon^{-1}\mathrm{d}r\mathrm{d}x\lesssim I_{4}|y|\int_{|x|\geq 2|y|}|x|^{-4}\mathrm{d}x\lesssim I_{4}.\]
For the second term,
\[\int_{|x|\geq 2|y|}|x|^{-2}\int_{\frac{\sqrt{2}}{2\epsilon}\min\{|x|,|x-y|\}}^{\frac{\sqrt{2}}{2\epsilon}\max\{|x|,|x-y|\}}\hat{\phi}^{2}(r)r^{3}\epsilon^{-1}\mathrm{d}r\mathrm{d}x\] \[\lesssim \int_{|x|\geq 2|y|}|x|^{-3}\bigg(\int_{\frac{\sqrt{2}}{2\epsilon}\min\{|x|,|x-y|\}}^{\frac{\sqrt{2}|x|}{2\epsilon}}+\int_{\frac{\sqrt{2}|x|}{2\epsilon}}^{\frac{\sqrt{2}}{2\epsilon}\max\{|x|,|x-y|\}}\bigg)\hat{\phi}^{2}(r)r^{4}\mathrm{d}r\mathrm{d}x\] \[\lesssim \int_{|y|\leq\sqrt{2}\epsilon r}\hat{\phi}^{2}(r)r^{4}\mathrm{d}r\int_{\sqrt{2}\epsilon r\leq|x|\leq 2\sqrt{2}\epsilon r}|x|^{-3}\mathrm{d}x+\int_{2|y|\leq\sqrt{2}\epsilon r}\hat{\phi}^{2}(r)r^{4}\mathrm{d}r\int_{\frac{\sqrt{2}}{2}\epsilon r\leq|x|\leq\sqrt{2}\epsilon r}|x|^{-3}\mathrm{d}x\lesssim I_{4}.\]
Combining these two estimates yields (5.10). Thus we have
\[\|\mathscr{Q}_{2}\|_{L^{2}_{l}}\lesssim\epsilon^{\vartheta}I_{3+\vartheta}\|g\|_{H^{2}_{2}}\|h\|_{H^{2}_{l}}.\]
Estimate of \(\mathscr{Q}_{1}\). Recalling the definition of \(\mathscr{R}^{1}(v,v_{*},\sigma)\), it is bounded by
\[|A|^{3}\bigg(|g_{*}|\int_{0}^{1}|\nabla^{3}h(\kappa(v))|\mathrm{d}\kappa+|(\nabla g)_{*}||\nabla^{2}h|+|(\nabla^{2}g)_{*}||\nabla h|+\int_{0}^{1}|\nabla^{3}g(\iota(v_{*}))|\mathrm{d}\iota\,|h|\bigg)\] \[+|A|^{4}\bigg(|(\nabla g)_{*}|\int_{0}^{1}|\nabla^{3}h(\kappa(v))|\mathrm{d}\kappa+|(\nabla^{2}g)_{*}||\nabla^{2}h|+\int_{0}^{1}|\nabla^{3}g(\iota(v_{*}))|\mathrm{d}\iota\,|\nabla h|\bigg)\] \[+|A|^{5}\bigg(|(\nabla^{2}g)_{*}|\int_{0}^{1}|\nabla^{3}h(\kappa(v))|\mathrm{d}\kappa+\int_{0}^{1}|\nabla^{3}g(\iota(v_{*}))|\mathrm{d}\iota\,|\nabla^{2}h|\bigg)\] \[+|A|^{6}\int_{0}^{1}|\nabla^{3}g(\iota(v_{*}))|\mathrm{d}\iota\int_{0}^{1}|\nabla^{3}h(\kappa(v))|\mathrm{d}\kappa:=\sum_{i=1}^{4}\mathscr{R}^{1}_{i}.\]
Let \(\mathscr{Q}^{i}_{1}:=\int_{v_{*},\theta,\varphi}\mathscr{R}^{1}_{i}(v,v_{*})B^{\epsilon}_{1}\sin\theta\mathrm{d}\theta\mathrm{d}\varphi\mathrm{d}v_{*}\). Using the facts \(|A|\lesssim\sin(\theta/2)|v-v_{*}|\) and \(W_{l}\lesssim_{l}W_{l}(\kappa(v))+W_{l}(\iota(v_{*}))\), by the Cauchy-Schwarz inequality and (2.41), we have
\[\sum_{i=1}^{4}|\langle\mathscr{Q}^{i}_{1}W_{l},F\rangle|\lesssim_{l}\epsilon^{\vartheta}I_{3+\vartheta}\|F\|_{L^{2}}(\|g\|_{H^{3}_{l+5}}\|h\|_{H^{3}}+\|g\|_{H^{2}_{l}}\|h\|_{H^{3}_{l+3}}),\]
which yields that \(\|\mathscr{Q}_{1}\|_{L^{2}_{l}}\lesssim_{l}\epsilon^{\vartheta}I_{3+\vartheta}(\|g\|_{H^{3}_{l+5}}\|h\|_{H^{3}}+\|g\|_{H^{2}_{l}}\|h\|_{H^{3}_{l+3}})\).
The desired result (5.3) follows by patching together all the estimates.
Now we are in a position to prove the asymptotic expansion in Theorem 1.3.
Proof of Theorem 1.3.: To derive the asymptotic formula in the theorem, the key point is to give the energy estimates for \(R^{\epsilon}\) in the space \(H^{N}_{l}\) for \(N\geq 0,l\geq 2\). Recalling (5.2), the proof is divided into several steps.
_Step 1: Estimate of \(Q_{1}(f^{\epsilon},R^{\epsilon})\)._ We claim that
\[\sum_{m=0}^{N}\sum_{|\alpha|=m}\langle\partial^{\alpha}Q_{1}(f^{\epsilon},R^{ \epsilon})W_{l},W_{l}\partial^{\alpha}R^{\epsilon}\rangle\lesssim_{N,l,\phi} \|f^{\epsilon}\|_{H_{l}^{N+2}}\|R^{\epsilon}\|_{H_{l}^{N}}^{2}.\]
We need to consider \(\langle Q_{1}(\partial^{\alpha_{1}}f^{\epsilon},\partial^{\alpha_{2}}R^{ \epsilon})W_{l},W_{l}\partial^{\alpha}R^{\epsilon}\rangle\) for \(\alpha_{1}+\alpha_{2}=\alpha\). If \(|\alpha_{1}|=0\), we use (3.9) and Remark 3.1. If \(|\alpha_{1}|=1\), we use Lemma 3.5. If \(|\alpha_{1}|\geq 2\), we use Lemma 3.3.
_Step 2: Estimate of \(Q_{1}(R^{\epsilon},f)\)._ Using (3.21), we have
\[\sum_{m=0}^{N}\sum_{|\alpha|=m}\langle\partial^{\alpha}Q_{1}(R^{\epsilon},f)W _{l},W_{l}\partial^{\alpha}R^{\epsilon}\rangle\lesssim_{N,l,\phi}\|f\|_{H_{l }^{N+2}}\|R^{\epsilon}\|_{H_{l}^{N}}^{2}.\]
_Step 3: Estimate of \(\epsilon^{-\vartheta}(Q_{1}(f,f)-Q_{L}(f,f))\)._ Note that (5.3) yields
\[\epsilon^{-\vartheta}\|Q_{1}(g,h)-Q_{L}(g,h)\|_{H_{l}^{m}}\lesssim_{m,l}I_{3+ \vartheta}\left(\|g\|_{H_{l+5}^{m+3}}\|h\|_{H^{m+3}}+\|g\|_{H_{2}^{m+3}}\|h\|_ {H_{l+3}^{m+3}}\right),\]
which gives
\[\sum_{m=0}^{N}\sum_{|\alpha|=m,\alpha_{1}+\alpha_{2}=\alpha}\left|\langle \epsilon^{-\vartheta}(Q_{1}(\partial^{\alpha_{1}}f,\partial^{\alpha_{2}}f)-Q _{L}(\partial^{\alpha_{1}}f,\partial^{\alpha_{2}}f))W_{l},W_{l}\partial^{ \alpha}R^{\epsilon}\rangle\right|\lesssim_{N,l,\phi}\|f\|_{H_{l+5}^{N+3}}^{2} \|R^{\epsilon}\|_{H_{l}^{N}}.\]
_Step 4: Estimate of \(\epsilon^{-\vartheta}(Q_{2}+Q_{3})(f^{\epsilon},f^{\epsilon})\)._ We claim that
\[\sum_{m=0}^{N}\sum_{|\alpha|=m,\alpha_{1}+\alpha_{2}=\alpha}|\langle\epsilon^{-\vartheta}(Q_{2}+Q_{3})(\partial^{\alpha_{1}}f^{\epsilon},\partial^{\alpha_{2}}f^{\epsilon})W_{l},W_{l}\partial^{\alpha}R^{\epsilon}\rangle|\lesssim_{N,l,\phi}\|f^{\epsilon}\|_{H_{l}^{N+2}}^{2}\|R^{\epsilon}\|_{H_{l}^{N}}.\]
For the \(Q_{2}\) term, by using (2.68), we get
\[|\langle\epsilon^{-\vartheta}Q_{2}(\partial^{\alpha_{1}}f^{\epsilon},\partial^{\alpha_{2}}f^{\epsilon})W_{l},W_{l}\partial^{\alpha}R^{\epsilon}\rangle|\lesssim_{l}(I_{3+\vartheta}+I_{3+\vartheta}^{\prime})\|f^{\epsilon}\|_{H_{l}^{N+2}}^{2}\|R^{\epsilon}\|_{H_{l}^{N}}.\]
For the \(Q_{3}\) term, by using (2.28), we get
\[|\langle\epsilon^{-\vartheta}Q_{3}(\partial^{\alpha_{1}}f^{\epsilon},\partial^{\alpha_{2}}f^{\epsilon})W_{l},W_{l}\partial^{\alpha}R^{\epsilon}\rangle|\lesssim_{l}I_{3}\|f^{\epsilon}\|_{H_{l}^{N+2}}^{2}\|R^{\epsilon}\|_{H_{l}^{N}}.\]
_Step 5: Estimate of \(\epsilon^{-\vartheta}R(f^{\epsilon},f^{\epsilon},f^{\epsilon})\)._ Applying (2.74), we have
\[\sum_{m=0}^{N}\sum_{|\alpha|=m}|\langle\epsilon^{-\vartheta}\partial^{\alpha}R(f^{\epsilon},f^{\epsilon},f^{\epsilon})W_{l},W_{l}\partial^{\alpha}R^{\epsilon}\rangle|\lesssim_{N,l,\phi}\|f^{\epsilon}\|_{H_{l}^{N+2}}^{3}\|R^{\epsilon}\|_{H_{l}^{N}}.\]
_Step 6: Closure of the energy estimates._ Patching together all the estimates in the previous steps, we arrive at
\[\frac{d}{\mathrm{d}t}\|R^{\epsilon}\|_{H_{l}^{N}}^{2} \lesssim_{N,l,\phi} \|f^{\epsilon}\|_{H_{l}^{N+2}}\|R^{\epsilon}\|_{H_{l}^{N}}^{2}+\|f\|_{H_{l}^{N+2}}\|R^{\epsilon}\|_{H_{l}^{N}}^{2}+\|f\|_{H_{l+5}^{N+3}}^{2}\|R^{\epsilon}\|_{H_{l}^{N}}\] \[+\|f^{\epsilon}\|_{H_{l}^{N+2}}^{2}\|R^{\epsilon}\|_{H_{l}^{N}}+\|f^{\epsilon}\|_{H_{l}^{N+2}}^{3}\|R^{\epsilon}\|_{H_{l}^{N}}.\]
Using the uniform upper bounds (5.1) of \(\|f^{\epsilon}\|_{H_{l+5}^{N+3}}\) and \(\|f\|_{H_{l+5}^{N+3}}\), we get
\[\frac{d}{\mathrm{d}t}\|R^{\epsilon}\|_{H_{l}^{N}}\lesssim_{N,l,\phi}(\|f_{0} \|_{H_{l+5}^{N+3}}+\|f_{0}\|_{H_{l+5}^{N+3}}^{3})(\|R^{\epsilon}\|_{H_{l}^{N}}+1).\]
Then (1.13) follows from Gronwall's inequality.
**Acknowledgments.** The work was initiated when M. Pulvirenti visited Tsinghua University in 2016. The work was partially supported by National Key Research and Development Program of China under the grant 2021YFA1002100. Ling-Bing He was also supported by NSF of China under the grant 12141102. Yu-Long Zhou was partially supported by NSF of China under the grant 12001552, Science and Technology Projects in Guangzhou under the grant 202201011144, and Youth Talent Support Program of Guangdong Provincial Association for Science and Technology under the grant SKXRC202311, |
2310.03223 | TacoGFN: Target-conditioned GFlowNet for Structure-based Drug Design | Searching the vast chemical space for drug-like molecules that bind with a
protein pocket is a challenging task in drug discovery. Recently,
structure-based generative models have been introduced which promise to be more
efficient by learning to generate molecules for any given protein structure.
However, since they learn the distribution of a limited protein-ligand complex
dataset, structure-based methods do not yet outperform optimization-based
methods that generate binding molecules for just one pocket. To overcome
limitations on data while leveraging learning across protein targets, we choose
to model the reward distribution conditioned on pocket structure, instead of
the training data distribution. We design TacoGFN, a novel GFlowNet-based
approach for structure-based drug design, which can generate molecules
conditioned on any protein pocket structure with probabilities proportional to
its affinity and property rewards. In the generative setting for
CrossDocked2020 benchmark, TacoGFN attains a state-of-the-art success rate of
$56.0\%$ and $-8.44$ kcal/mol in median Vina Dock score while improving the
generation time by multiple orders of magnitude. Fine-tuning TacoGFN further
improves the median Vina Dock score to $-10.93$ kcal/mol and the success rate
to $88.8\%$, outperforming all optimization-based methods. | Tony Shen, Seonghwan Seo, Grayson Lee, Mohit Pandey, Jason R Smith, Artem Cherkasov, Woo Youn Kim, Martin Ester | 2023-10-05T00:45:04Z | http://arxiv.org/abs/2310.03223v6 | # TacoGFN: Target Conditioned GFlowNet for Structure-Based Drug Design
###### Abstract
We seek to automate the generation of drug-like compounds conditioned to specific protein pocket targets. Most current methods approximate the protein-molecule distribution of a finite dataset and, therefore struggle to generate molecules with significant binding improvement over the training dataset. We instead frame the pocket-conditioned molecular generation task as an RL problem and develop TacoGFN, a target conditional Generative Flow Network model. Our method is explicitly encouraged to generate molecules with desired properties as opposed to fitting on a pre-existing data distribution. To this end, we develop transformer-based docking score prediction to speed up docking score computation and propose TacoGFN to explore molecule space efficiently. Furthermore, we incorporate several rounds of active learning where generated samples are queried using a docking oracle to improve the docking score prediction. This approach allows us to accurately explore as much of the molecule landscape as we can afford computationally. Empirically, molecules generated using TacoGFN and its variants significantly outperform all baseline methods across every property (Docking score, QED, SA, Lipinski), while being orders of magnitude faster.
## 1 Introduction
Structure-based drug design (SBDD) leverages target protein structures to design and optimize potential drug molecules. Due to the growing availability of protein structures from ML protein structure prediction methods (Jumper et al., 2021), and many novel targets identified from high-throughput perturbation experiments, SBDD is becoming an increasingly powerful approach in drug discovery.
Traditional SBDD uses molecular docking to screen virtual libraries of molecules for interaction with a target protein, but its efficacy is impeded by the nature of its exhaustive search within a limited virtual library. Recent works proposed accelerating virtual screening by using an ML model as the molecular docking proxy (Gentile et al., 2022) and incorporating active learning to improve molecular docking proxy model (Graff et al., 2021). Nevertheless, one virtual screening campaign concerns a single target of interest - learning from one model does not generalize to another target.
Generative models for molecules have been proposed to more efficiently explore the chemical space, as they turn the brute-force virtual screening problem into a search problem. Current generative models condition molecule generation on 3D geometric information of the protein pocket using Geometric Deep Learning model architectures (Atz et al., 2021). Recent works in this area typically
use auto-regressive or diffusion models (Guan et al., 2023; Peng et al., 2022; Luo et al., 2022; Schneuing et al., 2023), and could theoretically generate molecular binders from a far greater chemical space than any virtual library for any given pocket.
As most existing generative models approximate the protein-ligand distribution of the dataset, they are unable to propose molecules with significantly better binding affinities than those found in the training dataset without further lead optimization. Furthermore, since the availability of such data is limited, this leads to poor generalization when the molecules complementary to a target protein pocket lie outside the training distribution.
In contrast, we learn an RL policy for exploring the chemical space of molecules and explicitly reward policies that construct molecules with a high docking score, synthetic accessibility and drug-likeness score. The performance of our method no longer depends on the size of a fixed dataset, but rather on how much compute power is available to generate molecules and compute their reward. As a result, the generated molecules have better properties than the reference, and the chemical space explored is no longer constrained by a fixed dataset.
In this paper, we employ the recently proposed RL method GFlowNet (Bengio et al., 2021), which constructs objects with probability proportional to their reward, thus guaranteeing a diverse set of results. We incorporate the target pocket information into GFlowNet and train a docking score prediction model based on graph transformer (Yun et al., 2020) to estimate the affinity of a molecule with respect to a given pocket. Since the initial training dataset may be small and biased, we incorporate an active learning approach to improve the generalization of the docking score prediction model. We benchmark the performance of our method, which we call TacoGFN: _Target Conditioned GFlowNet for Drug Design_, on unseen pockets against current state-of-the-art baseline methods to show that the generated molecules show higher docking scores, synthetic accessibility and drug-likeness score. In summary, the key contributions of this work are the following:
* learning a policy that generates candidates with probabilities proportional to reward based on docking score and desired properties.
* To solve this problem, we propose an extension of GFlowNets to incorporate protein pocket structure context for target conditional molecule generation.
* We incorporate an active learning approach to gradually improve the generalization of the docking score proxy, and consequently of the GFlowNet generator.
* We performed an experimental evaluation on the Cross-Docked dataset and demonstrated that TacoGFN generates molecules with better docking scores and properties compared to all existing state-of-the-art methods.
## 2 Related work
**Structure-based drug design** aims to sample drug-like molecules for target protein pockets. LiGAN (Ragoza et al., 2022) uses a 3D CNN to encode the protein pocket structure and predicts atom densities from the encoded latent space. 3DSBDD (Luo et al., 2022) and Pocket2Mol (Peng et al., 2022) build molecules atom by atom autoregressively. Other methods such as FLAG (Zhang et al., 2023) and DrugGPS (Zhang and Liu, 2023) build molecules fragment by fragment to leverage the chemical prior. There is also a new line of research using diffusion models (Guan et al., 2023; Schneuing et al., 2023) for SBDD. These methods typically implicitly assume an improvement in molecule affinity to the pocket through generating molecules and representing the pocket in 3D space. However, (Harris et al., 2023) found that despite the geometric representation, generated 3D molecules have many more physical violations and fewer key interactions compared to the reference set. In our work, we generate molecules in 2D space instead, which allows us to explore 1000 times more molecules compared to existing models. 1
Footnote 1: Our method generates and evaluates \(\sim\) 100M molecules pocket complexes compared to \(\sim\) 100k complexes from CrossDock dataset used by existing methods.
**Generative Flow Network** (GFlowNet, GFN) (Bengio et al., 2021) learns a stochastic policy for generating an object (like a molecular graph) from a sequence of actions. GFlowNet learns a policy
such that the probability of generating an object is proportional to a given reward for that object, and therefore generates a more diverse set of solutions compared to other RL methods. Following [14], we incorporate an active learning algorithm and offline policy training with GFlowNet, taking advantage of molecular docking as our oracle. We use Multi-objective GFlowNets [14] to generate molecules optimizing for docking score, drug-likeness and synthesizability.
## 3 Method
Our goal is to train a single conditional generative model that generalizes over protein pockets. We first pre-train the pocket embedding using a geometric-vector-perceptron-based graph neural network (GVP-GNN) [15] on the docking score prediction task. We then take this learned graph-level embedding of the protein pocket as the conditioning for TacoGFN.
### Target pocket conditioned GFlowNet
We introduce the target pocket context as the condition for GFlowNet to generate molecules with preferential interaction with specific protein pockets, building on the multi-objective GFlowNet conditioning in [1]. The goal is to learn a single target-conditional GFlowNet that models the distribution associated with each pocket structure. While many pocket-ligand complex representation methods that take advantage of geometric deep learning exist, the memory and time complexity associated with using these models for predicting every GFlowNet action makes exploring a large number of molecules computationally expensive. Instead, we use a pocket encoder trained on the docking score prediction task (with weights frozen) to pre-compute the pocket embedding that conditions molecule generation. This allows GFlowNet to leverage a pocket representation suitable for computing pocket-ligand interaction to condition the generation of high-affinity molecules.
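As an illustration of this design (not the authors' implementation; the dimensions, the flat action space, and the module names below are hypothetical simplifications), the conditioning amounts to concatenating a frozen, pre-computed pocket embedding to the state embedding before predicting action logits:

```python
# Illustrative sketch of a pocket-conditioned GFlowNet forward policy;
# dimensions and the flat action space are hypothetical simplifications.
import torch
import torch.nn as nn

class PocketConditionedPolicy(nn.Module):
    def __init__(self, state_dim: int = 256, pocket_dim: int = 128, n_actions: int = 100):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(state_dim + pocket_dim, 512),
            nn.ReLU(),
            nn.Linear(512, n_actions),
        )

    def forward(self, state_emb: torch.Tensor, pocket_emb: torch.Tensor) -> torch.Tensor:
        # state_emb: embedding of the partially built molecular graph
        # pocket_emb: graph-level pocket embedding from the frozen encoder
        return self.mlp(torch.cat([state_emb, pocket_emb], dim=-1))

policy = PocketConditionedPolicy()
logits = policy(torch.randn(4, 256), torch.randn(4, 128))
probs = torch.softmax(logits, dim=-1)  # forward-policy probabilities P_F(a | state, pocket)
```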
### Docking score proxy
To incorporate the docking score as a reward that is fast to compute, we propose a docking score predictor as the proxy for the molecular docking program. The model is trained in an end-to-end manner. First, the pocket graph embedding is computed using a GVP-GNN [15] as the pocket encoder. The graph transformer then takes the molecular graph and the pocket graph embedding as inputs and extracts a ligand-pocket complex embedding. This embedding is passed through an MLP to obtain a docking score prediction.
### Active learning and offline policy
As GFlowNet often samples molecules beyond the initial training distribution of the docking score predictor, the predictor can be overconfident in assigning high rewards for out-of-distribution molecules. We incorporate a similar active learning scheme from [14], leveraging the docking program to compute new training labels used for improving the generalization of the docking proxy. In addition, the evaluated batch from active learning is added to GFlowNet's offline training dataset, to ensure exploration around known high reward regions.
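A minimal sketch of this loop is given below; the function names (sample_molecules, dock_oracle, retrain_proxy, train_gflownet) are placeholders standing in for the actual pipeline rather than its API, and the default counts simply mirror the setting described in the experiments:

```python
# Hypothetical sketch of the active-learning / offline-policy loop; all callables
# are placeholders, not the released code's API.
from typing import Callable, List, Tuple

def active_learning_loop(
    sample_molecules: Callable[[int], List[str]],          # GFlowNet sampler -> SMILES
    dock_oracle: Callable[[str], float],                   # molecular docking program
    retrain_proxy: Callable[[List[Tuple[str, float]]], None],
    train_gflownet: Callable[[List[Tuple[str, float]]], None],
    rounds: int = 3,
    queries_per_round: int = 30_000,
) -> List[Tuple[str, float]]:
    labelled: List[Tuple[str, float]] = []
    for _ in range(rounds):
        batch = sample_molecules(queries_per_round)
        scored = [(smi, dock_oracle(smi)) for smi in batch]
        labelled.extend(scored)          # new docking labels
        retrain_proxy(labelled)          # improve the docking-score proxy
        train_gflownet(labelled)         # scored batch doubles as offline training data
    return labelled
```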
## 4 Experiments
Figure 1: Our proposed approach consists of two main components: (1) GFlowNet generator (yellow box) which is conditioned with target pocket embedding from the pocket encoder (cyan box) (2) Docking score predictor (green box) that takes the pocket embedding and generated molecule to compute a docking reward to train the GFlowNet policy.
**Dataset and evaluation metrics.** Following the same data preparation and splitting as [11] and [12], we obtain 100k pocket-ligand complexes from the Cross-Docked dataset [12] and compute docking scores using QVina2 [1] for all these complexes. The pocket-ligand pairs and associated docking scores form the initial dataset for training the docking score predictor.
We evaluate the performance of target conditional molecule generation using widely used metrics from previous work [14, 15, 16, 17].
(1) **Vina Score** approximates binding affinity between molecules and their target pockets; (2) **QED** measures how closely a molecule resembles properties of bioactive compounds present in PubChem; (3) **SA** (synthetic accessibility) estimates how easily the molecule can be synthesized; (4) **Lipinski** measures the number of rules satisfied in the Lipinski rule of five [12] - a heuristic measuring drug-likeness; (5) **Diversity** is the average pairwise fingerprint dissimilarity between generated molecules for each target; (6) **Inference Time** is the average time in seconds to generate 100 molecules for one target pocket.
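As a rough sketch, the property metrics other than the Vina score and SA (which require a docking program and RDKit's contrib sascorer, respectively) can be computed with RDKit as below; the exact definitions used in the benchmark may differ (e.g., the reported Lipinski score counts five conditions, while only the four classic criteria are shown here):

```python
# Rough property-evaluation sketch with RDKit (QED, a Lipinski rule count, and
# pairwise fingerprint diversity); the benchmark's exact implementations may differ.
from itertools import combinations
from rdkit import Chem, DataStructs
from rdkit.Chem import QED, Crippen, Descriptors, Lipinski, AllChem

def lipinski_count(mol) -> int:
    # Four classic rule-of-five criteria (the benchmark reports a count out of five).
    return sum([
        Descriptors.MolWt(mol) <= 500,
        Crippen.MolLogP(mol) <= 5,
        Lipinski.NumHDonors(mol) <= 5,
        Lipinski.NumHAcceptors(mol) <= 10,
    ])

def diversity(smiles: list) -> float:
    mols = [Chem.MolFromSmiles(s) for s in smiles]
    fps = [AllChem.GetMorganFingerprintAsBitVect(m, 2, 2048) for m in mols]
    dissim = [1.0 - DataStructs.TanimotoSimilarity(a, b) for a, b in combinations(fps, 2)]
    return sum(dissim) / len(dissim)

mol = Chem.MolFromSmiles("CC(=O)Oc1ccccc1C(=O)O")  # aspirin, as a toy example
print(QED.qed(mol), lipinski_count(mol))
```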
**Baselines and evaluation.** We compare our method without active learning (TacoGFN) with LiGAN [13], TargetDiff [21], DiffSBDD [14], Pocket2Mol [14] and DrugGPS [15] in the fixed dataset setting. Since there is currently no established baseline for target-conditional generation with active learning, TacoGFN-AL is only compared against the original CrossDocked dataset. We also introduce the Top50 variant, where we retain the top 50 generated molecules ranked by the _docking score proxy_ (with QED scores below 0.4 discarded) for each pocket. For every experiment, we sample 100 molecules from the target conditional GFlowNet for each pocket. For TacoGFN (fixed dataset setting), the docking score predictor is trained on the Cross-Docked dataset and used for providing the reward for target conditional GFlowNet generation. In the active learning setting (TacoGFN-AL), we conduct 3 rounds of such training and query the docking oracle for 30k training samples each round.
**Results.** TacoGFN outperforms all baselines in generating molecules with higher scores for all molecular properties, including the Vina Score. In particular, TacoGFN achieves an excellent QED (\(0.681\)) compared to the previous best (\(0.592\)). The poor QED of previous methods is likely due to the low QED of the training dataset and their reliance on maximizing the likelihood. Under our RL framework, molecules generated from TacoGFN have significantly better properties compared to the training dataset. In the active learning setting, TacoGFN-AL further improves the Vina score to \(-7.678\) compared to \(-7.405\) for the base model. A quick screening of generated molecules using the trained docking proxy in TacoGFN-AL-Top50 improves the Vina score to an impressive \(-7.924\). Finally, due to the simple yet effective pocket representation and molecule construction process, TacoGFN generates molecules 50 to 1000 times faster compared to baseline methods.
## 5 Conclusions
We presented TacoGFN - a GFlowNet tailored to the task of generalizable structure-based drug design. This represents a paradigm shift from previous generative approaches, which learn the distribution of a fixed dataset, to an RL approach where the model is rewarded for exploring the chemical space and generating molecules with desired interactions and properties. To search for high-affinity molecules with desired properties in the vast chemical space efficiently, we design a target-conditioned GFlowNet, which incorporates pocket context and gets rewards from our proposed
docking score proxy. Empirically, we show that TacoGFN outperforms the state-of-the-art on target conditional molecule generation for all molecular property metrics, while taking orders of magnitude less time. We show that the incorporation of active learning, and the screening of generated molecules with the docking score proxy, can further facilitate the discovery of molecules with desirable properties. In summary, TacoGFN and its variants can offer great value for many existing real-world SBDD campaigns. Interesting future directions include leveraging uncertainty estimation in the docking proxy, exploring better ways of pocket representation, generating molecules using known reactions (Gao et al., 2022) and adopting continuous GFlowNet (Lahlou et al., 2023) for molecule structure and conformation co-design.

| Methods | Vina Score | High Affinity | QED | SA | Lipinski | Diversity | Time |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Reference | -7.158±2.10 | - | 0.484±0.21 | 0.732±0.14 | 4.367±1.14 | - | - |
| LiGAN | -6.114±1.57 | 0.238±0.28 | 0.369±0.22 | 0.590±0.15 | 4.027±1.38 | 0.654±0.12 | * |
| Pocket2Mol | -7.288±2.53 | 0.542±0.32 | 0.563±0.16 | 0.765±0.13 | 4.902±0.42 | 0.688±0.14 | 2503.5±2207 |
| TargetDiff | -7.318±2.47 | 0.581±* | 0.483±0.20 | 0.584±0.13 | 4.594±0.83 | 0.718±0.09 | ∼3428 |
| DiffSBDD | -7.333±2.56 | - | 0.467±0.18 | 0.554±0.12 | 4.702±0.64 | **0.758±0.05** | 160.3 ∼ 73.3 |
| DrugGPS | -7.345±2.42 | 0.620±0.29 | 0.592±0.21 | 0.728±0.23 | 4.923±0.11 | 0.695±0.17 | 956.34±51.6 |
| TacoGFN | **-7.405±1.70** | **0.625±0.35** | **0.681±0.20** | **0.783±0.07** | **4.938±0.24** | 0.653±0.07 | **2.90±0.28** |
| TacoGFN-AL | -7.678±1.71 | 0.706±0.31 | 0.640±0.21 | 0.814±0.06 | 4.931±0.26 | 0.663±0.07 | 3.07±0.31 |
| _TacoGFN-AL-Top50_ | **-7.924±1.55** | **0.771±0.31** | **0.678±0.15** | **0.821±0.06** | **4.997±0.06** | **0.675±0.07** | - |

Table 1: Evaluation of generated molecules for targets from the Cross-Docked test set. The baseline results are taken from the corresponding publications. * denotes values not provided by the authors.
## Acknowledgments and Disclosure of Funding
We thank Emmanuel Bengio, Artem Cherkasov and Shuman Peng for the valuable discussions and feedback. This work was supported by the NSERC Discovery grant.
|
2307.06794 | Negated Complementary Commonsense using Large Language Models | Larger language models, such as GPT-3, have shown to be excellent in many
tasks. However, we demonstrate that out-of-ordinary questions can throw the
model off guard. This work focuses on finding answers to negated complementary
questions in commonsense scenarios. We illustrate how such questions adversely
affect the model responses. We propose a model-agnostic methodology to improve
the performance in negated complementary scenarios. Our method outperforms
few-shot generation from GPT-3 (by more than 11 points) and, more importantly,
highlights the significance of studying the response of large language models
in negated complementary questions. The code, data, and experiments are
available under: https://github.com/navidre/negated_complementary_commonsense. | Navid Rezaei, Marek Z. Reformat | 2023-07-13T15:03:48Z | http://arxiv.org/abs/2307.06794v1 | # Appeared in Natural Language Reasoning and Structured Explanations Workshop (NLRSE)
###### Abstract
Larger language models, such as GPT-3, have shown to be excellent in many tasks. However, we demonstrate that out-of-ordinary questions can throw the model off guard. This work focuses on finding answers to negated complementary questions in commonsense scenarios. We illustrate how such questions adversely affect the model responses. We propose a model-agnostic methodology to improve the performance in negated complementary scenarios. Our method outperforms few-shot generation from GPT-3 (by more than 11 points) and, more importantly, highlights the significance of studying the response of large language models in negated complementary questions. The code, data, and experiments are available under: [https://github.com/navidre/negated_complementary_commonsense](https://github.com/navidre/negated_complementary_commonsense).
## 1 Introduction
The larger language models (LLMs) become, the more outstanding new capabilities they demonstrate; one example is conducting a conversation about commonsense scenarios. However, our interaction with LLMs has led us to observe that the models tend to emphasize the normal flow of events and seem to struggle with questions involving a negated form of verbs, such as _not_ or _cannot_. An example of that is in Figure 1. Therefore, in this paper, we focus on demonstrating the issue and then suggest an approach to remedy the problem.
To better clarify the problem statement, we start with an example and then formalize it using elements of set theory. Let us look at the scenario in Figure 1; the standard question is "Who PersonX can be?". The answer to this question is _Santa Claus_. The answer to the _negated complementary_ question - "Who PersonX _cannot_ be?" - should be all valid answers which are not the answer to the standard (can be) question. A valid answer fits the scenario described. In this case, we ask about a person, so a non-person cannot be a valid answer.
\[NC=V\cap A^{\prime}=\{x\mid x\in V\wedge x\notin A\} \tag{1}\]
where \(NC\) represents answers to the _negated complementary_ question, \(V\) is the set of all valid answers, \(A\) is the set of correct answers to the standard question, and \(A^{\prime}\) is the complement of \(A\) under the universal set of all answers (\(U\)).
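Concretely, Equation 1 is just a set difference restricted to valid answers; a toy illustration with a hand-picked (hypothetical) answer universe for the Santa Claus scenario of Figure 1:

```python
# Toy illustration of Equation 1 with a hand-picked universe of answers.
valid = {"Santa Claus", "a burglar", "a chimney sweep"}  # V: answers that fit the scenario
standard = {"Santa Claus"}                               # A: answers to the standard question
negated_complementary = valid - standard                 # NC = V \ A = V ∩ A'
print(negated_complementary)  # {'a burglar', 'a chimney sweep'}
```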
Figure 1: An example of a large language model (GPT-3) generating negated commonsense. Five responses per query are demonstrated. The applied pre-processing and post-processing can improve the performance of the models in negated commonsense cases. Non-specific answers, such as _not Santa_, are considered incorrect.
Figure 2: Venn diagram of answer sets: \(U\) is the universal set of answers; \(V\) is the set of all valid answers that includes two sets – correct answers to a standard question \(A\), and correct answers to its negated complementary version \(NC\).
We focus our efforts on commonsensical questions as the uncertainty of results depends on the context and experiences of people answering the questions. As defined in LeCun (2022), commonsense is a collection of world models representing what is likely, plausible, or impossible. In light of that, our goal is to assess the ability of LLMs to answer plausible questions that could be refuted or accepted in a given context.
Given their pre-training nature, we hypothesize that LLMs have an inherent bias towards likely scenarios, which are the most repeated in common text. Most of the text available on the web contains information supporting answers to 'positive' questions, such as how to do things or where to go, not to questions such as how things could not be done or where not to go. This results in an imbalance in the training datasets due to the sparsity of plausible or impossible scenarios. In this paper, we demonstrate that LLMs have difficulty answering _negated complementary_ questions, which results in responses representing plausible, but not impossible, answers. Although LLMs are shown to have this shortcoming, we claim that enough instructions and examples, especially showing reasoning processes, can guide the LLMs onto the right path to answer _negated complementary_ questions within a commonsense context.
Our contributions are as follows. (1) We present an analysis exposing the shortcomings of LLMs when it comes to _negated complementary_ questions in commonsensical scenarios. (2) We propose a novel methodology to improve the performance of the GPT-3 model when _negated complementary_ questions are asked, and compare the results with those obtained using conventional methods. Our code, human-evaluation process, and data will be publicly available.
## 2 Related Work
Language models with transformer architectures have revolutionized the natural language processing landscape in recent years Vaswani et al. (2017); Devlin et al. (2019). It is shown that improved performance and new capabilities emerge when scaling up the size of language models Brown et al. (2020); Chowdhery et al. (2022), although more is needed in challenging tasks, such as commonsense Rae et al. (2021).
A body of research focuses on analyzing and extracting commonsense from language models West et al. (2022); Rezaei and Reformat (2022); Hwang et al. (2021); Da et al. (2021). Authors of Jiang et al. (2021) focus on implications of negated statements and contradictions, where in a commonsense triple relationship (head-relation-tail), the head is either contradicted or logically negated. Comparably this paper focuses on negating relations instead of the head, as explained in Section 4.
## 3 Commonsense Data
The commonsense dataset used in this paper is the ATOMIC-2020 dataset Hwang et al. (2021). It includes general purpose commonsense knowledge, divided into three main categories - physical, event-centered, and social commonsense. The ATOMIC 2020 dataset is licensed under CC-BY and we use it according to the license.
In our experiments, ten relation types are selected from the twenty-three relations from the ATOMIC-2020 dataset. These ten relation types performed worse in our initial evaluation of _negated complementary_ questions. The relations are: _xWant_, _xReact_, _oWant_, _CapableOf_, _Desires_, _HinderedBy_, _isBefore_, _isAfter_, _AtLocation_, _HasSubEvent_. The worse-performer triples are intuitively more common in the normal format in written language than their negated complementary versions, which can result in unbalanced training data.
The dataset is formatted in a triple style. Each atomic piece of data contains \(\langle head-relation-tail\rangle\). For example, \(\langle a\ curved\ yellow\ fruit\ (head)-CanBe\ (relation)-banana\ (tail)\rangle\).
## 4 Methodology
We propose a pipeline system to improve the performance on _negated complementary commonsense_ questions. The pipeline consists of an input prompting technique and a post-processing module. The input prompt adds relevant context and logic in the form of chain-of-thought prompting Wei et al. (2022) to improve the LLM performance. The post-processing module selects the outputs with a higher chance of correctness and filters out the rest.
### Generating Negated Complementary Questions
As described in Section 3, the used dataset is in the format of triples. To form a standard question, we use the head and the relation nodes and leave out the tail to be answered. By standard, we mean
utilizing the head, relation, and tail, without any modifications. Assuming a triple, _a curved yellow fruit_ (head), _CanBe_ (relation), _banana_ (tail), the standard question is _What can be a curved yellow fruit?_. The _negated complementary_ question is formed by negating the relation and verbalizing the resulting triple in question format: _What cannot be a curved yellow fruit?_ A valid answer to the standard question is _banana_, and a reasonable response to the _negated complementary_ question is _apple_. The process is visualized in Figure 3. For the complete list of triple verbalizations, please see Appendix A.
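A minimal sketch of this conversion is shown below; the per-relation templates are illustrative stand-ins, not the actual verbalizations listed in Appendix A:

```python
# Illustrative triple-to-question conversion; the real templates are in Appendix A.
TEMPLATES = {  # hypothetical per-relation templates
    "CanBe": ("What can be {head}?", "What cannot be {head}?"),
    "CapableOf": ("What is {head} capable of?", "What is {head} not capable of?"),
}

def verbalize(head: str, relation: str):
    standard, negated = TEMPLATES[relation]
    return standard.format(head=head), negated.format(head=head)

print(verbalize("a curved yellow fruit", "CanBe"))
# ('What can be a curved yellow fruit?', 'What cannot be a curved yellow fruit?')
```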
### Prompting Technique
The proposed methodology to improve the performance of LLMs relies on building an adequate prompt. It starts with a general introduction of what negations are and emphasizes a need to pay special attention to the word _Not_. The chain-of-thought prompt in each answer has five sections in sequence: 1) phrasing standard question; 2) standard question reasoning, 3) standard question answer; 4) negation logic, and 5) _negated complementary_ question answer. The steps are visualized in Figure 4. For a fair comparison, we used the same number of five question/answer examples in the prompts. We also used the same questions for all prompts.
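To make the five-part structure concrete, the snippet below assembles one illustrative in-prompt exemplar; the wording is hypothetical and does not reproduce the exact prompt used in the experiments:

```python
# Illustrative assembly of one chain-of-thought exemplar with the five sections;
# the actual prompt wording used in the paper differs.
def build_cot_example(negated_q, standard_q, reasoning, standard_a, negated_a):
    return "\n".join([
        f"Q: {negated_q}",
        f"A: The standard question would be: {standard_q}",                 # 1) phrasing
        f"Reasoning: {reasoning}",                                           # 2) reasoning
        f"The answer to the standard question is {standard_a}.",             # 3) answer
        f"The word 'not' asks for a valid answer other than {standard_a}.",  # 4) negation logic
        f"Therefore, the answer is {negated_a}.",                            # 5) negated answer
    ])

print(build_cot_example(
    "What cannot be a curved yellow fruit?",
    "What can be a curved yellow fruit?",
    "A curved yellow fruit describes a banana.",
    "a banana", "an apple"))
```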
### Post Processing
Inspired by Kadavath et al. (2022), we feed the question and answer pair back to the GPT-3 model and ask if it considers a question/answer pair correct. The prompt has instructions for assessing an answer and includes five sample questions/answer pairs. Interestingly, this extra step can improve the results by almost one percent. To better understand the effect of this step, please refer to Table 2.
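A sketch of this filtering step is given below, with the model call abstracted behind a callable so that no particular API is assumed; the assessment-prompt wording is likewise hypothetical:

```python
# Hypothetical self-assessment filter: keep only answers the model itself judges
# correct. `ask_model` stands in for any completion call (e.g., GPT-3); the
# assessment wording here is illustrative, not the paper's exact prompt.
from typing import Callable, List, Tuple

def filter_answers(qa_pairs: List[Tuple[str, str]],
                   ask_model: Callable[[str], str]) -> List[Tuple[str, str]]:
    kept = []
    for question, answer in qa_pairs:
        verdict = ask_model(
            f"Question: {question}\nAnswer: {answer}\n"
            "Is this answer correct? Reply Yes or No."
        )
        if verdict.strip().lower().startswith("yes"):
            kept.append((question, answer))
    return kept
```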
## 5 Experiments
Experiments are conducted on each type of relation mentioned in Section 3. A hundred data points (triples) are sampled randomly from the dataset. The head and relation from each triple are verbalized and fed into the GPT-3 model (_text-davinci-002_). The goal is to predict the tail for two forms of questions: (1) standard question; (2) _negated complementary_ question. For each question, three responses are requested from the model. They are then parsed, and the answers (tails) are automatically extracted. Therefore, three possible tails are obtained for each head and relation, which results in \(600\) total answers per method (100 triples \(\times\) 2 question forms \(\times\) 3 responses).
In social commonsense scenarios, PersonX and PersonY are used in place of gender-specific pronouns to make the questions and answers gender-neutral.
The experiments are done using the GPT-3 model Brown et al. (2020) with version _text-davinci-002_, which has 175 billion parameters. The temperature is set to \(0.7\), and in case of no answer, it is increased to \(1.0\). The maximum length of the output is set between \(100\) and \(150\) tokens, depending on the method. The presence and frequency penalties are set to \(0\). GPT-3 is commercially available, and we have used it within its intended usage and terms of service.
### Human Evaluations
We use Amazon mTurk evaluations via AWS SageMaker to evaluate the results. Each answer is written in a sentence format and given to nine different annotators for assessment. Instructions and examples are provided with each question to assist the annotators better. The options to choose from are: (1) Makes sense; (2) Sometimes makes sense; (3) Does not make sense or incorrect; (4) The first part and the second part are not related; or not enough information to judge; (5) Unfamiliar to me to judge. The first two options are considered correct, the second two are considered incorrect, and the last is considered unfamiliar. To measure inter-rater reliability, we use Krippendorff's alpha and make sure the value is above the acceptable threshold (minimum 0.667) Hayes and Krippendorff (2007). The evaluators were paid based on AWS guidelines.
Figure 3: The process to automatically generate negated complementary questions from dataset triples. The head and relation nodes are used to form a question.
### Results
As seen in Table 1, our method outperforms the few-shot method by more than eleven percentage points when answering _negated complementary_ questions. The few-shot method includes five different questions in the prompt with their answers without chain-of-thought prompting. The performance of our method can mainly be attributed to the specific chain-of-thought prompting with negation logic description, Figure 4. More information about the main contributing factors is in Section 5.3. Although chain-of-thought prompting seems to help the _negated complementary_ questions, it adversely affects answers to the standard questions. Please note that the chain-of-thought prompt for the standard questions does not include negation logic, and a post-processing technique similar to negated complementary questions is performed.
### Ablation Studies
To gain insight into the importance of elements of our method, we perform an ablation study, Table 2. As we can see, adding standard question reasoning (step 2 of Figure 4) results in more than 7% improvement in the results. Adding the thought process explaining the negation logic (steps 1, 3, and 4 of Figure 4) adds another 3% performance improvement. Finally, the post-processing (Section 4.3) is responsible for about 1% improvement in the results.
## 6 Conclusions
In this paper, we demonstrate how simple changes in question formats, which can be trivial for humans, can be challenging for large language models (LLMs). We specifically focus on _negated complementary_ questions in a commonsense context, which is constructed by negating a relation in a commonsense triple. Given the vast amount of knowledge embedded in LLMs, we show that by appropriate guidance, the models could perform well on _negated complementary_ tasks. Our method results in more than eleven percent improvement compared to the vanilla few-shot method. Given the widespread usage of LLMs and their growth rate, we believe focusing on and solving the model's weaknesses is imperative. As future work, _negated complementary_ task can be further analyzed in different formats, such as sentence instead of a question, and also different contexts, e.g., new datasets.
| **Method** | **Standard** | **Negated Complementary** |
| --- | --- | --- |
| Few-shot | **88.7%** | 78.7% |
| Ours | 88.1% | **89.8%** |

Table 1: Our method compared with the few-shot method when applied to ATOMIC-2020 dataset.
| **Method** | **Neg. Comp.** |
| --- | --- |
| Ours | 89.8% |
| Ours-wo-pp | 89.0% |
| Ours-wo-nl-pp | 86.0% |
| Few-shot | 78.7% |

Table 2: Ablation study of the method: _Ours-wo-pp_ is ours without post-processing; _Ours-wo-nl-pp_ is ours without negation logic and post-processing.
Figure 4: Chain-of-thought steps for each answer. The process is to answer the standard question first and then lead the model to answer the negated complementary version.
### Limitations
The experiments in this paper have focused on the _negated complementary_ task in the context of commonsense and the format of questions. However, it is interesting to experiment with other contexts, such as mathematical datasets and other formats, such as sentences instead of questions.
This paper only uses the English language in the _negated complementary_ task experiments, so further investigation in other languages is needed to better understand the limitations of large language models across languages.
GPT-3 is commercially available, and the cost can be a limitation. For example, the current price for _text-davinci-002_ model is $0.02 per 1,000 tokens.
### Ethics Statement
Given the widespread use of large language models and their growth, more software systems will depend on them. This could improve productivity and accessibility, but any vulnerability in large language models can propagate through the system and affect the end users. This work focused on distorted commonsense scenarios that are almost trivial for humans but can be challenging for large language models. Not only did we highlight the issue with _negated complementary_ questions, but we also suggested practical solutions that do not require extensive computation. We believe this line of research can ultimately benefit end users in terms of productivity, reliability, and accessibility.
|
2309.01557 | Baryon correlations in Pythia | We present the results from our investigation of angular correlations between
baryon pairs in the PYTHIA8 event generator. We show how colour reconnection
models and hadronization mechanisms influence such angular correlations and in
particular address the effect of gluons on the baryon production mechanism in
the Lund string fragmentation model. We conclude by discussing the new
theoretical ideas in comparison with the ALICE pp collision results for the
baryon angular correlations. We propose a hypothesis for suppressing baryons
produced in gluon jets and show how that may influence the angular
correlations. | Leif Lönnblad, Harsh Shah | 2023-09-04T12:23:32Z | http://arxiv.org/abs/2309.01557v2 | # Baryon correlations in Pythia
###### Abstract
We present the results from our investigation of angular correlations between baryon pairs in the Pythia8 event generator. We show how colour reconnection models and hadronization mechanisms influence such angular correlations and in particular address the effect of gluons on the baryon production mechanism in the Lund string fragmentation model. We conclude by discussing the new theoretical ideas in comparison with the ALICE pp collision results for the baryon angular correlations. We propose a hypothesis for suppressing baryons produced in gluon jets and show how that may influence the angular correlations.
## 1 Introduction
One of the research interests in particle physics is understanding the production mechanism and the spatial distribution of particles produced in high-energy particle collisions. This can be studied in various ways in colliders, _e.g._, by measuring single particle distributions as a function of one or more variables or looking at correlations between particles, for different species of particles. With the help of phenomenological models, we can then use these measurements to gain theoretical insights into the underlying particle production mechanisms.
In this work, we address a long-standing open question about the angular correlations of pairs of the produced hadrons. A two-particle correlation function provides information regarding the production of another particle near the first particle. It is often studied as a function of relative pseudorapidity (\(\Delta\eta\)) and azimuthal angle (\(\Delta\phi\)) between two particles.
Depending on the chosen range of \(\Delta\eta\), the angular distribution can be studied for long-range (large \(\Delta\eta\)) or short-range (\(\Delta\eta\sim 0\)). The long-range correlations around \(\Delta\phi\sim 0\) are known as the near-side "ridge". They are studied extensively in different collision systems like pp, p\(A\), and \(AA\) to understand the collective behaviour of the produced particles (see, _e.g._, [1] and references therein). The correlation function is defined such that it is unity for completely uncorrelated pairs of particles, and any
correlation will show up as a larger value, while lower values indicate that there is an anti-correlation.
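As a minimal illustration, the sketch below builds such a correlation function in \(\Delta\phi\) from per-event lists of azimuthal angles, dividing the same-event pair distribution by a mixed-event one so that uncorrelated pairs give unity; the binning, acceptance and efficiency corrections of the experimental analyses are deliberately omitted.

```python
# Minimal sketch of a two-particle Delta-phi correlation with event mixing,
# normalised so that completely uncorrelated pairs give C(Delta-phi) ~ 1.
import numpy as np

def delta_phi(phi1, phi2):
    """Fold the azimuthal difference into [-pi/2, 3*pi/2)."""
    dphi = (phi1 - phi2) % (2.0 * np.pi)
    return np.where(dphi >= 1.5 * np.pi, dphi - 2.0 * np.pi, dphi)

def correlation(events, n_bins=36):
    """events: list of 1D arrays of particle phi values, one array per event."""
    bins = np.linspace(-0.5 * np.pi, 1.5 * np.pi, n_bins + 1)
    same, mixed = np.zeros(n_bins), np.zeros(n_bins)
    for i, phis in enumerate(events):
        for a in range(len(phis)):             # same-event pairs
            for b in range(a + 1, len(phis)):
                same += np.histogram(delta_phi(phis[a], phis[b]), bins=bins)[0]
        other = events[(i + 1) % len(events)]  # mix with the next event
        for phi in phis:
            mixed += np.histogram(delta_phi(phi, other), bins=bins)[0]
    return bins, (same / same.sum()) / (mixed / mixed.sum())
```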
The short-range (\(|\Delta\eta|<1.3\)) two-particle angular correlations were studied by the ALICE experiment in [2; 3] for low transverse momentum (\(p_{\perp}<2.5\) GeV) hadrons produced in pp collisions at \(\sqrt{s}=7\) TeV. This angular correlation study shows that the identified hadron pairs have different angular distributions depending on the types of hadrons in the pairs. The meson pairs of the same-sign and opposite-sign particles show a correlation peak near \(\Delta\phi=0\) and a wide bump near \(\Delta\phi=\pi\) (also known as the jet peak and the away-side ridge). On the other hand, baryons behave differently depending on whether the angular distributions are produced for the same-sign or the opposite-sign baryon pairs. For the opposite-sign baryon pairs the angular distribution is similar to that of the meson pairs, with a visible peak near \(\Delta\phi=0\) and an almost flat distribution around \(\Delta\phi=\pi\). For the same-sign baryon pairs, however, there is a clear anti-correlation near \(\Delta\phi=0\) (except for an indication of a tiny peak for \(\Delta\phi=0\)), and a broad peak is observed around \(\Delta\phi=\pi\).
When comparing the ALICE experiment results with Pythia8[4] generated events, the angular correlations for the same- and opposite-sign meson pairs are well reproduced, but Pythia is not able to reproduce the angular correlations for any of the baryon pair types. It is also observed that this peculiar behaviour in the baryon sector is independent of the flavours of the baryons in the pairs, hence ruling out that the Fermi-Dirac correlation effects could play a major role. Some suggestions and hypotheses are proposed in [3]. Following these suggestions, Pythia's hadronization mechanism was recently studied by a theory group [5].
It can be noted that one of the heavy-ion collision experiments, STAR, measures anti-correlations around \(\Delta\phi=0\) for \(\mathrm{p\bar{p}}\) pairs produced in Au-Au collisions [6]. These results show that, unlike the observed correlations in pp collisions, anti-correlations are observed for \(\mathrm{p\bar{p}}\) pairs in heavy-ion collisions. Furthermore, Pythia is able to reproduce the baryon angular correlations in \(\mathrm{e^{+}e^{-}}\) collisions [7]. These results from different collision systems reflect the non-triviality of the underlying physics of the angular correlations in the baryon sector. Hence we have decided to investigate the discrepancy in the angular correlations for baryons produced in \(\mathrm{e^{+}e^{-}}\) collisions and in pp collisions. Moreover, we want to identify whether any of the event simulation stages plays a significant role in the baryon angular correlations.
Phenomenological models like Pythia play important roles in our attempts to quantify the initial and final state effects on the observables. Pythia is one of the successful general-purpose event generators, which can reproduce a variety of observables in good agreement with the data for \(\mathrm{e^{+}e^{-}}\) and pp collisions for a wide range of collision energies. The partons are produced in stages in Pythia: during the hard scattering, multiple parton interactions (MPIs) [8], and the parton showers. These produced partons are then treated in terms of chains of colour dipoles between them,
forming strings. An important feature in hadron collisions is that colour connections between the partons can be re-arranged by a colour reconnection (CR) [8; 9] model. After the CR, the colour singlet strings are hadronized by the Lund string fragmentation model [10] in Pythia. All these steps can influence the production rate of different hadrons, and correlations of the hadrons in the simulation results.
For simplicity, we keep our investigation limited to pp collisions in this paper. We first have to understand which new effects appear in the event simulation when we move from \(\mathrm{e^{+}e^{-}}\) collisions to pp collisions. Since Pythia8 is able to reproduce the angular correlations for the same- and opposite-sign meson pairs fairly well, we do not discuss the mesons' angular correlations in the rest of the paper. Instead, our investigation is focused on the angular correlations of the same- and opposite-sign baryon pairs. We also keep our results limited to protons while discussing various theoretical aspects.
In the following, we will start in section 2 by outlining the main baryon production mechanism in the Pythia8 implementation of the Lund string fragmentation model. Special attention is given to the role of gluons and how they may affect the production of baryons. This is followed by section 3 where we discuss an alternative way of obtaining baryons in Pythia using the QCD-inspired colour reconnection model. In section 4 we then discuss final-state effects and how they could affect baryon correlations, with special attention to the hadronic rescattering model. In section 5 we then look at the phenomenology of these models and try to understand better what could cause the anti-correlation between like-sign baryons as found in data. Finally, in section 6 we summarise with a discussion and an outlook.
## 2 Baryons, popcorn and gluons in the Lund Model
Throughout the perturbative phase of the generation of an event in Pythia, from multiple scatterings, initial- and final-state showers, the tracing of colour connections between partons is done using a leading-colour (\(N_{C}\to\infty\)) approximation. In hadronic collisions there is a possibility to rearrange these connections, as described below in section 3, but the end result is in any case colour-singlet _strings_, each connecting an anti-quark with a quark via a chain of colour-connected gluons. In the Lund string model, these strings are fragmented into hadrons as the string breaks by quark-anti-quark production in the string-like colour field between the partons.
The production rate of different hadron species depends on their quark content, mass, and spin. The quarks and anti-quarks of different flavours are produced in accordance with various parameters in the Lund String Fragmentation mechanism. The values of these parameters are primarily fixed from the model comparisons with LEP data. In a string breaking, a quark and an anti-quark are produced as virtual particles, which can come on-shell using the energy stored in the string through a
tunnelling mechanism. Clearly, the production of heavier quarks would then need more of the string energy than light ones and is therefore suppressed.
The sequence of the further string break-ups will decide if the string piece will form a meson or a baryon as a primary hadron. A series of string breaks of multiple \(q\bar{q}\) pairs will produce mesons. The simplest model for baryon production assumes that the string may break by the production of a diquark-antidiquark pair. This we call the 'diquark model' in Pythia[11]. The consecutive string breaks of \(q\bar{q}\) pairs on either side of the diquark and anti-diquark will form a baryon and an anti-baryon.
In the diquark model, the baryon and anti-baryon are always produced next to each other in rank, and therefore close in rapidity. Experimental results show that this is not always the case [12]. A mechanism was developed to add separation between a baryon and an anti-baryon produced next to each other in the same string. It is called the _popcorn mechanism_[13], and it adds the possibility of meson production between the baryon and anti-baryon pair. The idea of the popcorn mechanism for baryon production is favoured by the experimental results [12]. At the moment, the popcorn mechanism is enabled by default, although only one meson is allowed to form between the baryon and anti-baryon pair in Pythia8.
With or without popcorn, it is clear that we expect some correlations between baryons and anti-baryons. In particular, if we consider the case where they are produced next to each other along the string, their diquark and anti-diquark will have opposite transverse momentum along the string giving an anti-correlation in azimuth angle. However, there is no clear way of obtaining baryon-baryon correlation in the string fragmentation model as such. In a string, there must be at least one anti-baryon between two baryons, and the way transverse momentum is treated in the Lund model, there should be no correlation between them at all.
The MPI machinery for hadronic collisions produces many strings in an event, but they are hadronized independently and would not give rise to correlations between baryons from different strings. It has, however, been shown that the colour reconnection model in Pythia8 gives rise to radial flow [14], which in principle could be responsible for the correlations, and we will discuss that in section 3. Irrespective of colour reconnection it is clear that the strings in hadronic collisions in general are connected to partons from MPI scatterings and are therefore not parallel to the beam axis.
In figure 1 we show how a jet peak evolves by comparing baryon azimuthal correlations in a single straight string, parallel to the beam axis, with the situation where this string has a (soft) gluon inserted, giving a transverse "kink". For same-sign protons, the straight string has almost no correlations, but already a gluon with \(p_{\perp}=1\) GeV will give a rather strong correlation. It can be noted that in Pythia8, around 80% of all hadrons in the central pseudo-rapidity bin come from string pieces connected to a parton with a \(p_{\perp}\) of more than 1 GeV for a 7 TeV pp collision. For \(\mathrm{p\bar{p}}\) we see, as expected, a strong anti-correlation since the di-quark breakup gives
opposite transverse momenta for the baryon and anti-baryon. But we see that with a soft gluon, the anti-correlation is reduced, and for a 2 GeV gluon it has been turned into a rather strong correlation.
In the MPI machinery, the string with a gluon would be accompanied by another string connected to a gluon going in the opposite azimuth direction. The latter would not be strongly correlated in rapidity, but would give rise to the so-called away-side ridge in a two-particle correlation spectrum.
### Gluons vs. popcorn
Since we now have shown that gluon (mini-) jets contribute to the baryon angular correlations, it is relevant to scrutinise the baryon production in a Lund string with gluon "kinks" a bit closer.
The general idea behind the popcorn model used in Pythia is that the creation of a virtual \(q\bar{q}\) pair in a string does not necessarily break the string. To do that it has to have the right colours such that the string is divided into two colour singlets. If the colour of the virtual pair does not match the colours of the string ends, the virtual fluctuation can then live for a while before the pair is annihilated again. As an example, in figure 2 we consider a string stretched between a red quark and an anti-red anti-quark, then imagine a virtual green-anti-green \(q\bar{q}\) pair being created where the quark is moving towards the red end, and vice versa. The field between the virtual quarks will then effectively become antiblue-blue, and if another virtual pair occurs in this region the string can break, with the two quarks moving towards the quark end and the two anti-quarks moving towards the anti-quark end. We have then created two string pieces, each carrying a non-zero baryon number.
Figure 1: Azimuthal correlations along a string with or without a soft (\(p_{\perp}=1.0\), \(1.5\), and \(2.0\) GeV) gluon. The left plot shows proton-proton (\(\rm{pp}+\bar{p}\bar{p}\)) correlations, while the right shows proton–anti-proton correlations. The string is spanned between a quark and an anti-quark with opposite momenta (\(p_{q/\bar{q}}=\pm 100\) GeV) along the \(z\)-axis and the gluons are placed at \(\eta=0\). Only protons with \(|\eta|<1\) are considered.
For the \(q\bar{q}\)-fluctuation to live long enough for the string to break in between, the momenta of the \(q\) must be longitudinal towards the quark end of the string and vice versa for the \(\bar{q}\). Any transverse momentum (\(k_{\perp}\)) would be suppressed with a factor \(\propto\exp(-\pi k_{\perp}^{2}/\kappa)\), where \(\kappa\) is the string tension. Also, if the \(q\) and \(\bar{q}\) had opposite momenta, the field in between would effectively be between two octet charges, which have more than twice the string tension1 giving rise to an extra attractive force between the virtual \(q\bar{q}\) pair, making long-lived fluctuations heavily suppressed.
Footnote 1: see, _e.g._, [15].
This picture works nicely for a straight string. But if there is a gluon along a string, the picture changes. In figure 3 we show two snapshots of a string stretched between a quark and an anti-quark via a gluon. At first, there are two straight-string pieces (A and C) with the gluon as a kink. But the gluon is here retarded by two string pieces and will eventually stop, resulting in a new straight string piece (B) being formed, and the gluon kink is split into two.
In the current implementation of string fragmentation in Pythia8, there is no special treatment of baryon production close to such gluon kinks. From the description of the popcorn model above, however, it is clear that for a non-breaking virtual \(q\bar{q}\) fluctuation, it would be very difficult for the quark or the anti-quark to propagate across such a kink. The pair should have only longitudinal momenta along the string piece where they are created, but the propagation across the kink corresponds to a non-zero transverse momentum in the string piece on the other side of the kink, so such fluctuations would be suppressed.
Since we have here shown that gluons are important for the azimuthal correlations between baryons we will in section 5.3 use a toy model to investigate the possible effects of the suppression of baryon production close to gluon kinks.

Figure 2: Illustration of the popcorn mechanism. In (a) no fluctuation has occurred, and a full string is spanned between a red–antired \(q\bar{q}\) pair. In (b) a green–antigreen pair has appeared on the string as a quantum fluctuation. If the red and green quarks form an antiblue triplet, this reverses the colour flow in this part of the string, and the net force acting on the green quark is zero. In (c) the string breaks by the production of a blue–antiblue \(q\bar{q}\) pair, resulting in two string pieces with diquark ends. In (d) another breakup in the blue triplet field results in an additional meson.
## 3 Junctions and colour reconnections
The popcorn and di-quark models are not the only way of obtaining baryons in Pythia. In some cases non-trivial colour topologies may arise prior to the string fragmentation stage, _e.g._, from the treatment of remnants in hadronic collisions, or when looking at baryon number violating BSM processes. In the MPI machinery, it is not uncommon that two (valence) quarks are taken from a proton, leaving a remnant in a colour-triplet state. Similarly, baryon number violating processes may decay a colour-triplet particle into two anti-triplet particles. In both cases, we may obtain colour-singlet string systems connecting three quarks (or three anti-quarks) in a so-called string junction topology [16]. Pythia8 is able to hadronise such systems, in a process that always will produce a net baryon number.
We will not be concerned with BSM here, and the junctions formed in the MPI remnant treatment mainly affect baryons in the far forward or backward regions of rapidity. There is, however, another way of creating junctions available in Pythia8, using the so-called QCD colour reconnection model [9].
CR models re-arrange the colour connections of the colour dipoles produced after MPIs and parton showers. The primary objective of CR is to reduce the net string length so that the model can reproduce the charged particles' multiplicity and the observed enhancement in \(\langle p_{\perp}\rangle\) (\(N_{ch}\)) distribution. Pythia has a default CR model, which is based on MPIs [8], where the different MPIs are colour reconnected in the \(N_{c}\to\infty\) limit, and the only criterion to satisfy is to reduce the net string length.

Figure 3: A schematic diagram shows two different phases of the movement of a \(\bar{q}gq\) string, where the initial momentum of the \(q(\bar{q})\) is along the (negative) \(z\)-axis while the gluon momentum is perpendicular to them. The innermost lines represent the initial phase where a quark and an anti-quark are connected with a gluon kink in between. As the string stretches out and moves, the gluon gradually loses its energy to the string and eventually stops. At this point the string cannot move further upwards, and the gluon kink is basically split into two kinks, and we enter the phase shown with the outermost lines. We thus end up with three pieces of straight string segments, \(\mathbf{A}\), \(\mathbf{B}\), and \(\mathbf{C}\).
Pythia8 has an alternative model, the QCDCR model [9], which follows QCD colour rules while performing CR. The QCDCR model allows the formation of junction systems, where two or three string pieces can be colour connected to a junction and an anti-junction system, each of which will produce at least one (anti-)baryon. The different colour reconnections possible in this model are summarised in figure 4. For case (a) two string pieces where the colour end of one is in a colour-singlet state w.r.t. the anti-colour end of the other, can reconnect in a so-called _swing_. In (b) we instead have the situation where the two (anti-) colour ends together are in an anti-triplet (triplet) state and can reconnect to two junction systems connected by a dipole. Finally, in (c) the (anti-) colour ends of three dipoles together form a colour singlet and can reconnect into a separate (anti-) junction system. In each case only reconnections that reduce the overall string lengths2 are allowed. This means that dipoles that are approximately anti-parallel in momentum space are more likely to reconnect like in case (a), while the opposite is true for cases (b) and (c).
Footnote 2: The _length_ of a string is approximately given by the sum of the logarithm of the invariant masses of the dipoles forming the string (see ref. [9] for details).
The number of junctions in \(\mathrm{e}^{+}\mathrm{e}^{-}\) collisions is very low, and it is pointed out in [9] that the effect of the QCDCR model is not clearly visible there. But in pp collisions there are sometimes many MPIs, which enhance the possibility of junction formation during the CR. This means the QCDCR will produce additional baryons, on top of what is produced in the subsequent string fragmentation. It should also be noted that since the dipoles that are reconnected can have a large rapidity span, the correlation between the resulting baryon and anti-baryon is much weaker than for the baryon-anti-baryon pairs produced in the string breaking. We can therefore expect that the correlations will be diluted by the additional baryons from the QCDCR model.

Figure 4: Illustration of the possible reconnections in the QCDCR model. (a) A “swing” between two dipoles in the same colour state. (b) Two dipoles in different colour states can form a connected junction–anti-junction system. (c) Three dipoles in different colour states form separate junction and anti-junction systems. In all cases, the total string length must be reduced in the process. Note that the dipole ends may be gluons that connect to other dipoles in a string system.
## 4 Final-state effects on correlations
There are many potential final-state effects that may affect correlations between hadrons produced in the string fragmentation. The Lund group has studied several such models, e.g., a model for Fermi-Dirac correlations [17], the so-called rope hadronisation model [15] and a model for repulsion between strings [18]. Of these, the rope model mainly affects the flavour composition, and is not expected to give significant effects on correlations. Also, the string repulsion will give a flow effect in high multiplicity pp events, but the effect is overall quite small in pp, and it will increase correlations both at \(\Delta\phi=0\) and \(\Delta\phi=\pi\) and would therefore not improve the description of baryon-baryon correlations in Pythia8 at small angles. Fermi-Dirac effects would decrease the correlation at small angles for identical baryons, but again the effect is expected to be small3. Also, as already pointed out in [5], the effect found in [2; 3] is the same for pp and p\(\Lambda\), this can also not improve the situation.
Footnote 3: We have confirmed that the effect is small by making a rudimentary implementation in Pythia8 of the Fermi-Dirac model described in [17]
Instead, we will focus on the model for hadronic rescattering [19; 20]. By following the production vertices of all partons in the event, it is possible to calculate the production points of all hadrons in the string fragmentation [21]. Then one can study the possible scatterings between these hadrons in a way similar to the UrQMD [22] and SMASH [23] models. Clearly, the rescatterings will mainly affect hadrons that are propagating in the same general direction, and one may expect that they will reduce correlations at \(\Delta\phi=0\), and we will therefore investigate this model in the following section.
## 5 Comparison with data
So far we have presented a set of ideas that may affect baryon correlations in Pythia, and in this section, we will confront these ideas with data. It can be noted that we have also tested varying standard string fragmentation parameters, such as flavour ratios, di-quark production rate, spin ratios of the di-quarks, and \(p_{\perp}\) assignment to the produced hadrons. We found, however, that none of these changes significantly affects the angular correlations of the same-sign baryon pairs in Pythia.
In Ref.[5], the main conclusion was that only by forcing Pythia8 to produce at most one di-quark breakup per string, it was possible to understand the correlations found in data. Such an artificial change in the behaviour of the string fragmentation is of course not a satisfactory solution, but it gives us hints as to what is needed. We will therefore concentrate on _reducing_ the number of di-quark breakups in strings with (semi-) hard gluons, but also to introduce alternative baryon production mechanisms that do not exhibit the correlations found in string fragmentation. In addition, we will also consider final state effects from hadronic rescattering.
In all simulations, we have used Pythia version 8.306 to generate pp events at \(\sqrt{s}=7\) TeV. The analysis of the generated events was done using the Rivet [24] routine ALICE_2016_I1507157 which mimics the analysis in [3].4
Footnote 4: Note that we have made slight corrections to the kinematical cuts used for different particle species in the Rivet routine, to better reflect the cuts made in the experiment. These corrections will be included in a future release of Rivet.
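A minimal sketch of this setup is given below, assuming the pythia8 Python bindings are available; the method and flag names mirror the C++ interface, only the main switches of the two models are shown (the full mode-0 tune of [9] sets several further parameters), the rescattering flag name should be checked against the Pythia 8.306 documentation, and the kinematic cuts are purely illustrative.

```python
# Sketch of the generator settings described in the text (not a full tune).
import pythia8

pythia = pythia8.Pythia()
pythia.readString("Beams:eCM = 7000.")             # pp at sqrt(s) = 7 TeV
pythia.readString("SoftQCD:nonDiffractive = on")   # minimum-bias-like events
pythia.readString("ColourReconnection:mode = 1")   # QCD-based CR with junctions
pythia.readString("BeamRemnants:remnantMode = 1")  # remnant handling used with QCDCR (assumed)
pythia.readString("HadronLevel:Rescatter = on")    # hadronic rescattering (assumed flag name)
pythia.init()

for _ in range(1000):
    if not pythia.next():
        continue
    protons = []
    for i in range(pythia.event.size()):
        p = pythia.event[i]
        if p.isFinal() and abs(p.id()) == 2212 and abs(p.eta()) < 0.8 and p.pT() < 2.5:
            protons.append(p)   # illustrative cuts only
    # ... fill (Delta-eta, Delta-phi) histograms for identified pairs here
```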
### The QCD colour reconnection model
We begin with the QCDCR model, which introduces a completely new way of producing baryons. We have used the so-called "mode-0" tune presented in [9], with no further changes. The results are presented in figure 5 and show a remarkable improvement in the baryon-anti-baryon correlations as compared to the default CR model in Pythia8.
The choice of the CR model does not, however, improve the angular (anti-) correlations for the same-sign baryon pairs. The effect is rather a general reduction in correlations, which is expected since the model will produce additional baryons with fewer correlations.
The reduction is more clearly seen for the opposite-sign baryon pairs, where the QCDCR model reduces the amplitude of the baryon-antibaryon pair correlations near \(\Delta\phi=0\), and also reduces the corresponding away-side anti-correlation, bringing the simulation results into agreement with the data. It is clear that the separation between the junction and anti-junction systems created by the QCDCR model plays a significant role in improving the angular correlations between the opposite-sign baryon pairs.
It should be noted that we have also studied the effects in meson correlations (which are not shown here) but found no significant effect of the choice of CR model there.
Since the QCDCR shows significant improvement in the angular correlations of the opposite-sign baryon pairs, we will in the following use the QCDCR as our base-line set-up when adding other modifications.
### Hadronic rescattering
In a high multiplicity event, the produced hadrons can interact with nearby hadrons via elastic or inelastic scattering. A model for hadronic rescattering [19] was recently added in Pythia, implementing \(2\to 2\) and \(2\to 3\) type inelastic and elastic hadronic rescatterings.5 The naive expectation is that rescattering will blur preexistent correlations between particles going in the same direction, and that is indeed what is seen for the \(\mathrm{p\bar{p}}\) correlations in figure 6. The effect is not very large, but we know that rescattering effects in general are quite modest in pp collisions. We note, however, that for the like-sign proton correlations in figure 6 the effect is much more visible. In fact, the correlation around \(\Delta\phi=0\) is all but wiped out.

Figure 5: Baryon azimuthal correlations. _Top_: pp + \(\bar{\rm p}\bar{\rm p}\) pairs on the left, and \(\rm p\bar{\rm p}\) pairs on the right. _Bottom_: p\(\Lambda\) + \(\bar{\rm p}\bar{\Lambda}\) pairs on the left, and \(\rm p\bar{\Lambda}\) + \(\bar{\rm p}\Lambda\) pairs on the right. Events are generated for pp collisions at \(\sqrt{s}\) = 7 TeV and are compared with ALICE data [3]. The red and blue lines represent results using the Pythia8 Monash tune with MPIs-based colour reconnection, and using QCDCR (mode-0) colour reconnection respectively.
The reason for this is somewhat non-trivial, and is related to the annihilation of baryons-anti-baryon pairs in the rescattering. As explained in section 2, the peak at \(\Delta\phi=\Delta\eta=0\) mainly comes from jets, where the two particles are typically produced in the same string. For \(\mathrm{p\bar{p}}\) pairs the main contribution is pairs produced in a single diquark breakup, and since the diquark and anti-diquark will have opposite transverse momenta along the string, it is very unlikely that the baryons formed would rescatter with each other. For \(\mathrm{pp}\) and \(\mathrm{\bar{p}\bar{p}}\) pairs, however, we would need two baryon-antibaryon pairs produced close together along the string and a baryon in one pair could then more easily annihilate with the anti-baryon in the other. This effect turns out to be rather large. Adding rescattering to the \(p_{\perp g}=2\)\(\mathrm{Ge\kern-1.0ptV}\) runs in figure 1 does not affect the shape of the correlations very much, but the number of like-sign pairs will be reduced by around 40%. We, therefore, conclude that the reason for the relatively large effect for \(\mathrm{pp}\) and \(\mathrm{\bar{p}\bar{p}}\) in figure 6 is that the number of pairs stemming from the same string is reduced.
Figure 6: Proton azimuthal correlations for \(\mathrm{pp}\) + \(\mathrm{\bar{p}\bar{p}}\) pairs on the left, and \(\mathrm{p\bar{p}}\) pairs on the right. Events are generated for \(\mathrm{pp}\) collisions at \(\sqrt{s}=7\)\(\mathrm{Te\kern-1.0ptV}\) and are compared with ALICE data [3]. The red lines show results from Pythia8 with QCDCR (mode 0), while blue lines show the same but with hadronic rescattering.
### Suppressing baryon production close to gluon kinks
There is currently no proper implementation for the possible suppression of baryon production in string fragmentation close to gluon kinks in the popcorn mechanism discussed in section 2.1. Instead, we will study a simplified toy model to understand what the effects may be.
We have decided to constrain the baryon production using the UserHooks facility in Pythia8[4], which allows a user to intervene at different stages of the event generation. In particular, there are options to intervene in the string fragmentation procedure and one possibility is to simply veto the production of a single hadron, based on additional criteria implemented by the user.
Figure 7: Proton azimuthal correlations for pp + \(\bar{\mathrm{p}}\bar{\mathrm{p}}\) pairs on the left, and \(\mathrm{p}\bar{\mathrm{p}}\) pairs on the right. Events are generated for pp collisions at \(\sqrt{s}=7\) TeV and are compared with ALICE data [3]. The red lines show results from Pythia8 with QCDCR (mode-0), while blue lines show the same but with a veto on primary baryons spanning a gluon kink as explained in the text.

In our crude implementation, we veto any baryon produced in a diquark breakup if the previous breakup was in a different string region. As an example consider the case in figure 3 where the gluon has lost all its energy. If there has been a normal \(q\bar{q}\) breakup in string region \(\mathbf{C}\) and the next breakup is a diquark-anti-diquark breakup in region \(\mathbf{B}\), we veto the baryon to be produced, and tell Pythia to try another breakup instead. It should be noted that the Lund string fragmentation model is left-right symmetric, and if we instead go from the other (\(\bar{q}\)) end, the same diquark breakup in region \(\mathbf{B}\) followed by a \(q\bar{q}\) breakup in \(\mathbf{C}\), producing the same baryon from a _kinky_ string piece, is not vetoed. The reason for implementing it in this way is technical, but effectively it will result in a suppression of baryons produced around a string corner with a factor of 0.5.
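The veto rule itself can be summarised by the stand-alone sketch below; this is not the actual UserHooks implementation, which is written in C++ against Pythia's fragmentation interface, but it spells out the logic just described.

```python
# Schematic sketch of the toy veto rule. A breakup is characterised here only by
# whether it produced a diquark and by the index of the straight string region
# (A, B, C, ...) in which it occurred.
def veto_baryon(is_diquark_breakup, region_now, region_previous):
    """Return True if the proposed baryon should be vetoed and re-tried."""
    if not is_diquark_breakup or region_previous is None:
        return False
    # Veto diquark breakups whose previous breakup happened in a different
    # string region, i.e. when the breakup pair spans a gluon kink.
    return region_now != region_previous

# Because fragmentation proceeds from both string ends and only one of the two
# equivalent orderings is vetoed, baryons produced around a kink are suppressed
# by an effective factor of about 0.5.
```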
In figure 7 we see the effect of applying this toy model to Pythia8 including QCDCR. As expected the jet peak for baryon pairs is reduced, and for \(\mathrm{pp}+\bar{\mathrm{p}}\bar{\mathrm{p}}\) the lines are moved closer to the data. Unfortunately, the reduction of the jet peak is also present for \(\mathrm{p}\bar{\mathrm{p}}\), which somewhat worsens the excellent agreement with data obtained from the QCDCR model.
It should be noted that our toy model will reduce the overall number of baryons produced in general, even in \(\mathrm{e}^{+}\mathrm{e}^{-}\) collisions, since also there we have gluon kinks. To be completely fair we should therefore have retuned the parameters affecting baryon production to obtain the same reproduction of LEP data. We would, however, expect the reduction of the jet peak to stay more or less the same.
As a final result, we show in figure 8 a comparison between the default Pythia and the accumulated changes of all three models investigated here: QCDCR, hadronic rescattering and the vetoing of baryons close to gluon kinks. We see that the reproduction of the ALICE data is far from perfect when the models are added, but there is a clear improvement over the default Pythia8. The jet peak is reduced both for \(\mathrm{pp}\) +\(\bar{\mathrm{p}}\bar{\mathrm{p}}\) and \(\mathrm{p}\bar{\mathrm{p}}\), and we see that there is even an anti-correlation for like-sign proton pairs around \(\Delta\phi=\Delta\eta=0\).
Figure 8: Proton azimuthal correlations for \(\mathrm{pp}\) + \(\bar{\mathrm{p}}\bar{\mathrm{p}}\) pairs on the left, and \(\mathrm{p}\bar{\mathrm{p}}\) pairs on the right. Events are generated for \(\mathrm{pp}\) collisions at \(\sqrt{s}=7\) TeV and are compared with ALICE data [3]. The red lines show results from the default Pythia8, while blue lines show the result with QCDCR (mode-0) colour reconnection and hadronic rescattering (HR) switched on and with a veto on primary baryons spanning a gluon kink as explained in the text.
## 6 Discussion and summary
We have shown that the observed anti-correlation for the same-sign baryon pairs in the ALICE experiment for pp collisions is a non-trivial outcome of several combined effects. We also presented the first steps towards understanding the failure of Pythia in reproducing these experimental results. We found that two already existing models in Pythia8, the QCDCR model and the hadronic rescattering model, have a significant effect on the correlations, and adding these to the default Pythia8 improves the description of data significantly.
The QCDCR model produces additional baryons due to junction systems forming as a part of the reconnections of the colour dipoles. Such junction baryons are much less correlated than those produced in string fragmentation. We show that it visibly reduces the correlations between the opposite-sign baryon pairs in the jet peak near \(\Delta\phi=0\). As a result, Pythia is able to reproduce the angular correlation distribution for the opposite-sign baryon pairs.
The anti-correlations for the same-sign baryon pairs are the result of a rather complex interplay. Although the QCDCR model improves the Pythia results, it is not sufficient. Adding the hadronic rescattering model, we found that the effect of baryon-anti-baryon annihilation in jets with more than one baryon-anti-baryon pair is quite significant, while if there is only one pair, there is typically no annihilation. This gives a further reduction of the jet peak for same-sign baryons, while the effects on unlike-sign correlations are small. Still, the jet peak for same-sign baryons in Pythia needs to be further reduced in order to reproduce data.
The authors in [5] managed to make Pythia reproduce data, by forcibly forbidding more than one baryon-anti-baryon pair to be produced in a string. This effectively removed the jet peak in the same-sign baryon correlations, leaving only the anti-correlation in the away-side ridge. Here we instead propose a more physical mechanism, where baryon production close to gluon kinks in a string is suppressed. The motivation for this comes from the popcorn model of baryon production in a string. Here an extra non-breaking virtual \(q\bar{q}\) is required to exist before a \(q\bar{q}\) pair breaking occurs to produce an effective di-quark breakup, and we argue that it is less likely to have such an extra pair close to a gluon kink.
Since the jet peak around \(\Delta\phi=\Delta\eta=0\) in the angular correlations in pp collisions mainly consist of particle pairs from the same (mini-) jet (which at the LHC is likely to be a gluon jet), one would then expect a reduction of the peak for baryon pairs in general. For same-sign baryon pairs, we expect the reduction to be even bigger since we require two such popcorn breakups in the same string. We have here qualitatively confirmed that this is the case using a toy model, where we simply disallow some such breakups close to gluon kinks, which has motivated us to attempt a more realistic modelling of the effect in the future. Such a model would have to take into account the size of the transverse momentum of the gluon kink, as well as
the distance between the breakup and the kink. Since the overall number of baryons would be reduced, such a model would also require a proper retuning of the baryon parameters in Pythia8, but it is still likely that the jet peak for same-sign baryon correlations would be reduced. Whether it will be reduced enough to reproduce data remains to be seen.
Finally, we note that there are other independent measurements that could verify our hypothesis of suppressed baryon production close to gluons. One obvious example is to compare the baryon-to-meson ratio inside a gluon jet to that of a quark jet, which could be done by comparing inclusive jets to jets produced together with hard photons. We are not aware of any study where the jet substructure has been studied for identified hadrons, but we would certainly like to encourage our experimental colleagues to pursue such measurements at the LHC.
## Acknowledgements
We would like to thank Gösta Gustafson for coming up with the idea of suppressing popcorn production close to gluons. We would also like to thank Christian Bierlich and Torbjörn Sjöstrand for useful discussions.
This work was funded in part by the Knut and Alice Wallenberg Foundation, contract number 2017.0036, Swedish Research Council, contracts numbers 2016-03291 and 2020-04869, in part by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme, grant agreement No. 668679, and in part by the MCnetITN3 H2020 Marie Curie Initial Training Network, contract 722104.
|
2306.16876 | Vieta-Lucas Wavelet based schemes for the numerical solution of the
singular models | In this paper, numerical methods based on Vieta-Lucas wavelets are proposed
for solving a class of singular differential equations. The operational matrix
of the derivative for Vieta-Lucas wavelets is derived. It is employed to reduce
the differential equations into the system of algebraic equations by applying
the ideas of the collocation scheme, Tau scheme, and Galerkin scheme
respectively. Furthermore, the convergence analysis and error estimates for
Vieta-Lucas wavelets are performed. In the numerical section, the comparative
analysis is presented among the different versions of the proposed Vieta-Lucas
wavelet methods, and the accuracy of the approaches is evaluated by computing
the errors and comparing them to the existing findings. | Shivani Aeri, Rakesh Kumar, Dumitru Baleanu, Kottakkaran Sooppy Nisar | 2023-06-29T11:59:58Z | http://arxiv.org/abs/2306.16876v2 | # Vieta-Lucas Wavelet based schemes for the numerical solution of the singular models
###### Abstract
In this paper, numerical methods based on Vieta-Lucas wavelets are proposed for solving a class of singular differential equations. The operational matrix of the derivative for Vieta-Lucas wavelets is derived. It is employed to reduce the differential equations into the system of algebraic equations by applying the ideas of the collocation scheme, Tau scheme, and Galerkin scheme respectively. Furthermore, the convergence analysis and error estimates for Vieta-Lucas wavelets are performed. In the numerical section, the comparative analysis is presented among the different versions of the proposed Vieta-Lucas wavelet methods, and the accuracy of the approaches is evaluated by computing the errors and comparing them to the existing findings.
**Keywords: Vieta-Lucas wavelets, generating function, Rodrigues' formula, collocation method, Galerkin method, Tau method.**
## 1 Introduction
Singular differential equations (SDEs) have been attracting applied mathematicians for many years because of their applicability in various branches of science, engineering, and technology [1, 2]. We consider the following SDEs [3, 4]:
\[\mathrm{Y}^{\prime\prime}(\zeta)+\frac{\mu}{\zeta}\mathrm{Y}^{\prime}(\zeta)+ \mathrm{f}(\zeta,\mathrm{Y}(\zeta))=\mathrm{g}(\zeta),\ \ \zeta\in(0,\mathrm{L}), \tag{1}\]
with initial conditions
\[\mathrm{Y}(\zeta)\mid_{\zeta=0}=\alpha_{0},\ \mathrm{Y}^{\prime}(\zeta)\mid_{ \zeta=0}=\alpha_{1}, \tag{2}\]
or boundary conditions
\[\mathrm{Y}(\zeta)\mid_{\zeta=0}=\beta_{0},\ \mathrm{Y}(\zeta)\mid_{\zeta= \mathrm{L}}=\beta_{1}, \tag{3}\]
where \(\mathrm{g}(\zeta)\) is an analytical function, \(\mathrm{f}(\zeta,\mathrm{Y}(\zeta))\) is a continuous real valued function, and \(\alpha_{0}\), \(\alpha_{1}\), \(\beta_{0}\), \(\beta_{1}\) are arbitrary constants. Several researchers have been interested in singular models since the available singularity in these models makes computational efforts more tedious. A few well-known singular models with wide applications in thermodynamics, astrophysics, and atomic physics are the Emden-Fowler, Lane-Emden, and Thomas-Fermi models [5]. The Runge-Kutta technique, Euler's method,
Adams method, Milne predictor-corrector approach, and other traditional methods fail to analyze the singular models efficiently [6]. SDEs generally appear with variable coefficients, and the solution to these equations can be obtained by the power series method, continued fractions, and integral transforms. The solution to these equations leads to the generation of several special functions [7]. The existence and uniqueness of the solutions for the SDEs have been investigated by Ford and Pennline [8]. The singularities occurring in these equations have encouraged numerous researchers to explore the solutions by proposing novel numerical schemes. To address singular nonlinear problems, Bender et al. [9] presented a perturbation approach. Russell and Shampine [10] used patch bases, collocation method, and finite difference method for the treatment of SBVPs. Wazwaz [11, 12] respectively employed Adomian decomposition to solve SIVPs and variational iteration method to solve the nonlinear SBVPs. By incorporating a quasi-linearization approach, El-Gebeily and O'Regan [13] solved second-order nonlinear SDEs. Yildirim and Ozis [14] employed the homotopy perturbation method to solve SIVPs. Pandey [15] used the finite difference method to solve a class of SBVPs. Sabir et al. used artificial neural networking based algorithms to solve various SDEs [16, 17, 18]. Researchers have also utilized Chebyshev polynomials [19], Legendre polynomials [20], Laguerre polynomials [21, 22], Jacobi polynomials [23] and Hermite polynomials [24] to derive the solutions of SDEs. However, less emphasis has been placed on the Vieta-Lucas polynomials (VLPs). Recently, VLPs based schemes are used by Agarwal [25, 26, 27, 28] and El-Sayed to solve fractional advection-dispersion equation and Heydari et. al [29] to solve variable order fractional Ginzburg-landau equations.
In recent decades, wavelets have piqued the interest of many academicians due to their ability in solving differential equations with high precision and minimal processing effort [30, 31]. Recently, Chebyshev wavelets [32, 33], Legendre wavelets [34], Hermite wavelets [35], Jacobi wavelets [36], Gegenbauer wavelets [37] and Lucas wavelets [38, 39] have been successfully utilized in literature for the simulation of various phenomena of scientific and technical importance. Vieta-Lucas wavelets have not been used in past to solve singular differential equations yet. Recently, some work is reported in the literature on Vieta-Lucas polynomials, but not much attention has been given to Vieta-Lucas wavelets. The wavelets are localized in nature and have compact support so to fill the literature gap, we choose to construct the Vieta-Lucas wavelets.
The main concern of the paper is to propose a novel class of wavelets, called Vieta-Lucas wavelets, that can handle various differential equations efficiently. The novelty of the work includes the derivation of the generating function and the Rodrigues' formula for VLPs. The shifted form of VLPs is used to prepare the Vieta-Lucas wavelets. The operational matrix (OM) of derivatives is proposed to formulate the numerical scheme. The Emden-Fowler and Lane-Emden type SDEs are also solved by the proposed approaches, which perform well near the singularity. The accuracy of the methods is analyzed by computing the errors in \(L^{2}\) and \(L^{\infty}\) norms.
The remaining part is organized as: A brief overview of VLPs is given in section 2 by including generating function, Rodrigues' formula, and other significant properties. In order to solve the problems over \([0,L]\), shifted version of VLPs is given in section 3. In section 4, Vieta-Lucas wavelets and their functional approximations are presented, and their operational matrix of the derivative is derived in section 5. In section 6 Vieta-Lucas wavelets-based numerical schemes are provided. Section 7 deals with the convergence and error estimation for Vieta-Lucas wavelets. In section 8, numerous illustrations are solved by the proposed approaches, and results are provided in section 9 that validate the efficiency and reliability of the proposed methods. Finally, the conclusion of the proposed work is demonstrated in section 10.
## 2 Vieta-Lucas polynomials and properties
In this section, we give brief introduction about Vieta-Lucas polynomials, recurrence relation, orthogonality property, generating function, Vieta-Lucas differential equation and Rodrigues' formula.
**Definition 2.1**.: The Vieta-Lucas polynomials \(\mathrm{VL}_{\mathrm{m}}(\zeta)\) of degree \(\mathrm{m}\) (\(\mathrm{m}\in\mathbb{N}\cup\{0\}\)) can be defined as [40, 41]:
\[\mathrm{VL}_{\mathrm{m}}(\zeta)=2\cos(\mathrm{m}\delta), \tag{4}\]
where \(\delta=\arccos\left(\frac{\zeta}{2}\right)\) and \(|\zeta|\in[-2,2]\), \(\delta\in[0,\pi]\).
**Proposition 2.1**.: The recurrence relation for Vieta-Lucas polynomials \(\mathrm{VL}_{\mathrm{m}}(\zeta)\) is given by [40]:
\[\mathrm{VL}_{\mathrm{m}}(\zeta)=\zeta\mathrm{VL}_{\mathrm{m-1}}(\zeta)-\mathrm{VL }_{\mathrm{m-2}}(\zeta),\ \ \mathrm{m}\geq 2, \tag{5}\]
with \(\mathrm{VL}_{0}(\zeta)=2\) and \(\mathrm{VL}_{1}(\zeta)=\zeta\).
**Proposition 2.2**.: The Vieta-Lucas polynomials can be represented in terms of power series expansion as [40]:
\[\mathrm{VL}_{\mathrm{m}}(\zeta)=\sum_{\mathrm{i}=0}^{\lfloor m/2\rfloor}(-1)^ {\mathrm{i}}\frac{\mathrm{m}(\mathrm{m}-\mathrm{i}-1)!}{\mathrm{i}!(\mathrm{m} -2\mathrm{i})!}\zeta^{\mathrm{m-2i}},\ \ \mathrm{m}\geq 1. \tag{6}\]
The first few Vieta-Lucas polynomials are given as:
\[\mathrm{VL}_{0}(\zeta)=2,\] \[\mathrm{VL}_{1}(\zeta)=\zeta,\] \[\mathrm{VL}_{2}(\zeta)=\zeta^{2}-2,\] \[\mathrm{VL}_{3}(\zeta)=\zeta^{3}-3\zeta,\] \[\mathrm{VL}_{4}(\zeta)=\zeta^{4}-4\zeta^{2}+2,\] \[\mathrm{VL}_{5}(\zeta)=\zeta^{5}-5\zeta^{3}+5\zeta,\] \[\mathrm{VL}_{6}(\zeta)=\zeta^{6}-6\zeta^{4}+9\zeta^{2}-2.\]
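These expressions are easy to check symbolically; the sketch below, assuming sympy is available, builds \(\mathrm{VL}_{\mathrm{m}}(\zeta)\) from the recurrence relation (5) and compares the result with the power series form (6).

```python
# Sketch: Vieta-Lucas polynomials from the recurrence (5), checked against (6).
import sympy as sp

z = sp.symbols('zeta')

def VL(m):
    p_prev, p = sp.Integer(2), z          # VL_0 and VL_1
    if m == 0:
        return p_prev
    for _ in range(m - 1):
        p_prev, p = p, sp.expand(z * p - p_prev)
    return p

def VL_series(m):                          # closed form (6), valid for m >= 1
    return sp.expand(sum((-1)**i * m * sp.factorial(m - i - 1)
                         / (sp.factorial(i) * sp.factorial(m - 2 * i)) * z**(m - 2 * i)
                         for i in range(m // 2 + 1)))

assert all(sp.expand(VL(m) - VL_series(m)) == 0 for m in range(1, 9))
print(VL(6))   # zeta**6 - 6*zeta**4 + 9*zeta**2 - 2
```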
**Proposition 2.3**.: The Vieta-Lucas polynomials \(\mathrm{VL}_{\mathrm{n}}(\zeta)\) and \(\mathrm{VL}_{\mathrm{m}}(\zeta)\) defined over \([-2,2]\) are orthogonal in weighted sense with the weight function \(\mathrm{w}(\zeta)=\frac{1}{\sqrt{4-\zeta^{2}}}\) and satisfy the following condition [25]:
\[\langle\mathrm{VL}_{\mathrm{n}}(\zeta),\mathrm{VL}_{\mathrm{m}}(\zeta)\rangle_ {\mathrm{w}(\zeta)}=\int_{-2}^{2}\mathrm{VL}_{\mathrm{n}}(\zeta)\mathrm{VL}_ {\mathrm{m}}(\zeta)\mathrm{w}(\zeta)\,\mathrm{d}\zeta=\begin{cases}4\pi,&n=m=0, \\ 2\pi,&n=m\neq 0,\\ 0,&n\neq m.\end{cases} \tag{7}\]
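The orthogonality relation (7) can also be verified numerically; in the sketch below (scipy assumed available) the substitution \(\zeta=2\cos\delta\) removes the endpoint singularity of the weight and reduces the integral to \(\int_{0}^{\pi}4\cos(\mathrm{n}\delta)\cos(\mathrm{m}\delta)\,\mathrm{d}\delta\).

```python
# Numerical check of the weighted orthogonality (7) via zeta = 2*cos(delta).
import numpy as np
from scipy.integrate import quad

def inner(n, m):
    val, _ = quad(lambda d: 4.0 * np.cos(n * d) * np.cos(m * d), 0.0, np.pi)
    return val

print(inner(0, 0) / np.pi)   # ~ 4   (n = m = 0)
print(inner(3, 3) / np.pi)   # ~ 2   (n = m != 0)
print(inner(2, 5) / np.pi)   # ~ 0   (n != m)
```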
**Proposition 2.4**.: The generating function for Vieta-Lucas polynomials is defined as:
\[\sum_{\mathrm{m}=0}^{\infty}\mathrm{VL}_{\mathrm{m}}(\zeta)\mathrm{t}^{ \mathrm{m}}=\frac{2-\zeta\mathrm{t}}{1-\zeta\mathrm{t}+\mathrm{t}^{2}}. \tag{8}\]
Proof.: Since,
\[\sum_{\mathrm{m}=0}^{\infty}\mathrm{VL}_{\mathrm{m}}(\zeta)\mathrm{t}^{ \mathrm{m}}=\mathrm{VL}_{0}(\zeta)\mathrm{t}^{0}+\mathrm{VL}_{1}(\zeta) \mathrm{t}^{1}+\sum_{\mathrm{m}=2}^{\infty}\mathrm{VL}_{\mathrm{m}}(\zeta) \mathrm{t}^{\mathrm{m}}.\]
Therefore,
\[\sum_{m=0}^{\infty}\mathrm{VL}_{\mathrm{m}}(\zeta)\mathrm{t}^{ \mathrm{m}} =2+\zeta\mathrm{t}+\sum_{\mathrm{m}=2}^{\infty}\left[\zeta\mathrm{VL}_ {\mathrm{m}-1}(\zeta)-\mathrm{VL}_{\mathrm{m}-2}(\zeta)\right]\mathrm{t}^{ \mathrm{m}},\] \[\implies\sum_{m=0}^{\infty}\mathrm{VL}_{\mathrm{m}}(\zeta)\mathrm{ t}^{\mathrm{m}} =2+\zeta\mathrm{t}+\zeta\sum_{\mathrm{m}=1}^{\infty}\mathrm{VL}_ {\mathrm{m}}(\zeta)\mathrm{t}^{\mathrm{m}+1}-\sum_{\mathrm{m}=0}^{\infty} \mathrm{VL}_{\mathrm{m}}(\zeta)\mathrm{t}^{\mathrm{m}+2},\] \[\implies\sum_{m=0}^{\infty}\mathrm{VL}_{\mathrm{m}}(\zeta)\mathrm{ t}^{\mathrm{m}} =2+\zeta\mathrm{t}+\zeta\mathrm{t}[\sum_{\mathrm{m}=0}^{\infty}\mathrm{VL}_ {\mathrm{m}}(\zeta)\mathrm{t}^{\mathrm{m}}-\mathrm{VL}_{0}(\zeta)]-\mathrm{t}^ {2}\sum_{\mathrm{m}=0}^{\infty}\mathrm{VL}_{\mathrm{m}}(\zeta)\mathrm{t}^{ \mathrm{m}},\] \[\implies\sum_{m=0}^{\infty}\mathrm{VL}_{\mathrm{m}}(\zeta)\mathrm{ t}^{\mathrm{m}} =2-\zeta\mathrm{t}+(\zeta\mathrm{t}-\mathrm{t}^{2})\sum_{\mathrm{m}=0}^{\infty} \mathrm{VL}_{\mathrm{m}}(\zeta)\mathrm{t}^{\mathrm{m}},\] \[\implies\sum_{m=0}^{\infty}\mathrm{VL}_{\mathrm{m}}(\zeta)\mathrm{ t}^{\mathrm{m}} =\frac{2-\zeta\mathrm{t}}{1-\zeta\mathrm{t}+\mathrm{t}^{2}}.\]
**Proposition 2.5**.: The Vieta-Lucas differential equation is defined as [42]:
\[(4-\zeta^{2})\frac{\mathrm{d}^{2}\mathrm{Y}}{\mathrm{d}\zeta^{2}}-\zeta\frac{ \mathrm{d}\mathrm{Y}}{\mathrm{d}\zeta}+\mathrm{m}^{2}\mathrm{Y}=0,\ \ \mathrm{m}\in\mathbb{N}. \tag{9}\]
**Theorem 2.1**.: _The Rodrigues' formula for Vieta-Lucas polynomials \(\mathrm{VL}_{\mathrm{m}}(\zeta)\) can be obtained as:_
\[\mathrm{VL}_{\mathrm{m}}(\zeta)=(-1)^{\mathrm{m}}\,2\,\frac{\mathrm{m}!}{(2 \mathrm{m})!}(4-\zeta^{2})^{\frac{1}{2}}\frac{\mathrm{d}^{\mathrm{m}}}{ \mathrm{d}\zeta^{\mathrm{m}}}\Big{\{}(4-\zeta^{2})^{\mathrm{m}-\frac{1}{2}} \Big{\}}. \tag{10}\]
Proof.: Consider
\[\mathrm{f}(\zeta)=(4-\zeta^{2})^{\mathrm{m}-\frac{1}{2}}.\]
On differentiating, we obtain
\[\mathrm{f}^{\prime}(\zeta)=-(2\mathrm{m}-1)\zeta(4-\zeta^{2})^{\mathrm{m}- \frac{3}{2}},\]
\[\Rightarrow(4-\zeta^{2})\mathrm{f}^{\prime}(\zeta)+(2\mathrm{m}-1)\zeta \mathrm{f}(\zeta)=0.\]
On differentiating it (m+1) times, we have
\[(4-\zeta^{2})\,\mathrm{D}^{\mathrm{m}+2}\mathrm{f}(\zeta)-3\,\zeta\,\mathrm{ D}^{\mathrm{m}+1}\mathrm{f}(\zeta)+(\mathrm{m}^{2}-1)\,\mathrm{D}^{\mathrm{m}} \mathrm{f}(\zeta)=0,\ \ \ \ \mathrm{where}\ \mathrm{D}\equiv\frac{\mathrm{d}}{\mathrm{d}\zeta}. \tag{11}\]
Let \(\mathrm{g}(\zeta)=(4-\zeta^{2})^{\frac{1}{2}}\,\mathrm{D}^{\mathrm{m}}\mathrm{ f}(\zeta)\), then (11) reduces to
\[(4-\zeta^{2})\,\mathrm{g}^{\prime\prime}(\zeta)-\zeta\,\mathrm{g}^{\prime}( \zeta)+\mathrm{m}^{2}\mathrm{g}(\zeta)=0,\]
which is equivalent to Vieta-Lucas differential equation. Thus, we can choose \(\mathrm{VL}_{\mathrm{m}}(\zeta)=\mathrm{C}_{\mathrm{m}}\,\mathrm{g}(\zeta)\), where \(\mathrm{C}_{\mathrm{m}}\) is a constant to be determined. To find \(\mathrm{C}_{\mathrm{m}}\), the coefficients of \(\zeta^{\mathrm{m}}\) in \(\mathrm{VL}_{\mathrm{m}}(\zeta)\) and \(\mathrm{g}(\zeta)\) are required to be compared. The coefficient of \(\zeta^{\mathrm{m}}\) in \(\mathrm{VL}_{\mathrm{m}}(\zeta)\) is 1.
Since,
\[\mathrm{g}(\zeta) =(4-\zeta^{2})^{\frac{1}{2}}\,\mathrm{D}^{\mathrm{m}}\,(4-\zeta^{2})^{\mathrm{m}-\frac{1}{2}},\] \[=(4-\zeta^{2})^{\frac{1}{2}}\,\sum_{\mathrm{j}=0}^{\mathrm{m}}\binom{\mathrm{m}}{\mathrm{j}}\,\mathrm{D}^{\mathrm{m}-\mathrm{j}}\,(2+\zeta)^{\mathrm{m}-\frac{1}{2}}\,\mathrm{D}^{\mathrm{j}}\,(2-\zeta)^{\mathrm{m}-\frac{1}{2}},\ \ \ \ \text{(by Leibniz's rule)}\] \[=(-1)^{\mathrm{m}}\,\mathrm{m}!\,\sum_{\mathrm{j}=0}^{\mathrm{m}}\binom{\mathrm{m}-\frac{1}{2}}{\mathrm{j}}\binom{\mathrm{m}-\frac{1}{2}}{\mathrm{m}-\mathrm{j}}(\zeta-2)^{\mathrm{m}-\mathrm{j}}\,(\zeta+2)^{\mathrm{j}}.\]
Therefore, the coefficient of \(\zeta^{\mathrm{m}}\) in \(\mathrm{g}(\zeta)\) is
\[(-1)^{\mathrm{m}}\,\mathrm{m}!\,\sum_{\mathrm{j}=0}^{\mathrm{m}}\binom{ \mathrm{m}-\frac{1}{2}}{\mathrm{j}}\binom{\mathrm{m}-\frac{1}{2}}{\mathrm{m}- \mathrm{j}}.\]
The Vandermonde's convolution modifies the above coefficient of \(\zeta^{\mathrm{m}}\) to
\[\frac{(-1)^{\mathrm{m}}}{2}\frac{(2\mathrm{m})!}{\mathrm{m}!}.\]
Thus, \(\mathrm{C}_{\mathrm{m}}=(-1)^{\mathrm{m}}\,2\ \frac{\mathrm{m}!}{(2\mathrm{m})!}\). Hence the Rodrigues' formula.
_Remark 2.1_.: The zeroes and extreme points of \(\mathrm{VL}_{\mathrm{m}}(\zeta)\) in \([-2,2]\) are respectively presented as \(\zeta_{j}=2\cos\big{(}(j-\frac{1}{2})\frac{\pi}{m}\big{)}\), \(\zeta_{j}=2\cos\big{(}j\frac{\pi}{m}\big{)}\), where j=1,2,3,..., m [40].
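As a quick sanity check on the recurrence \(\mathrm{VL}_{\mathrm{m}}(\zeta)=\zeta\mathrm{VL}_{\mathrm{m}-1}(\zeta)-\mathrm{VL}_{\mathrm{m}-2}(\zeta)\) used above and on Remark 2.1, the following sketch (assuming only NumPy; not part of the original development) evaluates \(\mathrm{VL}_{\mathrm{m}}\) by the recurrence and compares it against the trigonometric form \(2\cos(\mathrm{m}\arccos(\zeta/2))\) and the stated zero formula.

```python
import numpy as np

def vieta_lucas(m, zeta):
    """Evaluate VL_m(zeta) on [-2, 2] via the three-term recurrence."""
    zeta = np.asarray(zeta, dtype=float)
    vl_prev, vl_curr = 2.0 * np.ones_like(zeta), zeta.copy()   # VL_0 = 2, VL_1 = zeta
    if m == 0:
        return vl_prev
    for _ in range(2, m + 1):
        vl_prev, vl_curr = vl_curr, zeta * vl_curr - vl_prev   # VL_m = zeta*VL_{m-1} - VL_{m-2}
    return vl_curr

m = 7
zeta = np.linspace(-2.0, 2.0, 201)
trig = 2.0 * np.cos(m * np.arccos(zeta / 2.0))                 # closed form: VL_m(2 cos d) = 2 cos(m d)
assert np.allclose(vieta_lucas(m, zeta), trig, atol=1e-10)

# Zeros from Remark 2.1: zeta_j = 2 cos((j - 1/2) pi / m), j = 1, ..., m
zeros = 2.0 * np.cos((np.arange(1, m + 1) - 0.5) * np.pi / m)
assert np.allclose(vieta_lucas(m, zeros), 0.0, atol=1e-10)
print("recurrence, closed form and zero formula agree for m =", m)
```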
**Proposition 2.6**.: The function \(\zeta^{m}\) can be expressed in terms of \(\mathrm{VL}_{\mathrm{m}}(\zeta)\) as:
\[\zeta^{\mathrm{m}}=\sum_{\mathrm{j}=0}^{\left\lfloor\mathrm{m}/2\right\rfloor^{ *}}\binom{\mathrm{m}}{\mathrm{j}}\mathrm{VL}_{\mathrm{m}-2\mathrm{j}}( \zeta), \tag{12}\]
where '\(\star\)' denotes that the last term in the sum is to be halved when m is even.
Proof.: Since,
\[\zeta^{m}=(2\cos\delta)^{m}=(e^{i\delta}+e^{-i\delta})^{m}. \tag{13}\]
Therefore, the use of binomial expansion gives
\[\zeta^{m} =e^{im\delta}+\binom{m}{1}e^{i(m-2)\delta}+\cdots+\binom{m}{m-1}e^ {-i(m-2)\delta}+e^{-im\delta},\] \[=(e^{im\delta}+e^{-im\delta})+\binom{m}{1}(e^{i(m-2)\delta}+e^{-i (m-2)\delta})+\ldots,\] \[=2\cos m\delta+\binom{m}{1}2\cos(m-2)\delta+\binom{m}{2}2\cos(m-4 )\delta+\ldots,\] \[=\mathrm{VL}_{\mathrm{m}}(\zeta)+\binom{m}{1}\mathrm{VL}_{ \mathrm{m}-2}(\zeta)+\binom{m}{2}\mathrm{VL}_{\mathrm{m}-4}(\zeta)+\ldots.\]
Hence the result.
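A numerical spot-check of (12) for m = 4 (a short sketch assuming NumPy and the `vieta_lucas` helper from the earlier sketch), in which the halved last term is the one containing \(\mathrm{VL}_{0}\):

```python
import numpy as np
from math import comb

m = 4
zeta = np.linspace(-2.0, 2.0, 101)
expansion = np.zeros_like(zeta)
for j in range(m // 2 + 1):
    term = comb(m, j) * vieta_lucas(m - 2 * j, zeta)
    if m % 2 == 0 and j == m // 2:      # the starred sum: halve the last term when m is even
        term = term / 2.0
    expansion += term
assert np.allclose(expansion, zeta ** m)
```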
## 3 Shifted Vieta-Lucas polynomials
**Definition 3.1**.: The shifted Vieta-Lucas polynomials defined over \([0,2]\) are denoted by \(\mathrm{VL}_{\mathrm{m}}^{*}(\zeta)\) of degree \(\mathrm{m}\in\mathbb{N}\cup\{0\}\) as [29]:
\[\mathrm{VL}_{\mathrm{m}}^{*}(\zeta)=\mathrm{VL}_{\mathrm{m}}(2\zeta-2). \tag{14}\]
The recurrence relation for shifted Vieta-Lucas polynomials \(\mathrm{VL}_{\mathrm{m}}^{*}(\zeta)\) is [29]:
\[\mathrm{VL}_{\mathrm{m}}^{*}(\zeta)=(2\zeta-2)\mathrm{VL}_{\mathrm{m}-1}^{*}( \zeta)-\mathrm{VL}_{\mathrm{m}-2}^{*}(\zeta), \tag{15}\]
provided \(\mathrm{VL}_{0}^{*}(\zeta)=2\) and \(\mathrm{VL}_{1}^{*}(\zeta)=2\zeta-2\).
The shifted Vieta-Lucas polynomials satisfy the following orthogonality property [29]:
\[\langle\mathrm{VL}_{\mathrm{n}}^{*}(\zeta),\mathrm{VL}_{\mathrm{m}}^{*}(\zeta )\rangle_{\mathrm{w}^{*}(\zeta)}=\int_{0}^{2}\mathrm{VL}_{\mathrm{n}}^{*}( \zeta)\mathrm{VL}_{\mathrm{m}}^{*}(\zeta)\mathrm{w}^{*}(\zeta)\,\mathrm{d} \zeta=\begin{cases}4\pi,&n=m=0,\\ 2\pi,&n=m\neq 0,\\ 0,&n\neq m,\end{cases} \tag{16}\]
where \(\mathrm{w}^{*}(\zeta)=\frac{1}{\sqrt{2\zeta-\zeta^{2}}}\) is the weight function of shifted Vieta-Lucas polynomials.
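The orthogonality relation (16) can be verified numerically. The sketch below (assuming NumPy; an illustration, not part of the original text) removes the endpoint singularity of \(\mathrm{w}^{*}(\zeta)\) by the substitution \(\zeta=1+\cos\theta\), under which \(\mathrm{VL}_{\mathrm{n}}^{*}(\zeta)=2\cos(\mathrm{n}\theta)\) and \(\mathrm{w}^{*}(\zeta)\,\mathrm{d}\zeta\) becomes \(\mathrm{d}\theta\).

```python
import numpy as np

def shifted_vieta_lucas(n, zeta):
    """VL*_n(zeta) = VL_n(2*zeta - 2) on [0, 2], via the recurrence (15)."""
    x = 2.0 * np.asarray(zeta, dtype=float) - 2.0
    prev, curr = 2.0 * np.ones_like(x), x.copy()
    if n == 0:
        return prev
    for _ in range(2, n + 1):
        prev, curr = curr, x * curr - prev
    return curr

theta = np.linspace(0.0, np.pi, 20001)
zeta = 1.0 + np.cos(theta)          # zeta = 1 + cos(theta)  =>  w*(zeta) d(zeta) = d(theta)
for n in range(4):
    for m in range(4):
        val = np.trapz(shifted_vieta_lucas(n, zeta) * shifted_vieta_lucas(m, zeta), theta)
        expected = 4 * np.pi if n == m == 0 else (2 * np.pi if n == m else 0.0)
        assert abs(val - expected) < 1e-5, (n, m, val)
print("orthogonality (16) confirmed numerically")
```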
## 4 Vieta-Lucas wavelets and function approximation
Wavelet is a type of function that is derived through the dilation and translation of the mother wavelet. The continuous wavelet family with dilation parameter \(h\) and translation parameter \(r\) is defined as [43]:
\[\gamma_{h,r}(\zeta)=|h|^{-1/2}\gamma(\frac{\zeta-r}{h}),\ \ \ \ h,r\in\mathbb{R},h\neq 0. \tag{17}\]
If \(h=h_{0}^{-k},r=sr_{0}h_{0}^{-k},h_{0}>1,r_{0}>0\), then the discrete wavelet family consists of the following members:
\[\gamma_{k,s}(\zeta)=|h_{0}|^{k/2}\gamma(h_{0}^{k}\zeta-sr_{0}),\ \ \ \ k,s\in \mathbb{Z}^{+}, \tag{18}\]
where \(\gamma_{k,s}(x)\) constitutes the wavelet basis for \(L_{2}(\mathbb{R})\). For a specific choice of \(h_{0}=2\) and \(r_{0}=1\), \(\gamma_{k,s}(x)\) constitutes an orthonormal basis.
**Definition 4.1**.: The Vieta-Lucas wavelets \(\Upsilon_{\rm s,m}(\zeta)=\Upsilon({\rm k,s,m},\zeta)\) is defined on the interval [0,2) as:
\[\Upsilon_{\rm s,m}(\zeta)=\begin{cases}2^{\frac{\rm k}{2}}\ \widehat{\mathrm{VL}}_{\rm m}(\,2^{\rm k}\zeta-\hat{\rm s}\,),&\frac{\hat{\rm s}-2 }{2^{\rm k}}\leq\zeta<\frac{\hat{\rm s}+2}{2^{\rm k}},\\ 0,&\text{Otherwise},\end{cases} \tag{19}\]
where
\[\widehat{\mathrm{VL}}_{m}(\zeta)=\begin{cases}\frac{1}{\sqrt{\pi} },&m=0,\\ \frac{1}{\sqrt{2\pi}}{\rm VL}_{\rm m}(\zeta),&m\geq 1,\end{cases} \tag{20}\]
where \(m=0,1,2,\ldots,M-1\); \(M\) is the maximum order of the Vieta-Lucas polynomials; \(s=1,2,\ldots,2^{k-1}\); \(k=1,2,3,\ldots\); and \(\hat{s}=2(2s-1)\).
_Remark 4.1_.: Vieta-Lucas wavelets form an orthogonal set with respect to the weight functions \({\rm w}_{\rm s}(\zeta)={\rm w}(2^{\rm k}\zeta-\hat{\rm s})\).
**Definition 4.2**.: A function \({\rm Y}(\zeta)\) defined over \(L^{2}{}_{w_{\rm s}}[0,2]\) can be written in terms of Vieta-Lucas wavelets series as:
\[{\rm Y}(\zeta)=\sum_{\rm s=1}^{\infty}\sum_{\rm m=0}^{\infty}\Lambda_{\rm s,m }\Upsilon_{\rm s,m}(\zeta), \tag{21}\]
with
\[\Lambda_{\rm s,m}=\langle{\rm Y}(\zeta),\Upsilon_{\rm s,m}(\zeta)\rangle_{{ \rm w}_{\rm s}(\zeta)}=\int_{0}^{2}{\rm Y}(\zeta)\Upsilon_{\rm s,m}(\zeta){ \rm w}_{\rm s}(\zeta)\,{\rm d}\zeta, \tag{22}\]
where \(\langle*,*\rangle\) denotes the inner product.
The truncated form of Vieta Lucas wavelet series can be written as:
\[\bar{\rm Y}(\zeta)\cong\sum_{\rm s=1}^{2^{k-1}}\sum_{\rm m=0}^{\rm M-1} \Lambda_{\rm s,m}\Upsilon_{\rm s,m}(\zeta)=\Lambda^{\rm T}\Upsilon(\zeta), \tag{23}\]
where \(\Lambda\) and \(\Upsilon(\zeta)\) are \(\eta\times 1\) matrices in the following form for \(\eta=2^{k-1}M\), and
\[\Lambda =[\Lambda_{1,0},\Lambda_{1,1},\ldots,\Lambda_{1,\rm M-1}, \Lambda_{2,0},\ldots,\Lambda_{2,\rm M-1},\ldots,\Lambda_{2^{k-1},0},\ldots, \Lambda_{2^{k-1},\rm M-1}]^{\rm T}, \tag{24}\] \[\Upsilon(\zeta) =[\Upsilon_{1,0}(\zeta),\ldots,\Upsilon_{1,\rm M-1}(\zeta), \Upsilon_{2,0}(\zeta),\ldots,\Upsilon_{2,\rm M-1}(\zeta),\ldots,\Upsilon_{2^{k -1},0}(\zeta),\ldots,\Upsilon_{2^{k-1},\rm M-1}(\zeta)]^{\rm T}. \tag{25}\]
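As an illustration of (21)-(23), the sketch below (assuming NumPy, with \(k=1\) so that a single block \(s=1\) covers \([0,2)\); not part of the original text) computes the coefficients \(\Lambda_{1,\mathrm{m}}\) of (22) using the change of variables \(2^{\mathrm{k}}\zeta-\hat{s}=2\cos\delta\), under which \(\mathrm{w}_{\mathrm{s}}(\zeta)\,\mathrm{d}\zeta\) becomes \(2^{-\mathrm{k}}\,\mathrm{d}\delta\), and reconstructs a test function lying in the span of the truncated basis.

```python
import numpy as np

k, s_hat, M = 1, 2, 3                      # one block (s = 1) covering [0, 2)

def vl_hat(m, x):
    """Normalized polynomial of (20): 1/sqrt(pi) for m = 0, VL_m(x)/sqrt(2*pi) otherwise."""
    x = np.asarray(x, dtype=float)
    if m == 0:
        return np.ones_like(x) / np.sqrt(np.pi)
    prev, curr = 2.0 * np.ones_like(x), x.copy()
    for _ in range(2, m + 1):
        prev, curr = curr, x * curr - prev
    return curr / np.sqrt(2.0 * np.pi)

def wavelet(m, zeta):
    """Upsilon_{1,m}(zeta) = 2^(k/2) * vl_hat(2^k zeta - s_hat) on its support."""
    return 2.0 ** (k / 2.0) * vl_hat(m, 2.0 ** k * zeta - s_hat)

Y = lambda zeta: zeta ** 2 - zeta           # test function, exactly representable for M >= 3

# Coefficients (22): with 2^k zeta - s_hat = 2 cos(delta), w_s(zeta) d(zeta) = 2^(-k) d(delta)
delta = np.linspace(0.0, np.pi, 20001)
zeta_d = (2.0 * np.cos(delta) + s_hat) / 2.0 ** k
coeffs = [2.0 ** (-k) * np.trapz(Y(zeta_d) * wavelet(m, zeta_d), delta) for m in range(M)]

# Truncated expansion (23) reproduces Y on a test grid
grid = np.linspace(0.0, 2.0, 101)
approx = sum(c * wavelet(m, grid) for m, c in zip(range(M), coeffs))
assert np.allclose(approx, Y(grid), atol=1e-5)
```

The test function is chosen to lie in the span of the truncated basis, so the expansion reproduces it up to quadrature error; for general functions the truncation error is governed by the bounds derived in Section 7.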
## 5 Vieta-Lucas wavelet based operational matrix
**Theorem 5.1**.: _Let \(\Upsilon(\zeta)\) be the Vieta-Lucas wavelets vector defined in (25). Then the derivative of the vector \(\Upsilon(\zeta)\) can be expressed as:_
\[\frac{{\rm d}\Upsilon(\zeta)}{{\rm d}\zeta}={\rm D}\Upsilon(\zeta), \tag{26}\]
_where \(D\) is the \(\eta\times\eta\) matrix given by_
\[{\rm D}=\begin{pmatrix}{\rm F}&0&\ldots&0\\ 0&{\rm F}&\ldots&0\\ \vdots&\vdots&\ddots&\vdots\\ 0&0&0&{\rm F}\end{pmatrix}\]
_in which \(F\) is \(M\times M\) square matrix whose \((u,v)^{th}\) element is defined as:_
\[{\rm F}_{{\rm u},{\rm v}}=\begin{cases}\frac{2^{\rm k}(\mathrm{u}-1)}{\sqrt{\alpha_{\mathrm{u}-1}}\sqrt{\alpha_{\mathrm{v}-1}}},&{\rm u}=2,\ldots,{\rm M};\,{\rm v}=1,2,\ldots,{\rm u }-1;\,({\rm u}+{\rm v})=\text{odd},\\ 0,&\text{Otherwise},\end{cases} \tag{27}\]
where \(\alpha_{0}=2\) and \(\alpha_{\mathrm{m}}=1\) for \(\mathrm{m}\geq 1\), so that \(\widehat{\mathrm{VL}}_{\mathrm{m}}=\mathrm{VL}_{\mathrm{m}}/\sqrt{2\pi\alpha_{\mathrm{m}}}\) is consistent with the normalization in (20).
Proof.: Using shifted Vieta-Lucas polynomials vector, \(\Upsilon(\zeta)\) can be rewritten as
\[\Upsilon_{\mathrm{u}}(\zeta)=\Upsilon_{\mathrm{s,m}}(\zeta)=2^{\frac{ \mathrm{k}}{2}}\sqrt{\frac{1}{2\pi\alpha_{\mathrm{m}}}}\mathrm{VL}_{\mathrm{m} }^{*}(2^{\mathrm{k-1}}\zeta-2\mathrm{s}+2)\chi_{[\frac{\mathrm{s-1}}{2^{\mathrm{ k-2}}},\frac{\mathrm{s}}{2^{\mathrm{k-2}}}]}, \tag{28}\]
where \(\mathrm{u}=(\mathrm{s}-1)\mathrm{M}+(\mathrm{m}+1)\); \(\mathrm{s}=1,2,3,....2^{\mathrm{k-1}}\), \(\mathrm{m}=0,1,2,....\mathrm{M}-1\) and
\[\chi_{[\frac{\mathrm{s-1}}{2^{\mathrm{k-2}}},\frac{\mathrm{s}}{2^{\mathrm{k- 2}}}]}=\begin{cases}1,&\zeta\in[\frac{\mathrm{s-1}}{2^{\mathrm{k-2}}},\frac{ \mathrm{s}}{2^{\mathrm{k-2}}}],\\ 0,&\mathrm{Otherwise}.\end{cases} \tag{29}\]
On differentiating (28) with respect to \(\zeta\), we obtain
\[\frac{\mathrm{d}\Upsilon_{\mathrm{u}}(\zeta)}{\mathrm{d}\zeta}=2^{\frac{ 3\mathrm{k}}{2}-1}\sqrt{\frac{1}{2\pi\alpha_{\mathrm{m}}}}\frac{\mathrm{d}} {\mathrm{d}\zeta}\mathrm{VL}_{\mathrm{m}}^{*}(2^{\mathrm{k-1}}\zeta-2\mathrm{s }+2)\chi_{[\frac{\mathrm{s-1}}{2^{\mathrm{k-2}}},\frac{\mathrm{s}}{2^{\mathrm{ k-2}}}]}, \tag{30}\]
since (30) vanishes outside the interval \(\zeta\in[\frac{\mathrm{s-1}}{2^{\mathrm{k-2}}},\frac{\mathrm{s}}{2^{\mathrm{ k-2}}}]\). Thus, the nonzero components of Vieta-Lucas wavelets expansion exist only in the interval \(\zeta\in[\frac{\mathrm{s-1}}{2^{\mathrm{k-2}}},\frac{\mathrm{s}}{2^{\mathrm{k- 2}}}]\) i.e, \(\Upsilon_{\mathrm{i}}(\zeta)\) for \(i=(s-1)M+1,(s-1)M+2,\ldots,sM\). Vieta-Lucas wavelets expansion can now be written as
\[\frac{\mathrm{d}\Upsilon_{\mathrm{u}}(\zeta)}{\mathrm{d}\zeta}=\sum_{\mathrm{ i}=(\mathrm{s-1})\mathrm{M}+1}^{\mathrm{sM}}\mathrm{a}_{\mathrm{i}}\Upsilon_{ \mathrm{i}}(\zeta).\]
Here,
\[\frac{\mathrm{d}}{\mathrm{d}\zeta}\Upsilon_{\mathrm{u}}(\zeta)=0,\quad\text{ for }\;\mathrm{u}=1,\mathrm{M}+1,2\mathrm{M}+1,\ldots,(2^{\mathrm{k-1}}-1)\mathrm{M}+1, \;\text{ because }\frac{\mathrm{d}}{\mathrm{d}\zeta}\mathrm{VL}_{0}(\zeta)=0\text{ everywhere}.\]
So, the first row of matrix \(F\) is zero.
Now, the first derivative of shifted Vieta-Lucas polynomials can be expressed as
\[\frac{\mathrm{d}}{\mathrm{d}\zeta}\mathrm{VL}_{\mathrm{m}}^{*}(\zeta)=2\sum_{ \begin{subarray}{c}j=0\\ j+m-odd\end{subarray}}^{\mathrm{m}-1}\frac{\mathrm{m}}{\alpha_{\mathrm{j}}} \mathrm{VL}_{\mathrm{j}}^{*}(\zeta). \tag{31}\]
Using (31) in (30), we obtain
\[\frac{\mathrm{d}\Upsilon_{\mathrm{u}}(\zeta)}{\mathrm{d}\zeta}=2^{\frac{ 3\mathrm{k}}{2}}\sqrt{\frac{1}{2\pi\alpha_{\mathrm{m}}}}\sum_{ \begin{subarray}{c}j=0\\ j+m-odd\end{subarray}}^{\mathrm{m}-1}\frac{\mathrm{m}}{\alpha_{\mathrm{j}}} \mathrm{VL}_{\mathrm{j}}^{*}(2^{\mathrm{k-1}}\zeta-2\mathrm{s}+2)\chi_{[\frac {\mathrm{s-1}}{2^{\mathrm{k-2}}},\frac{\mathrm{s}}{2^{\mathrm{k-2}}}]}, \tag{32}\]
which can be rewritten as
\[\frac{\mathrm{d}\Upsilon_{\mathrm{u}}(\zeta)}{\mathrm{d}\zeta}=2^{\mathrm{k}} \sum_{\begin{subarray}{c}v=1\\ u+v=odd\end{subarray}}^{u-1}\frac{\mathrm{u}-1}{\sqrt{\alpha_{\mathrm{u-1}}} \sqrt{\alpha_{\mathrm{v-1}}}}\Upsilon_{(\mathrm{s-1})\mathrm{M}+\mathrm{v}}( \zeta).\]
Therefore
\[\frac{\mathrm{d}\Upsilon_{\mathrm{u}}(\zeta)}{\mathrm{d}\zeta}=\sum_{\mathrm{v}=1}^{\mathrm{u}-1}\mathrm{F}_{u, \mathrm{v}}\Upsilon_{(\mathrm{s-1})\mathrm{M}+\mathrm{v}}(\zeta),\]
with
\[\mathrm{F}_{u,\mathrm{v}}=\begin{cases}2^{\mathrm{k}}\frac{\mathrm{u}-1}{ \sqrt{\alpha_{\mathrm{u-1}}}\sqrt{\alpha_{\mathrm{v-1}}}},&\mathrm{u}=2, \ldots,\mathrm{M};\;\mathrm{v}=1,2,\ldots,\mathrm{u}-1;\;(\mathrm{u}+\mathrm{v })=\mathrm{odd},\\ 0,&\mathrm{Otherwise}.\end{cases} \tag{33}\]
which leads to the desired expression.
For example, if we select k=2 and M=3, then the discrete members of shifted Vieta-Lucas wavelets can be
written as:
\[\Upsilon_{1}(\zeta)=\Upsilon_{1,0}(\zeta)=\begin{cases}\frac{2}{ \sqrt{\pi}},&0\leq\zeta<1,\\ 0,&\text{Otherwise.}\end{cases}\] \[\Upsilon_{2}(\zeta)=\Upsilon_{1,1}(\zeta)=\begin{cases}\frac{2\sqrt {2}}{\sqrt{\pi}}(2\zeta-1),&0\leq\zeta<1,\\ 0,&\text{Otherwise.}\end{cases}\] \[\Upsilon_{3}(\zeta)=\Upsilon_{1,2}(\zeta)=\begin{cases}\frac{2 \sqrt{2}}{\sqrt{\pi}}(8\zeta^{2}-8\zeta+1),&0\leq\zeta<1,\\ 0,&\text{Otherwise.}\end{cases}\] \[\Upsilon_{4}(\zeta)=\Upsilon_{2,0}(\zeta)=\begin{cases}\frac{2}{ \sqrt{\pi}},&1\leq\zeta<2,\\ 0,&\text{Otherwise.}\end{cases}\] \[\Upsilon_{5}(\zeta)=\Upsilon_{2,1}(\zeta)=\begin{cases}\frac{2 \sqrt{2}}{\sqrt{\pi}}(2\zeta-3),&1\leq\zeta<2,\\ 0,&\text{Otherwise.}\end{cases}\] \[\Upsilon_{6}(\zeta)=\Upsilon_{2,2}(\zeta)=\begin{cases}\frac{2 \sqrt{2}}{\sqrt{\pi}}(8\zeta^{2}-24\zeta+17),&1\leq\zeta<2,\\ 0,&\text{Otherwise.}\end{cases}\]
As a result, the first order derivatives of the shifted Vieta-Lucas wavelets over the domain \([0,2]\) are:
\[\frac{\mathrm{d}\Upsilon_{1}}{\mathrm{d}\zeta} =0,\] \[\frac{\mathrm{d}\Upsilon_{2}}{\mathrm{d}\zeta} =\frac{4\sqrt{2}}{\sqrt{\pi}}=2\sqrt{2}\ \Upsilon_{1},\] \[\frac{\mathrm{d}\Upsilon_{3}}{\mathrm{d}\zeta} =\frac{16\sqrt{2}}{\sqrt{\pi}}(2\zeta-1)=8\ \Upsilon_{2},\] \[\frac{\mathrm{d}\Upsilon_{4}}{\mathrm{d}\zeta} =0,\] \[\frac{\mathrm{d}\Upsilon_{5}}{\mathrm{d}\zeta} =\frac{4\sqrt{2}}{\sqrt{\pi}}=2\sqrt{2}\ \Upsilon_{4},\] \[\frac{\mathrm{d}\Upsilon_{6}}{\mathrm{d}\zeta} =\frac{16\sqrt{2}}{\sqrt{\pi}}(2\zeta-3)=8\ \Upsilon_{5}.\]
So, the matrix \(\mathrm{D}\) is as follows:
\[\mathrm{D}=\begin{pmatrix}0&0&0&0&0&0\\ 2\sqrt{2}&0&0&0&0&0\\ 0&8&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&2\sqrt{2}&0&0\\ 0&0&0&0&8&0\end{pmatrix}.\]
**Corollary 5.1**.: _The \(m^{th}\) order differential OM can be achieved as:_
\[\frac{\mathrm{d}^{(\mathrm{m})}\Upsilon(\zeta)}{\mathrm{d}\zeta^{(\mathrm{m}) }}=\mathrm{D}^{(\mathrm{m})}\Upsilon(\zeta), \tag{34}\]
_where \(D^{(m)}\) denotes the \(m^{th}\) power of the matrix \(D\)._
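The construction in Theorem 5.1 and the k = 2, M = 3 example above can be reproduced numerically. The sketch below (assuming NumPy, and taking \(\alpha_{0}=2\), \(\alpha_{\mathrm{j}}=1\) for \(\mathrm{j}\geq 1\) as the normalization constants consistent with (20)) assembles \(\mathrm{F}\) from (27), builds the block-diagonal \(\mathrm{D}\), and checks it against the explicit matrix displayed above.

```python
import numpy as np

k, M = 2, 3
alpha = np.array([2.0, 1.0, 1.0])           # assumed normalization constants: alpha_0 = 2, alpha_j = 1 (j >= 1)

# F from (27): F[u, v] = 2^k (u-1) / (sqrt(alpha_{u-1}) sqrt(alpha_{v-1})) for v <= u-1 and u+v odd
F = np.zeros((M, M))
for u in range(2, M + 1):
    for v in range(1, u):
        if (u + v) % 2 == 1:
            F[u - 1, v - 1] = 2.0 ** k * (u - 1) / np.sqrt(alpha[u - 1] * alpha[v - 1])

D = np.kron(np.eye(2 ** (k - 1)), F)        # block-diagonal D with 2^(k-1) copies of F

expected = np.array([[0, 0, 0, 0, 0, 0],
                     [2 * np.sqrt(2), 0, 0, 0, 0, 0],
                     [0, 8, 0, 0, 0, 0],
                     [0, 0, 0, 0, 0, 0],
                     [0, 0, 0, 2 * np.sqrt(2), 0, 0],
                     [0, 0, 0, 0, 8, 0]])
assert np.allclose(D, expected)

D2 = D @ D                                  # second-order operational matrix of Corollary 5.1
```

With these values the assembled \(\mathrm{D}\) coincides with the \(6\times 6\) matrix displayed above, and \(\mathrm{D}^{2}\) is the second-order operational matrix of Corollary 5.1.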
## 6 Numerical Scheme
Consider the general form of a second-order differential equation
\[\mathrm{Y}^{\prime\prime}(\zeta)=\mathrm{F}(\zeta,\mathrm{Y}(\zeta),\mathrm{Y}^{ \prime}(\zeta)),\ \ \ \ 0\leq\zeta\leq\mathrm{L}. \tag{35}\]
with
\[\mathrm{Y}(\zeta)|_{\zeta=0}=\alpha_{0},\ \mathrm{Y}^{\prime}(\zeta)|_{\zeta=0}= \alpha_{1}, \tag{36}\]
or
\[\mathrm{Y}(\zeta)|_{\zeta=0}=\beta_{0},\ \mathrm{Y}(\zeta)|_{\zeta=\mathrm{L}}= \beta_{1}. \tag{37}\]
Let \(\mathrm{\bar{Y}}(\zeta)\) be the Vieta-Lucas wavelet series approximation to the solution of equations (35)-(37)
\[\mathrm{\bar{Y}}(\zeta)=\sum_{s=1}^{2^{k-1}}\sum_{m=0}^{M-1}\Lambda_{s,m} \Upsilon_{s,m}(\zeta)=\Lambda^{T}\Upsilon(\zeta). \tag{38}\]
Now by using (34), we obtain
\[\mathrm{\bar{Y}}^{\prime}(\zeta)=\Lambda^{T}\mathrm{D}\Upsilon(\zeta)\ \ \ \ \text{and}\ \ \ \ \mathrm{\bar{Y}}^{\prime\prime}(\zeta)=\Lambda^{T}\mathrm{D}^{(2)}\Upsilon( \zeta). \tag{39}\]
The residual function \(\mathrm{R}(\zeta)\) can be obtained by using equations (38) and (39) in equation (35) as
\[\mathrm{R}(\zeta)=\Lambda^{T}\mathrm{D}^{(2)}\Upsilon(\zeta)-\mathrm{F}( \zeta,\Lambda^{T}\Upsilon(\zeta),\Lambda^{T}\mathrm{D}\Upsilon(\zeta)). \tag{40}\]
Correspondingly, the conditions (36) and (37) are rewritten as
\[\Lambda^{T}\Upsilon(0)=\alpha_{0},\ \ \ \Lambda^{T}\mathrm{D}\Upsilon(0)= \alpha_{1}, \tag{41}\]
or
\[\Lambda^{T}\Upsilon(0)=\beta_{0},\ \ \ \Lambda^{T}\Upsilon(\mathrm{L})= \beta_{1}. \tag{42}\]
When \(\mathrm{R}(\zeta)\) vanishes identically, the exact solution is obtained; in practice, however, it is difficult to make \(\mathrm{R}(\zeta)\) identically zero, so the primary aim is to make the residual as small as possible. We therefore employ weighted residual methods to minimize the residual function. The weighted residual equation is written as
\[\langle\mathrm{W},\mathrm{R}(\zeta)\rangle=\int_{0}^{2}\mathrm{WR}(\zeta)\, \mathrm{d}\zeta=0, \tag{43}\]
where \(\mathrm{W}\) is the weighted function in integral sense.
Thus, three weighted residual approaches are presented which are based on the different choices of \(\mathrm{W}\).
**Approach - I : Collocation Approach**
In this approach, the weight function is chosen as the Dirac delta (\(\delta\)) function, which vanishes everywhere except at the collocation points; this yields
\[\langle\delta(\zeta-\zeta_{\mathrm{i}}),\mathrm{R}(\zeta)\rangle=\int_{0}^{2 }\delta(\zeta-\zeta_{\mathrm{i}})\mathrm{R}(\zeta)\,\mathrm{d}\zeta,\ \ \mathrm{i}=1,2,3,\ldots,2^{k-1}\mathrm{M}-2,\]
which gives
\[\mathrm{R}(\zeta_{\mathrm{i}})=0,\ \ \ \ \mathrm{i}=1,2,3,\ldots,2^{k-1} \mathrm{M}-2. \tag{44}\]
Here, the extrema of Vieta-Lucas polynomials are chosen as the collocation points. Substituting the collocation points in equation (44) leads to the following system of algebraic equations as
\[\Lambda^{T}\mathrm{D}^{(2)}\Upsilon(\zeta_{\mathrm{i}})-\mathrm{F}(\zeta_{ \mathrm{i}},\Lambda^{T}\Upsilon(\zeta_{\mathrm{i}}),\Lambda^{T}\mathrm{D} \Upsilon(\zeta_{\mathrm{i}}))=0,\ \ \ \ \mathrm{i}=1,2,3,\ldots,2^{k-1}\mathrm{M}-2. \tag{45}\]
Now, the system of \(2^{k-1}\mathrm{M}\) equations (equation (45) together with the conditions (41) or (42)) yields the values of the unknown coefficients, and thus the required solution. The solution obtained by this approach is denoted \(\mathrm{Y}_{\mathrm{VLWC}}(\zeta)\).
**Approach - II : Tau Approach**
In this approach, the weighted function is chosen to be the same as the Vieta-Lucas wavelets \(\Upsilon_{i}\), which yields \((2^{k-1}\mathrm{M}-2)\) nonlinear equations as
\[\langle\Upsilon_{\mathrm{i}},\mathrm{R}(\zeta)\rangle=\int_{0}^{2}\Upsilon_{ \mathrm{i}}\;\mathrm{R}(\zeta)\,\mathrm{d}\zeta=0,\qquad\mathrm{i}=1,2,3, \ldots,2^{k-1}\mathrm{M}-2. \tag{46}\]
Thus, solving the \(2^{k-1}\mathrm{M}\) equations (equation (46) together with (41) or (42)) gives the unknown coefficients and hence the corresponding solution, denoted \(\mathrm{Y}_{\mathrm{VLWT}}(\zeta)\).
**Approach - III : Galerkin Approach**
The main idea behind this approach is to expand the solution not directly in the usual Vieta-Lucas wavelet basis, but in combinations of Vieta-Lucas wavelets that satisfy the boundary requirements. In this approach, the weight functions are taken to be the trial functions \(\Psi_{\mathrm{i}}\) themselves, which gives
\[\langle\Psi_{\mathrm{i}},\mathrm{R}(\zeta)\rangle=\int_{0}^{2}\Psi_{\mathrm{ i}}\;\mathrm{R}(\zeta)\,\mathrm{d}\zeta=0,\qquad\mathrm{i}=1,2,3,\ldots,2^{k-1} \mathrm{M}, \tag{47}\]
where the trial series solution for Galerkin approach is written as
\[\mathrm{Y}_{\mathrm{VLWG}}(\zeta)=\Lambda^{\mathrm{T}}\Psi(\zeta), \tag{48}\]
and \(\Psi(\zeta)=\nu(\zeta)\Upsilon(\zeta)\), where \(\nu(\zeta)\) is the trial function. Now, solving the system of \(2^{k-1}\mathrm{M}\) equations from (47) provides the values of the unknown coefficients, and thus the required solution \(\mathrm{Y}_{\mathrm{VLWG}}(\zeta)\).
## 7 Convergence and error bound estimation
**Theorem 7.1**.: _Let \(\mathrm{Y}(\zeta)\in\mathrm{L}^{2}_{w_{\mathrm{s}}}[0,2]\) then the Vieta-Lucas series defined in (21) converges to \(\mathrm{Y}(\zeta)\) by the use of Vieta-Lucas wavelets i.e._
\[\mathrm{Y}(\zeta)=\sum_{\mathrm{s}=1}^{\infty}\sum_{\mathrm{m}=0}^{\infty} \Lambda_{\mathrm{s,m}}\Upsilon_{\mathrm{s,m}}(\zeta).\]
Proof.: Suppose \(\mathrm{Y}(\zeta)\in\mathrm{L}^{2}_{w_{\mathrm{s}}}[0,2]\), where \(L^{2}_{w_{\mathrm{s}}}[0,2]\) be the Hilbert space and \(\Upsilon_{\mathrm{s,m}}\) defined in (19) forms an orthonormal basis with respect to weight function \(w_{\mathrm{s}}(\zeta)=w(2^{k}\zeta-\hat{s})\).
Consider, \(\mathrm{Y}(\zeta)=\sum_{\mathrm{m}=0}^{\mathrm{M}-1}\Lambda_{\mathrm{s,m}} \Upsilon_{\mathrm{s,m}}(\zeta)\) where \(\Lambda_{\mathrm{s,m}}=\langle\mathrm{Y}(\zeta),\Upsilon_{\mathrm{s,m}}(\zeta )\rangle_{w_{\mathrm{s}}(\zeta)}\) for a fixed s and \(\Upsilon_{\mathrm{s,m}}(\zeta)=\Upsilon_{\mathrm{j}}(\zeta)\), \(\varsigma_{\mathrm{j}}=\langle\mathrm{Y}(\zeta),\Upsilon_{\mathrm{j}}(\zeta)\rangle\). The sequence of partial sum \(\{\rho_{\mathrm{s}}\}\) of \(\{\varsigma_{\mathrm{j}}\Upsilon_{\mathrm{j}}(\zeta)\}_{\mathrm{j}\geq 1}\) is defined as \(\rho_{\mathrm{s}}=\sum_{\mathrm{j}=1}^{\mathrm{s}}\varsigma_{\mathrm{j}} \Upsilon_{\mathrm{j}}(\zeta)\).
Now,
\[\left\|\sum_{\mathrm{j}=\mathrm{m}+1}^{\mathrm{s}}\varsigma_{\mathrm{j}} \Upsilon_{\mathrm{j}}(\zeta)\right\|^{2}=\langle\sum_{\mathrm{j}=\mathrm{m}+1} ^{\mathrm{s}}\varsigma_{\mathrm{j}}\Upsilon_{\mathrm{j}}(\zeta),\sum_{\mathrm{ j}=\mathrm{m}+1}^{\mathrm{s}}\varsigma_{\mathrm{j}}\Upsilon_{\mathrm{j}}(\zeta) \rangle=\sum_{\mathrm{j}=\mathrm{m}+1}^{\mathrm{s}}|\varsigma_{\mathrm{j}}|^{2 },\ \ \mathrm{s}>\mathrm{m}.\]
Therefore,
\[\left\|\sum_{\mathrm{j}=\mathrm{m}+1}^{s}\varsigma_{\mathrm{j}}\Upsilon_{\mathrm{j} }(\zeta)\right\|^{2}=\sum_{\mathrm{j}=\mathrm{m}+1}^{s}|\varsigma_{\mathrm{j}}| ^{2},\ \ \mathrm{s>m}.\]
From Bessel's inequality, we know \(\sum_{\mathrm{j}=1}^{\infty}|\varsigma_{\mathrm{j}}|^{2}\) is convergent.
Thus we have
\[\left\|\sum_{\mathrm{j}=\mathrm{m}+1}^{s}\varsigma_{\mathrm{j}}\Upsilon_{ \mathrm{j}}(\zeta)\right\|^{2}\to 0\quad\ \ \mathrm{as}\ \ \mathrm{s}\to\infty.\]
Which implies
\[\left\|\sum_{\mathrm{j}=\mathrm{m}+1}^{s}\varsigma_{\mathrm{j}}\Upsilon_{ \mathrm{j}}(\zeta)\right\|\to 0,\]
and \(\{\rho_{\mathrm{s}}\}\) is a Cauchy sequence that converges to \(\rho(\mathrm{say})\).
Thus,
\[\langle\rho-\mathrm{Y}(\zeta),\Upsilon_{\mathrm{j}}(\zeta)\rangle =\langle\rho,\Upsilon_{\mathrm{j}}(\zeta)\rangle-\langle\mathrm{Y}(\zeta), \Upsilon_{\mathrm{j}}(\zeta)\rangle,\] \[=\langle\lim_{\mathrm{s}\to\infty}\rho_{\mathrm{s}},\Upsilon_{\mathrm{j}}( \zeta)\rangle-\varsigma_{\mathrm{j}},\] \[=\varsigma_{\mathrm{j}}-\varsigma_{\mathrm{j}}.\]
Which implies
\[\langle\rho-\mathrm{Y}(\zeta),\Upsilon(\zeta)\rangle=0.\]
Therefore, \(\mathrm{Y}(\zeta)=\rho\) and \(\sum_{\mathrm{j}=1}^{s}\varsigma_{\mathrm{j}}\Upsilon_{\mathrm{j}}(\zeta)\) converges to \(\mathrm{Y}(\zeta)\) for \(s\to\infty\).
**Theorem 7.2**.: _Let \(Y(\zeta)\) be a second order square integrable function defined over \([0,2]\) with bounded second order derivative say \(|Y^{\prime\prime}(\zeta)|\leq H\) for some constant H. Then \(Y(\zeta)\) can be expanded as an infinite sum of Vieta Lucas wavelets and the series converges to \(Y(\zeta)\) uniformly, that is_
\[\mathrm{Y}(\zeta)=\sum_{\mathrm{s}=1}^{\infty}\sum_{\mathrm{m}=0}^{\infty} \Lambda_{\mathrm{s,m}}\Upsilon_{\mathrm{s,m}}(\zeta),\]
_where \(\Lambda_{\mathrm{s,m}}=\langle\mathrm{Y}(\zeta),\Upsilon_{\mathrm{s,m}}( \zeta)\rangle_{\mathrm{L}^{2}\omega_{\mathrm{s}}[0,2]}\)._
Proof.: From (22), we have
\[\Lambda_{\mathrm{s,m}} =\langle\mathrm{Y}(\zeta),\Upsilon_{\mathrm{s,m}}(\zeta)\rangle_ {\mathrm{L}^{2}\omega_{\mathrm{s}}[0,2]}=\int_{0}^{2}\mathrm{Y}(\zeta)\Upsilon _{\mathrm{s,m}}(\zeta)\mathrm{w}_{\mathrm{s}}(\zeta)\,\mathrm{d}\zeta,\] \[=\int_{\frac{\hat{s}-2}{2^{\mathrm{k}}}}^{\frac{\hat{s}+2}{2^{\mathrm{k}}}} \mathrm{Y}(\zeta)2^{\frac{\mathrm{k}}{2}}\sqrt{\frac{1}{2\pi}}\mathrm{VL}_{ \mathrm{m}}(2^{\mathrm{k}}\zeta-\hat{s})\,\mathrm{w}(2^{\mathrm{ k}}\zeta-\hat{s})\,\mathrm{d}\zeta,\]
where \(\hat{s}=2(2s-1)\). Substituting \(2^{k}\zeta-\hat{s}=2\cos\delta\) and from the definition of Vieta-Lucas polynomial, we obtain
\[\Lambda_{\mathrm{s,m}}= 2^{\frac{-\mathrm{k}}{2}}\sqrt{\frac{1}{2\pi}}\int_{0}^{\pi}\mathrm{Y} \left(\frac{2\cos\delta+\hat{s}}{2^{\mathrm{k}}}\right)\mathrm{VL}_{ \mathrm{m}}(2\cos\delta)\,\mathrm{w}(2\cos\delta)\,2\sin\delta\,\mathrm{d}\delta,\] \[= 2^{\frac{-\mathrm{k}}{2}}\sqrt{\frac{1}{2\pi}}\int_{0}^{\pi}\mathrm{ Y}\left(\frac{2\cos\delta+\hat{s}}{2^{\mathrm{k}}}\right)2\cos\mathrm{m}\delta\,\frac{1}{ \sqrt{4-(2\cos\delta)^{2}}}\,2\sin\delta\,\mathrm{d}\delta,\] \[= 2^{\frac{-\mathrm{k}}{2}}\sqrt{\frac{1}{2\pi}}\int_{0}^{\pi}\mathrm{ Y}\left(\frac{2\cos\delta+\hat{s}}{2^{\mathrm{k}}}\right)2\cos\mathrm{m}\delta\, \frac{1}{2\sqrt{1-\cos^{2}\delta}}\,2\sin\delta\,\mathrm{d}\delta,\] \[= 2^{\frac{-\mathrm{k}+1}{2}}\sqrt{\frac{1}{\pi}}\int_{0}^{\pi} \mathrm{Y}\left(\frac{2\cos\delta+\hat{s}}{2^{\mathrm{k}}}\right)\cos\mathrm{m }\delta\,\mathrm{d}\delta,\]
Using the integration by parts, we get
\[\Lambda_{\rm s,m}= \frac{2^{\frac{-3\mathrm{k}+1}{2}}}{\mathrm{m}}\sqrt{\frac{1}{\pi}} \int_{0}^{\pi}\mathrm{Y}^{\prime}\left(\frac{2\cos\delta+\hat{\mathrm{s}}}{2^{ \mathrm{k}}}\right)2\sin\mathrm{m}\delta\sin\delta\,\mathrm{d}\delta,\] \[= \frac{2^{\frac{-3\mathrm{k}+1}{2}}}{\mathrm{m}}\sqrt{\frac{1}{\pi} }\int_{0}^{\pi}\mathrm{Y}^{\prime}\left(\frac{2\cos\delta+\hat{\mathrm{s}}}{2^ {\mathrm{k}}}\right)(\cos\left(\mathrm{m}-1\right)\delta-\cos\left(\mathrm{m}+1 \right)\delta)\,\mathrm{d}\delta,\] \[= \frac{2^{\frac{-3\mathrm{k}+1}{2}}}{\mathrm{m}}\sqrt{\frac{1}{\pi} }\left(\int_{0}^{\pi}\mathrm{Y}^{\prime}\left(\frac{2\cos\delta+\hat{\mathrm{ s}}}{2^{\mathrm{k}}}\right)\cos\left(\mathrm{m}-1\right)\delta\,\mathrm{d} \delta-\int_{0}^{\pi}\mathrm{Y}^{\prime}\left(\frac{2\cos\delta+\hat{\mathrm{ s}}}{2^{\mathrm{k}}}\right)\cos\left(\mathrm{m}+1\right)\delta\,\mathrm{d} \delta\right),\] \[\implies\,|\Lambda_{\rm s,m}|\leq\,\frac{2^{\frac{-3\mathrm{k}+1 }{2}}}{\mathrm{m}}\sqrt{\frac{1}{\pi}}(|\mathrm{I}_{1}|+|\mathrm{I}_{2}|), \tag{49}\]
where \(\mathrm{I}_{1}=\int_{0}^{\pi}\mathrm{Y}^{\prime}\left(\frac{2\cos\delta+\hat{ \mathrm{s}}}{2^{\mathrm{k}}}\right)\cos\left(\mathrm{m}-1\right)\delta\, \mathrm{d}\delta\) and \(\mathrm{I}_{2}=\int_{0}^{\pi}\mathrm{Y}^{\prime}\left(\frac{2\cos\delta+\hat{ \mathrm{s}}}{2^{\mathrm{k}}}\right)\cos\left(\mathrm{m}+1\right)\delta\, \mathrm{d}\delta\). Next we estimate \(\mathrm{I}_{1}\) and \(\mathrm{I}_{2}\) respectively. On Integrating \(\mathrm{I}_{1}\) by parts, we have
\[\mathrm{I}_{1}= \frac{2^{-\mathrm{k}+1}}{\mathrm{m}-1}\int_{0}^{\pi}\mathrm{Y}^{\prime \prime}\left(\frac{2\cos\delta+\hat{\mathrm{s}}}{2^{\mathrm{k}}}\right)\sin \left(\mathrm{m}-1\right)\delta\sin\delta\,\mathrm{d}\delta,\] \[|\mathrm{I}_{1}|\leq \frac{2^{-\mathrm{k}+1}}{\mathrm{m}-1}\int_{0}^{\pi}\left|\mathrm{Y}^{ \prime\prime}\left(\frac{2\cos\delta+\hat{\mathrm{s}}}{2^{\mathrm{k}}}\right) \right|\,\mathrm{d}\delta,\]
which gives
\[|\mathrm{I}_{1}|\leq\frac{\mathrm{H}\pi 2^{-\mathrm{k+1}}}{\mathrm{m}-1}. \tag{50}\]
Similarly on Integrating \(\mathrm{I}_{2}\), we obtain
\[\mathrm{I}_{2}= \frac{2^{-\mathrm{k+1}}}{\mathrm{m}+1}\int_{0}^{\pi}\mathrm{Y}^{ \prime\prime}\left(\frac{2\cos\delta+\hat{\mathrm{s}}}{2^{\mathrm{k}}}\right) \sin\left(\mathrm{m}+1\right)\delta\sin\delta\,\mathrm{d}\delta,\]
which gives
\[|\mathrm{I}_{2}|\leq\frac{\mathrm{H}\pi 2^{-\mathrm{k+1}}}{\mathrm{m}+1}. \tag{51}\]
By using (50) and (51) in (49), we obtain
\[|\Lambda_{\rm s,m}|\leq\frac{\mathrm{H}\sqrt{\pi}\,2^{\frac{-5\mathrm{k}+5}{2 }}}{\mathrm{m}^{2}-1},\ \ \ \ \ \mathrm{m}>1.\]
Since \(\mathrm{s}\leq 2^{\mathrm{k-1}}\) and \(\mathrm{s}\geq 1\). Therefore we get
\[|\Lambda_{\rm s,m}|\leq\frac{\mathrm{H}\sqrt{\pi}}{\mathrm{s}^{ \frac{5}{2}}(\mathrm{m}^{2}-1)}. \tag{52}\]
For \(\mathrm{m}=1\), we have
\[|\Lambda_{\rm s,1}|\leq\frac{\sqrt{\pi}}{2\ \mathrm{s}^{\frac{3}{2}}}\max_{0\leq\zeta\leq 2}|\mathrm{Y}^{\prime}(\zeta)|.\]
Also,
\[\left|\sum_{\rm s=1}^{\infty}\sum_{\rm m=0}^{\infty}\Lambda_{\rm s,m}\Upsilon_{\rm s,m}(\zeta)\right| \leq\left|\sum_{\rm s=1}^{\infty}\Lambda_{\rm s,0}\Upsilon_{\rm s,0}(\zeta)\right|+\sum_{\rm s=1}^{\infty}\sum_{\rm m=1}^{\infty}|\Lambda_{ \rm s,m}||\Upsilon_{\rm s,m}(\zeta)|\] \[\leq\left|\sum_{\rm s=1}^{\infty}\Lambda_{\rm s,0}\Upsilon_{\rm s,0}(\zeta)\right|+\sum_{\rm s=1}^{\infty}\sum_{\rm m=1}^{\infty}|\Lambda_{\rm s,m}|<\infty.\]
which implies that the series \(\sum_{\rm s=1}^{\infty}\sum_{\rm m=0}^{\infty}\Lambda_{\rm s,m}\) is absolutely convergent. Thus the series \(\sum_{\rm s=1}^{\infty}\sum_{\rm m=0}^{\infty}\Lambda_{\rm s,m}\Upsilon_{ \rm s,m}(\zeta)\) converges to \(\mathrm{Y}(\zeta)\) uniformly.
**Lemma 7.3**.: _Let \(f(\zeta)\) be a continuous, positive, decreasing function for \(\zeta\geq m\) if \(f(k)=\Lambda_{k}\), provided that \(\sum\Lambda_{s}\) is convergent and \(R_{s}=\sum_{k=s+1}^{\infty}\Lambda_{k}\), then \(R_{s}\leq\int_{s}^{\infty}f(\zeta)d\zeta\)[44]._
**Theorem 7.4**.: _For the Vieta-Lucas wavelets expansion, if \(Y(\zeta)\) satisfies the theorem (7.2), then the following error estimate holds for \(M>2\),_
\[\left\|\mathrm{Y}(\zeta)-\bar{\mathrm{Y}}(\zeta)\right\|_{\mathrm{w}_{*}}< \mathrm{H}\sqrt{\pi\frac{(\mathrm{M}^{2}-2\mathrm{M})\mathrm{ln}(\mathrm{M})- \mathrm{M}^{2}\mathrm{ln}(\mathrm{M}-2)+(2\mathrm{ln}(\mathrm{M}-2)-2)\mathrm{ M}+2}{(2^{5(2^{\mathrm{k}-1}-1)}5\ln 2)\,4\mathrm{M}(\mathrm{M}-2)}}.\]
Proof.: Considering the Vieta-Lucas wavelets expansion as
\[\bar{\mathrm{Y}}(\zeta)=\sum_{s=1}^{2^{k-1}}\sum_{m=0}^{\mathrm{M}-1}\Lambda_ {s,m}\Upsilon_{s,m}(\zeta).\]
From (19), by using the orthogonality property of Vieta-Lucas wavelets with respect to weight function \(w_{s}(\zeta)\), we obtain
\[\left\|\mathrm{Y}(\zeta)-\bar{\mathrm{Y}}(\zeta)\right\|_{\mathrm{w}_{*}}^{2} =\sum_{s=2^{k-1}+1}^{\infty}\sum_{m=\mathrm{M}}^{\infty}\Lambda_{s,m}^{2}.\]
By using theorem (7.2), it can be expressed as
\[\left\|\mathrm{Y}(\zeta)-\bar{\mathrm{Y}}(\zeta)\right\|_{\mathrm{w}_{*}}^{2} <\mathrm{H}^{2}\pi\sum_{s=2^{k-1}+1}^{\infty}\sum_{m=\mathrm{M}}^{\infty} \frac{1}{2^{5k-5}(\mathrm{m}^{2}-1)^{2}}.\]
From lemma (7.3), we obtain
\[\left\|\mathrm{Y}(\zeta)-\bar{\mathrm{Y}}(\zeta)\right\|_{\mathrm{ w}_{*}}^{2} <\mathrm{H}^{2}\pi\int_{s=2^{k-1}}^{\infty}\int_{m=\mathrm{M}-1}^{ \infty}\frac{1}{2^{5\zeta-5}(\mathrm{z}^{2}-1)^{2}}\mathrm{d}\zeta\mathrm{d}z,\] \[=\mathrm{H}^{2}\pi\left(\int_{s=2^{k-1}}^{\infty}\frac{1}{2^{5 \zeta-5}}\mathrm{d}\zeta\right)\left(\int_{m=\mathrm{M}-1}^{\infty}\frac{1}{( \mathrm{z}^{2}-1)^{2}}\mathrm{d}z\right),\] \[=\mathrm{H}^{2}\pi\left(\frac{1}{2^{5(2^{k-1}-1)}5\ln 2}\right) \left(\frac{(\mathrm{M}^{2}-2\mathrm{M})\mathrm{ln}(\mathrm{M})-\mathrm{M}^{2 }\mathrm{ln}(\mathrm{M}-2)+(2\mathrm{ln}(\mathrm{M}-2)-2)\mathrm{M}+2}{4 \mathrm{M}(\mathrm{M}-2)}\right).\]
which completes the proof.
## 8 Numerical Examples
This section contains numerical examples that demonstrate the efficiency and reliability of the presented schemes.
**Example 8.1**.: Consider the non-homogeneous SDE [46]:
\[\mathrm{Y}^{\prime\prime}(\zeta)+\frac{1}{\zeta}\mathrm{Y}^{\prime}(\zeta)= \bigg{(}\frac{8}{8-\zeta^{2}}\bigg{)}^{2},\ \ \ \ 0\leq\zeta\leq 1, \tag{53}\]
with
\[\mathrm{Y}(\zeta)|_{\zeta=0}=0,\ \ \ \ Y^{\prime}(\zeta)|_{\zeta=0}=0. \tag{54}\]
The exact solution is
\[\mathrm{Y}_{\mathrm{Exact}}(\zeta)=2\log\bigg{(}\frac{8}{8-\zeta^{2}}\bigg{)}.\]
The following solutions are obtained by using the proposed numerical schemes at \(\eta\) = 12:
\[\mathrm{Y_{VLWC}}(\zeta) =(1.60\times 10^{-16})-(1.18\times 10^{-16})\zeta+\cdots+(1.82\times 1 0^{-5})\zeta^{11},\] \[\mathrm{Y_{VLWT}}(\zeta) =(1.83\times 10^{-16})+(4.51\times 10^{-17})\zeta+\cdots+(8.79 \times 10^{-4})\zeta^{11},\] \[\mathrm{Y_{VLWG}}(\zeta) =0.25\zeta^{2}-0.08\zeta^{3}+0.40\zeta^{4}+\cdots-(9.75\times 10^{- 4}\zeta^{13}).\]
**Example 8.2**.: Take the nonlinear SDE as [47]:
\[\mathrm{Y}^{\prime\prime}(\zeta)+\pi^{3}\frac{(\mathrm{Y}^{2}(\zeta))}{\sin{( \pi\zeta)}}=0,\ \ \ \ 0<\zeta<1, \tag{55}\]
with boundary restrictions
\[\mathrm{Y}(\zeta)|_{\zeta=0}=0,\ \ \ \ \mathrm{Y}(\zeta)|_{\zeta=1}=0. \tag{56}\]
The exact solution is
\[\mathrm{Y_{Exact}}(\zeta)=\frac{\sin{(\pi\zeta)}}{\pi}.\]
The following solutions are obtained for \(\eta\) = 6:
\[\mathrm{Y_{VLWC}}(\zeta) =(1.38\times 10^{-16})+0.99\zeta+0.10\zeta^{2}-2.20\zeta^{3}+1.10 \zeta^{4}-(7.04\times 10^{-13})\zeta^{5},\] \[\mathrm{Y_{VLWT}}(\zeta) =(-2.77\times 10^{-17})+0.93\zeta+0.64\zeta^{2}-3.62\zeta^{3}+2.56 \zeta^{4}-0.51\zeta^{5},\] \[\mathrm{Y_{VLWG}}(\zeta) =1.00\zeta-0.03\zeta^{2}-1.39\zeta^{3}-0.75\zeta^{4}+1.96\zeta^{ 5}-0.90\zeta^{6}+0.12\zeta^{7}.\]
**Example 8.3**.: Consider the nonhomogeneous Emden-Fowler type SDE [11]:
\[\mathrm{Y}^{\prime\prime}(\zeta)+\frac{8}{\zeta}\mathrm{Y}^{\prime}(\zeta)+ \zeta\mathrm{Y}(\zeta)=\zeta^{5}-\zeta^{4}+44\zeta^{2}-30\zeta,\ \ \ \ \zeta\geq 0, \tag{57}\]
with
\[\mathrm{Y}(\zeta)|_{\zeta=0}=0,\ \ \ \ \mathrm{Y}^{\prime}(\zeta)|_{\zeta=0}=0. \tag{58}\]
The following solutions are obtained for \(\eta\) = 5:
\[\mathrm{Y_{VLWC}}(\zeta) =(5.55\times 10^{-16})+(9.32\times 10^{-15})\zeta^{2}-\zeta^{3}+ \zeta^{4},\] \[\mathrm{Y_{VLWT}}(\zeta) =\zeta^{4}-\zeta^{3},\] \[\mathrm{Y_{VLWG}}(\zeta) =\zeta^{4}-\zeta^{3}.\]
The above solutions are computed in the interval \(0\leq\zeta\leq 2\) and it is observed that both \(\mathrm{Y_{VLWT}}\) and \(\mathrm{Y_{VLWG}}\) yield the exact solution.
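To illustrate how a collocation-type solution for this example can be computed, the sketch below (assuming NumPy; k = 1, M = 5, so a single block covers \([0,2]\)) uses the unnormalized shifted Vieta-Lucas polynomials as the basis — the wavelet normalization only rescales the coefficients — and, as an assumption made here to avoid the singular point \(\zeta=0\), places the \(2^{k-1}\mathrm{M}-2=3\) collocation nodes at the zeros of \(\mathrm{VL}_{3}\) mapped to \([0,2]\) rather than at the extrema used in the text. Because the equation is linear, the resulting algebraic system is solved directly.

```python
import numpy as np
from numpy.polynomial import Polynomial as P

M = 5                                             # k = 1: single block, basis of degree < M on [0, 2]

# Unnormalized shifted Vieta-Lucas basis via the recurrence (15)
basis = [P([2.0]), P([-2.0, 2.0])]                # VL*_0 = 2, VL*_1 = 2*zeta - 2
for _ in range(2, M):
    basis.append(P([-2.0, 2.0]) * basis[-1] - basis[-2])
d1 = [p.deriv(1) for p in basis]
d2 = [p.deriv(2) for p in basis]

# Collocation nodes: zeros of VL_3 mapped to [0, 2] (interior points, avoiding the singularity at 0)
nodes = 1.0 + np.cos((np.arange(1, 4) - 0.5) * np.pi / 3)

rhs_f = lambda z: z ** 5 - z ** 4 + 44 * z ** 2 - 30 * z

A = np.zeros((M, M))
b = np.zeros(M)
A[0] = [p(0.0) for p in basis]                    # Y(0) = 0
A[1] = [p(0.0) for p in d1]                       # Y'(0) = 0
for i, z in enumerate(nodes, start=2):            # residual R(z_i) = 0 at each node
    A[i] = [d2[m](z) + (8.0 / z) * d1[m](z) + z * basis[m](z) for m in range(M)]
    b[i] = rhs_f(z)

c = np.linalg.solve(A, b)
grid = np.linspace(0.0, 2.0, 101)
Y_approx = sum(cm * p(grid) for cm, p in zip(c, basis))
assert np.allclose(Y_approx, grid ** 4 - grid ** 3, atol=1e-8)   # recovers the exact solution
```

The recovered expansion reproduces \(\zeta^{4}-\zeta^{3}\) to machine precision, in line with the observation above that the wavelet-based schemes yield the exact solution for this example.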
**Example 8.4**.: Consider the nonlinear Emden-Fowler type SDE [48]:
\[\mathrm{Y}^{\prime\prime}(\zeta)+\frac{8}{\zeta}\mathrm{Y}^{\prime}(\zeta)+18 \mathrm{Y}(\zeta)=-4\mathrm{Y}(\zeta)\ln{(\mathrm{Y}(\zeta))},\ \ \ \ 0<\zeta\leq 1, \tag{59}\]
with initial conditions
\[\mathrm{Y}(\zeta)|_{\zeta=0}=1,\ \ \ \ \mathrm{Y}^{\prime}(\zeta)|_{\zeta=0}=0. \tag{60}\]
The approximated solutions obtained by using proposed schemes for \(\eta\) = 3 :
\[\mathrm{Y_{VLWC}}(\zeta) =\mathrm{e}^{(-3.33067\times 10^{-16})-\zeta^{2}},\] \[\mathrm{Y_{VLWT}}(\zeta) =\mathrm{e}^{-\zeta^{2}},\] \[\mathrm{Y_{VLWG}}(\zeta) =\mathrm{e}^{-\zeta^{2}}.\]
Here, \(\mathrm{Y_{VLWT}}\) and \(\mathrm{Y_{VLWG}}\) leads to the exact solution.
**Example 8.5**.: Let us consider the nonlinear Lane-Emden type SDE [11]:
\[\mathrm{Y}^{\prime\prime}(\zeta)+\frac{2}{\zeta}\mathrm{Y}^{\prime}(\zeta)-6 \mathrm{Y}(\zeta)=4\mathrm{Y}(\zeta)\ln{(\mathrm{Y}(\zeta))},\ \ \ \ 0<\zeta\leq 1, \tag{61}\]
with initial conditions
\[\mathrm{Y}(\zeta)|_{\zeta=0}=1,\ \ \ \mathrm{Y}^{\prime}(\zeta)|_{\zeta=0}=0. \tag{62}\]
The approximated solutions obtained by using proposed schemes at \(\eta=3\):
\[\mathrm{Y}_{\mathrm{VLWC}}(\zeta) =\mathrm{e}^{(3.8857\times 10^{-16})+\zeta^{2}},\] \[\mathrm{Y}_{\mathrm{VLWT}}(\zeta) =\mathrm{e}^{\zeta^{2}},\] \[\mathrm{Y}_{\mathrm{VLWG}}(\zeta) =\mathrm{e}^{\zeta^{2}}.\]
The solutions obtained by \(\mathrm{Y}_{\mathrm{VLWT}}\) and \(\mathrm{Y}_{\mathrm{VLWG}}\) yield the exact solution.
## 9 Results and Discussions
Figure 1 demonstrates the solution curves of the exact solution and of the approximate solutions obtained by the proposed numerical approaches for Examples 8.1 and 8.2 at different resolutions. It is observed from the zoomed profile of Figure 1 that, as the resolution increases, the approximate solutions become more accurate and overlap the exact solution curve. The solution plots for Examples 8.3, 8.4 and 8.5 are shown in Figure 2; the proposed approaches provide accurate results even at the smaller resolutions, which shows that all three proposed numerical approaches are reliable. Figure 3 depicts a comparison of the logarithmic values of the absolute errors at various resolutions for Example 8.1, as obtained by the proposed numerical approaches. It demonstrates that the errors are highest for \(\eta=6\) and lowest for \(\eta=12\), indicating that the error reduces as the resolution increases for all three proposed numerical schemes. Figure 4 shows a comparative analysis of the presented numerical schemes at various resolutions; the errors are bounded for all of the proposed schemes, demonstrating their efficiency and accuracy.

Figure 1: Solution curves for (a)-(b) Example 8.1 and (c)-(d) Example 8.2.

Figure 2: Solution curves for (a) Example 8.3, (b) Example 8.4 and (c) Example 8.5.

Figure 3: Comparison of logarithmic values of absolute errors at different resolutions for Example 8.1 by the proposed approaches.

Figure 4: Comparison of logarithmic values of absolute errors of the proposed schemes for Example 8.1 at \(\eta=8,10\) and \(12\).
The absolute errors obtained by the proposed methods for Example 8.1 and Example 8.2 are compared in Table 1. In Examples 8.3, 8.4 and 8.5, \(\mathrm{Y_{VLWT}(\zeta)}\) and \(\mathrm{Y_{VLWG}(\zeta)}\) produce the exact solution. Table 2 compares the absolute error obtained by \(\mathrm{Y_{VLWC}(\zeta)}\) with the existing findings. The solutions obtained from the proposed approaches are in good agreement with the existing results, demonstrating the reliability of the proposed schemes.
## 11 Declaration
**Conflict of interest**: Authors have no conflict of interest.
**Availability of data and material**: Not applicable.
**Compliance with ethical standards.**
|
2302.04985 | Event Temporal Relation Extraction with Bayesian Translational Model | Existing models to extract temporal relations between events lack a
principled method to incorporate external knowledge. In this study, we
introduce Bayesian-Trans, a Bayesian learning-based method that models the
temporal relation representations as latent variables and infers their values
via Bayesian inference and translational functions. Compared to conventional
neural approaches, instead of performing point estimation to find the best set
parameters, the proposed model infers the parameters' posterior distribution
directly, enhancing the model's capability to encode and express uncertainty
about the predictions. Experimental results on the three widely used datasets
show that Bayesian-Trans outperforms existing approaches for event temporal
relation extraction. We additionally present detailed analyses on uncertainty
quantification, comparison of priors, and ablation studies, illustrating the
benefits of the proposed approach. | Xingwei Tan, Gabriele Pergola, Yulan He | 2023-02-10T00:11:19Z | http://arxiv.org/abs/2302.04985v1 | # Event Temporal Relation Extraction with Bayesian Translational Model
###### Abstract
Existing models to extract temporal relations between events lack a principled method to incorporate external knowledge. In this study, we introduce _Bayesian-Trans_, a Bayesian learning-based method that models the temporal relation representations as latent variables and infers their values via Bayesian inference and _translational functions_. Compared to conventional neural approaches, instead of performing point estimation to find the best set parameters, the proposed model infers the parameters' posterior distribution directly, enhancing the model's capability to encode and express uncertainty about the predictions. Experimental results on the three widely used datasets show that Bayesian-Trans outperforms existing approaches for event temporal relation extraction. We additionally present detailed analyses on uncertainty quantification, comparison of priors, and ablation studies, illustrating the benefits of the proposed approach.1
Footnote 1: Experimental source code is available at [https://github.com/Xingwei-Warwick/Bayesian-Trans](https://github.com/Xingwei-Warwick/Bayesian-Trans)
## 1 Introduction
Understanding events and how they evolve in time has been shown beneficial for natural language understanding (NLU) and for a growing number of related tasks (Cheng et al., 2013; Wang et al., 2018; Ning et al., 2020; Geva et al., 2021; Sun et al., 2022). However, events often form complex structures with each other through various temporal relations, which is challenging to track even for humans (Wang et al., 2020).
One of the main difficulties is the wide variety of linguistic expressions of temporal relations across different contexts. Although many of them share some linguistic similarities, most of the topics in which they occur are characterized by some shared but unspoken knowledge that determines how temporal information is expressed. For example, when it comes to health, prevention is widely practised, with many treatments (e.g., vaccinations) being effective only if administered _before_ the onset of a disorder. On the contrary, in the automotive industry, it is common that most people repair their car _after_ a problem occurs. However, despite its simplicity, such commonsense knowledge is rarely stated explicitly in text and varies greatly across different domains. For example, in Figure 1, a detection model lacking the commonsense knowledge that _vaccination_ can protect people from infection, tends to get confused by the complex linguistic structures in the excerpt and returns the wrong prediction entailing that '_died_' happens after '_vaccinated_'. Instead, with the consideration of prior temporal knowledge involving the _vaccination_ event from an external knowledge source ATOMIC (Hwang et al., 2021), a model gives the correct prediction that '_died_' occurs before '_vaccinated_'.
Methods proposed in recent studies for event relation extraction are mostly end-to-end neural architectures making rather limited use of such commonsense knowledge (Han et al., 2019, 2019). Only a few works have explored the incorporation of external knowledge to mitigate the scarcity of event annotations (Ning et al., 2019; Wang et al., 2020).
Figure 1: Comparison between with or without external knowledge incorporation on event relation extraction.
Nevertheless, these approaches typically update the event representations with knowledge features derived from external sources, lacking a principled way of updating models' beliefs in seeing more data in the domains of interests.
In this work, we posit that the Bayesian learning framework combined with translational models can provide a principled methodology to incorporate knowledge and mitigate the lack of annotated data for event temporal relations. Translational models, such as TransE (Bordes et al., 2013), are energy-based models based on the intuition that the relations between entities can be naturally represented by geometric translations in the embedding space. More concretely, a relation between a _head entity_ and a _tail entity_ holds if there exists a _translational_ operation bringing the _head_ close to the _tail_ vector.
Specifically, we introduce a novel Bayesian Translational model (Bayesian-Trans) for event temporal relation extraction. Compared to conventional neural translational models, which only yield a point estimation of the network parameters, the Bayesian architecture can be seen as an ensemble of an infinite number of neural predictors, drawing samples from the posterior distribution of the translational parameters, refining its belief over the initial prior. As a result, event temporal relations are determined by the stochastic translational parameters drawn from posterior distributions. Additionally, such posteriors are conditioned upon the prior learned on external knowledge graphs, providing the commonsense knowledge required to interpret more accurately the temporal information across different contexts. As shown in the results obtained from the experimental evaluation on three commonly used datasets for event temporal relation extraction, the combination of translational models and Bayesian learning is particularly beneficial when tailored to the detection of event relations. Moreover, a favorable by-product of our Bayesian-Trans model is the inherent capability to express degrees of uncertainty, avoiding the overconfident predictions on out-of-distribution context. Our contributions are summarized in the following:
* We formulate a novel Bayesian translational model for the extraction of event temporal relations, in which event temporal relations are modeled through the stochastic translational parameters, considered as latent variables in Bayesian inference.
* We devise and explore \(3\) different priors under Bayesian framework to study how to effectively incorporate knowledge about events.
* We conduct thorough experimental evaluations on three benchmarking event temporal datasets and show that Bayesian-Trans achieves state-of-the-art performance on all of them. We also provide comprehensive analyses of multiple aspects of the proposed model.
## 2 Related Work
This work is related to at least three lines of research: event temporal relation detection, prior knowledge incorporation, and graph embedding.
### Event Temporal Relation
Similar to entity-level relation extraction (Zeng et al., 2014; Peng et al., 2017), the latest event temporal relation extraction models are based on neural networks, but in order to learn from limited labeled data and capture complex event hierarchies, a wide range of optimization or regularization approaches have been explored. Ning et al. (2019) proposed an LSTM-based network and ensured global consistency of all the event relations in the documents by integer linear programming. Wang et al. (2020) employed RoBERTa (Liu et al., 2019) and converted a set of predefined logic rules into differentiable objective functions to regularize the consistency of the relations inferred and explore multi-task joint training. Tan et al. (2021) proposed using hyperbolic-based methods to encode temporal information in a hyperbolic space, which has been shown to capture and model asymmetric temporal relations better than their Euclidean counterparts. Hwang et al. (2022) adopted instead a probabilistic box embeddings to extract asymmetric relations. Wen and Ji (2021) proposed to add an auxiliary task for relative time prediction of events described over an event timeline. Cao et al. (2021) developed a semi-supervised approach via an uncertainty-aware self-training framework, composing a training set of samples with actual and pseudo labels depending on the estimated uncertainty scores. None of the aforementioned approaches explored Bayesian learning for incorporating prior event temporal knowledge.
### Incorporation of Prior Knowledge
Knowledge plays a key role in understanding event relations because people often skip inessential details and express event relations implicitly which
is difficult to understand without relevant knowledge. For example, TemProb(Ning et al., 2018) contains temporal relation probabilistic knowledge which is encoded by Siamese network and incorporated into neural models as additional features Ning et al. (2019); Wang et al. (2020); Tan et al. (2021). Unlike previous works, we combine the Bayesian Neural Network with distance-based models, treating the translational parameters as latent variables to be inferred. To this end, we adopt the variational inference Kingma and Welling (2014); Blei et al. (2016); Gui et al. (2019); Pergola et al. (2021); Zhu et al. (2022), and derive the prior distribution of the temporal relation information from commonsense knowledge bases Pergola et al. (2021); Lu et al. (2022). Christopoulou et al. (2021) explored a similar intuition of using knowledge base priors as distant supervision signals, but the approach and the task are different.
### Graph Embedding Learning
Multi-relational data are commonly interpreted in terms of directed graphs with nodes and edges representing entities and their relations, respectively. Several works have recently focused on modelling these multi-relational data with relational embeddings by detecting and encoding local and global connectivity patterns between entities.
TransE Bordes et al. (2013) has been a seminal work adopting geometric translations of entities to represent relations in the embedding space. If a relation between a head and a tail entity holds, it is encoded via the translational parameters learned at training time.
However, TransE cannot model symmetry relation well by simple addition which led to several subsequent studies exploring diverse types of transformation resulting in a family of _translational models_Wang et al. (2014); Ji et al. (2015); Lin et al. (2015). Among them, Balazevic et al. (2019) proposed to utilize the Poincare model, mapping the entity embeddings onto a Poincare ball, and using the Poincare metric to compute the score function and predict their relations. Chami et al. (2020) further expanded the idea of embedding learning over manifolds by additionally considering reflections and rotations and redefining the translation over a learned manifold.
Although translational models are shown efficient in modeling graph relation, they provide relatively limited interaction between nodes than neural network-based methods, such as Graph Neural Networks Estrach et al. (2014); Chami et al. (2020). Under this framework, nodes in a graph are neural units, which can iteratively propagate information through edges, and whose representations are learnt during the training process. In particular, Relational Graph Convolutional Networks (RGCN) Schlichtkrull et al. (2018) encode relational data through link prediction and entity classification tasks, while enforcing sparsity via a parameter-sharing technique. Although modeling knowledge graphs has been one of the main focuses of the above-mentioned graph learning approaches, they lack any systematic mechanism to inject prior knowledge and update it during training.
## 3 Bayesian-Trans Model
In identifying temporal relations between events, we aim at predicting the relation type of two events given in text, commonly denoted as _head_ event \(x_{h}\) and _tail_ event \(x_{t}\):
\[\hat{y}=\operatorname*{arg\,max}_{y\in\mathcal{R}}p(y|x_{h},x_{t}) \tag{1}\]
where \(\mathcal{R}\) denotes a set of possible relation types, while \(x_{h}\) and \(x_{t}\) the head and tail event triggers, respectively. Assuming that a set of latent variables \(\boldsymbol{\Lambda}\) denotes the collection of all relation-specific transformation parameters \(\boldsymbol{\Lambda}_{r}\). For example, in the knowledge embedding learning model such as MuRE Balazevic et al. (2019), the head entity is first transformed through a relation-specific matrix \(\mathbf{W}_{r}\), followed by a relation-specific translation vector \(t_{r}\), then \(\boldsymbol{\Lambda}_{r}=\{\mathbf{W}_{r},t_{r}\}\). By Bayesian learning, the probability of inferring a relation type \(r\) can be written as:
\[p(y=r|x_{h},x_{t})=\int_{\boldsymbol{\Lambda}}p(y_{r}|x_{h},x_{t},\boldsymbol{ \Lambda})p(\boldsymbol{\Lambda}|\mathcal{G})d_{\boldsymbol{\Lambda}} \tag{2}\]
Here, \(p(\boldsymbol{\Lambda}|\mathcal{G})\) denotes the prior distribution of \(\boldsymbol{\Lambda}\) derived from an existing knowledge graph encoded as \(\mathcal{G}\). Directly inferring Eq. (2) is intractable. But we can resort to amortised variational inference to learn model parameters. In what follows, we present our proposed Bayesian learning framework built on translational models for event temporal relation extraction, called **Bayesian-Trans**, with its architecture shown in Figure 2.
In particular, the context \(S\) in which the two events occur is the input to our Bayesian-Trans. First, we encode \(S\) via a pre-trained language model generating the contextual embeddings \(\mathrm{e}_{h}\)
and \(\mathrm{e}_{t}\) for the triggers of the head and tail events, respectively. The contextualised event trigger representations, \(\mathrm{e}_{h}\) and \(\mathrm{e}_{t}\), are fed as input into a Bayesian translational module. This module, by means of variational inference, determines the parameters of the translational model, encoding the posterior distribution of the temporal relations conditioned upon the input events. Finally, we use a score function on the translated head and tail triggers to predict their temporal relation. We provide a more detailed description in the following.
### Contextual Encoder
The proposed model uses COMET-BART Hwang et al. (2021) as the context encoder. COMET-BART is a BART pre-trained language model Lewis et al. (2020) fine-tuned on ATOMIC Bosselut et al. (2019); Hwang et al. (2021), an event-centric knowledge graph encoding inferential knowledge about entities and events, including event temporal relations. COMET-BART is able to generate consequent events with good accuracy given an antecedent event and a relation, and is therefore regarded as encoding event knowledge well. Following the approach adopted in previous works Ning et al. (2019); Wang et al. (2020); Tan et al. (2021), we use the representation of the first token of an event trigger as the contextual embedding of that event2, \(\mathrm{e}_{h},\mathrm{e}_{t}=\text{COMET-BART}(x_{h},x_{t})\), where \(\mathrm{e}_{h},\mathrm{e}_{t}\in\mathbb{R}^{d}\). The event representations are then concatenated together and fed through MLPs to generate the parameters of the variational distribution, from which the latent event-pair representation \(z\) is sampled. \(z\) is then mapped to the parameter space of the translational model as \(\mathbf{\Lambda}\).
Footnote 2: We conducted some exploratory experiments adopting the last token or the average representation, but results showed that the first token was still the best option in this context.
### Incorporating Knowledge via Bayesian Learning
The proposed model utilizes relation embeddings for classifying event relation in a similar manner as the translational models in knowledge graph embedding, such as TransE Bordes et al. (2013). If the embedding of the tail event is close enough to the embedding of head event after applying a series of relation-specific transformation, the relation stands, and vice versa. A wide range of translational models typically proposed for learning knowledge graph embeddings can be adopted in the proposed Bayesian-Trans. Additionally, to incorporate prior knowledge, we extend translational models to operate within the Bayesian inference framework. We proceed with introducing a standard translational model in the context of temporal relations, and describe how we extend it to work in the Bayesian framework.
Translational ModelGenerally speaking, a translational model uses _relation representations_\(\mathbf{\Lambda}_{r}\) to perform "translation" for relation \(r\) on the head and tail events. Then, the transformed head and tail event embeddings are compared using a _distance-based score function_, whose score is indicative of the temporal relation between the events. The score function \(\phi(\cdot)\) takes the general form:
\[\phi(\mathrm{e}_{h},r,\mathrm{e}_{t})=-d(\mathcal{T}^{h}_{\mathbf{A}_{r}}( \mathrm{e}_{h}),\mathcal{T}^{t}_{\mathbf{A}_{r}}(\mathrm{e}_{t})) \tag{3}\]
where \(r\) is a relation type, \(\mathcal{T}_{\mathbf{\Lambda}_{r}}(\cdot)\) is a function depending on the parameters \(\mathbf{\Lambda}_{r}\) of relation \(r\) to transform the event embeddings \(e_{h}\) and \(e_{t}\), and \(d(\cdot)\)
Figure 2: The network structure of Bayesian-Trans. Context sentences are first fed into a COMET encoder to generate event representations. With MLP layers, the event representations are mapped to generate a variational distribution of relation representations which is guided by KG priors. The relation representations are then used in the translational model to generate prediction scores.
is any distance metrics (e.g., Euclidean distance). We explored several models with different translation functions and distance metrics in the context of temporal relations, including TransE Bordes et al. (2013), AttH Chami et al. (2020), MuRE Balazevic et al. (2019) and MuRP Balazevic et al. (2019), and based on our preliminary results3, we eventually adopted MuRE as it strikes a good balance of training efficiency and accuracy of temporal relation classification. We define the scoring function in the proposed model as follows:
Footnote 3: Experimental results using different translational models are shown in Table A1.
\[\phi(\mathrm{e}_{h},r,\mathrm{e}_{t})=-\|\mathbf{W}_{r}\mathrm{e}_{h}+ \mathrm{t}_{r}-\mathrm{e}_{t}\|_{2}^{2} \tag{4}\]
where \(\mathbf{W}_{r}\in\mathbb{R}^{d\times d}\) is a diagonal relation matrix and \(t_{r}\in\mathbb{R}^{d}\) a translation vector of relation \(r\), \(\mathbf{\Lambda}_{r}=\{\mathbf{W}_{r},\mathrm{t}_{r}\},r\in\mathcal{R}\).
Although the number of parameters to train is rather low, the number of annotated samples is usually small compared to the wide range of linguistic expressions capturing temporal relations. We thus extend the MuRE model into a Bayesian framework to enhance its scalability by treating the translational parameters \(\mathbf{\Lambda}\) as latent variables. The proposed framework enhances generalization by defining a variational inference process that optimizes the regularization and leverages the additional information injected via the prior distributions.
Bayesian InferenceAs shown in the inference equation 2, the prior is derived from an external knowledge graph, such as ATOMIC, as a means to inject prior information about events and temporal relations. In particular, \(\mathbf{\Lambda}\) is assumed to follow a Gaussian distribution with unit variance and with mean determined by the relation representations trained on the knowledge graph. The probability function is formulated as a softmax function over a pre-defined scoring function:
\[p(y_{r}|\mathrm{e}_{h},\mathrm{e}_{t},\mathbf{\Lambda})=\frac{\exp\left(\phi( \mathrm{e}_{h},r,\mathrm{e}_{t})\right)}{\sum_{r^{\prime}\in\mathcal{R}}\exp \left(\phi(\mathrm{e}_{h},r^{\prime},\mathrm{e}_{t})\right)} \tag{5}\]
with \(\mathrm{e}_{h}\) and \(\mathrm{e}_{t}\) denoting the embedding for the head and the tail events, respectively.
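A minimal sketch (assuming PyTorch; the dimensions and the randomly initialized parameters are illustrative placeholders, and in Bayesian-Trans the translational parameters are sampled from the variational posterior rather than fixed) of the MuRE scoring function of Eq. (4) and the resulting relation distribution of Eq. (5):

```python
import torch

num_rel, d = 4, 768                                   # e.g., MATRES has 4 relation types; d is the encoder size

# Relation-specific parameters Lambda_r = {W_r (diagonal), t_r}, here deterministic for illustration
W_diag = torch.randn(num_rel, d)                      # diagonal of W_r for each relation
t = torch.randn(num_rel, d)                           # translation vector t_r for each relation

def mure_score(e_h, e_t):
    """phi(e_h, r, e_t) = -|| W_r e_h + t_r - e_t ||^2 for every relation r."""
    translated = W_diag * e_h.unsqueeze(0) + t        # (num_rel, d)
    return -((translated - e_t.unsqueeze(0)) ** 2).sum(dim=-1)

e_h, e_t = torch.randn(d), torch.randn(d)             # contextual trigger embeddings from the encoder
probs = torch.softmax(mure_score(e_h, e_t), dim=-1)   # Eq. (5): distribution over relation types
```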
Yet, Eq. (2) is intractable and cannot be inferred directly. Thus, we resort to amortized variational inference by introducing a variational posterior \(q_{\theta}(\mathbf{\Lambda}|\mathbf{x}_{h},x_{t})\), which follows the isotropic Gaussian distribution and can be modeled as:
\[\mu=f_{\mu}(\mathrm{e}_{h};\mathrm{e}_{t}) \Sigma=\text{diag}\Big{(}f_{\Sigma}(\mathrm{e}_{h};\mathrm{e}_{t })\Big{)} \tag{6}\] \[q_{\theta}(\mathbf{\Lambda}|\mathrm{e}_{h},\mathrm{e}_{t})= \mathcal{N}(\mathbf{\Lambda}|\mu,\Sigma),\]
where \(f_{\mu}\) and \(f_{\Sigma}\) are both fully connected layers that map the event pair representation into the parameters of the variational distribution.
Following the amortized variational inference, we maximize the evidence lower bound (ELBO) \(\mathcal{L}_{e}\), defined in Eq. (7), and approximated by a Monte Carlo estimation with sample size \(N\), as described in Eq. (8):
\[\mathcal{L}_{e} =\mathbb{E}_{q_{\theta}(\mathbf{\Lambda}|x_{h},x_{t}),\{x_{h},x_{ t}\}\in\mathcal{D}}\Big{[}\log p_{\theta}(y|x_{h},x_{t},\mathbf{\Lambda})\Big{]}-\] \[\quad\text{Reg}\Big{(}q_{\theta}(\mathbf{\Lambda}|x_{h},x_{t}, \mathcal{G})||p(\mathbf{\Lambda}|\mathcal{G})\Big{)} \tag{7}\] \[\approx\frac{1}{N}\sum_{n=1}^{N}\sum_{\{x_{h},x_{t}\}\in\mathcal{D }}\Big{[}\log p_{\theta}(y|x_{h},x_{t},\mathbf{\Lambda}^{(n)})-\] \[\quad\text{Reg}\Big{(}q_{\theta}(\mathbf{\Lambda}^{(n)}|x_{h},x_{ t},\mathcal{G})||p(\mathbf{\Lambda}^{(n)}|\mathcal{G})\Big{)}\Big{]} \tag{8}\]
where \(\text{Reg}(\cdot)\) is a regularization term which will be discussed in 3.3. To train end-to-end a fully differentiable model, we adopt the reparameterization trick Kingma and Welling (2014).
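A sketch of the amortized inference step (assuming PyTorch; the layer sizes, the latent dimension, and the linear mapping from \(z\) to the translational parameters are illustrative choices rather than the paper's exact configuration, and the regularizer is left abstract): the MLPs of Eq. (6) produce the variational parameters, \(\mathbf{\Lambda}\) is sampled with the reparameterization trick, and the Monte Carlo objective of Eq. (8) is accumulated.

```python
import torch
import torch.nn as nn

d, z_dim, num_rel, N = 768, 128, 4, 5                 # illustrative sizes; N = Monte Carlo samples

f_mu = nn.Linear(2 * d, z_dim)                        # Eq. (6): mu = f_mu([e_h; e_t])
f_logvar = nn.Linear(2 * d, z_dim)                    # parameterize the log of the diagonal covariance
to_params = nn.Linear(z_dim, num_rel * 2 * d)         # map z to {diag(W_r), t_r} for all relations

def neg_elbo(e_h, e_t, gold, reg_fn):
    pair = torch.cat([e_h, e_t], dim=-1)
    mu, logvar = f_mu(pair), f_logvar(pair)
    loss = 0.0
    for _ in range(N):                                # Monte Carlo estimate over N posterior samples
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()      # reparameterization trick
        W_diag, t = to_params(z).view(num_rel, 2, d).unbind(dim=1)
        scores = -(((W_diag * e_h + t) - e_t) ** 2).sum(dim=-1)   # Eq. (4) for every relation
        nll = nn.functional.cross_entropy(scores.unsqueeze(0), gold.view(1))
        loss = loss + (nll + reg_fn(z)) / N           # negative ELBO of Eq. (8), to be minimized
    return loss
```

Here `gold` denotes the index of the annotated relation, and `reg_fn` stands for the regularization term of Section 3.3, e.g., the MMD estimator sketched below.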
### Prior Distribution and Regularization
We proceed to discuss how the Bayesian framework enables the incorporation of a prior acquired from an external knowledge source. We then detail how the regularization term is computed to induce more stable training.
**Prior Distribution.** One of the main advantages of the Bayesian inference framework is the possibility to inject commonsense knowledge into the model through the prior distribution of the latent variables, i.e., \(p(\mathbf{\Lambda}|\mathcal{G})\) in Eq. (2), where \(\mathbf{\Lambda}\) are the translational parameters and \(\mathcal{G}\) denotes an external knowledge graph, in our case, the ATOMIC knowledge graph Hwang et al. (2021). ATOMIC is a commonsense knowledge graph containing inferential knowledge tuples about entities and events, encoding social and physical aspects of human everyday experiences. For our task of event temporal relation extraction, we are only interested in the events linked via temporal relations, such as 'IsBefore' (23,208 triples) or 'IsAfter' (22,453 triples). We train an RGCN Schlichtkrull et al. (2018) on a link prediction task over these triples and use the learnt relation embeddings as the mean of the prior distribution for the translational latent variables. For the relations in the experiment dataset that do not have applicable counterparts in ATOMIC (e.g., Vague), we set their priors to standard Gaussians. The variance of the priors is defined as the identity matrix.
Specifically, we use COMET-BART to encode the event nodes from ATOMIC and then use their context embeddings as the node features in the RGCN. In our preliminary experiments, we also found that the RGCN does not train well on the commonsense graph when it contains only the event-event relation links: the graph is too sparse, which makes it difficult for information to propagate across nodes. We therefore added semantic similarity links based on the cosine similarity of the event context embeddings. During the training of the RGCN, the node embeddings are kept frozen. After training on the link prediction task, we extract the relation embeddings of the RGCN.
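A minimal sketch of this graph-densification step is given below. The cosine-similarity threshold is a hypothetical value chosen for illustration; the paper does not report the exact criterion used to add these links.

```python
import torch

def add_similarity_edges(node_emb, threshold=0.9):
    """Add semantic-similarity links to a sparse event graph.

    node_emb  : (n_nodes, d) frozen COMET-BART context embeddings.
    threshold : cosine-similarity cutoff (hypothetical value).
    Returns a list of undirected (i, j) edges to use alongside the
    IsBefore / IsAfter links when training the RGCN.
    """
    normed = torch.nn.functional.normalize(node_emb, dim=-1)
    sim = normed @ normed.T            # pairwise cosine similarities
    sim.fill_diagonal_(0.0)            # ignore self-similarity
    idx_i, idx_j = torch.where(sim > threshold)
    return [(int(i), int(j)) for i, j in zip(idx_i, idx_j) if i < j]
```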
**Regularization Term.** To mitigate the posterior collapse problem Lucas et al. (2019) and obtain a stable inference process, we adopt the Maximum Mean Discrepancy (MMD)4, which serves as an estimate of the Wasserstein distance Tolstikhin et al. (2018), as the regularization term in Eq. (8).
Footnote 4: MMD calculation can be found in Appendix A.
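The exact MMD formulation used in this work is given in the authors' Appendix A, which is not reproduced here. As a hedged illustration of how such a regularizer can be computed between posterior samples and prior samples, a generic Gaussian-kernel estimator is sketched below; the kernel choice and bandwidth are our assumptions.

```python
import torch

def gaussian_kernel(x, y, bandwidth=1.0):
    """RBF kernel matrix between sample sets of shape (n, d) and (m, d)."""
    sq_dists = torch.cdist(x, y) ** 2
    return torch.exp(-sq_dists / (2.0 * bandwidth ** 2))

def mmd_regularizer(q_samples, p_samples, bandwidth=1.0):
    """Biased empirical MMD^2 between posterior and prior samples of Lambda."""
    k_qq = gaussian_kernel(q_samples, q_samples, bandwidth).mean()
    k_pp = gaussian_kernel(p_samples, p_samples, bandwidth).mean()
    k_qp = gaussian_kernel(q_samples, p_samples, bandwidth).mean()
    return k_qq + k_pp - 2.0 * k_qp
```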
## 4 Experimental Setup
**Datasets.** We evaluated the proposed Bayesian-Trans model on three event temporal relation datasets: MATRES Ning et al. (2018), Temporal and Causal Reasoning (TCR) Ning et al. (2018), and TimeBank-Dense (TBD) Cassidy et al. (2014). TimeBank-Dense is a densely annotated dataset focusing on the most salient events and providing six temporal relation classes. MATRES follows a new annotation scheme which focuses on main time axes, with the temporal relations between events determined by their endpoints, resulting in a consistent inter-annotator agreement (IAA) on the event annotations Ning et al. (2018). TCR follows the same annotation scheme, yet with a much smaller number of event relation pairs than MATRES. Table 1 shows the statistics of the datasets.
**Baselines.** We compare the proposed Bayesian-Trans5 with the following baselines:
Footnote 5: Hyperparameter setting can be found in Appendix B.
CogCompTime Ning et al. (2018) is a multi-step system which detects temporal relations using semantic features and structured inference.
BiLSTM is a basic relation prediction model built by Han et al. (2019).
LSTM + knowledge Ning et al. (2019) incorporates knowledge features learnt from an external source and optimizes global consistency via integer linear programming (ILP).
Deep Structured Han et al. (2019) adds a structured support vector machine on top of a BiLSTM.
Joint Constrained Learning Wang et al. (2020) constrains the training of a RoBERTa-based event pair classifier using predefined logic rules, while knowledge incorporation and global optimization are also included.
Poincaré Event Embedding Tan et al. (2021) learns event embeddings in a Poincaré ball and determines the temporal relation based on the relative positions of events.
HGRU + knowledge Tan et al. (2021) is a neural architecture processing temporal relations via hyperbolic recurrent units which also incorporates knowledge features like LSTM + knowledge.
Relative Event Time Wen and Ji (2021) is a neural network classifier combining an auxiliary task for relative time extraction over an event timeline.
UAST Cao et al. (2021) is an uncertainty-aware self-training model. We show the result of the model which is trained on all the labeled data.
## 5 Experimental Results
**Temporal Relation Classification.** We first compare Bayesian-Trans with the most recent approaches for temporal relation classification in Table 2, including methods with and without commonsense knowledge injection. The results are obtained by training models on the MATRES training set and evaluating on both the MATRES test set and TCR. Table 3 shows results on the TBD dataset, generated using the provided train, development, and test sets. We report the F\({}_{1}\) score on MATRES and TCR following the definition in Ning et al. (2019), and micro-F\({}_{1}\) on TimeBank-Dense. Compared with existing methods, the proposed Bayesian-Trans has generally better performance on all three datasets, with more noticeable improvements on MATRES. Bayesian-Trans shows significant performance gains over previous methods with knowledge incorporation, which indicates that it can utilize knowledge more effectively. Details of the per-class performance can be found in Tables A2 and A3.

| **Class** | MATRES | TCR | TBD |
| --- | --- | --- | --- |
| Before | 6,852 | 1,780 | 2,590 |
| After | 4,752 | 862 | 2,104 |
| Equal/Simultaneous | 448 | 4 | 215 |
| Vague/None | 1,425 | N/A | 5,910 |
| Include | N/A | N/A | 836 |
| IsIncluded | N/A | N/A | 1,060 |
| Total | 12,740 | 2,646 | 12,715 |

Table 1: The statistics of MATRES, TCR, and TBD.
**Ablation Study.** We conducted an ablation study to highlight the impact of the different modules composing Bayesian-Trans. The results are shown in Table 4. In particular, we have the following variants: (1) RoBERTa\(+\)MLP, using RoBERTa to encode the context and then feeding representations of head and tail events to a multi-layer perceptron (MLP) for temporal relation classification; (2) RoBERTa\(+\) Vanilla MuRE, using MuRE to extract temporal relations without modeling its parameters as latent variables; (3) RoBERTa\(+\)Bayesian-Trans, our proposed model with COMET-BART replaced by RoBERTa as the text encoder; (4) COMET-BART\(+\)MLP, using COMET-BART as context encoder and an MLP for temporal relation classification; and (5) COMET-BART\(+\) Vanilla MuRE, the proposed model without Bayesian learning or knowledge incorporation; variant (6) in Table 4 denotes the full proposed model, COMET-BART\(+\)Bayesian-Trans. The results demonstrate that COMET-BART is the better choice as the context encoder. Using MuRE for event temporal knowledge embedding learning does not bring any improvement compared with using a simple MLP layer for event temporal relation prediction (see (1) cf. (2), and (4) cf. (5)). Regardless of the contextual encoder used, the results of (3) and (6) show the benefit of employing Bayesian learning, which naturally incorporates prior knowledge of event temporal relations learned from an external knowledge source. With our proposed Bayesian translational model, we observe an improvement of \(0.9-1.8\%\) in micro-F\({}_{1}\) on MATRES and \(0.2-2.5\%\) in micro-F\({}_{1}\) on TimeBank-Dense compared to their non-Bayesian counterparts.
| **Model** | **MATRES** | **TBD** |
| --- | --- | --- |
| (1) RoBERTa + MLP | 81.5 | 62.8 |
| (2) RoBERTa + Vanilla MuRE | 80.4 | 60.5 |
| (3) RoBERTa + Bayesian-Trans | 82.2 | 63.0 |
| (4) COMET-BART + MLP | 81.8 | 63.2 |
| (5) COMET-BART + Vanilla MuRE | 81.8 | 62.6 |
| (6) COMET-BART + Bayesian-Trans | **82.7** | **65.0** |

Table 4: Ablation test results on MATRES and TBD.
| **Model** | P (MATRES) | R (MATRES) | F\({}_{1}\) (MATRES) | P (TCR) | R (TCR) | F\({}_{1}\) (TCR) |
| --- | --- | --- | --- | --- | --- | --- |
| CogCompTime Ning et al. (2018) | 61.6 | 72.5 | 66.6 | - | - | 70.7 |
| Poincaré Event Embeddings Tan et al. (2021) | 74.1 | 84.3 | 78.9 | 85.0 | 86.0 | 85.5 |
| Relative Event Time Wen and Ji (2021) | 78.4 | 85.2 | 81.7 | 84.3 | **86.8** | 85.5 |
| LSTM + knowledge Ning et al. (2019) | 71.3 | 82.1 | 76.3 | - | - | 78.6 |
| Joint Constrained Learning Wang et al. (2020) | 73.4 | 85.0 | 78.8 | 83.9 | 83.4 | 83.7 |
| HGRU + knowledge Tan et al. (2021) | 79.2 | 81.7 | 80.5 | 88.3 | 79.0 | 83.5 |
| Bayesian-Trans | **79.6** | **86.0** | **82.7** | **89.8** | 82.6 | **86.1** |

Table 2: Experimental results on MATRES and TCR. The first three rows contain methods without commonsense knowledge incorporation; the rest are methods which inject commonsense knowledge. The results of Wang et al. (2020) and Wen and Ji (2021) on TCR are generated from our run of the source code provided by the authors, since they are not available in the original papers. The others are taken from the cited papers.
| **Model** | Micro-F\({}_{1}\) |
| --- | --- |
| BiLSTM Han et al. (2019) | 61.9 |
| Deep Structured Han et al. (2019) | 63.2 |
| Relative Event Time Wen and Ji (2021) | 63.2 |
| UAST Cao et al. (2021) | 64.3 |
| Bayesian-Trans | **65.0** |

Table 3: Experimental results on TBD. All compared methods do not incorporate commonsense knowledge explicitly. The result of Wen and Ji (2021) is generated from our run of the source code provided by the authors, since it is not available in the original paper. The others are taken from the cited papers.
**Effects of the Priors.** We further investigate the impact of different priors on the model performance. Inspired by the work on VAEs by Burda et al. (2016) and Truong et al. (2021), we employed an _'activity' score_, \(\tau=Cov_{e_{h},e_{t}}(\mathbb{E}_{q_{\theta}(\mathbf{\Lambda}|e_{h},e_{t})}[\mathbf{\Lambda}])\), to evaluate the quality and diversity of the latent encodings. The intuition behind the 'activity' score is that if a latent dimension encodes relevant information and is not redundant, its value is expected to vary significantly over different inputs. By computing the score across all the test instances, every dimension of \(\mathbf{\Lambda}\) is given an 'activity' value: latent units with a higher value are considered more active and thus more informative. Figure 3 shows activity scores with respect to different prior distributions, including the standard Gaussian prior and priors learned on ATOMIC using MuRE or RGCN. The latent variables are the least active when the standard Gaussian is used as the prior distribution, whereas higher activation is obtained with the priors learnt on the external knowledge base; the RGCN-based prior shows the most active units on average. Table 5 shows the performance of the proposed model based on different priors. A two-sided Welch's t-test (\(p<0.05\)) also supports that the RGCN-learned prior improves over the standard Gaussian prior.
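A small sketch of how such per-dimension activity values can be computed is shown below, assuming the posterior means for all test pairs have already been collected into one array.

```python
import numpy as np

def activity_scores(posterior_means):
    """Per-dimension 'activity' of the latent encoding Lambda.

    posterior_means : (n_instances, latent_dim) array holding the posterior
        mean of Lambda for every test event pair.
    Returns the variance of each latent dimension across instances;
    values close to zero indicate inactive (redundant) units.
    """
    return posterior_means.var(axis=0)
```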
**Uncertainty Quantification.** We present an analysis of uncertainty quantification of the Bayesian-Trans predictions. We adopted the uncertainty quantification methods of Malinin and Gales (2018), computing the entropy (_total uncertainty_) and the mutual information (_model uncertainty_), and visualize the predictive probabilities on a 2-simplex. Each forward pass on the same test instance is represented as a point on the simplex. For the sake of clarity of the visualization, we removed the Equal class, which is hardly ever predicted by the models.
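The two scores can be computed from a set of stochastic forward passes as sketched below; this is a generic implementation of the Malinin and Gales (2018) decomposition, not the authors' code.

```python
import numpy as np

def uncertainty_scores(probs):
    """Total and model uncertainty from stochastic forward passes.

    probs : (n_samples, n_classes) array of predictive distributions obtained
        by sampling Lambda from the posterior and re-running the classifier.
    Returns (total uncertainty, mutual information).
    """
    eps = 1e-12
    mean_p = probs.mean(axis=0)
    total = -(mean_p * np.log(mean_p + eps)).sum()                 # entropy of the mean
    expected = -(probs * np.log(probs + eps)).sum(axis=1).mean()   # mean per-pass entropy
    return total, total - expected                                 # MI = total - expected
```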
In one of the test cases (Figure 4(a)), the true label is "_die_" before "_vaccinate_". This example exhibits a rather complex linguistic structure and, as such, the model exhibits some uncertainty: most predictions are located at the corner associated with Before, but several are scattered around it. We then simplified the sentence structure by removing "_but four_" and fed the modified sentence to the same model. This time, the model predicted the right temporal relation with much lower uncertainty (Figure 4(b)).
In another case study (Figure 4(c)), the true label is "_depart_" after "_reveal_". This test case is rather straightforward because of the explicit temporal word "before", and the model predicted After with high confidence, as shown by the predictive probabilities clustering at the top of the simplex. To show the impact of the temporal description, we swapped it from "_before_" to "_after_" and fed the sentence to the same model. The model recognized the reversed meaning and correctly predicted Before with low uncertainty (Figure 4(d)). The above cases demonstrate that the proposed model reacts to different inputs with reasonable uncertainty, on both the total and model uncertainty scores.

Figure 4: Examples of temporal relations in text and uncertainty quantification (entropy and mutual information) for the Bayesian-Trans model. Examples (_a_),(_b_) show how simplifying the linguistic structure without altering the temporal relation increases the model confidence, while examples (_c_),(_d_) illustrate the model's detection of temporal linguistic hints and its confidence.

Figure 3: Box chart of the activity scores across all the dimensions of the latent encoding \(\mathbf{\Lambda}\) with respect to the priors used in the model.

| **Dataset** | Standard Gaussian | MuRE | RGCN |
| --- | --- | --- | --- |
| MATRES | 81.2 | 81.8 | 82.7 |
| TCR | 84.3 | 85.4 | 86.1 |
| TBD | 63.6 | 64.6 | 65.0 |

Table 5: F\({}_{1}\) values based on different priors used in the proposed model.
## 6 Conclusion
We propose a principled approach to incorporating knowledge for event temporal relation extraction, named Bayesian-Trans, which models the relation representations of the MuRE translational model as latent variables. The latent variables are inferred through variational inference, during which commonsense knowledge is incorporated in the form of the prior distribution. The experiments on MATRES, TCR, and TBD show that Bayesian-Trans achieves state-of-the-art performance. Comprehensive analyses of the experimental results also demonstrate the characteristics and benefits of the proposed model.
## Limitations
Our approach takes an event pair as input for the prediction of their temporal relation. We observe that if two events reside in different sentences, the error rate increases by 19%. A promising future direction is to construct a global event graph where the temporal relations of any two events are refined under global consistency constraints, for example, that no temporal relation loop is allowed in a set of events. Our current work only deals with event temporal relations; it could be extended to consider other event semantic relations such as causal, hierarchical or entailment relations. The event temporal knowledge in this paper is derived from ATOMIC, which could possibly be extended to more sources. Bayesian learning could also be extended to life-long learning, but we would need to explore approaches to address the problem of catastrophic forgetting. We did not exhaustively investigate all the translational models due to the large volume of work in that area. There might be a translational model which can achieve better performance, but the core idea of the proposed framework stays the same.
## Ethical Considerations
The goal of the proposed method is to understand the temporal relation between events based on the descriptions in the given text. What the method can achieve in the most optimistic scenario is no more than what a human reader could do when given the same text and asked to explain the event relations. Therefore, the ethical concerns only come from the data collection. In this paper, we only use publicly available datasets which have already been widely used in the research field. As for potential applications, as long as the user collects the training data legally, the proposed method does not have the potential for a direct harmful impact.
## Acknowledgements
This work was supported in part by the UK Engineering and Physical Sciences Research Council (grant no. EP/T017112/1, EP/V048597/1, EP/X019063/1). YH is supported by a Turing AI Fellowship funded by the UK Research and Innovation (grant no. EP/V020579/1). This work was conducted on the UKRI/EPSRC HPC platform, Avon, hosted in the University of Warwick's Scientific Computing Group. XT was partially supported by the Research Development Fund (RDF) 2022/23 (University of Warwick): _'An Event-Centric Dialogue System for Second Language Learners'_.
|
2309.07126 | Comparison of stochastic stability boundaries for parametrically forced
systems with application to ship rolling motion | Numerous accidents caused by parametric rolling have been reported on
container ships and pure car carriers (PCCs). A number of theoretical studies
have been performed to estimate the occurrence condition of parametric rolling
in both regular and irregular seas. Some studies in random wave conditions have
been the approximate extension of the occurrence conditions for regular waves
(e.g. Maki et al). Furthermore, several studies have been based on stochastic
process approaches in ocean engineering (Roberts and Dostal). This study
tackled the parametric rolling in irregular seas from the stability of the
system's origin. It provided a novel theoretical explanation of the instability
mechanism for two cases: white noise parametric excitation and colored noise
parametric excitation. The authors then confirmed the usefulness of the
previously provided formulae by Roberts and Dostal through numerical examples. | Atsuo Maki, Yuuki Maruyama, Yaliu Liu, Leo Dostal | 2023-04-08T09:05:00Z | http://arxiv.org/abs/2309.07126v1 | Comparison of stochastic stability boundaries for parametrically forced systems with application to ship rolling motion
###### Abstract
Numerous accidents caused by parametric rolling have been reported on container ships and pure car carriers (PCCs). A number of theoretical studies have been performed to estimate the occurrence condition of parametric rolling in both regular and irregular seas. Some studies in random wave conditions have been approximate extensions of the occurrence conditions for regular waves (e.g. [1]). Furthermore, several studies have been based on stochastic process approaches in ocean engineering (Roberts[2] and Dostal[3]). This study tackled the parametric rolling in irregular seas from the stability of the system's origin. It provided a novel theoretical explanation of the instability mechanism for two cases: white noise parametric excitation and colored noise parametric excitation. The authors then confirmed the usefulness of the previously provided formulae by Roberts and Dostal through numerical examples.
Keywords: Parametric rolling, Irregular seas, Stochastic differential equation, Lyapunov exponent
## 1 Introduction
Parametric rolling, one of the threats to oceangoing vessels, is triggered by the change in the restoring moment caused by waves and, from the viewpoint of nonlinear dynamical system theory, corresponds to parametric resonance. In the late 1990s, several accidents were reported on container vessels because of parametric rolling [4], and since the 2000s, there have also been reports of it occurring on PCTCs [5].
Our literature survey found work by Watanabe [6] conducted as early as the 1930s. There is a long research history on parametric rolling, and theoretical studies have often been aimed at estimating the conditions of occurrence and the motion amplitude of parametric rolling in regular waves. We review representative studies on the conditions and amplitudes of parametric rolling in regular waves below. Kerwin [7] successfully analyzed parametric rolling theoretically using the harmonic balance method, leading to subsequent works for predicting parametric rolling in regular seas by Zavodney et al. [8], Francescutto [9], Bulian [10], Spyrou [11], Umeda et al. [12], Maki et al. [1], and Sakai et al. [13].
Research on parametric rolling in irregular waves has been very active in recent years, with a history that can be traced back to the 1980s [14; 15; 1]. Many studies have been conducted on this more generalized problem, taking a stochastic process approach in the fields of control engineering and mechanical engineering. For example, Samuels performed numerical experiments in which the damping term of the second-order differential equation was parametrically oscillated by white noise [16]. Caughey discussed this topic [17; 18]. Khasminskii [19] discussed the stability of systems with parametric excitation terms, and Kozin [20] further developed Khasminskii's ideas, deriving stability conditions and comparing them with numerical calculations. For an overview of the progress made in these studies over time, references [21] and [22] can be consulted.
However, the problem we are addressing is not a system with a white noise parametric excitation term but one with a colored noise parametric excitation. For colored noise, obtaining results on the stability of the system's origin becomes difficult, as the solution method used in the white noise case cannot be directly applied. To address this, Infante [23] obtained the condition of stability using the Lyapunov stability theory, while Arnold [24] used perturbation
expansions and calculated Lyapunov exponents to obtain analytical relations for stability.
For systems with parametric excitation terms of the colored noise type, the problem can sometimes be solved using stochastic averaging methods. Stratonovich [25] and Khasminskii [26] developed the Stratonovich-Khasminskii limit theorem, which Roberts applied to ship problems [2]. More recently, Dostal proposed an energy-based stochastic averaging method [3], and Maruyama developed enhancements to this approach [27]. Liu provides an extensive survey of the applicability of this method [28]. Maruyama [29] also established an estimation method using the higher-order moment technique and successfully estimated the lateral angular acceleration using the results of this method [30]. Most of these studies focused on obtaining the PDF of the response, but some, such as the studies by Ariaratnam and Tam [31] and [3], refer to the stability of the origin of the system.
This study aims to demonstrate the physics and mechanism of parametric rolling in irregular seas from the stability of the upright condition (the system's origin). In regular head sea cases, it is easy to understand that parametric rolling is attributed to the loss of asymptotic stability of the upright condition. For instance, as Maki et al. [1] demonstrated, its destabilization because of parametric excitation can be theoretically estimated by the averaging method. However, the occurrence of parametric rolling in irregular seas can be explained by the loss of stability of the system's origin. To explain this physics, the authors first present the system's stability with a stochastic coefficient modeled by the Wiener process. Secondly, they demonstrate the stability of the ship equation of motion with parametric excitation. In both considerations, several formulae to predict the stability threshold are examined.
Initial results from the investigation presented in this study were described by Maki et al. [32]. In this paper, the results are presented more extensively, with more details, and with some revisions.
## 2 Notations
In this study, the \(n\)-dimensional Euclidean space is denoted by \(\mathbb{R}^{n}\), and the set of real numbers for \(n=1\) is denoted by \(\mathbb{R}\). The expectation operation is denoted by \(\mathbb{E}\), and \(t\) represents time. The overdot of time-dependent variables indicates the derivative with respect to time \(t\).
## 3 Equation of motion
In this study, the authors deal with the single-DoF (Degree-of-Freedom) roll equation of motion:
\[\ddot{\phi}+2\zeta\,\dot{\phi}+c_{1}\phi+f(t)\,\phi=h(t\,). \tag{1}\]
In this equation, \(\phi(t)\) is the roll angle, \(\dot{\phi}(t)\) is the roll angular velocity, and \(\ddot{\phi}(t)\) is the roll angular acceleration. \(2\zeta\) is the damping coefficient, \(c_{1}\) is the restoring moment coefficient, \(f(t)\) is the parametric excitation term based on the restoring moment variation, and \(h(t)\) is the roll moment caused by waves, each a function of time with a stochastic variation. In the colored noise case, the term \(f(t)\phi\) is estimated using Grim's effective wave concept [33; 34].
This equation retains only linear damping and restoring components, since the nonlinear components are not essential for assessing the stability of the origin.
## 4 Parametric oscillation for white noise
In this section, the authors briefly review existing results obtained for a system with white noise parametric excitation. This is useful for understanding the physics of random parametric excitation, and therefore they consider the following parametric excitation term:
\[f(t)\phi(t)\mathrm{d}t=\Gamma\phi(t)\mathrm{d}W(t) \tag{2}\]
Here, \(\Gamma\) is the intensity of the white noise, and \(W(t)\) is a 1D standard Wiener process that satisfies the relation
\[\begin{cases}\mathbb{E}[W(s)-W(t)]=0\\ \mathbb{E}[(W(s)-W(t))^{2}]=|t-s|\\ \qquad\text{and}\\ \begin{cases}\mathbb{E}[W(s)W(t)]=\min(s,t)\\ \qquad\text{or}\\ \mathbb{E}[\mathrm{d}W(s)\mathrm{d}W(t)]=\delta(t-s)\end{cases}\end{cases} \tag{3}\]
and \(\mathrm{d}W(t)\) is the increment of this Wiener process. Here, \(\delta(t)\) means the Dirac's delta function.
### Some remarks on the stability in stochastic systems
In this subsection, we analyze the stability of the system origin with a parametric excitation term represented by white noise. We briefly explain the mathematical concepts used in this analysis.
The problem addressed here is almost identical to that described in a recent paper by the authors, in which ship maneuvering motion under multiplicative white noise is tackled from the viewpoint of stochastic disturbances [35]. The governing equations, in that case, are almost identical to those in the present paper. However, attention must be paid to the presence of the Wong-Zakai correction term [36], which appears when a real system is reduced to an Ito-type system of stochastic differential equations. If the Wong-Zakai correction term is present, it can strongly affect the stability of the system's origin, as discussed in [21]. However, as mentioned in our previous work [35], the Wong-Zakai correction term does not exist in the system dealt with here.
In the following subsections, the authors demonstrate the method used to identify the stability of the system driven by white noise. First, the relationship between the real system and the corresponding SDE must be examined carefully in order to analyze the random system. The present system is the same as the one dealt with in our previous paper [35]; hence, the Wong-Zakai correction term [36] is zero. This point is discussed in the paper by Kozin [21].
Hereafter, the roll angle and roll angular velocity are collected in the state variable \(x(t)\in\mathbb{R}^{2}\):
\[x(t)\equiv\begin{bmatrix}x_{1}(t)\\ x_{2}(t)\end{bmatrix}=\begin{bmatrix}\phi(t)\\ \dot{\phi}(t)\end{bmatrix} \tag{4}\]
Consider the following stochastic differential equation (SDE: Stochastic Differential Equation).
\[\mathrm{d}x(t)=\mu(x(t))\mathrm{d}t+\sigma(x(t))\mathrm{d}W(t) \tag{5}\]
where \(\mu(x(t)):\mathbb{R}^{2}\rightarrow\mathbb{R}^{2}\), and \(\sigma(x(t)):\mathbb{R}^{2}\rightarrow\mathbb{R}^{2}\).
Since the Wong-Zakai correction term [36] does not exist here, the drift and diffusion terms for the present system are given by \(\mu(x(t),t)\) and \(\sigma(x(t),t)\), respectively:
\[\begin{cases}\mu(x(t),t)=\begin{bmatrix}x_{2}(t)\\ -2\zeta x_{2}(t)-c_{1}x_{1}(t)\end{bmatrix}\\ \sigma(x(t),t)=\begin{bmatrix}0\\ \Gamma x_{1}(t)\end{bmatrix},\end{cases} \tag{6}\]
Finally, the SDE that the authors tackled is as follows:
\[\begin{cases}\mathrm{d}x_{1}(t)=x_{2}(t)\mathrm{d}t\\ \mathrm{d}x_{2}(t)=-(2\zeta x_{2}(t)+c_{1}x_{1}(t))\mathrm{d}t+\Gamma x_{1}(t) \mathrm{d}W(t)\end{cases} \tag{7}\]
We define the infinitesimal operator \(\mathcal{L}\left[\cdot\right]\) to analyze the dynamics of a scalar function \(F(x(t))\in\mathbb{R}\) of \(x(t)\) as:
\[\begin{cases}\mathcal{L}\left[\cdot\right]\equiv\left[\frac{\partial(\cdot)}{\partial x}\right]\mu(x(t),t)+\frac{1}{2}\sigma^{T}(x(t),t)\left[\frac{\partial^{2}(\cdot)}{\partial x^{2}}\right]\sigma(x(t),t),\\ \qquad\text{or}\\ \mathcal{L}\left[\cdot\right]\equiv\sum_{i=1}^{2}\mu_{i}(x(t),t)\frac{\partial}{\partial x_{i}}+\frac{1}{2}\sum_{i=1}^{2}\sum_{j=1}^{2}\left[\sigma(x(t),t)\sigma^{T}(x(t),t)\right]_{ij}\frac{\partial^{2}}{\partial x_{i}\partial x_{j}}\end{cases} \tag{8}\]
For the system that the authors tackle, the infinitesimal operator for a scalar function \(F(x(t))\in\mathbb{R}\) becomes:
\[\begin{split}\mathcal{L}F(x(t))=& x_{2}(t)\frac{ \partial F(x(t))}{\partial x_{1}}\\ &-(2\zeta x_{2}(t)+c_{1}x_{1}(t))\frac{\partial F(x(t))}{ \partial x_{2}}\\ &+\frac{\Gamma^{2}}{2}x_{1}^{2}(t)\frac{\partial^{2}F(x(t))}{ \partial x_{2}^{2}}\end{split} \tag{9}\]
The final term in this equation, \(\Gamma^{2}/2\cdot x_{1}^{2}(t)\cdot\partial^{2}F(x)/\partial x_{2}^{2}\), is a characteristic term that appears in the framework of stochastic differential equations and can serve as a correction term for the dynamics of \(F(x)\) because of the addition of noise. This term can either destabilize or stabilize the system.
We now consider the system in which the parametric excitation term \(\Gamma x_{1}(t)\mathrm{d}W(t)\) is replaced by purely additive noise:
\[\begin{cases}\mathrm{d}x_{1}(t)=& x_{2}(t)\mathrm{d}t\\ \mathrm{d}x_{2}(t)=&-(2\zeta x_{2}(t)+c_{1}x_{1}(t))\mathrm{d}t+\Gamma\mathrm{ d}W(t)\end{cases} \tag{10}\]
For the above system, the infinitesimal operator for a scalar function \(F(x)\) becomes:
\[\begin{split}\mathcal{L}F(x(t))=& x_{2}(t)\frac{ \partial F(x(t))}{\partial x_{1}}\\ &-(2\zeta x_{2}(t)+c_{1}x_{1}(t))\frac{\partial F(x(t))}{ \partial x_{2}}\end{split} \tag{11}\]
Therefore, the additive Wiener process does not stabilize or destabilize the present system without parametric excitation. This is an important point to note.
Concerning the system in Eq. (7), several studies were conducted in the 1960s-1970s [37; 38; 39; 23; 20]. Here, the authors briefly introduce these theories and numerical results.
### Stability of second moment[37; 40]
In this subsection, the authors show the results based on the moment method [37; 40]. They apply the infinitesimal operator from Eq. (8) to \(x_{1}^{2}(t)\), \(x_{1}(t)x_{2}(t)\), and \(x_{2}^{2}(t)\), yielding:
\[\begin{cases}&\mathcal{L}x_{1}^{2}(t)=2x_{1}(t)x_{2}(t)\\ &\mathcal{L}x_{1}(t)x_{2}(t)=x_{2}^{2}(t)-2\zeta x_{1}(t)x_{2}(t)-c_{1}x_{1}^{2}(t)\\ &\mathcal{L}x_{2}^{2}(t)=-4\zeta x_{2}^{2}(t)-2c_{1}x_{1}(t)x_{2}(t)+\Gamma^{2}x_{1}^{2}(t)\end{cases} \tag{12}\]
Here, the authors apply the following relation between the expectation and differentiation operators:
\[\frac{\mathrm{d}}{\mathrm{d}t}\mathbb{E}[g(x(t))]=\mathbb{E}[\mathcal{L}g(x(t))]. \tag{13}\]
Then, the following moment equation can be derived:
\[\frac{\mathrm{d}}{\mathrm{d}t}\begin{pmatrix}\mathbb{E}[x_{1}^{2}(t)]\\ \mathbb{E}[x_{1}(t)x_{2}(t)]\\ \mathbb{E}[x_{2}^{2}(t)]\end{pmatrix}= \tag{14}\] \[\begin{pmatrix}0&2&0\\ -c_{1}&-2\zeta&1\\ \Gamma^{2}&-2c_{1}&-4\zeta\end{pmatrix}\begin{pmatrix}\mathbb{E}[x_{1}^{2}(t)] \\ \mathbb{E}[x_{1}(t)x_{2}(t)]\\ \mathbb{E}[x_{2}^{2}(t)]\end{pmatrix}\]
This equation contains no information beyond the second-order moments and has a closed structure; however, as Bogdanoff and Kozin [37] state, if a parametric excitation term of the colored noise is added, then the closed structure cannot be established anymore.
For the above system, the characteristic equation becomes:
\[s^{3}+6\zeta s^{2}+(8\zeta^{2}+4c_{1})s+(8c_{1}\zeta-2\Gamma^{2})=0 \tag{15}\]
Here, by applying the Routh-Hurwitz stability criterion, we obtain the stability boundary of the second moment:
\[\Gamma^{2}=4c_{1}\zeta \tag{16}\]
### Infante's approach[23]
As the authors explain in Section 5.2, Infante's result also covers the white noise case. Therefore, for our system, Eq. (7), formally replacing \(\mathbb{E}[f(t)^{2}]\) by \(\Gamma^{2}\), the boundary of stability is:
\[\Gamma^{2}=4c_{1}\zeta^{2} \tag{17}\]
Kozin's [41] results apply to Infante's approach since \(f(t)\) is not limited to white noise, allowing us to extend the results to the colored noise problem.
### Kozin's approach[41]
Kozin [41] proposed a methodology to estimate a system's stability with multiplicative noise based on Khasminskii's work [19]. This methodology allows us to calculate \(J_{1}\) to study a system's stability with colored noise, not just with white noise, as in Infante's approach.
\[\begin{split}\mathcal{J}_{1}=C\int_{-\pi/2}^{\pi/2}& \left(\cos 2\varphi-\frac{4\zeta}{\Gamma^{2}}\tan^{2}\varphi \right.\\ &\left.+\left(1-c_{1}\right)\frac{2}{\Gamma^{2}}\tan\varphi \right)\eta_{\zeta}(\varphi)\mathrm{d}\varphi\end{split} \tag{18}\]
Here, \(C\) is a positive constant, and \(\eta_{\zeta}\) is defined as follows:
\[\begin{split}\eta_{\zeta}(\varphi)=&\exp\left[- \frac{2}{3\Gamma^{2}}\tan\varphi\left(3c_{1}+3\zeta\tan\varphi+\tan^{2} \varphi\right)\right]\\ &\cdot\int_{-\pi/2}^{\varphi}\exp\left[\frac{2}{3\Gamma^{2}}\tan \theta\big{(}3c_{1}+3\zeta\tan\theta\right.\\ &\left.+\tan^{2}\theta\big{)}\right]\sec^{2}\theta\mathrm{d} \theta\end{split} \tag{19}\]
The integrand has singularities at \(\pm\pi/2\), making its numerical computation challenging. Detailed descriptions can be found in [20] and in our previous work [35]. Once \(\mathcal{J}_{1}\) has been successfully calculated, the stability boundary of the system's origin is determined by the following condition.
\[\mathcal{J}_{1}=0 \tag{20}\]
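As an illustration of Eqs. (18)-(20), a rough numerical sketch is given below, with the constant \(C\) set to one since only the sign and zero crossing of \(\mathcal{J}_{1}\) matter. The truncation distance from the singular endpoints is an assumption made here for simplicity; a robust evaluation requires the treatment discussed in [20] and [35].

```python
import numpy as np
from scipy.integrate import quad

def kozin_J1(zeta, gamma2, c1, eps=1e-3):
    """Rough numerical evaluation of Eqs. (18)-(19) with C = 1.

    The integration interval is truncated by eps near +/- pi/2, where the
    integrands are singular; this is an illustrative sketch only.
    """
    def expo(angle):                       # exponent appearing in Eq. (19)
        t = np.tan(angle)
        return 2.0 / (3.0 * gamma2) * t * (3.0 * c1 + 3.0 * zeta * t + t * t)

    def eta(phi):                          # Eq. (19), written with a stable exponent difference
        e_phi = expo(phi)
        inner, _ = quad(lambda th: np.exp(expo(th) - e_phi) / np.cos(th) ** 2,
                        -np.pi / 2 + eps, phi, limit=200)
        return inner

    def integrand(phi):                    # bracketed factor of Eq. (18) times eta
        t = np.tan(phi)
        bracket = (np.cos(2.0 * phi) - 4.0 * zeta / gamma2 * t ** 2
                   + (1.0 - c1) * 2.0 / gamma2 * t)
        return bracket * eta(phi)

    value, _ = quad(integrand, -np.pi / 2 + eps, np.pi / 2 - eps, limit=200)
    return value

# For a given zeta, the stability boundary is where kozin_J1 changes sign in Gamma^2.
```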
### Arnold's approach for white-noise case[24]
As shown in Sec. 5.3, Arnold obtained the stability boundary for the colored noise case, which the authors apply here to the white-noise problem. The spectral density defined in Eq. (25) is now calculated. In the present case, \(C_{f}\) defined in Eq. (26) becomes:
\[C_{f}(t)=\mathbb{E}[f(t)f(0)]=\Gamma^{2}\delta(t), \tag{21}\]
where \(\delta(t)\) denotes the Dirac delta function. Here, we define the spectral density of \(C_{f}(t)\) as \(S_{f,f_{t}}\). Then, Eq. (38) can be evaluated, yielding
\[\Gamma^{2}=8(c_{1}-\zeta^{2})\zeta \tag{22}\]
### Numerical results for white-noise system
Next, the authors show comparisons of the theoretical formulae with numerical results. A Monte Carlo simulation is employed using the Euler-Maruyama scheme [42]. For each coefficient combination of \(\zeta\) and \(\Gamma\), 20 sample paths are generated, with the initial condition \(x_{1}(0)=0.1\) and \(x_{2}(0)=0.1\). After a time of 160 s, it is judged whether each path has converged to the system origin or not. Fig. 1 and Fig. 2 show the comparative results for \(c_{1}=1\) and \(c_{1}=2\), respectively. It can be seen that Kozin's method accurately predicts the stability boundary. Arnold's method also predicts the boundary well in the vicinity of \(\Gamma^{2}=0\), since it is based on the assumption \(\Gamma^{2}\to 0\). However, the methods based on second-moment stability and Infante's method do not quantitatively agree with the MCS results, although they qualitatively capture the trend of the boundary.
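A minimal Monte Carlo sketch of this procedure, integrating Eq. (7) with the Euler-Maruyama scheme, is shown below. The time step, divergence cut-off, and convergence tolerance are assumptions made here; the paper specifies only the number of paths, the 160 s horizon, and the initial condition.

```python
import numpy as np

def is_stable_path(zeta, gamma, c1, dt=1e-3, t_end=160.0, x0=(0.1, 0.1),
                   tol=1e-6, seed=0):
    """Euler-Maruyama integration of Eq. (7); True if the path converges to the origin."""
    rng = np.random.default_rng(seed)
    x1, x2 = x0
    for _ in range(int(t_end / dt)):
        dw = rng.normal(0.0, np.sqrt(dt))
        x1, x2 = (x1 + x2 * dt,
                  x2 - (2.0 * zeta * x2 + c1 * x1) * dt + gamma * x1 * dw)
        if abs(x1) > 1e6 or abs(x2) > 1e6:   # clearly diverged
            return False
    return np.hypot(x1, x2) < tol

def stability_fraction(zeta, gamma, c1, n_paths=20):
    """Fraction of sample paths judged stable for one (zeta, Gamma) grid point."""
    return np.mean([is_stable_path(zeta, gamma, c1, seed=k) for k in range(n_paths)])
```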
## 5 Parametric oscillation for colored noise
The stability of the following type of differential equation is analyzed in this section.
\[\begin{cases}\dot{x}_{1}(t)=x_{2}(t)\\ \dot{x}_{2}(t)=-2\zeta x_{2}(t)-(c_{1}+f(t))x_{1}(t)+h(t),\end{cases} \tag{23}\]
which includes the colored noise processes \(f(t)\) and \(h(t)\).
### Modeling of parametric excitation term
The parametric excitation term \(f(t)\) is denoted as \(P(t)\) throughout this section. It is calculated by first obtaining the GM variation, which is computed from the restoring arm GZ for each regular wave height at a wavelength-to-ship-length ratio of one (\(\lambda/L=1\)) based on the Froude-Krylov assumption [43]. This GM variation is then combined with the time series data of Grim's effective waves to obtain the parametric excitation term \(P(t)\).
\[f(t)=P(t)=c_{1}P^{\prime}(t). \tag{24}\]
where \(\omega_{0}=\sqrt{c_{1}}\) or \(c_{1}=\omega_{0}^{2}\) denotes the natural roll frequency and \(A_{\mathrm{w}}\) denotes the effective wave amplitude. The parametric excitation terms in the section are expressed using the polynomial approximation of the relation between the change in the restoring force \(\Delta\)GM and wave amplitude amidships.
Figure 1: Comparison of the stability diagram between numerical and analytical results, \(c_{1}=1.0\).
Figure 2: Comparison of the stability diagram between numerical and analytical results, \(c_{1}=2.0\).
The processes \(P(t)\) and \(h(t)\) are thereby stationary Gaussian processes with spectral densities \(S_{f,f_{t}}\) and \(S_{h_{k}h_{k}}\), defined as:
\[\left\{\begin{aligned} S_{f,f,t}&=\frac{1}{2\pi}\int_{- \infty}^{\infty}C_{f}(t)e^{-i\omega t}\mathrm{d}t\\ S_{h_{k}h_{k}}&=\frac{1}{2\pi}\int_{-\infty}^{ \infty}C_{h}(t)e^{-i\omega t}\mathrm{d}t\end{aligned}\right. \tag{25}\]
where
\[\left\{\begin{aligned} C_{f}(t)&=\mathbb{E}[f(t)f(0)] \\ C_{h}(t)&=\mathbb{E}[h(t)h(0)]\end{aligned}\right. \tag{26}\]
It is worth noting that there exists a relation \(G(\omega)=2S(\omega)\) between the two-sided spectrum \(S(\omega)\) and the single-sided spectrum \(G(\omega)\).
### Infante's approach[23]
In this subsection, the authors slightly generalize the method of Infante [23] to the present problem. It is now assumed that the system is driven by physical (colored) noise:
\[\ddot{x}_{1}(t)+2\zeta\dot{x}_{1}(t)+(c_{1}+f(t))x_{1}(t)=0 \tag{27}\]
Here, \(f(t)\) is a zero-mean, stationary, ergodic physical noise. We rewrote the equation as follows:
\[\dot{x}(t)=[A+F(t)]x \tag{28}\] \[\left\{A=\begin{bmatrix}0&1\\ -c_{1}&-2\zeta\end{bmatrix},F=\begin{bmatrix}0&0\\ -f(t)&0\end{bmatrix}\right.\]
A positive-definite matrix \(B\) is introduced as follows:
\[B=\begin{bmatrix}\alpha_{1}^{2}+\alpha_{2}&\alpha_{1}\\ \alpha_{1}&1\end{bmatrix} \tag{29}\]
According to Infante's paper [23], letting \(\lambda_{\max}[X]\) denote the maximum eigenvalue of a matrix \(X\), the main theorem holds if the following relation is satisfied for some positive \(\epsilon\):
\[\mathbb{E}[\lambda_{\max}[A^{\top}+F^{\top}(t)+B(A+F(t))B^{-1}]]<-\epsilon, \tag{30}\]
in which case the system is almost surely asymptotically stable. Evaluating the left-hand side yields:
\[\mathbb{E}[\lambda_{\max}[A^{\top}+F^{\top}(t)+B(A+F(t))B^{-1}]] \tag{31}\] \[=-2\zeta+\] \[\sqrt{4(\zeta-\alpha_{1})^{2}+\frac{[\alpha_{2}+\alpha_{1}^{2}-c _{1}-f(t)+2\alpha_{1}(\zeta-\alpha_{1})]^{2}}{\alpha_{2}}}\]
Here, we define the elements of the \(B\) matrix as follows:
\[\left\{\begin{aligned} \alpha_{1}&=\zeta\\ \alpha_{2}&=\zeta^{2}+c_{1}\end{aligned}\right., \tag{32}\]
For the above, by using Schwarz's inequality, the following result can be finally obtained:
\[\mathbb{E}[(f(t))^{2}]<4c_{1}\zeta^{2} \tag{33}\]
### Arnold-Dostal's approach for colored-noise case[24; 3]
Arnold et al. [24] have obtained asymptotic results for the stability of the equation
\[\ddot{x}_{1}(t)+2\zeta\dot{x}_{1}(t)+(c_{1}+f(t))x_{1}(t)=0, \tag{34}\]
where \(f(t)\) denotes a stationary Gaussian process with spectral density \(S_{f,f,t}\).
The top Lyapunov exponent was determined for the system of \(y(t)=x_{1}(t)\exp(\zeta t)\):
\[\lambda_{y}=\lim_{t\to\infty}\frac{1}{t}\log(|y(t)|^{2}+|\dot{y}(t)|^{2})^{\frac{1}{2}}, \tag{35}\]
where \(y(t)\) satisfies the following differential equation:
\[\ddot{y}(t)+[(c_{1}-\zeta^{2})-f(t)]y(t)=0, \tag{36}\]
The top Lyapunov exponent \(\lambda_{x}\) was determined for the system of \(x(t)\), which can be represented as:
\[\lambda_{x}\approx-\zeta+\frac{\pi}{4(c_{1}-\zeta^{2})}S_{f,f_{t}}\left(2\sqrt {c_{1}-\zeta^{2}}\right) \tag{37}\]
A negative Lyapunov exponent implies stability of the corresponding system; thus, if \(\lambda_{x}<0\), the SDE in Eq. (34) is stable. The condition \(\lambda_{x}=0\) for negligibly small \(P(t)\) becomes:
\[-\zeta+\frac{\pi}{4(c_{1}-\zeta^{2})}S_{f,f_{t}}\left(2\sqrt{c_{1}-\zeta^{2}} \right)=0. \tag{38}\]
This is an implicit equation for \(\zeta\) or \(c_{1}\) because \(S_{f,f_{t}}\) implicitly depends on \(H_{1/3}\). Therefore, iterative methods, such as Newton's method, should be applied to obtain the stability boundary.
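A short sketch of such a root-finding step is given below. The placeholder spectrum is purely illustrative and is not an actual wave-induced excitation spectrum; in practice \(S_{f,f_{t}}\) would be computed from the ITTC spectrum and Grim's effective wave for a given \(H_{1/3}\).

```python
import numpy as np
from scipy.optimize import brentq

def critical_damping(c1, spectrum):
    """Solve Eq. (38) for the critical damping ratio zeta by bracketing the root."""
    def residual(zeta):
        arg = 2.0 * np.sqrt(c1 - zeta ** 2)
        return -zeta + np.pi / (4.0 * (c1 - zeta ** 2)) * spectrum(arg)

    # The boundary lies between zero damping and the limit zeta -> sqrt(c1).
    return brentq(residual, 1e-6, np.sqrt(c1) - 1e-6)

# Illustrative placeholder spectrum (not a sea spectrum); it vanishes at omega = 0.
example_spectrum = lambda w: 0.02 * w ** 2 * np.exp(-(w - 2.0) ** 2)
zeta_crit = critical_damping(c1=1.0, spectrum=example_spectrum)
```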
### Ariaratnam and Tam's approach [31]
Ariaratnam and Tam [31] obtained the stability boundary with the use of the stochastic averaging method and applied the outcome to Eq. (27).
Using the stochastic averaging theorem proposed by Stratonovich [25] and Khasminskii [26], the averaged equation for the roll amplitude \(A\) is derived as follows:
\[\mathrm{d}A=\left(-\alpha A+\frac{\beta}{2A}\right)\mathrm{d}t+\sqrt{\gamma A^{2}+\beta}\,\mathrm{d}W(t) \tag{39}\] \[\text{where }\left\{\begin{aligned} \alpha&=\zeta-\frac{3\pi}{8c_{1}}S_{f,f_{t}}(\omega)\\ \beta&=\frac{\pi}{c_{1}^{2}}S_{h,h_{k}}(\omega)\\ \gamma&=\frac{\pi}{4c_{1}}S_{f,f_{t}}(\omega)\end{aligned}\right.\]
Ariaratnam and Tam[31] obtained the moment stability conditions for the equation they averaged.
#### 5.4.1 First moment stability
\[\zeta>\frac{3\pi}{8c_{1}}S_{f,f_{t}}(2\omega_{0}) \tag{40}\]
#### 5.4.2 Second moment stability
\[\zeta>\frac{\pi}{2c_{1}}S_{f,f_{t}}(2\omega_{0}) \tag{41}\]
#### 5.4.3 Condition on the PDF
By solving the Fokker-Planck equation, the stationary probability density function (PDF) was obtained as follows:
\[\begin{split}\mathcal{P}(A)&=2\gamma\nu\beta^{ \nu}\frac{A}{(\gamma A^{2}+\beta)^{\nu+1}}\\ &\text{where}\ \nu=\frac{1}{2}+\frac{\alpha}{\gamma}\end{split} \tag{42}\]
For the above PDF, the condition of stability is considered to be \(\nu>0\), yielding:
\[\zeta>\frac{\pi}{4c_{1}}S_{f,f_{t}}(2\omega_{0}) \tag{43}\]
The two results obtained here are identical to those obtained by Roberts [2] in later years. These results also demonstrate that the external moment term, \(h(t)\), does not affect the stability of the system.
Note that Eq. 43 exactly matches the Arnold-Dostal result (Eq. 38) if \(\zeta\) is sufficiently small.
### Results of the energy-based averaging method [3; 27]
Here, we present an approach using an energy-based stochastic averaging method. The target system is as follows:
\[\ddot{x}+2\zeta\dot{x}+(c_{1}+P(t))x=0 \tag{44}\]
Here, the Hamiltonian \(\mathcal{H}\) of the system is:
\[\mathcal{H}(x,\dot{x})=\frac{\dot{x}^{2}}{2}+\frac{c_{1}}{2}x^{2}. \tag{45}\]
Then, introducing a small bookkeeping parameter \(\epsilon\) for the averaging procedure (the damping is taken to be of order \(\epsilon\) and the parametric excitation of order \(\sqrt{\epsilon}\)), the following equations are obtained:
\[\begin{split}&\frac{\mathrm{d}}{\mathrm{d}t}x=\dot{x}\\ &\frac{\mathrm{d}}{\mathrm{d}t}\mathcal{H}=-2\epsilon\zeta\dot{x }^{2}-\sqrt{\epsilon}P(t)x\dot{x}\end{split} \tag{46}\]
The 1D SDE with respect to \(\mathcal{H}\) can be obtained as:
\[\mathrm{d}\mathcal{H}=(m_{1}(\mathcal{H})+m_{2}(\mathcal{H}))\mathrm{d}t+ \sigma(\mathcal{H})\mathrm{d}W \tag{47}\]
In the above equation, drift terms, i.e. \(m_{1}\) and \(m_{2}\), and diffusion term \(\sigma^{2}\) can be represented by:
\[\begin{cases}m_{1}(\mathcal{H})=-2\zeta\mathcal{H}\\ m_{2}(\mathcal{H})=k\mathcal{H}\\ \sigma^{2}(\mathcal{H})=k\mathcal{H}^{2}\end{cases} \tag{48}\]
where
\[k=\frac{1}{c_{1}}\int_{0}^{\infty}R\left(\tau\right)\cos 2\sqrt{c_{1}}\tau \mathrm{d}\tau \tag{49}\]
The FPK equation can be obtained from Eq. (47), and the PDF for \(\mathcal{H}\) is then obtained as:
\[\begin{split}\mathcal{P}(\mathcal{H})&=\frac{C}{k\mathcal{H}^{2}}\exp\left(2\int_{\mathcal{H}_{\mathrm{int}}}^{\mathcal{H}}\frac{-2\zeta\theta+k\theta}{k\theta^{2}}\mathrm{d}\theta\right)\\ &=C^{\prime}\mathcal{H}^{-\frac{4\zeta}{k}}\end{split} \tag{50}\]
Here, \(C\) and \(C^{\prime}\) denote normalization constants of the PDF. Using the transformation formula for probability densities, the probability density function for the roll amplitude is:
\[\mathcal{P}(A)=\mathcal{P}(\mathcal{H})\left|\frac{\mathrm{d}\mathcal{H}}{\mathrm{d}A}\right|=C^{\prime\prime}A^{\left(1-\frac{8\zeta}{k}\right)} \tag{51}\]
Here, \(C^{\prime\prime}\) also denotes the normalization constant of the PDF. From this, the asymptotic behavior of the probability density function can be categorized into three cases for the relation between \(k\) and the linear roll damping coefficient \(2\zeta\).
\[\begin{split}&(i)\quad 8\zeta>k>0\quad\lim_{A\to+0}\mathcal{P}(A)\to+ \infty\\ &(ii)\quad 8\zeta=k\quad\quad\lim_{A\to+0}\mathcal{P}(A)=\text{Const.}\\ &(iii)\quad k>8\zeta>0\quad\lim_{A\to+0}\mathcal{P}(A)\to 0 \end{split} \tag{52}\]
We now calculate the boundary equation from case \((ii)\) in Eq. (52). Expressing \(k\) in terms of the spectrum \(S_{P}(\omega)\) of \(P(t)\), we have:
\[k=\frac{\pi}{2c_{1}}S_{P}(2\sqrt{c_{1}}) \tag{53}\]
By utilizing the above relation, \(8\zeta=k\) becomes as follows.
\[S_{P}(2\sqrt{c_{1}})=\frac{16}{\pi}c_{1}\zeta. \tag{54}\]
However, even if \(\mathcal{P}(A)\to+\infty\) as \(A\to+0\), it cannot be concluded that \(\mathcal{P}(A)=0\) for \(A>0\), so this condition may be an estimation on the non-conservative side, as shown in Fig. 4. This is because the condition in question represents the boundary between the behaviors of the two types of PDF, Type A and Type B, shown in Fig. 3. This is the boundary of the bifurcation phenomenon of the probability density function; see, for example, page 506 of Arnold [44].
Thus, this condition does not directly indicate the stability of the system's origin.
Roberts [2] presented a conditional expression for the probability density function from the FPK equation with \(\lambda>0\), where \(\lambda\) is expressed as:
\[\lambda=\frac{\zeta-M}{M} \tag{55}\]
Here, \(M\) means:
\[M=\frac{\pi}{8c_{1}}S_{P}(2\sqrt{c_{1}}) \tag{56}\]
Therefore, the boundary proposed by Roberts is:
\[S_{P}(2\sqrt{c_{1}})=\frac{8}{\pi}c_{1}\zeta. \tag{57}\]
The above expression is obtained for the system with an external force. Furthermore, consider the following limit:
\[\lim_{\Lambda\rightarrow+0}\mathcal{P}(A)\rightarrow+\infty \tag{58}\]
By considering \(1+2\lambda<0\), the following condition can be obtained:
\[2\zeta<M \tag{59}\]
The boundary equation obtained from this method agrees with the boundary equation obtained from the energy-based stochastic averaging method.
### Results and discussions
This subsection compares the analytical conditions presented so far with numerical calculations. Fig. 4 shows the final results obtained for the ITTC spectrum. The subject ship is the C11 containership, and the graph compares numerical simulation results with the theoretical predictions of the stochastic averaging method, Infante's approach, and Arnold's approach.
#### 5.6.1 Infante's method
The results based on Infante's method show that the estimation is overly safe. This can be attributed to two factors. First, Infante's method uses Eq. (29) to set the matrix \(B\) related to the Lyapunov function; although the Lyapunov function yields a stability condition, it is only a sufficient condition and the chosen \(B\) may not be the optimal one, which likely leads to an overly safe estimation. Second, Infante transforms the equation using Schwarz's inequality, which is another factor contributing to the overly safe estimation.
#### 5.6.2 Arnold-Dostal's method
From the figure, the results based on Arnold-Dostal's method explain the numerical results relatively well. However, since this theory is based on the assumption that the noise intensity tends to zero, the estimation accuracy is expected to deteriorate as the intensity increases, as seen in the results for white noise. Nonetheless, for the present problem of vessel motion, the assumptions of Arnold-Dostal's method are considered to be valid to a great extent.
#### 5.6.3 Averaging method
The analytical condition based on the averaging method appears to be essentially an estimation on the non-conservative side. This is because, as mentioned above, even if \(\mathcal{P}(A)\to+\infty\) as \(A\to+0\), \(\mathcal{P}(A)=0\) for \(A>0\) is not necessarily guaranteed. Therefore, it is important to note that this is a condition describing the behavior of the probability density function near the origin, and it does not directly indicate the stability of the origin. Consequently, this method has the potential to be an estimation on the non-conservative side.
## 6 Mitigation of parametric rolling by rudder control
As Soder et al. [45] have demonstrated, there is the potential to reduce the risk of parametric rolling using rudder control. The rudder is an inherent piece of equipment of the ship, making it an attractive option as it does not require any new active actuators or additional anti-rolling tanks. Furthermore, the control aims to restore the asymptotic stability of the system's origin, which is the major difference from conventional roll mitigation controllers based on rudders or fin stabilizers. Thus, once the stability of the system's origin is restored, it is expected that roll motion will be completely eliminated in random head seas.
\[\ddot{x}_{1}(t)+2\zeta\dot{x}_{1}(t)+(c_{1}+f(t))x_{1}(t)=f_{\rm R}\delta_{\rm R} \tag{60}\]
Figure 3: Schematic view of two PDFs
Here, \(f_{R}\) denotes the hydrodynamic derivative of the roll moment with respect to the rudder angle \(\delta_{\text{R}}\). Suppose that \(\delta_{\text{R}}\) has a feedback form as follows:
\[\delta_{\text{R}}(t)=k_{1}x_{1}(t)+k_{2}x_{2}(t) \tag{61}\]
In this research, the delay of rudder action is ignored for the sake of brevity; thereby, the system becomes:
\[\begin{split}\ddot{x}_{1}(t)+2\zeta^{\prime}\dot{x}_{1}(t)+(c_{1} ^{\prime}+f(t))x_{1}(t)=0\\ \text{where}\begin{cases}2\zeta^{\prime}\equiv 2\zeta-k_{2} \\ c_{1}^{\prime}\equiv c_{1}-k_{1}\end{cases}\end{split} \tag{62}\]
If \(2\zeta^{\prime}\) and \(c_{1}^{\prime}\), i.e., \(k_{1}\) and \(k_{2}\), are selected such that the stability thresholds derived in the previous sections are satisfied, then parametric rolling can be completely prevented. The stability conditions demonstrated in the previous sections thus provide the control policy, and it can be said that control which increases the "apparent damping" \(2\zeta^{\prime}\equiv 2\zeta-k_{2}\) will lead to complete prevention of parametric rolling.
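As a hedged sketch of such a gain selection, the snippet below picks the velocity-feedback gain so that the closed-loop damping clears Roberts' boundary, Eq. (57), with a safety margin. The gains are assumed to already absorb the rudder moment derivative \(f_{\mathrm{R}}\), the spectrum is the same kind of illustrative placeholder used earlier, and rudder rate limits and delays are ignored as in Eq. (62).

```python
import numpy as np

def required_k2(zeta, c1, spectrum, margin=1.2, k1=0.0):
    """Largest k2 for which the closed loop satisfies Roberts' boundary, Eq. (57)."""
    c1_cl = c1 - k1                                     # closed-loop restoring coefficient
    zeta_req = margin * np.pi / (8.0 * c1_cl) * spectrum(2.0 * np.sqrt(c1_cl))
    # Closed-loop damping: 2*zeta' = 2*zeta - k2, so zeta' >= zeta_req requires
    return 2.0 * (zeta - zeta_req)

# Example with an illustrative placeholder spectrum: a lightly damped ship needs
# k2 below the returned (negative) value, i.e. feedback that adds apparent damping.
k2_max = required_k2(zeta=0.01, c1=1.0,
                     spectrum=lambda w: 0.02 * w ** 2 * np.exp(-(w - 2.0) ** 2))
```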
## 7 Conclusion
In this study, the stability of systems with stochastically varying parametric excitation terms is discussed. Estimation formulae to predict the onset of instability caused by the multiplicative noise are introduced. The equations presented here are primarily based on those proposed by previous researchers, with only minor modifications to make them applicable to the parametric rolling of ships. The results of Arnold, for example, show that the stability boundaries can be captured with relatively high accuracy, which is promising for practical use in the near future.
###### Acknowledgements.
This study was supported by a Grant-in-Aid for Scientific Research from the Japan Society for the Promotion of Science (JSPS KAKENHI Grant #22H01701). Further, this work was partly supported by the JASNAOE collaborative research program / financial support. The authors are also thankful to Enago (www.enago.jp) for reviewing the English language.
## Conflict of interest
The authors declare that they have no conflict of interest.
|
2306.09956 | Quantum Effects on the Synchronization Dynamics of the Kuramoto Model | The Kuramoto model serves as a paradigm for describing spontaneous
synchronization in a system of classical interacting rotors. In this study, we
extend this model to the quantum domain by coupling quantum interacting rotors
to external baths following the Caldeira-Leggett approach. Studying the
mean-field model in the overdamped limit using Feynman-Vernon theory, we show
how quantum mechanics modifies the phase diagram. Specifically, we demonstrate
that quantum fluctuations hinder the emergence of synchronization, albeit not
entirely suppressing it. We examine the phase transition into the synchronized
phase at various temperatures, revealing that classical results are recovered
at high temperatures while a quantum phase transition occurs at zero
temperature. Additionally, we derive an analytical expression for the critical
coupling, highlighting its dependence on the model parameters, and examine the
differences between classical and quantum behavior. | Anna Delmonte, Alessandro Romito, Giuseppe E. Santoro, Rosario Fazio | 2023-06-16T16:41:16Z | http://arxiv.org/abs/2306.09956v1 | # Quantum Effects on the Synchronization Dynamics of the Kuramoto Model
###### Abstract
The Kuramoto model serves as a paradigm for describing spontaneous synchronization in a system of classical interacting rotors. In this study, we extend this model to the quantum domain by coupling quantum interacting rotors to external baths following the Caldeira-Leggett approach. Studying the mean-field model in the overdamped limit using Feynman-Vernon theory, we show how quantum mechanics modifies the phase diagram. Specifically, we demonstrate that quantum fluctuations hinder the emergence of synchronization, albeit not entirely suppressing it. We examine the phase transition into the synchronized phase at various temperatures, revealing that classical results are recovered at high temperatures while a quantum phase transition occurs at zero temperature. Additionally, we derive an analytical expression for the critical coupling, highlighting its dependence on the model parameters, and examine the differences between classical and quantum behavior.
## I Introduction
Synchronization is an emergent collective phenomenon that can be observed in various physical systems, such as pendula [1], fireflies [2; 3], and neurons [4]. In classical mechanics, synchronization can occur when two or more oscillators interact with each other through a common coupling [5; 6]. Classical synchronization has been witnessed in systems that can operate in the quantum regime [7; 8; 9], which is nowadays accessible to experiments due to the recent advancements in the field of quantum technologies. For example, optomechanical devices [10; 11; 12] have allowed the coupling between light and mechanical motion to be controlled, leading to the possibility of implementing non-linear dynamics that can result in a synchronized motion.
These perspectives, with their possible applications in quantum technologies, have also posed a number of new questions on how to characterize and quantify synchronization in quantum systems. Synchronization becomes even more intriguing here, as it has to deal with quantum fluctuations and entanglement. Furthermore, it may be a useful resource for quantum technological applications [13; 14], for example in quantum thermal machines [15; 16; 17]. An intense theoretical activity has been aimed at quantifying synchronization in quantum systems, from continuous variables [18; 19; 20; 21; 22; 23; 24; 25; 26; 27] to discrete degrees of freedom [28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38]. Different measures of synchronization have been introduced, ranging from phase-space or correlation quantities [39; 40; 41; 42; 43; 44] to information-theoretical approaches [45; 40; 46].
This large body of work, however, did not address a seemingly natural question: how to extend a paradigmatic model of classical synchronization, the Kuramoto model [47], to study synchronization in the presence of quantum fluctuations [47]. This is the starting point for our work. The classical model describes the behaviour of interacting rotors with a non-linear dynamics, and exhibits a phase transition from a dynamically disordered phase, to an ordered one characterized by phase locking. Generalizations of the model [48; 49; 50; 51; 52; 53; 54] have allowed to explore and enrich the phase diagram by studying also the effects of noise, inertia, disorder and long-range interactions on the emergence of synchronization. Despite efforts to study and understand the emergence of collective behavior in a semiclassical regime, where quantum fluctuations become relevant and modify the system's dynamics [55], a systematic analysis of spontaneous synchronization in the fully quantum regime is still lacking. Can quantum synchronization emerge in a low-temperature regime or do quantum fluctuations dominate the system's behavior, preventing spontaneous synchronization?
In this paper, we address this problem by exploring whether the Kuramoto model can be extended to the quantum regime. We study the dynamics of the model from high to low temperature and show that synchronization survives quantum fluctuations and a quantum phase transition is still present in the zero-temperature limit.
The rest of the article is organized as follows: in Sec. II we describe the celebrated Kuramoto model with a path integral formalism. We also present its phase diagram and the most relevant results, focusing in particular on the generalized massive model. In Sec. III we propose a new quantum model, based on the classical model discussed in Sec. II. The limits in which the model is studied are discussed, and the order parameter to detect quantum synchronization is defined. In this section we also show that in the high-temperature regime, our model correctly reproduces the classical one. In Sec. III.1 we introduce the self-consistent equation to determine the order parameter. The self-consistent equation allows us to study the phase diagram of the quantum model in the overdamped regime, and in particular to determine analytically the critical coupling above which the system enters a synchronized phase. The results of this analysis are reported in Sec. IV. In the last section, Sec. V, we present some conclusions that can be drawn from our study.
## II Classical Kuramoto model
The Kuramoto model describes the behaviour of \(N\) interacting planar rotors. It exhibits two phases: an incoherent phase characterized by the rotors moving independently, and a synchronized phase in which the system behaves collectively. The mechanism underlying the synchronization process is _phase locking_, which causes the emergence of a fixed relation between the phases of the rotors.
The state of each rotor is characterised by a phase \(\theta_{i}\) and an angular velocity \(v_{i}\), with \(i=1,...,N\). The evolution of this state is determined by a characteristic frequency \(\omega_{i}\) and a damping \(\gamma\). The characteristic frequencies \(\omega_{i}\) are independent of each other and, throughout the paper, will be drawn from an even, unimodal frequency distribution \(g(\omega)\), with average \(\overline{\omega}=\left\langle\omega\right\rangle_{g(\omega)}=0\) and variance \(\sigma^{2}\).
We consider here the massive version of this model described by the following set of Langevin equations:
\[\left\{\begin{array}{c}\dot{\theta}_{i}=v_{i}\\ \\ m\dot{v}_{i}+m\gamma v_{i}=F[\mathbf{\theta};\omega_{i}]+\xi_{i}\end{array}\right. \tag{1}\]
Here, \(\mathbf{\theta}=(\theta_{1},...,\theta_{N})\) and
\[F[\mathbf{\theta};\omega_{i}]=\omega_{i}-\frac{J}{N}\sum_{j=1}^{N}\sin(\theta_{i} -\theta_{j})\;. \tag{2}\]
The noise in the Langevin equation is a Gaussian stochastic process with \(\left\langle\xi_{i}(t)\right\rangle=0\), \(\left\langle\xi_{i}(t)\xi_{j}(t^{\prime})\right\rangle=2D\delta_{i,j}\delta(t-t ^{\prime})\). The initial conditions \(\mathbf{\theta}(0)=\mathbf{\theta}_{0}\), \(\mathbf{v}(0)=\mathbf{v}_{0}\) are drawn independently for every rotor from a distribution \(\rho(\theta_{0},v_{0})\).
In the massless limit (\(m\gamma=\mathrm{const}\), \(m\to 0\)) the Langevin equation reduces to the Kuramoto-Sakaguchi model [56]
\[\dot{\theta}_{i}=F[\mathbf{\theta},\omega_{i}]+\xi_{i}\hskip 28.452756pt\forall i=1,..,N\;. \tag{3}\]
The synchronized behaviour is signaled by a non-zero value, in the stationary state, of the modulus of the complex order parameter \(\psi(t)\) defined as
\[\psi(t)=r(t)\,e^{i\varphi(t)}=\frac{1}{N}\sum_{j=1}^{N}e^{i\theta_{j}(t)}\;. \tag{4}\]
The modulus \(r\) of the order parameter \(\psi\) is bounded to the interval \(r(t)\in[0,1]\). It efficiently detects synchronization since it averages to zero if the rotors evolve incoherently. The phase \(\varphi\) corresponds to the phase of the collective motion of the rotors.
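For intuition, here is a minimal numerical sketch (not from the paper; all parameter values are purely illustrative) of the noisy massless dynamics, Eq. (3), with the order parameter of Eq. (4) monitored during the evolution:

```python
import numpy as np

def simulate_kuramoto(N=2000, J=3.0, D=0.5, sigma=1.0, dt=0.01, steps=5000, seed=0):
    """Euler-Maruyama integration of the noisy massless Kuramoto-Sakaguchi model, Eq. (3),
    with Gaussian characteristic frequencies of zero mean and variance sigma**2."""
    rng = np.random.default_rng(seed)
    omega = rng.normal(0.0, sigma, N)           # characteristic frequencies drawn from g(omega)
    theta = rng.uniform(0.0, 2 * np.pi, N)      # random initial phases
    r_history = []
    for _ in range(steps):
        psi = np.mean(np.exp(1j * theta))       # complex order parameter, Eq. (4)
        r, phi = np.abs(psi), np.angle(psi)
        # the all-to-all coupling of Eq. (2) rewritten exactly via the order parameter
        drift = omega - J * r * np.sin(theta - phi)
        noise = np.sqrt(2 * D * dt) * rng.normal(size=N)
        theta = theta + drift * dt + noise
        r_history.append(r)
    return np.array(r_history)

# stationary r stays of order 1/sqrt(N) below the critical coupling and saturates above it
print(simulate_kuramoto(J=1.0)[-500:].mean(), simulate_kuramoto(J=5.0)[-500:].mean())
```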
In the thermodynamic limit, the definition of the order parameter is regarded as an average over the frequency distribution, the noise distribution and the distribution of the initial conditions, namely
\[\psi(t)=\int_{-\infty}^{\infty}\!d\omega\,g(\omega)\left\langle e^{i\theta(t; \omega,\xi)}\right\rangle_{\xi,\theta_{0},v_{0}}\;, \tag{5}\]
where the phase \(\theta(t;\omega,\xi)\) satisfies the Langevin equation (1) with \(F[\mathbf{\theta};\omega]=F[\theta;\omega,\psi]=\omega-Jr\sin(\theta-\varphi)\). Notice that, to formally decouple the evolution of the rotors, we have used \(\frac{J}{N}\sum_{j=1}^{N}\cos(\theta_{i}-\theta_{j})=\frac{J}{2}e^{i\theta_{i}}\left(\frac{1}{N}\sum_{j=1}^{N}e^{-i\theta_{j}}\right)+\mathrm{c.c.}=Jr\cos(\theta_{i}-\varphi).\) This results in (5) becoming a self-consistent equation for the order parameter.
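Written out step by step, the decoupling identity used above is simply

\[\frac{J}{N}\sum_{j=1}^{N}\cos(\theta_{i}-\theta_{j})=\frac{J}{2}\,e^{i\theta_{i}}\left(\frac{1}{N}\sum_{j=1}^{N}e^{-i\theta_{j}}\right)+\mathrm{c.c.}=\frac{J}{2}\left(r\,e^{i(\theta_{i}-\varphi)}+r\,e^{-i(\theta_{i}-\varphi)}\right)=Jr\cos(\theta_{i}-\varphi)\;,\]

so that each rotor couples only to the collective variables \(r\) and \(\varphi\).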
For further convenience it is useful to express the average that defines the order parameter in a path integral form [50]. Discretizing _a la Ito_, one can express (5) as (see Appendix A for details):
\[\psi(t)=\int_{0}^{2\pi}\!d\theta\,e^{i\theta}\int_{-\infty}^{\infty}\!dv\int_ {-\infty}^{\infty}\!d\omega\,g(\omega)\,\rho(\theta,v,t;\omega,\psi)\;. \tag{6}\]
Here \(\rho(\theta,v,t;\omega,\psi)\) is a probability distribution that quantifies the probability for the \(i^{th}\) rotor to have phase and angular velocity \((\theta,v)\) at time \(t\), and can be expressed as [57; 58; 59], [60][Chap.4].
\[\rho(\theta,v,t;\omega,\psi)= \,\mathcal{N}\int_{0}^{2\pi}\!d\theta_{0}\int_{-\infty}^{\infty}\!dv_{0}\int_{\theta(0)=\theta_{0}}^{\theta(t)=\theta}\!\!\mathcal{D}\theta\int_{v(0)=v_{0}}^{v(t)=v}\!\!\!\mathcal{D}v\,\delta(\dot{\theta}(\tau)-v(\tau))\,e^{iS_{cl}[\theta(\tau),v(\tau)]}\,\rho(\theta_{0},v_{0}), \tag{7}\]
Figure 1: Representation of the classical Kuramoto model for \(N=5\). A set of rotors (red dots on dashed circles) evolve with independent frequencies. Their mutual interactions (green lines) induce a phase transition to synchronised dynamics.
where the classical action is given by:
\[S_{cl}=\frac{i}{4D}\int_{0}^{t}dt^{\prime}\Big{(}m\dot{v}(t^{\prime})+m\gamma v(t ^{\prime})-F[\theta(t^{\prime});\omega,\psi(t^{\prime})]\Big{)}^{2}\;. \tag{8}\]
Solving the self-consistent equation for the order parameter allows one to gain knowledge about the phase transition. The behavior of the system is determined by the interplay between the coupling strength \(J\), the width of the frequency distribution, \(\sigma\), and that of the noise distribution, \(D\). In general, the phase transition for the model described by Eq. (1) is first-order, but it becomes a continuous phase transition in the overdamped limit [5; 50].
An interesting case is the massless model, for which the critical coupling is known analytically to be
\[J_{C}^{cl}=2\Big{(}\int_{-\infty}^{\infty}\!d\omega\,g(\omega)\,\frac{D}{D^{2}+\omega^{2}}\Big{)}^{-1}\;. \tag{9}\]
From this formula, the effects of noise and the width of the frequency distribution are evident. Noise hinders the phase-locking mechanism, and so does increasing the variance of the frequency distribution. In the case of the noiseless model, the critical coupling becomes simply \(J_{C}^{cl}=\frac{2}{\pi g(0)}\).
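As a quick numerical check (a sketch, not from the paper), Eq. (9) can be evaluated for a Gaussian \(g(\omega)\) and compared with the noiseless limit \(2/\pi g(0)\):

```python
import numpy as np
from scipy.integrate import quad

def critical_coupling_classical(D, sigma=1.0):
    """Classical critical coupling of the massless model, Eq. (9), for a Gaussian g(omega)."""
    g = lambda w: np.exp(-w**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)
    integrand = lambda w: g(w) * D / (D**2 + w**2)
    # split at w = 0 so the narrow Lorentzian peak sits at an integration endpoint
    integral = quad(integrand, -np.inf, 0.0)[0] + quad(integrand, 0.0, np.inf)[0]
    return 2.0 / integral

sigma = 1.0
g0 = 1.0 / np.sqrt(2.0 * np.pi) / sigma
print(critical_coupling_classical(D=0.01, sigma=sigma))   # small-noise value
print(2.0 / (np.pi * g0))                                  # noiseless limit 2 / (pi g(0))
```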
## III Quantum Kuramoto model
Our goal is now to construct a model that reproduces the classical one in the high-temperature limit. Since the classical Kuramoto-Sakaguchi model is characterized by noise and dissipation, it cannot be obtained as a classical limit of a quantum Hamiltonian model. Thus, we introduce dissipation in the quantum regime via a Caldeira-Leggett [61] model made of \(N\) interacting rotors, each one linearly coupled to a different and independent bath of harmonic oscillators. The baths are assumed to be identical.
The Lagrangian describing this model is
\[\mathscr{L}_{\rm TOT}=\mathscr{L}_{S}+\mathscr{L}_{B}+\mathscr{L}_{SB}\;. \tag{10}\]
\(\mathscr{L}_{S}\) is the Lagrangian of the system of rotors
\[\mathscr{L}_{S}=\sum_{i=1}^{N}\left[\frac{m\dot{\theta}_{i}^{2}}{2}+\omega_{i }\theta_{i}+\frac{J}{2N}\sum_{j\neq i}\cos\left(\theta_{i}-\theta_{j}\right) \right]\;. \tag{11}\]
The characteristic frequencies \(\omega_{i}\) are once again drawn from a distribution \(g(\omega)\) having the same characteristics as in the classical case. \(\mathscr{L}_{B}\) is the baths' Lagrangian:
\[\mathscr{L}_{B}=\sum_{i=1}^{N}\mathscr{L}_{B_{i}}=\sum_{i=1}^{N}\sum_{j_{i}=1 }^{M}\left[\frac{M\dot{x}_{j_{i}}^{2}}{2}-\frac{1}{2}M\Omega_{j_{i}}^{2}x_{j_ {i}}^{2}\right]\;. \tag{12}\]
\(\mathscr{L}_{SB}\) is the interaction Lagrangian
\[\mathscr{L}_{SB}=\sum_{i=1}^{N}\mathscr{L}_{SB_{i}}=C\sum_{i=1}^{N}\theta_{i} \sum_{j_{i}=1}^{M}x_{j_{i}}\;. \tag{13}\]
In order to decouple the equations, as in the classical case, it is convenient to work in the thermodynamic limit with a mean-field model. We assume \(\frac{1}{N}\sum_{j=1}^{N}e^{i\theta_{j}}=\left\langle\frac{1}{N}\sum_{j=1}^{N }e^{i\theta_{j}}\right\rangle+\delta\psi\) with \(\delta\psi\) infinitesimal and define \(\psi=re^{i\varphi}=\left\langle\frac{1}{N}\sum_{j=1}^{N}e^{i\theta_{j}}\right\rangle\) where the quantum averages are taken over the rotors' reduced density matrix. The mean-field Lagrangian \(\mathscr{L}\) becomes up to first order in \(\delta\psi\)
\[\mathscr{L}=\sum_{i=1}^{N}\left(\mathscr{L}_{S_{i}}+\mathscr{L}_{B_{i}}+ \mathscr{L}_{SB_{i}}\right)\,, \tag{14}\]
with \(\mathscr{L}_{S_{i}}=\frac{m\dot{\theta}_{i}^{2}}{2}+\omega_{i}\theta_{i}+Jr \cos(\theta_{i}-\varphi)\). Notice that the mean-field model is described by a Lagrangian decoupled in a sum of terms depending only on the \(i^{th}\) rotor. From now on for brevity
\[V[\theta]=-\omega\theta-Jr\cos\left(\theta-\varphi\right)\;. \tag{15}\]
In order to reproduce the friction term in Eq. (1) in the classical limit, the Caldeira-Leggett model requires an Ohmic bath [61]. We therefore demand that, for each and every bath, the distribution of frequencies for the collection of harmonic oscillators is
\[\sum_{i=1}^{M}\frac{C^{2}}{2M\Omega_{i}}\,\delta(\Omega_{i}-\nu)\,\xrightarrow[ M\to\infty]{}\,\frac{m\gamma\nu}{\pi}\Theta(\omega_{c}-\nu)\;, \tag{16}\]
Figure 2: Representation of the quantum Kuramoto model. Same as Fig.1, with the rotors as quantum systems. The coupling to independent identical quantum baths is explicitly shown as an orange line connecting to a set of harmonic oscillators (blue boxes)
where \(\Theta(\cdot)\) is the Heaviside function and \(\omega_{c}\) is a cutoff for the frequencies of the baths' harmonic oscillators.
A few comments about the definition domain of the phases \(\theta_{i}\) are in order. Two choices are possible: the phases of the rotors can be defined over the circle, i.e. \(\theta_{i}\in[0,2\pi]\), or they can be defined over the line \(\theta_{i}\in\mathbb{R}\). The difference lies in identifying or not the position \(\theta\) with \(\theta+2n\pi\), \(n\in\mathbb{Z}\). The Langevin equation that describes the classical model in Eq. (1) is not dependent on the choice of the phases' domain since it is inherently \(2\pi\) periodic. For the quantum model we define \(\theta_{i}\in\mathbb{R}\). The tilted potential term \(\omega\theta\) and the linear coupling with the bath are compatible with this choice [62][Chap.2]. We note that with this choice of phase domain, the Lagrangian in Eq. (11) can be regarded as the Lagrangian of a resistively shunted Josephson Junction [63].
In order to understand if the system described by the mean-field Lagrangian (14) can sustain synchronization, an order parameter should be defined. In analogy with the classical case, we define
\[\psi(t)=r(t)\,e^{i\varphi(t)}=\frac{1}{N}\sum_{j=1}^{N}\mathrm{Tr}\big{\{}e^{i \theta_{j}}\,\rho_{S}(t)\big{\}} \tag{17}\]
where \(\rho_{S}(t)\) is the reduced density matrix of the system of rotors evolved to time \(t\) after tracing out the baths' degrees of freedom. Notice that, once again, in the mean-field approximation this is a self-consistent equation for the order parameter, since the density matrix evolves with a Lagrangian dependent on \(\psi\).
The initial state of the evolution is chosen to be separable in the rotors; the mean-field approximation and the choice of independent baths allow the density matrix to remain separable in the rotors at all times. For this reason, from now on, the discussion will focus only on the evolution of the density matrix \(\rho_{SB}^{(i)}\) of a single rotor and its own bath. The initial density matrix for the \(i^{th}\) rotor and the bath is also assumed to be separable: \(\rho_{SB}^{(i)}=\rho_{i}\otimes\rho_{B}\).
The evolution of the reduced density matrix of a single rotor can be obtained through Feynman-Vernon method [64; 65], appropriate to treat quantum systems with a classical limit given by a stochastic equation of motion [66; 67; 68; 69]. Applying the Feynman-Vernon method along the lines of Ref. [61], the density matrix element \(\rho_{i}(\theta_{1},\theta_{2})=\langle\theta_{2}|\hat{\rho}_{i}|\theta_{1}\rangle\) is given at time \(t\) by
\[\rho_{i}(\theta_{1},\theta_{2},t)=\int_{-\infty}^{\infty}\!d\theta_{1}^{ \prime}\int_{-\infty}^{\infty}\!d\theta_{2}^{\prime}\int_{\theta_{1}(0)= \theta_{1}^{\prime}}^{\theta_{1}(t)=\theta_{1}}{\cal D}\theta_{1}\,\int_{ \theta_{2}(0)=\theta_{2}^{\prime}}^{\theta_{2}(t)=\theta_{2}}{\cal D}\theta_ {2}\,e^{\frac{i}{\hbar}(S_{0}[\theta_{1}]-S_{0}[\theta_{2}])}\,\mathscr{F}[ \theta_{1},\theta_{2}]\,\rho_{i}(\theta_{1}^{\prime},\theta_{2}^{\prime},0) \tag{18}\]
where \(S_{0}=\int_{0}^{t}dt^{\prime}\mathscr{L}_{S_{i}}(t^{\prime})\) for the \(i^{th}\) rotor, and \(\mathscr{F}[\theta_{1},\theta_{2}]\) is the Feynman-Vernon's influence functional that accounts for the effects of the interaction with the bath.
We can regard the previous equation as an evolution of the density matrix due to an effective action \(S_{\mathrm{eff}}[\theta_{1},\theta_{2}]=S_{0}[\theta_{1}]-S_{0}[\theta_{2}]- i\hbar\ln\mathscr{F}[\theta_{1},\theta_{2}]\). Switching to the more convenient variables \(\theta_{+}=(\theta_{1}+\theta_{2})/2\), \(\theta_{-}=\theta_{1}-\theta_{2}\):
\[S_{\mathrm{eff}}[\theta_{+},\theta_{-}]=\int_{0}^{t}\!dt^{\prime}\left(m\dot{\theta}_{+}\dot{\theta}_{-}-\sum_{q=\pm 1}(-1)^{q}V[\theta_{+}+q\tfrac{\theta_{-}}{2}]-m\gamma\theta_{-}\dot{\theta}_{+}+\frac{iD}{\hbar}\int_{0}^{t}\!dt^{\prime\prime}\,\theta_{-}(t^{\prime})K(t^{\prime}-t^{\prime\prime})\theta_{-}(t^{\prime\prime})\right)\,. \tag{19}\]
In the previous equation \(D=m\gamma k_{B}T\), and \(K(t)\) is given by the Fourier transform \(K(t)=\int_{-\omega_{c}}^{\omega_{c}}\frac{d\nu}{2\pi}{\cal K}(\nu)e^{-i\nu t}\), with
\[{\cal K}(\nu)=\frac{\hbar\nu}{2k_{B}T}\coth\left(\frac{\hbar\nu}{2k_{B}T} \right)\,. \tag{20}\]
The temperature \(T\) is set by the bath, and the signatures of its interaction with the rotor are the friction term \(m\gamma\theta_{-}\dot{\theta}_{+}\) and the imaginary term containing the memory kernel \(K(t)\). These terms are given, respectively, by the imaginary and real part of the bath's correlation function (\(\hbar\alpha(t^{\prime}-t^{\prime\prime})\) in the notation of Ref. [61]). In Eq. (19), we have neglected a Lamb-shift term in the energy originating from the influence functional, which does not affect the dynamics of the model [66].
Before proceeding with the calculation of the order parameter, it is worth noticing how the classical dynamics is recovered in the high temperature limit of the quantum model. In the infinite temperature limit, the memory kernel becomes \(K(t)\xrightarrow{T\to\infty}\delta(t)\) and the imaginary damping term prevents \(\theta_{-}\) from varying [66], [62][Chap.5]. This means that, starting from a "classical" diagonal state with \(\theta_{-}(0)=0\), the density matrix will always remain diagonal (the off-diagonal terms are exponentially suppressed). Moreover, expanding up to first order in \(\theta_{-}(t)\), the evolution becomes
\[\rho_{i}(\theta_{+},\theta_{-},t)=\int_{-\infty}^{\infty}\!d\theta_{+}^{ \prime}\int_{\theta_{+}(0)=\theta_{+}^{\prime}}^{\theta_{+}(t)=\theta_{+}}\!{ \cal D}\theta_{+}\int_{\theta_{-}(0)=\theta_{-}^{\prime}=0}^{\theta_{-}(t)=0} \!{\cal D}\theta_{-}\,e^{\frac{i}{\hbar}S_{\mathrm{eff}}[\theta_{+},\theta_{- }]}\,\rho_{i}(\theta_{+}^{\prime},\theta_{-}^{\prime}=0,0)\;, \tag{21}\]
with the effective action being
\[S_{\rm eff}[\theta_{+},\theta_{-}]=\int_{0}^{t}\!dt^{\prime}\Bigg{(}\frac{iD}{ \hbar}\theta_{-}^{2}(t^{\prime})-\theta_{-}(t^{\prime})\Big{(}m\ddot{\theta}_{ +}+m\gamma\dot{\theta}_{+}-\omega+Jr\sin(\theta_{+}-\varphi)\Big{)}\Bigg{)}\;, \tag{22}\]
where we have used the fact that, at first order in \(\theta_{-}\), \(V[\theta]\approx\theta_{-}F[\theta_{+}]\). With the change of variables \(\frac{\theta_{-}(\tau)}{\hbar}\rightarrow\eta(\tau)\), one recovers the classical effective action for the stochastic process (1) shown in Eq. (18) after the integration of the \(\delta\)-function in the angular velocity. Thus, the quantum model correctly reproduces the massive Kuramoto-Sakaguchi model in the infinite temperature limit.
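As a side check (a minimal numerical sketch, not part of the derivation), the statement that \(K(t)\rightarrow\delta(t)\) at high temperature corresponds to the spectrum \(\mathcal{K}(\nu)\) of Eq. (20) becoming flat when \(\hbar\nu\ll k_{B}T\):

```python
import numpy as np

def kernel_spectrum(nu, hbar_over_kT):
    """K(nu) = x * coth(x) with x = hbar*nu / (2 kB T), cf. Eq. (20)."""
    x = 0.5 * hbar_over_kT * np.asarray(nu, dtype=float)
    out = np.ones_like(x)                     # limit x -> 0 gives 1 (white-noise spectrum)
    mask = np.abs(x) > 1e-12
    out[mask] = x[mask] / np.tanh(x[mask])
    return out

nu = np.array([-10.0, -1.0, 0.0, 1.0, 10.0])
print(kernel_spectrum(nu, hbar_over_kT=1e-3))  # ~1 everywhere: classical, delta-correlated limit
print(kernel_spectrum(nu, hbar_over_kT=2.0))   # grows with |nu|: quantum colored noise
```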
### The self-consistent equation for the order parameter
In order to obtain a self-consistent equation for the order parameter, we start by noticing that the reduced density matrix of the rotors at time \(t\) is given by \(\rho_{S}(t)=\bigotimes_{i=1}^{N}\rho_{i}(t)\). Eq. (17) then takes the form
\[\psi(t) = \frac{1}{N}\sum_{j=1}^{N}\,{\rm Tr}\big{\{}e^{i\theta_{j}}\,\rho _{j}(t)\big{\}}\bigotimes_{k\neq j,k=1}^{N}{\rm Tr}\{\rho_{k}\}\] \[= \frac{1}{N}\sum_{j=1}^{N}{\rm Tr}\big{\{}e^{i\theta_{j}}\,\rho_{j }(t)\big{\}}\;.\]
In the thermodynamic limit the previous expression corresponds to an average over the frequency distribution \(g(\omega)\). Denoting as \(\rho(t)\) the density matrix of a single rotor we have:
\[\psi(t)=\int_{-\infty}^{\infty}d\omega\,g(\omega)\int_{-\infty}^{\infty}\!d \theta_{+}\,e^{i\theta_{+}}\,\rho(\theta_{+},\theta_{-}=0,t)\;. \tag{24}\]
To get a more explicit expression of the self-consistent equation for the order parameter, the path integration in Eq. (18) should be performed. The specific form of the potential (15) appearing in the effective action does not allow for a general calculation of the path integral. However, the knowledge of the behaviour of the classical model helps: in the overdamped limit the classical phase transition to the synchronized phase is of second order. We expect that the quantum phase transition in the overdamped limit is second order too. If this is the case, assuming \(J\sim J_{C}\), we can perform a perturbative expansion in \(r\) for \(r\sim 0\) of Eq. (24) to gain insight into the quantum dynamics. Thus, we will hereafter focus explicitly on the overdamped regime \(\frac{m\gamma}{\hbar}\gg 1\) of the model.
The perturbative expansion of terms in the evolution of the density matrix in Eqs. (18),(19) involves only the approximation
\[\exp\Biggl{\{}\frac{-iJr}{\hbar}\int_{0}^{t}dt^{\prime}\sum_{q= \pm}q\cos\Bigl{(}\theta_{+}(t^{\prime})-q\frac{\theta_{-}(t^{\prime})}{2}- \varphi(t^{\prime})\Bigr{)}\Biggr{\}}\sim 1 - \frac{iJr}{\hbar}\int_{0}^{t}ds\cos\Bigl{(}\theta_{+}(s)- \frac{\theta_{-}(s)}{2}-\varphi(s)\Bigr{)}\] \[+ \frac{iJr}{\hbar}\int_{0}^{t}ds\cos\Bigl{(}\theta_{+}(s)+\frac{ \theta_{-}(s)}{2}-\varphi(s)\Bigr{)}\;.\]
The _Ansatz_ \(\varphi(t)=0\) and \(r\sim 0\) constant yields, for \(t\rightarrow\infty\),
\[r=rJ_{C}\lim_{t\rightarrow\infty}\,\int_{-\infty}^{\infty}\!d\theta_{+}e^{i\theta_{+}}\rho^{\prime}(\theta_{+},0,t) \tag{25}\]
where
\[\rho^{\prime}(\theta_{+},0,t) = \frac{-i}{2\hbar}\sum_{c,c^{\prime}=\pm 1}c^{\prime}\int_{0}^{t} ds\int_{-\infty}^{\infty}d\theta_{+}^{\prime}\int_{-\infty}^{\infty}d \theta_{-}^{\prime}\int_{\theta_{+}(0)=\theta_{+}^{\prime}}^{\theta_{+}(t)= \theta_{+}}{\cal D}\theta_{+}\int_{\theta_{-}(0)=\theta_{-}^{\prime}}^{\theta _{-}(t)=0}{\cal D}\theta_{-} \tag{26}\] \[\exp\biggl{\{}\frac{i}{\hbar}S_{\rm eff}^{\prime}[\theta_{+}, \theta_{-};s,t]_{c,c^{\prime}}\biggr{\}}\,\rho(\theta_{+}^{\prime},\theta_{-} ^{\prime},0)\;,\]
and
\[S^{\prime}_{\rm eff}[\theta_{+},\theta_{-};s,t]_{c,c^{\prime}} = S^{\prime}_{Re}[\theta_{+},\theta_{-};s,t]_{c,c^{\prime}}+i\,S^{ \prime}_{Im}[\theta_{-};s,t]_{c,c^{\prime}} \tag{27}\] \[= \int_{0}^{t}dt^{\prime}\biggl{\{}m\,\dot{\theta}_{+}\dot{\theta} _{-}-m\gamma\,\theta_{-}\dot{\theta}_{+}+\omega\,\theta_{-}\] \[\qquad\qquad+i\hbar c\,\theta_{+}(t^{\prime})\delta(t^{\prime}-s )-\frac{i\hbar c^{\prime}}{2}\,\theta_{-}(t^{\prime})\delta(t^{\prime}-s)+ \frac{iD}{\hbar}\int_{0}^{t}dt^{\prime\prime}\theta_{-}(t^{\prime})K(t^{\prime }-t^{\prime\prime})\theta_{-}(t^{\prime\prime})\biggr{\}}\;.\]
The details of the derivation of Eq. (27) can be found in Appendix B.
If Eq. (25) admits a solution for \(r\neq 0\), then the model admits a phase transition to a synchronized state in the overdamped limit, and the resulting \(J_{C}\) gives the value of the critical coupling for the phase transition. Notice that the first order expansion does not contain information about the order of the phase transition, which could only be determined through a third-order expansion. The goal is now to find the critical coupling and to study its dependence on the parameters that characterize the system: \(m,\gamma,k_{B}T\), and the variance \(\sigma\) of the even unimodal frequency distribution \(g(\omega)\).
The expansion to first order in the self-consistent equation has produced a Gaussian path integral that can now be performed [68; 69]. The calculation can be performed via a decomposition of the effective action (27) in its real and imaginary part \(S^{\prime}_{\rm eff}=S^{\prime}_{Re}[\theta_{+},\theta_{-}]+iS^{\prime}_{Im}[ \theta_{-}]\), as can be seen from the previous equation.
The calculation, reported in Appendix B, produces the following result for the first order expansion of the self-consistent equation:
\[r = rJ_{C}\lim_{t\to\infty}\int_{-\infty}^{\infty}\!\!d\omega\,g( \omega)\int_{-\infty}^{\infty}\!\!d\theta_{+}\,e^{i\theta_{+}}\left(\frac{-i}{ 2\hbar}\right)\,\frac{m\gamma}{2\pi\hbar}\sum_{c,c^{\prime}=\pm 1}c^{ \prime}\int_{0}^{t}ds\int_{-\infty}^{\infty}d\theta^{\prime}_{+}\int_{-\infty} ^{\infty}\!\!d\theta^{\prime}_{-}\,\rho(\theta^{\prime}_{+},\theta^{\prime}_{ -},0) \tag{28}\] \[\qquad\qquad\exp\biggl{\{}-\,\frac{1}{\hbar}S^{\prime}_{Im}[ \tilde{\theta}_{-};s,t]_{c,c^{\prime}}-\frac{i}{\hbar}m\,\theta^{\prime}_{-} \dot{\tilde{\theta}}_{+}(0)+ic\,\tilde{\theta}_{+}(s)\biggr{\}}\]
where \(\tilde{\theta}_{\pm}\), are the solutions of the equations
\[\left\{\begin{array}{ll}\ddot{\tilde{\theta}}_{-}(t^{\prime})&- \gamma\dot{\tilde{\theta}}_{-}(t^{\prime})-\frac{\hbar c}{m}\delta(t^{\prime }-s)=0\\ \ddot{\tilde{\theta}}_{+}(t^{\prime})&+\gamma\dot{\tilde{\theta}}_{+}(t^{\prime })-\frac{\omega}{m}+\frac{\hbar c^{\prime}}{2m}\delta(t^{\prime}-s)=0\end{array} \right.. \tag{29}\]
The explicit solution of these equations is reported in appendix C.
The term \(c=1\) in the sum does not contribute because it gets completely damped by the imaginary part of the action. From now on we will consider only the term \(c=-1\). The expressions for \(\dot{\tilde{\theta}}_{+}(0)\) and \(\tilde{\theta}_{+}(s)\) can be deduced from the result in appendix C. It is convenient to proceed by calculating the integration over \(\theta_{+}\) first. Isolating the integration and all the terms of (28) involving \(\theta_{+}\), we find
\[\frac{m\gamma}{2\pi\hbar}\int_{-\infty}^{\infty}\!\!d\theta_{+}\,e^{i\theta_{+}\left(\frac{m\gamma}{\hbar}\theta^{\prime}_{-}-e^{-\gamma s}\right)}=\delta\left(\theta^{\prime}_{-}-\frac{\hbar}{m\gamma}e^{-\gamma s}\right)\;. \tag{30}\]
This constrains the initial value of \(\theta_{-}\) as determined by the delta function. In the massless limit (\(\gamma\to\infty\)) the value that the delta function selects is \(\theta^{\prime}_{-}=0\), while in the general overdamped limit \(\frac{m\gamma}{\hbar}\gg 1\) (\(m\) and \(\gamma\) finite) we select \(\theta^{\prime}_{-}\sim 0\). Performing now the integration over \(\theta^{\prime}_{-}\) to eliminate the delta function and denoting \(\tilde{\theta}_{-}(t^{\prime})|_{\theta^{\prime}_{-}=\frac{\hbar}{m\gamma}e^{-\gamma s}}\) as \(\theta^{*}_{-}(t^{\prime};s,t)\), the self-consistent equation becomes
\[r=\frac{-iJ_{C}r}{2\hbar}\lim_{t\to\infty}\int_{-\infty}^{\infty}\!\!d\omega\, g(\omega)\,\int_{0}^{t}\!ds\,\sum_{c^{\prime}=\pm 1}c^{\prime}\exp\biggl{\{}ic^{ \prime}\frac{\hbar}{2m\gamma}-i\,\frac{\omega(t-s)}{m\gamma}-\,\frac{1}{ \hbar}S^{\prime}_{Im}[\theta^{*}_{-};s,t]\biggr{\}} \tag{31}\]
where we have used the fact that in the overdamped limit
\[\int_{-\infty}^{\infty}\!\!d\theta^{\prime}_{+}\,\rho(\theta^{\prime}_{+}, \frac{\hbar}{m\gamma}\,e^{-\gamma s})\sim\int_{-\infty}^{\infty}\!\!d\theta^{ \prime}_{+}\,\rho(\theta^{\prime}_{+},0)={\rm Tr}\{\rho(t=0)\}=1\;.\]
This approximation becomes exact in the massless limit. Also notice that the sine, \(\sin\left(\frac{\hbar}{2m\gamma}\right)\sim\frac{\hbar}{2m\gamma}\), has a well-defined (positive) sign in the overdamped limit, and that the expression does not depend on the initial state of the system.
We conclude that a non-trivial solution to the self-consistent equation exists (if the time limit exists), thus a phase transition happens at \(J_{C}\).
## IV Results
From the analysis in Section III.1, and Eq. (31) in particular, it follows that the quantum Kuramoto model in the overdamped limit admits a transition to a synchronized phase with a critical coupling given by
\[J_{C}=2m\gamma\frac{1}{\lim_{t\rightarrow\infty}\int_{-\infty}^{\infty}d\omega\,g(\omega)\,\int_{0}^{t}\!ds\,e^{-i\frac{\omega(t-s)}{m\gamma}-\frac{1}{\hbar}S^{\prime}_{Im}[\theta^{*}_{-};s,t]}} \tag{32}\]
In the limit of high temperature \(\hbar\gamma\beta\ll 1\) and vanishing mass, the critical value (9) is recovered. In this limit \(K(\tau)\rightarrow\delta(\tau)\), and \(\theta^{*}_{-}\) has non-zero value only on the time interval \([s,t]\), over which \(\theta^{*}_{-}(t^{\prime})=\frac{\hbar}{m\gamma}\) (see Appendix C). This implies that \(S^{\prime}_{Im}[\theta^{*}_{-}]=\hbar\,\frac{k_{B}T}{m\gamma}\,(t-s)\), thus
\[J_{C}\bigg{|}_{\hbar\gamma\beta\ll 1}=2\left(\int_{-\infty}^{\infty}d\omega\,g(\omega)\,\frac{k_{B}T}{(k_{B}T)^{2}+\omega^{2}}\right)^{-1}=J_{C}^{cl}(k_{B}T)\;.\]
The last result corresponds to the classical result reported in the literature for \(D=k_{B}T\) [5; 50]. It is important to keep in mind that it holds only in the classical regime \(\hbar\gamma\beta\ll 1\); nonetheless, we will extend this formula to low temperatures in order to provide a comparison with the behaviour of the quantum result in the following discussion.
The dependence of the critical coupling on temperature and the comparison with its classical counterpart are reported in Fig. 3, where a Gaussian frequency distribution for the characteristic frequencies has been chosen to obtain the plots. From the analysis of panels (a) and (b), the existence of three regimes emerges. In the _classical regime_, defined by \(\frac{k_{B}T}{\hbar\gamma}\gtrsim 1\), the classical results are recovered (cf. the pink line corresponding to the classical critical coupling for the overdamped massless model). The critical coupling in this latter case is given by Eq. (9), and becomes \(J_{C}^{cl}=2k_{B}T\) for \(k_{B}T\gg\sigma\). The plot shows that this limit is also reached asymptotically by the quantum results.
A _semiclassical region_ is met when decreasing the temperature. Comparing the result in this region with the classical one extended to lower temperature, we notice that the quantum results start to deviate quantitatively from the classical ones, although the behaviour of the quantum and classical critical coupling remains qualitatively similar. Notably, the quantum critical coupling is consistently higher than the classical one, as shown in panel (c). This is due to the emergence of quantum fluctuations, which, as an extra source of noise, make synchronization harder to establish [55]. The behaviour of \(J_{C}\) in this region can be understood through the expansion in \(\theta_{-}\sim 0\) already performed in Eq. (21) to recover the classical limit. Going beyond the first order expansion therein, a semi-classical regime is obtained from a third-order expansion [66].
The latter yields a potential term:
\[-Jr\sum_{s=\pm 1}(-1)^{s}\cos\left(\theta_{+}+s\frac{\theta_{-}}{2}\right) \sim Jr\left(1+\Delta_{J}\right)\theta_{-}\sin\left(\theta_{+}-\varphi\right)\;, \tag{33}\]
with \(\Delta_{J}\propto\frac{\hbar}{m\gamma}\frac{\hbar\gamma}{k_{B}T}\) considering \(\gamma\) to set the relevant time scale.
The above expansion tells us that in the semiclassical regime the motion happens in a potential of the same form as the classical one, but renormalized by \(\Delta_{J}\). This yields the first deviation from the classical behaviour.
The _quantum region_, characterised by \(\frac{k_{B}T}{\hbar\gamma}\sim 0\), shows the most significant deviations from the classical result. In this region quantum fluctuations make the behaviour of the two critical couplings different, and the deviation appears to be stronger than linear in the decrease of temperature.
This is particularly evident from panel (c), showing the ratio between the quantum and the extended classical result, \(\frac{J_{C}}{J_{C}^{cl}}\). The ratio is greater than one, reinforcing the fact that it is more difficult to reach the synchronized phase in the quantum regime, and it increases when approaching zero temperature.
An important result emerging from this discussion is the existence of a finite critical coupling at any temperature ranging from \(T=0\) to infinite \(T\). A phase transition to a synchronized state is possible at every temperature and thus quantum fluctuations do not manage to prevent the emergence of this collective phenomenon.
Another analysis should be carried out. Figure 4 shows the critical coupling for a fixed value of the overdamped ratio and for different choices of \(\sigma\), the variance of the frequency distribution \(g(\omega)\). This plot suggests that in the quantum realm, the width of the frequency distribution affects the critical coupling more than in the classical case. In both cases (quantum and classical), we notice that the wider the frequency distribution, the more difficult it is to synchronize. The quantum regime seems to be more affected by this effect. This can be understood by studying the behaviour of just two rotors. Suppose the rotors \(\theta_{1}\) and \(\theta_{2}\) have characteristic frequencies \(\omega_{1}=\sigma\), \(\omega_{2}=-\sigma\). Their phase difference \(\Theta_{-}=\theta_{1}-\theta_{2}\) has a behaviour that is determined by the washboard potential \(V[\Theta_{-}]=-\sigma\Theta_{-}-J\cos(\Theta_{-})\) and the coupling with the bath. If \(\Theta_{-}\) is locked in a minimum of the potential, phase locking happens. From the shape of the potential (suppose \(\sigma<J\)) it is clear that decreasing \(\sigma\), the height of the energy barrier that separates two minima (\(\Delta V=2J\sqrt{1-\frac{\sigma^{2}}{J^{2}}}+2\sigma\sin^{-1}\left(\frac{\sigma}{J}\right)-\pi\sigma\)) increases, making it easier to lock \(\Theta_{-}\) in a minimum. Notice that, in the semiclassical region, Eq. (33) suggests that the amplitude of the oscillations is given by \(J(1+\Delta_{J})>J\), explaining the enhancement of the disorder effect outside the classical regime.
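A quick numerical consistency check of the quoted barrier height (a sketch; it simply evaluates \(V\) at the two extrema \(\sin\Theta_{-}=\sigma/J\)):

```python
import numpy as np

def barrier(J, sigma):
    """Barrier of the washboard potential V = -sigma*Theta - J*cos(Theta), for sigma < J."""
    analytic = (2 * J * np.sqrt(1 - (sigma / J)**2)
                + 2 * sigma * np.arcsin(sigma / J) - np.pi * sigma)
    theta_min = np.arcsin(sigma / J)            # local minimum of V
    theta_max = np.pi - np.arcsin(sigma / J)    # adjacent maximum of V
    V = lambda th: -sigma * th - J * np.cos(th)
    return analytic, V(theta_max) - V(theta_min)

print(barrier(J=2.0, sigma=0.5))   # the two numbers agree
```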
It should also be noticed that for any finite variance \(\sigma\) of the frequency distribution, the critical coupling at zero temperature is finite, yielding a quantum phase transition to a synchronized state. The interplay of different elements emerges from this plot: decreasing the temperature increases the effects of quantum fluctuations and yields higher critical couplings with respect to the classical case, while decreasing the width of the distribution helps the emergence of synchronization.
Figure 4: Critical coupling as a function of temperature for different choices of the variance \(\sigma\) of the Gaussian frequency distribution. The overdamped ratio is fixed to \(\frac{m\gamma}{\hbar}=7\). The dots shown for \(k_{B}T=0\) represent \(\frac{J^{cl}_{C}}{\hbar\gamma}\) for the noiseless model compatible with \(D=k_{B}T=0\). The colors of the dots are related to the choice of \(\sigma\) as shown in the legend.
Figure 3: Temperature dependence of the inverse of the critical coupling in units of temperature, (a) and (b) in logarithmic scale, for different values of \(\frac{m\gamma}{\hbar}\). The legend in (a) holds for all the panels. In (a) the classical critical coupling (pink line) is plotted for reference at high temperatures. (c) Ratio between the quantum result and the classical one (extended to low temperatures) vs temperature. The difference in behaviour of the classical and quantum critical couplings is clearly evident close to zero temperature. All results are obtained for \(g(\omega)\) Gaussian with zero mean and \(\sigma=2\).
These considerations are summarized in Figure 5, which shows the phase diagram of the quantum model. The perspective from which the model is studied in this plot is reversed: the coupling is fixed to a value \(J\) and the plot shows the behaviour of the critical temperature \(T_{C}\), below which the system is synchronized. The synchronized phase corresponds to the coloured region in the plot, while the white part corresponds to the incoherent motion phase. Two important features are captured in this picture. The quantum phase transition is evident: a finite coupling is required to synchronize the system at zero temperature. Moreover, the classical result for the massless overdamped model, given by \(k_{B}T_{C}=J/2\), is reached by our results in the high temperature limit.
## V Discussion and Conclusions
In this work we introduced a generalization to the quantum regime of the well-known Kuramoto model. The model is built out of quantum interacting rotors coupled to environments modelled _a la_ Caldeira-Leggett as a collection of harmonic oscillators. The Feynman-Vernon technique allows us to obtain an evolution for the reduced density matrix that describes the subsystem of the rotors. The coherences in the reduced density matrix are exponentially damped in the high-temperature limit, yielding a classical distribution that satisfies the Klein-Kramers equation associated with the classical stochastic process that defines the noisy classical Kuramoto model. The mean-field quantum model has been studied in its overdamped limit, which enables a perturbative expansion around the critical coupling and allows the calculations to be carried out analytically.
This shows that the introduced quantum Kuramoto model in the overdamped regime admits a phase transition from an incoherent motion phase to a synchronized one. The phase transition occurs at any temperature, yielding also a quantum phase transition at zero temperature. The critical coupling for the phase transition has been calculated analytically. It correctly reproduces the classical one in the high-temperature limit and shows deviations from this result when it is extended to lower temperatures. In particular, two regions can be observed beyond the classical regime. A semiclassical region is first met when the temperature decreases below \(\hbar\gamma/k_{B}\); here quantum fluctuations make the critical coupling slowly deviate from the classical result, while still behaving qualitatively similarly to it. The quantum region, met around zero temperature \(\frac{k_{B}T}{\hbar\gamma}\ll 1\), shows significant deviations from the classical result: around zero temperature a sudden increase of the ratio \(\frac{J_{C}}{J_{C}^{\mathrm{cl}}}\) happens, and at \(T=0\) the critical coupling for the quantum model is higher than the classical one but finite (for finite variances of the distribution of the characteristic frequencies). The ratio between the quantum and the extended classical result is always greater than one, signaling the fact that quantum fluctuations play against the emergence of synchronization without destroying it.
###### Acknowledgements.
We thank Stefano Ruffo for stimulating discussions. The research was partly supported by EU Horizon 2020 under ERC-ULTRADISS, Grant Agreement No. 834402. R.F. and G.E.S. acknowledge that their research has been conducted within the framework of the Trieste Institute for Theoretical Quantum Technologies (TQT). G.E.S and R.F. acknowledge financial support from PNRR MUR project PE0000023-NQSTI. G.E.S. acknowledges financial support from the project QuantERA II Programme STAQS project that has received funding from the European Union's Horizon 2020 research and innovation programme under Grant Agreement No 101017733. R.F. acknowledges financial support by the European Union (ERC, RAVE, 101053159). Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Research Council. Neither the European Union nor the granting authority can be held responsible for them.
Figure 5: Phase diagram of the quantum model in the coupling-temperature space, where the coloured region indicates the synchronized phase. A quantum phase transition at zero temperature is apparent. The result is obtained for a Gaussian distribution of frequencies with zero mean and variance \(\sigma=2\).
## Appendix A Path integral formulation for the classical stochastic process
The average value over disorder of the observable \(e^{i\theta(t;\omega,\xi)}\) must encode the fact that the phase \(\theta\) satisfies the Langevin equation (1). Using a path integral formalism and discretizing _a la_ Ito, one obtains:
\[\left\langle e^{i\theta(t;\omega;\xi)}\right\rangle_{\xi} = {\cal N}\int_{0}^{2\pi}\!\!d\theta\int_{-\infty}^{\infty}\!\!dv\int {\cal D}\xi\int_{\theta(0)=\theta_{0}}^{\theta(t)=\theta}\!\!{\cal D}\theta \int_{v(0)=v_{0}}^{v(t)=v}{\cal D}v\,\exp\biggl{\{}-\frac{1}{4D}\int_{0}^{t}dt ^{\prime}\xi^{2}(t^{\prime})+i\theta(t)\biggr{\}} \tag{40}\] \[\delta(\dot{\theta}_{\tau}-v_{\tau})\,\delta\bigl{(}m\dot{v}_{ \tau}+m\gamma v_{\tau}-F[\theta_{\tau};\omega,\psi]+\xi_{\tau}\bigr{)}\;.\]
The second delta function, enforcing the Langevin dynamics, can be represented in exponential form through the use of an auxiliary field \(\eta(\tau)\):
\[\delta\bigl{(}m\dot{v}_{\tau}+m\gamma v_{\tau}-F[\theta_{\tau}; \omega,\psi]+\xi_{\tau}\bigr{)} \propto \int{\cal D}\eta\,\exp\biggl{\{}i\int_{0}^{t}dt^{\prime}\eta(t^{ \prime})\xi(t^{\prime})\biggr{\}} \tag{41}\] \[\exp\biggl{\{}i\int_{0}^{t}dt^{\prime}\eta(t^{\prime})\,[m\dot{v} (t^{\prime})+m\gamma v(t^{\prime})-F[\theta(t^{\prime});\omega,\psi(t^{\prime })]]\biggr{\}}\;.\]
Notice that, with this representation of the delta function, we now have a Gaussian path integral over the noise variable \(\xi(t)\) with both quadratic and linear terms. This integration generates another Gaussian form in the auxiliary variable \(\eta\):
\[\left\langle e^{i\theta(t;\omega;\xi)}\right\rangle_{\xi} = {\cal N}\int_{0}^{2\pi}\!\!d\theta e^{i\theta}\int_{-\infty}^{ \infty}\!\!dv\int{\cal D}\eta\int_{\theta(0)=\theta_{0}}^{\theta(t)=\theta}\! \!{\cal D}\theta\int_{v(0)=v_{0}}^{v(t)=v}{\cal D}v\,\delta(\dot{\theta}(\tau) -v(\tau)) \tag{42}\] \[\exp\biggl{\{}-D\int_{0}^{t}dt^{\prime}\eta^{2}(t^{\prime}) \biggr{\}}\exp\biggl{\{}i\int_{0}^{t}dt^{\prime}\eta(t^{\prime})\Bigl{(}m\dot{ v}(t^{\prime})+m\gamma v(t^{\prime})-F[\theta(t^{\prime});\omega,\psi(t^{\prime})] \Bigr{)}\biggr{\}}\;.\]
It is straightforward to see that the Gaussian integration over the auxiliary field \(\eta\) yields:
\[\left\langle e^{i\theta(t;\omega;\xi)}\right\rangle_{\xi} = {\cal N}\int_{0}^{2\pi}\!\!d\theta\,e^{i\theta}\int_{-\infty}^{ \infty}\!\!dv\int_{\theta(0)=\theta_{0}}^{\theta(t)=\theta}\!\!{\cal D}\theta \int_{v(0)=v_{0}}^{v(t)=v}{\cal D}v\,\delta(\dot{\theta}(\tau)-v(\tau)) \tag{43}\] \[\exp\biggl{\{}-\frac{1}{4D}\int_{0}^{t}dt^{\prime}\Bigl{(}m\dot{ v}(t^{\prime})+m\gamma v(t^{\prime})-F[\theta(t^{\prime});\omega,\psi(t^{\prime})] \Bigr{)}^{2}\biggr{\}}\;.\]
The previous expression describes the average value of the observable \(e^{i\theta(t;\omega,\xi)}\) for a stochastic process in the form of Eq. (1). It is also convenient to write down the same average value for the same stochastic process in the massless limit. Keeping in mind that the only constraint is now satisfying the Langevin equation \(\dot{\theta}=F[\theta;\omega,\psi]+\xi(t)\), one can follow the previous steps and write:
\[\left\langle e^{i\theta(t;\omega;\xi)}\right\rangle_{\xi}={\cal N}\int_{0}^{2 \pi}\!\!d\theta\,e^{i\theta}\int_{\theta(0)=\theta_{0}}^{\theta(t)=\theta}{\cal D }\theta\exp\biggl{\{}-\frac{1}{4D}\int_{0}^{t}dt^{\prime}\Bigl{(}\dot{\theta} (t^{\prime})-F[\theta(t^{\prime});\omega,\psi(t^{\prime})]\Bigr{)}^{2}\biggr{\}}\;. \tag{44}\]
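For completeness, the Gaussian integrations over \(\xi\) and \(\eta\) carried out above both amount to the elementary identity, applied at each time slice,

\[\int_{-\infty}^{\infty}\!dx\;e^{-a x^{2}+i b x}=\sqrt{\frac{\pi}{a}}\;e^{-\frac{b^{2}}{4a}}\;,\qquad a>0\;,\]

with \((a,b)=(\tfrac{1}{4D},\eta)\) for the \(\xi\) integration and \((a,b)=(D,\,m\dot{v}+m\gamma v-F[\theta;\omega,\psi])\) for the \(\eta\) integration, the constant prefactors being absorbed into \(\mathcal{N}\).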
## Appendix B First order expansion of the self-consistent equation
The expansion to first order of the self-consistent equation, along with the _Ansatz_ \(\varphi(t)=0\) and \(r\sim 0\) constant, yields
\[r = -\frac{iJ_{C}r}{\hbar}\lim_{t\to\infty}\int_{-\infty}^{\infty}\! \!d\omega\,g(\omega)\,\int_{-\infty}^{\infty}\!\!d\theta_{+}\,e^{i\theta_{+}}\, \int_{-\infty}^{\infty}\!\!d\theta^{\prime}_{\pm}\,\int_{\theta_{+}(0)=\theta^ {\prime}_{+}}^{\theta_{+}(t)=\theta_{+}}{\cal D}\theta_{+}\int_{\theta_{-}(0)= \theta^{\prime}_{-}}^{\theta_{-}(t)=0}{\cal D}\theta_{-} \tag{45}\] \[\int_{0}^{t}\!ds\,\left(\cos\biggl{(}\theta_{+}(s)-\frac{\theta_{ -}(s)}{2}\biggr{)}-\cos\biggl{(}\theta_{+}(s)+\frac{\theta_{-}(s)}{2}\biggr{)}\right)\] \[\exp\biggl{\{}\frac{i}{\hbar}\int_{0}^{t}dt^{\prime}\Bigl{(}m \dot{\theta}_{-}\dot{\theta}_{+}-m\gamma\theta_{-}\dot{\theta}_{+}+\omega \theta_{-}\Bigr{)}\biggr{\}}\exp\biggl{\{}-\frac{D}{\hbar^{2}}\int_{0}^{t}dt ^{\prime}dt^{\prime\prime}\theta_{-}(t^{\prime})K(t^{\prime}-t^{\prime\prime}) \theta_{-}(t^{\prime\prime})\biggr{\}}\,\rho(\theta^{\prime}_{+},\theta^{\prime}_{ -})\;.\]
Rewriting the trigonometric term in exponential form, one gets straightforwardly
\[r = -\frac{iJ_{C}r}{2\hbar}\lim_{t\to\infty}\int_{-\infty}^{\infty}\!d \omega\,g(\omega)\,\int_{-\infty}^{\infty}\!d\theta_{+}\,e^{i\theta_{+}}\int_{- \infty}^{\infty}\!d\theta^{\prime}_{\pm}\,\int_{\theta_{+}(0)=\theta^{\prime}_ {+}}^{\theta_{+}(t)=\theta_{+}}\!\mathcal{D}\theta_{+}\,\int_{\theta_{-}(0)= \theta^{\prime}_{-}}^{\theta_{-}(t)=0}\!\mathcal{D}\theta_{-} \tag{10}\] \[\int_{0}^{t}ds\sum_{c,c^{\prime}=\pm}\!c^{\prime}\exp\!\left\{ic \theta_{+}(s)-\frac{icc^{\prime}}{2}\,\theta_{-}(s)\right\}\] \[\exp\!\left\{\frac{i}{\hbar}\int_{0}^{t}dt^{\prime}\left(m\, \hat{\theta}_{-}\!\hat{\theta}_{+}-m\gamma\,\theta_{-}\hat{\theta}_{+}+ \omega\theta_{-}\right)\right\}\,\exp\!\left\{-\frac{D}{\hbar^{2}}\int_{0}^{t }dt^{\prime}dt^{\prime\prime}\theta_{-}(t^{\prime})K(t^{\prime}-t^{\prime \prime})\theta_{-}(t^{\prime\prime})\right\}\!\rho(\theta^{\prime}_{+},\theta^ {\prime}_{-})\;.\]
Denoting the argument of the exponential as \(S^{\prime}_{\rm eff}[\theta_{+},\theta_{-}]\) and regarding it as a new effective action, one finally gets Eq. (27).
The path integration that should now be calculated is simply Gaussian. Decomposing the new effective action (27) into its real and imaginary parts, \(S^{\prime}_{\rm eff}=S^{\prime}_{Re}[\theta_{+},\theta_{-}]+iS^{\prime}_{Im}[\theta_{-}]\), one can easily get the result of the path integration. The real part of the effective action contains quadratic and linear terms in both fields \(\theta_{+}\) and \(\theta_{-}\). The imaginary part contains only a quadratic term in \(\theta_{-}\). Notice that there are no quadratic terms involving only the field \(\theta_{+}\).
The strategy is now to find the saddle point of \(S^{\prime}_{Re}\), i.e. to find \(\tilde{\theta}_{+}(t^{\prime})\) and \(\tilde{\theta}_{-}(t^{\prime})\) such that the first derivative in both fields is zero, respecting the boundary conditions \(\tilde{\theta}_{\pm}(t)=\theta_{\pm}(t)\) and \(\tilde{\theta}_{\pm}(0)=\theta_{\pm}(0)\).
Then, the following change of variables should be performed: \(\theta_{\pm}(t^{\prime})=\tilde{\theta}_{\pm}(t^{\prime})+\delta\theta_{\pm}( t^{\prime})\), with \(\delta\theta_{\pm}(0)=\delta\theta_{\pm}(t)=0\). The path integration will be now performed over the fields \(\delta\theta_{+}(\tau)\) and \(\delta\theta_{-}(\tau)\). After having performed the change of variables, it is convenient to rewrite the action in matrix form:
\[S^{\prime}_{\rm eff}[\delta\theta_{+},\delta\theta_{-};s,t]_{c,c^{\prime}}=S^{\prime}_{\rm eff}[\tilde{\theta}_{+},\tilde{\theta}_{-};s,t]_{c,c^{\prime}}+\frac{1}{2}\int_{0}^{t}dt^{\prime}dt^{\prime\prime}\left(\delta\theta_{+}(t^{\prime})\ \delta\theta_{-}(t^{\prime})\right)A(t^{\prime}-t^{\prime\prime})\begin{pmatrix}\delta\theta_{+}(t^{\prime\prime})\\ \delta\theta_{-}(t^{\prime\prime})\end{pmatrix}+\int_{0}^{t}dt^{\prime}B(t^{\prime})^{T}\begin{pmatrix}\delta\theta_{+}(t^{\prime})\\ \delta\theta_{-}(t^{\prime})\end{pmatrix}\]
with
\[A(t^{\prime}-t^{\prime\prime}) = \begin{pmatrix}0&-m\delta^{\prime\prime}(t^{\prime}-t^{\prime\prime})+m\gamma\delta^{\prime}(t^{\prime}-t^{\prime\prime})\\ -m\delta^{\prime\prime}(t^{\prime\prime}-t^{\prime})-m\delta^{\prime}(t^{\prime\prime}-t^{\prime})&\frac{2iD}{\hbar}K(t^{\prime}-t^{\prime\prime})\end{pmatrix}\;,\] \[B(t^{\prime}) = \begin{pmatrix}0\\ \frac{iD}{\hbar}\int_{0}^{t}dt^{\prime\prime}\left(K(t^{\prime}-t^{\prime\prime})+K(t^{\prime\prime}-t^{\prime})\right)\tilde{\theta}_{-}(t^{\prime\prime})\end{pmatrix}\]
and \(\mathcal{N}(t)=\frac{m\gamma}{2\pi\hbar(1-e^{-\gamma t})}\). \(A\) is the matrix that contains the coefficients of the terms quadratic in the fields \(\delta\theta_{\pm}\), and \(B\) contains the coefficients of the linear terms. Notice that the first entry of both \(A\) and \(B\) is zero, since there are no quadratic or linear terms involving only the field \(\delta\theta_{+}\).
The Gaussian path integration over the variables \(\delta\theta_{\pm}\) yields
\[\rho^{\prime}(\theta_{+},0,t) = \frac{-i}{2\hbar}\sum_{c,c^{\prime}=\pm 1}c^{\prime}\int_{0}^{t}ds \int_{-\infty}^{\infty}\!d\theta^{\prime}_{+}\,\int_{-\infty}^{\infty}\,d \theta^{\prime}_{-}\,\mathcal{N}(t)\,\exp\!\left\{\frac{i}{\hbar}S^{\prime}_{ \rm eff}[\tilde{\theta}_{+},\tilde{\theta}_{-};s,t]_{c,c^{\prime}}\right\} \tag{11}\] \[\exp\!\left\{-\frac{1}{4}\int_{0}^{t}dt^{\prime}dt^{\prime\prime}B (t^{\prime})^{T}A^{-1}(t^{\prime}-t^{\prime\prime})B(t^{\prime\prime})\right\}\,.\]
Since the inverse of the matrix \(A\) has the form \(A^{-1}=\begin{pmatrix}a&b\\ c&0\end{pmatrix}\), the matrix product appearing above gives \(B^{T}A^{-1}B=0\) and does not contribute to the result of the path integration.
Thus, the last issue we are left with is finding the solutions to the saddle point equations:
\[\left.\frac{\partial S^{\prime}_{Re}}{\partial\theta_{+}}\right|_{\tilde{\theta}_{\pm}} = \ddot{\tilde{\theta}}_{-}(t^{\prime})-\gamma\dot{\tilde{\theta}}_{-}(t^{\prime})-\frac{\hbar c}{m}\delta(t^{\prime}-s)=0 \tag{12}\] \[\left.\frac{\partial S^{\prime}_{Re}}{\partial\theta_{-}}\right|_{\tilde{\theta}_{\pm}} = \ddot{\tilde{\theta}}_{+}(t^{\prime})+\gamma\dot{\tilde{\theta}}_{+}(t^{\prime})-\frac{\omega}{m}+\frac{\hbar cc^{\prime}}{2m}\delta(t^{\prime}-s)=0\]
The solutions to these equations have jump discontinuities in the derivatives; the jump is proportional to \(\frac{\hbar}{m\gamma}\) and thus gets smaller in the overdamped limit. In the massless limit the discontinuities appear directly in the solutions (not just in the derivatives). The solutions to the saddle point equations can be found in appendix C.
Substituting the solutions (C1) and (C2) into the first-order approximation of the self-consistent equation, one gets, in the infinite time limit, the result reported in the main text.
## Appendix C Solutions to the saddle point equations
The solutions to the saddle point equations that respect the boundary conditions \(\tilde{\theta}_{\pm}(0)=\theta^{\prime}_{\pm}\), \(\tilde{\theta}_{-}(t)=0\), \(\tilde{\theta}_{+}(t)=\theta_{+}\) are:
\[\tilde{\theta}_{-}(t^{\prime}) = \begin{cases}\theta^{\prime}_{-}+\frac{\left(e^{-\gamma t^{\prime}}-1\right)\left(-\theta^{\prime}_{-}+\frac{\hbar ce^{\gamma t}-\hbar c-\gamma m\theta^{\prime}_{-}e^{\gamma t}}{\gamma m\left(e^{\gamma t}-1\right)}-\frac{\gamma^{t-\gamma t}\left(\hbar ce^{\gamma t}-\hbar c-\gamma m\theta^{\prime}_{-}e^{\gamma t}+\right)}{\gamma m\left(e^{\gamma t}-1\right)}\right)}{e^{\gamma t}-1}&0\leq t^{\prime}\leq s\\ \frac{e^{-\gamma t}\left(e^{\gamma t^{\prime}}-e^{\gamma t}\right)\left(\hbar ce^{\gamma t}-\hbar c-\gamma m\theta^{\prime}_{-}e^{\gamma t}+\right)}{\gamma m\left(e^{\gamma t}-1\right)}&s<t^{\prime}\leq t\end{cases} \tag{C1}\]
\[\tilde{\theta}_{+}(t^{\prime}) = \begin{cases}-\frac{\left(e^{-\gamma t^{\prime}}-1\right)\left(\frac{\gamma^{t}\left(e^{-\gamma t}-e^{-\gamma t}\right)\left(-\hbar ce^{\gamma t}\right)\left(-\hbar ce^{\gamma t}+\hbar ce^{-2\gamma m\theta^{\prime}_{+}+2\gamma\theta_{+}m-2t\omega}\right)}{2\gamma m\left(e^{\gamma t}-1\right)}+\theta^{\prime}_{+}-\theta_{+}+\frac{\omega t^{\prime}}{\gamma m}\right)}{e^{-\gamma t}-1}+\theta^{\prime}_{+}+\frac{\omega t^{\prime}}{\gamma m}&0\leq t^{\prime}<s\\ -\frac{e^{\gamma t}\left(e^{-\gamma t^{\prime}}-e^{-\gamma t}\right)\left(-\hbar ce^{\gamma t}e^{\gamma t}+\hbar ce^{\prime}-2\gamma m\theta^{\prime}_{+}+2\gamma\theta_{+}m-2t\omega\right)}{2\gamma m\left(e^{\gamma t}-1\right)}+\theta_{+}+\frac{\omega t^{\prime}}{\gamma m}&s\leq t^{\prime}\leq t\end{cases} \tag{C2}\]
It is interesting to notice that in the massless limit (\(\frac{m\gamma}{\hbar}=\mathrm{const}\), \(\gamma\rightarrow\infty\)), the expressions for the saddle point solutions become simpler, but discontinuities appear:
\[\lim_{\begin{subarray}{c}\gamma\rightarrow\infty\\ m\gamma/\hbar=\text{const}\end{subarray}}\tilde{\theta}_{-}(t^{\prime}) = \begin{cases}\theta^{\prime}_{-}&0\leq t^{\prime}<s\\ \theta^{\prime}_{-}-c\frac{\hbar}{m\gamma}&s<t^{\prime}<t\\ 0&t^{\prime}=t\end{cases} \tag{C3}\] \[\lim_{\begin{subarray}{c}\gamma\rightarrow\infty\\ m\gamma/\hbar=\text{const}\end{subarray}}\tilde{\theta}_{+}(t^{\prime}) = \begin{cases}\theta^{\prime}_{+}&t^{\prime}=0\\ \theta_{+}+cc^{\prime}\frac{\hbar}{2m\gamma}+\frac{\omega(t^{\prime}-t)}{m\gamma}&0<t^{\prime}<s\\ \theta_{+}+\frac{\omega(t^{\prime}-t)}{m\gamma}&s\leq t^{\prime}\leq t\end{cases} \tag{C4}\]
|
2305.01682 | Frustration induced Itinerant Ferromagnetism of Fermions in Optical
Lattice | When the Fermi Hubbard model was first introduced sixty years ago, one of the
original motivations was to understand correlation effects in itinerant
ferromagnetism. In the past two decades, ultracold Fermi gas in an optical
lattice has been used to study the Fermi Hubbard model. However, the metallic
ferromagnetic correlation was observed only in a recent experiment using
frustrated lattices, and its underlying mechanism is not clear yet. In this
letter, we point out that, under the particle--hole transformation, the
single-particle ground state can exhibit double degeneracy in such a frustrated
lattice. Therefore, the low-energy state exhibits valley degeneracy,
reminiscent of multi-orbit physics in ferromagnetic transition metals. The
local repulsive interaction leads to the valley Hund's rule, responsible for
the observed ferromagnetism. We generalize this mechanism to distorted
honeycomb lattices and square lattices with flux. This mechanism was first
discussed by M\"uller-Hartmann in a simpler one-dimension model. However, this
mechanism has not been widely discussed and has not been related to
experimental observations before. Hence, our study not only explains the
experimental findings but also enriches our understanding of itinerant
ferromagnetism. | Chengshu Li, Ming-Gen He, Chang-Yan Wang, Hui Zhai | 2023-05-02T18:00:03Z | http://arxiv.org/abs/2305.01682v1 | # Frustration induced Itinerant Ferromagnetism of Fermions in Optical Lattices
###### Abstract
When the Fermi Hubbard model was first introduced sixty years ago, one of the original motivations was to understand correlation effects in itinerant ferromagnetism. In the past two decades, ultracold Fermi gas in an optical lattice has been used to study the Fermi Hubbard model. However, the metallic ferromagnetic correlation was observed only in a recent experiment using frustrated lattices, and its underlying mechanism is not clear yet. In this letter, we point out that, under the particle-hole transformation, the single-particle ground state can exhibit double degeneracy in such a frustrated lattice. Therefore, the low-energy state exhibits valley degeneracy, reminiscent of multi-orbit physics in ferromagnetic transition metals. The local repulsive interaction leads to the valley Hund's rule, responsible for the observed ferromagnetism. We generalize this mechanism to distorted honeycomb lattices and square lattices with flux. This mechanism was first discussed by Muller-Hartmann in a simpler one-dimension model. However, this mechanism has not been widely discussed and has not been related to experimental observations before. Hence, our study not only explains the experimental findings but also enriches our understanding of itinerant ferromagnetism.
Itinerant ferromagnetism has been discovered in nature for thousands of years. However, a complete understanding of its microscopic origin is still challenging. The Stoner mean-field theory has predicted itinerant ferromagnetism in metal when the repulsive interaction between fermions exceeds a critical value [1]. When such ferromagnetism occurs, the Pauli exclusion principle increases kinetic energy considerably. Therefore, the critical interaction strength predicted for Stoner ferromagnetism is comparable to the Fermi energy. Under such a strong interaction strength, the correlation effect can no longer be ignored. The competition from other strongly correlated non-magnetic states usually takes over Stoner ferromagnetism. Therefore, understanding itinerant ferromagnetism becomes essentially a strongly correlated problem.
When the Hubbard model was introduced in the middle of the last century [2; 3; 4], one major purpose was to understand the correlation effect in ferromagnetism. Unfortunately, the consensus on ferromagnetism in the Hubbard model is still limited after many decades [5; 6; 7; 8]. Rigorous results can only be obtained for exceptional cases such as the Nagaoka ferromagnetism [9; 10; 11] and the flat band ferromagnetism [12]. The Nagaoka ferromagnetism considers a single hole doping away from half-filling in the limit of infinite repulsion, and the flat band ferromagnetism requires fine-tuning of particle hopping to reach a flat band dispersion. Moreover, the single-band Hubbard model is usually oversimplified to directly compare with experiments on real materials.
Ultracold atoms in optical lattices directly realize the Hubbard model and provide a new opportunity to study physics therein [13; 14]. However, the temperature of atoms in optical lattices cannot be cooled much below the kinetic energy, which prevents observing possible low-temperature orderings, such as fermion pairing, at this stage. In contrast, ferromagnetism often occurs in a relatively high temperature. It is conceivable that one does not need to enter an extremely low-temperature regime in order to study ferromagnetism in optical lattices. Hence, understanding the nature of ferromagnetism is a potential goal for the current quantum simulations with optical lattices.
Nevertheless, although the Fermi Hubbard model has been realized with ultracold atoms in optical lattices for nearly two decades, itinerant ferromagnetism has not been observed in this system until a very recent experiment [15]. In this experiment, a novel experimental technology allows continuously tuning the lattice geometry from a square to a frustrated triangular lattice. Short-range ferromagnetic correlation has been observed in the particle doping regime when the lattice geometry is tuned close to triangular regime. However, a convincing theoretical understanding of the physical mechanism behind the observed ferromagnetism is still lacking.
_Review of Experimental Setting._ This experiment reports an actively phase stabilized optical lattice that can continuously tune the lattice geometry from square lattice to triangular lattice. The tight-binding model of this lattice is shown in Fig. 1(a). This experiment explores spin-\(1/2\) fermions in such lattice, and the model is written as
\[\hat{H}=-t\sum_{\{ij\}\sigma}\hat{c}^{\dagger}_{i\sigma}\hat{c}_{j\sigma}-t^{ \prime}\sum_{\{(i\not\in\mathcal{J})\}}\hat{c}^{\dagger}_{i\sigma}\hat{c}_{j \sigma}+U\sum_{i}\hat{n}_{i\uparrow}\hat{n}_{i\downarrow}, \tag{1}\]
where \(\sigma=\uparrow,\downarrow\) denotes two spin components. The hopping between the nearest neighbor \((ij)\) is denoted by \(t\), and between the next nearest neighbor along the dashed line
direction \(\langle\langle ij\;\;\mathbf{\chi^{\prime}}\rangle\rangle\) is denoted by \(t^{\prime}\). Both \(t\) and \(t^{\prime}\) are positive. The next nearest hopping along another diagonal direction is negligible. When \(t^{\prime}\) increases from zero to \(t^{\prime}=t\), it continuously tunes the lattice geometry from a square lattice to a triangular lattice. \(U\) denotes the interaction strength of on-site repulsion, and \(U/t\approx 9\) in this experiment.
This experiment finds ferromagnetic correlation when \(t^{\prime}/t>0.5\) and when the total fermion density \(n\) exceeds half-filling \(n=1\) and is somewhat close to \(n=1.5\). The regime where ferromagnetism is observed is marked by the shaded yellow area in Fig. 1(b). The experiment has only explored the density regime \(0.5<n<1.5\), and it is not clear whether the ferromagnetic correlation can also exist when \(n\) exceeds \(1.5\). The physical origin of this observed ferromagnetism is also not yet clear. Possible scenarios mentioned include the existence of a van Hove singularity at \(n=1.5\) for the triangular lattice and a possible connection to Nagaoka ferromagnetism [15; 16; 17].
_Particle-Hole Transformation._ For the benefit of later discussion, we make a particle-hole transformation \(\hat{c}_{i\sigma}\rightarrow(-1)^{i_{x}+i_{y}}\hat{c}_{i\sigma}^{\dagger}\), where \(i=(i_{x},i_{y})\) is the site label. If \(t^{\prime}=0\), this transformation keeps the form of the Hamiltonian Eq. (1) invariant. The \(t^{\prime}\) term is what causes frustration in this lattice, and for the same reason, the \(t^{\prime}\) term is not invariant under the particle-hole transformation. Instead, this transformation changes \(t^{\prime}\) to \(-t^{\prime}\), which corresponds to inserting a \(\pi\)-flux in each triangle. For positive \(t^{\prime}\), the system essentially describes particles moving in a real potential and all hopping terms have negative matrix elements that satisfy Feynman's no-node theorem [18]. Therefore, the single-particle ground state cannot be degenerate. However, for negative \(t^{\prime}\), the condition for the no-node theorem is no longer satisfied, and the single-particle ground state can be degenerate. It turns out that this degeneracy is crucial for explaining the observed ferromagnetism, as will become clear later.
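For concreteness, the sign flip can be checked on a single hopping bond. Under \(\hat{c}_{i\sigma}\rightarrow(-1)^{i_{x}+i_{y}}\hat{c}_{i\sigma}^{\dagger}\), a generic hopping term transforms as

\[-t_{ij}\,\hat{c}^{\dagger}_{i\sigma}\hat{c}_{j\sigma}\;\rightarrow\;-t_{ij}\,(-1)^{i_{x}+i_{y}+j_{x}+j_{y}}\,\hat{c}_{i\sigma}\hat{c}^{\dagger}_{j\sigma}=t_{ij}\,(-1)^{i_{x}+i_{y}+j_{x}+j_{y}}\,\hat{c}^{\dagger}_{j\sigma}\hat{c}_{i\sigma}\qquad(i\neq j).\]

For a nearest-neighbor bond the parity factor equals \(-1\) and the term retains the form \(-t\,\hat{c}^{\dagger}_{j\sigma}\hat{c}_{i\sigma}\), whereas for a next-nearest-neighbor bond along the diagonal the parity factor equals \(+1\) and the term becomes \(+t^{\prime}\,\hat{c}^{\dagger}_{j\sigma}\hat{c}_{i\sigma}\), i.e., \(t^{\prime}\rightarrow-t^{\prime}\).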
Under this transformation, the density \(n=n_{\uparrow}+n_{\downarrow}\) becomes \(2-n\); that is, particle doping is mapped to hole doping. Thus, the particle-hole transformation maps \((t^{\prime},n)\) to \((-t^{\prime},2-n)\) in the phase diagram shown in Fig. 1(b). Therefore, we can focus solely on the density regime \(0<n\leqslant 1\) but include both positive and negative \(t^{\prime}\). With this mapping, the regime where ferromagnetism is observed is mapped to the low-density regime with \(t^{\prime}<-0.5\), marked by the shaded blue area in Fig. 1(b).
_Numerical Results._ We first present our numerical results in Fig. 2 [19]. All the calculations below are done for infinite positive \(U\). The first calculation is exact diagonalization of two particles in different system sizes, mimicking different densities. This calculation covers the low-density regime up to \(n=0.5\). Due to the \(SU(2)\) spin rotational symmetry, the total spin is a good quantum number and all quantum states with the same total spin are degenerate. We find a level crossing between the spin singlet and spin triplet states, as shown in Fig. 2(a). The spin triplet states have lower energy when \(t^{\prime}<t^{\prime}_{\rm c}\), and the value of \(t^{\prime}_{\rm c}\) depends on density and lies between \(-1.0\) and \(-0.5\), as one can see from the circles in Fig. 2(c).
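The singlet-triplet level crossing can be reproduced with a minimal exact-diagonalization sketch (ours, not part of the original calculation). It assumes a \(4\times 4\) open-boundary cluster with \(t^{\prime}\) on one diagonal only, as in Fig. 2(a), and replaces \(U=\infty\) by a large finite value; the precise crossing point obtained this way depends on cluster size and boundary conditions.

```python
import numpy as np

def single_particle_h(L=4, t=1.0, tp=-0.8):
    """Hopping matrix of Eq. (1) on an L x L open cluster: amplitude -t on
    nearest-neighbour bonds and -t' on one diagonal of next-nearest bonds."""
    idx = lambda x, y: x + L * y
    H1 = np.zeros((L * L, L * L))
    for x in range(L):
        for y in range(L):
            i = idx(x, y)
            if x + 1 < L:
                H1[i, idx(x + 1, y)] = H1[idx(x + 1, y), i] = -t
            if y + 1 < L:
                H1[i, idx(x, y + 1)] = H1[idx(x, y + 1), i] = -t
            if x + 1 < L and y + 1 < L:
                H1[i, idx(x + 1, y + 1)] = H1[idx(x + 1, y + 1), i] = -tp
    return H1

def two_particle_ground_state(tp, L=4, t=1.0, U=1e6):
    """Ground state of one up and one down fermion; large U stands in for U = infinity."""
    H1 = single_particle_h(L, t, tp)
    N = H1.shape[0]
    H = np.kron(H1, np.eye(N)) + np.kron(np.eye(N), H1)   # kinetic energy of both spins
    d = np.arange(N) * N + np.arange(N)                    # doubly occupied configurations
    H[d, d] += U
    evals, evecs = np.linalg.eigh(H)
    psi = evecs[:, 0].reshape(N, N)                        # psi[i, j]: up at site i, down at site j
    parity = float(np.sum(psi * psi.T))                    # +1 symmetric (singlet), -1 antisymmetric (triplet)
    e = np.linalg.eigvalsh(H1)
    return evals[0], e[0] + e[1], parity

for tp in (-0.4, -0.6, -0.8, -1.0):
    E0, E_triplet, parity = two_particle_ground_state(tp)
    kind = "triplet" if parity < 0 else "singlet"
    print(f"t' = {tp:+.1f}: E0 = {E0:+.4f}, polarized two-fermion energy = {E_triplet:+.4f}, ground state ~ {kind}")
```

Comparing \(E_{0}\) of the \(S_{z}=0\) sector with the fully polarized two-fermion energy (which is \(U\)-independent) identifies the total spin of the ground state; scanning \(t^{\prime}\) locates the crossing corresponding to the circles in Fig. 2(c).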
The second calculation is a density-matrix renormalization group (DMRG) calculation with a finite number of fermions on different strip geometries. We calculate \(\langle\hat{\mathbf{S}}_{\rm tot}^{2}\rangle\) with \(\hat{\mathbf{S}}_{\rm tot}=\sum_{i}\hat{\mathbf{S}}_{i}\) for the ground state, and \(S_{\rm tot}\) is given by \(\langle\hat{\mathbf{S}}_{\rm tot}^{2}\rangle=S_{\rm tot}(S_{\rm tot}+1)\). We also find a transition from \(S_{\rm tot}=N/2\) to \(S_{\rm tot}=0\) around \(t^{\prime}=t^{\prime}_{\rm c}\), as shown in Fig. 2(b). Fig. 2(c) also collects the values of \(t^{\prime}_{\rm c}\) obtained from all DMRG calculations with different fermion numbers and different system sizes.
A notable feature in Fig. 2(c) is that, for all calculations, \(t^{\prime}_{\rm c}\) approaches \(-0.5\) in the low-density limit \(n\to 0\). Below we will explain why \(t^{\prime}=-0.5\) is a special point, and the ferromagnetism that emerges in this regime will be called the _Müller-Hartmann mechanism_. In this regime, the general trend is that \(t^{\prime}_{\rm c}\) decreases as \(n\) increases.
Near half-filling \(n=1\), it is known that hole doping can result in Nagaoka ferromagnetism at infinite \(U\). Strictly speaking, Nagaoka ferromagnetism can only be proved for single-hole doping with \(t^{\prime}<0\). However, our numerical results show that when \(t^{\prime}=0\), the Nagaoka
Figure 1: (a) The tunable optical lattice realized in a recent experiment. (b) The ferromagnetic correlation found in the experiment is marked by the shaded yellow regime in the \(n-t^{\prime}\) phase diagram, which is mapped to the shaded blue area under the particle–hole transformation. (c) Honeycomb lattice with the nearest neighbor and the horizontal next nearest neighbor hopping. (d) Square lattice with the nearest neighbor hopping only, but with magnetic flux in each plaquette. The red lines in (a) and (c) denote the one-dimensional chain considered by Müller-Hartmann’s paper.
ferromagnetism can exist up to \(\sim 20\%\) hole doping, that is, for \(0.8\lesssim n<1\). This is consistent with previous numerical results [20]. Hence, we attribute the ferromagnetism near half-filling to the _Nagaoka mechanism_. In this regime, \(t^{\prime}_{\rm c}\) also decreases when \(n\) decreases from \(n=1\), and this density dependence of \(t^{\prime}_{\rm c}\) is opposite to that found in the low-density regime.
Hence, we have identified two different mechanisms of ferromagnetism from our numerical results, emerging from the low-density limit and from near half-filling, respectively. They are characterized by different density dependences of \(t^{\prime}_{\rm c}\). The trend of the experimentally observed ferromagnetic boundary, upon the particle-hole transformation, is consistent with the Müller-Hartmann mechanism.
_Müller-Hartmann Mechanism._ To see why \(t^{\prime}=-0.5\) is special, we first look at the single-particle dispersion, which reads
\[\mathcal{E}(k_{x},k_{y}) =-2t\cos(k_{x})-2t\cos(k_{y})-2t^{\prime}\cos(k_{x}+k_{y})\] \[=-4t\cos(k_{+})\cos(k_{-})-2t^{\prime}\cos(2k_{+}), \tag{2}\]
where \(k_{\pm}=(k_{x}\pm k_{y})/2\). It is easy to see that the dispersion minimum occurs at \(k_{-}=0\). As shown in Fig. 3(a), when \(t^{\prime}/t>-0.5\), the band dispersion has a unique minimum at \(k_{+}=0\). However, when \(t^{\prime}/t<-0.5\), the band dispersion displays two degenerate minima along the \(k_{+}\) axis. That is to say, a Lifshitz transition occurs at \(t^{\prime}/t=-0.5\).
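This Lifshitz transition follows directly from Eq. (2) and can be checked numerically with a short sketch of our own: along the \(k_{-}=0\) line, the stationary-point condition gives a single minimum at \(k_{+}=0\) when \(t+2t^{\prime}>0\), and two symmetric minima at \(\cos k_{+}=-t/(2t^{\prime})\) otherwise.

```python
import numpy as np

def minima_along_kplus(tp, t=1.0, n_k=4001):
    """Local minima of E(k_+, k_- = 0) = -4 t cos(k_+) - 2 t' cos(2 k_+) from Eq. (2)."""
    k = np.linspace(-np.pi, np.pi, n_k)
    E = -4 * t * np.cos(k) - 2 * tp * np.cos(2 * k)
    i = np.arange(1, n_k - 1)
    local_min = i[(E[i] < E[i - 1]) & (E[i] < E[i + 1])]
    return k[local_min]

for tp in (-0.2, -0.5, -0.8):
    print(f"t'/t = {tp}: band minima at k_+ =", np.round(minima_along_kplus(tp), 3))
```

For \(t^{\prime}/t=-0.2\) and \(-0.5\) a single minimum at \(k_{+}=0\) is found, while for \(t^{\prime}/t=-0.8\) two degenerate minima appear, matching the three panels of Fig. 3(a).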
The paper by Müller-Hartmann first pointed out that ferromagnetism can occur in the low-density regime when the band minimum is doubly degenerate [21], followed by a few related works in the later literature [22]. That paper considered a one-dimensional chain with next-nearest hopping, and the Hamiltonian reads
\[\hat{H}=-\sum_{i\sigma}(t\hat{c}^{\dagger}_{i,\sigma}\hat{c}_{i+1,\sigma}+t^{ \prime}\hat{c}^{\dagger}_{i,\sigma}\hat{c}_{i+2,\sigma}+{\rm h.c.})+U\sum_{i} \hat{n}_{i\uparrow}\hat{n}_{i\downarrow}. \tag{3}\]
In this model, the band minimum also becomes doubly degenerate when \(t^{\prime}/t<-1/4\), where Müller-Hartmann argued that ferromagnetism can occur in the low-density regime. We note that this one-dimensional chain can be naturally embedded into the two-dimensional lattice we consider, as shown in Fig. 1(a). In other words, the two-dimensional lattice realized in this cold-atom experiment can be viewed as a two-dimensional generalization of Müller-Hartmann's model.
Figure 2: (a) Two-body problem in a \(4\times 4\) lattice with open boundary conditions: the singlet and triplet state energies as a function of \(t^{\prime}\). (b) \(S_{\rm tot}\) of the ground state calculated by the DMRG method for \(N=12\) particles in a \(4\times 8\) strip. (c) The shaded area schematically shows the possible ferromagnetic regime in the \(n-t^{\prime}\) phase diagram. Each point denotes \(t^{\prime}_{\rm c}\) between the ferromagnetic and non-magnetic phases at a given density. The numerical calculations include exact diagonalizations of the two-particle problem in different system sizes and DMRG calculations of a finite number of particles on different system sizes. All DMRG calculations have bond dimension \(1400\). In all cases, \(t=1\) and \(U\) is set to positive infinity.
Figure 3: (a) Single-particle dispersion \(\mathcal{E}(k_{x},k_{y})\) of the Hamiltonian Eq. (1) for three different values of \(t^{\prime}\). \(t^{\prime}=-0.2\) (left), \(=-0.5\) (middle) and \(=-0.8\) (right). (b) The real and imaginary parts of two valley Wannier wave functions. These two wave functions are constructed by using the Bloch wave functions around each minimum of single-particle dispersion, as indicated by the solid circle in the right dispersion in (a).
The physical mechanism behind this ferromagnetism can be called the _valley Hund's rule_ [23]. Considering the two-particle case, the particles can be placed one in each minimum of the single-particle dispersion. For the spin-triplet states, the spatial wave function is antisymmetrized, and the interaction energy automatically vanishes. However, for the spin-singlet state, one has to apply an extra projection to prevent double occupation, which inevitably increases the kinetic energy.
Using the low-energy Bloch wave functions around each minimum, we can construct effective valley Wannier wave functions for each valley, as shown in Fig. 3(b). These valley Wannier wave functions extend over several lattice sites. The on-site repulsion between fermions effectively introduces a Hund's rule coupling between these valley Wannier wave functions. In real materials, itinerant ferromagnetism usually occurs in transition metals with partially filled \(d\)-orbitals, and Hund's rule coupling between these \(d\)-orbitals is crucial for ferromagnetism. That is to say, multi-orbital physics plays an essential role. Müller-Hartmann's mechanism shows that even for a single-band model, when the single-particle ground state is degenerate, the valley degeneracy can lead to emergent multi-orbital physics, resulting in itinerant ferromagnetism in the low-density regime. Since Hund's rule is a local ferromagnetic coupling, we expect the short-range ferromagnetic correlation caused by this mechanism to be robust at finite temperature. Moreover, since the Hund's rule coupling is a leading-order effect of the local repulsion while the antiferromagnetic super-exchange interaction is a second-order effect, we also expect this ferromagnetism to be stable at finite interaction strength.
_Generalization to Honeycomb Lattice._ To further confirm our understanding of the emergent ferromagnetism in this experiment, we generalize this idea to other models where degenerate single-particle ground states can also be found. The first generalization is fermions in a distorted honeycomb lattice. Similar to the square-lattice model, the nearest-neighbor hopping is denoted by \(t\). We compress the lattice along the horizontal direction such that we should also include the next-nearest hopping \(t^{\prime}\) along the horizontal dashed-line direction, as shown in Fig. 1(c). This lattice can also be viewed as a set of \(t-t^{\prime}\) chains along the \(\hat{x}\) direction, as considered in Müller-Hartmann's paper, coupled vertically along the \(\hat{y}\) direction. We introduce a similar particle-hole transformation that only changes the sign of \(t^{\prime}\). The ground state of this dispersion also shows a Lifshitz transition from a single minimum to double degeneracy at \(t^{\prime}=-0.25\), as shown in Fig. 4(b). The phase diagram obtained by the DMRG calculation is shown in Fig. 4(a). This phase diagram contains two ferromagnetic regimes, one in the low-density regime around \(t^{\prime}=-0.25\) and the other near half-filling for \(t^{\prime}<0\). Clearly, they respectively manifest the Müller-Hartmann and Nagaoka mechanisms [24]. Unlike the square-lattice case, these two ferromagnetic regimes are disconnected in the phase diagram, clearly visualizing the difference between the two mechanisms.
_Generalization to Flux Lattice._ The second generalization is a square lattice with a magnetic flux in each pla
Figure 5: (a) \(S_{\rm tot}\) obtained by the DMRG calculation, with \(S_{\rm max}=N/2\). Here we consider a square lattice with nearest-neighbor hopping only, but with magnetic flux \(\phi\) per plaquette; \(\phi=0,\pi,2\pi/3,\pi/2\) and \(2\pi/5\) from top to bottom. The DMRG calculation is performed in a \(4\times 10\) strip with bond dimension \(\chi=1400\). (b) The single-particle dispersion around the bottom of the band for the different magnetic fluxes corresponding to (a).
Figure 4: (a) The ferromagnetic regime indicated by DMRG calculation in the \(n-t^{\prime}\) phase diagram of the distorted honeycomb lattice. The model is shown in Fig. 1(c), in which \(t^{\prime}\) denotes the next nearest hopping along the horizontal dashed line. The DMRG calculation is performed in a \(3\times 6\) zigzag cylinder, with bond dimension \(\chi=1400\). The dashed line indicates the value of \(t^{\prime}\) for the Lifshitz transition. (b) The single-particle dispersion of the distorted honeycomb lattice, with \(t^{\prime}=-0.1\) (left), \(=-0.25\) (middle) and \(=-0.4\) (right).
quette. Here we only consider the nearest-neighbor hopping on the square lattice, and the magnetic flux only introduces a phase in the hopping and does not cause Zeeman splitting. However, with magnetic flux \(\phi=2\pi/m\) in each plaquette (\(m\) is an integer), translation by one lattice spacing along the \(\hat{x}\) direction does not commute with translation by one lattice spacing along the \(\hat{y}\) direction, and only commutes with translation by \(m\) lattice spacings along the \(\hat{y}\) direction. This enlarges the unit cell and leads to an \(m\)-fold degeneracy of the dispersion spectrum, as shown in Fig. 5(b). For a fixed magnetic flux, the only tunable parameter is the density. In Fig. 5(a), we show the total magnetization as a function of fermion density. It is interesting to note that when \(m=1\), the single-particle dispersion is not degenerate but the hopping satisfies the condition for Nagaoka ferromagnetism. Hence, DMRG finds ferromagnetism near half-filling. In contrast, when \(m>1\), the hopping terms no longer satisfy the condition for Nagaoka ferromagnetism, but the ground-state degeneracy appears. We find a critical \(n_{\rm c}\) such that the system is ferromagnetic when \(n<n_{\rm c}\). We find that \(n_{\rm c}\) is approximately given by \(1/m\), which corresponds to a half-filled lowest band. Therefore, ferromagnetism occurs in the low-density regime [25]. This clearly shows that Nagaoka and Müller-Hartmann are two different mechanisms.
_Conclusion and Discussions._ In summary, we analyze a recent experiment that, for the first time, observes itinerant ferromagnetism in an ultracold-atom realization of the Fermi Hubbard model. We attribute the mechanism of this ferromagnetism to the appearance of single-particle ground-state degeneracy. At low density, the valley degeneracy is reminiscent of the multi-orbital physics in transition metals, and the local repulsive interaction can lead to a valley Hund's rule, resulting in ferromagnetic correlations. This mechanism was first discussed by Müller-Hartmann in a simpler one-dimensional model.
We remark that in discussing ferromagnetism, we implicitly assume that the Hamiltonian of the system does not break time-reversal symmetry. In the presence of time-reversal symmetry, the single-particle ground state should not be degenerate for a normal kinetic energy term, due to Feynman's no-node theorem. In order for the Müller-Hartmann mechanism to apply, the key is frustration plus the particle-hole transformation. Because of the lattice frustration, we can effectively insert flux into the hopping terms after a particle-hole transformation, such that the condition for Feynman's theorem is violated. That is why frustration plays an essential role here. Previously, frustration has been considered an essential ingredient for the emergence of exotic phases such as spin liquids [26]; the discussion here shows that frustration can also be a key ingredient responsible for the emergence of ferromagnetism.
_Acknowledgement._ We thank Hong Yao and Yingfei Gu for helpful discussions. This work is supported by Innovation Program for Quantum Science and Technology 2021ZD0302005, the Beijing Outstanding Young Scholar Program, the XPLORER Prize and China Postdoctoral Science Foundation (Grant No. 2022M711868). C.L. is supported by Chinese International Postdoctoral Exchange Fellowship Program and Shuimu Tsinghua Scholar Program at Tsinghua University.
|
2310.12448 | Quantum computer error structure probed by quantum error correction
syndrome measurements | With quantum devices rapidly approaching qualities and scales needed for
fault tolerance, the validity of simplified error models underpinning the study
of quantum error correction needs to be experimentally evaluated. In this work,
we have assessed the performance of IBM superconducting quantum computer
devices implementing heavy-hexagon code syndrome measurements with increasing
circuit sizes up to 23 qubits, against the error assumptions underpinning code
threshold calculations. Circuit operator change rate statistics in the presence
of depolarizing and biased noise were modelled using analytic functions of
error model parameters. Data from 16 repeated syndrome measurement cycles was
found to be inconsistent with a uniform depolarizing noise model, favouring
instead biased and inhomogeneous noise models. Spatial-temporal correlations
investigated via $Z$ stabilizer measurements revealed significant temporal
correlation in detection events. These results highlight the non-trivial
structure which may be present in the noise of quantum error correction
circuits, revealed by operator measurement statistics, and support the
development of noise-tailored codes and decoders to adapt. | Spiro Gicev, Lloyd C. L. Hollenberg, Muhammad Usman | 2023-10-19T03:55:44Z | http://arxiv.org/abs/2310.12448v2 | # Quantum computer error structure probed by quantum error correction syndrome measurements
###### Abstract
With quantum devices rapidly approaching qualities and scales needed for fault tolerance, the validity of simplified error models underpinning the study of quantum error correction needs to be experimentally evaluated. In this work, we have directly assessed the performance of superconducting devices implementing heavy-hexagon code syndrome measurements with increasing circuit sizes up to 23 qubits, against the error assumptions underpinning code threshold calculations. Data from 16 repeated syndrome measurement cycles was found to be inconsistent with a uniform depolarizing noise model, favouring instead biased and inhomogeneous noise models. Spatial-temporal correlations investigated via \(Z\) stabilizer measurements revealed significant temporal correlation in detection events. These results highlight the non-trivial structure which may be present in the noise of quantum error correction circuits and support the development of noise-tailored codes and decoders to adapt.
## 1 Introduction
Quantum computing has the potential to offer significant computational advantage for many computationally intensive tasks such as quantum materials simulations [1], optimisation [2], machine learning [3, 4, 5], and search in large databases [6]. However, currently available Noisy Intermediate Scale Quantum (NISQ) [7] devices exhibit prohibitively high error rates which can significantly hinder the successful execution of sufficiently deep quantum circuits relevant for algorithms of practical interest [8]. Although error rates of quantum devices can be expected to gradually decrease in the next few years, the implementation of quantum error correction (QEC) will be necessary to overcome the detrimental impact of errors and noise in a scalable manner [9]. The ultimate goal of these efforts is to execute error-corrected quantum algorithms of practical interest, a milestone known as fault-tolerant quantum computing (FTQC) [10]. This requires overcoming challenges distinct from those of focus in the NISQ era [11, 12]. Competitive FTQC resource estimates, usually based on surface code QEC, require millions of physical qubits and extended periods of stable operation to encode the required number of logical qubits and facilitate logical qubit operations [13, 14, 15]. Accurate evaluation of device performance as progress is made towards this regime is needed to estimate how and when FTQC will be a realistic possibility.
The progression of device quality towards FTQC standards is usually indicated by comparing device error rates, such as those found with methods like randomized benchmarking [16, 17], to QEC code threshold error rates calculated by simulation [18, 19, 20]. As these simulations often make many simplifying approximations and assumptions regarding noise, they can often be treated only as order of magnitude estimates for the parameters necessary/sufficient for real physical devices performing QEC, such as \(\approx 1\%\) for surface codes [21, 22]. Additionally, the error rates and decoherence times obtained by characterization methods can also falsely suggest that noise in such devices is well described by single-qubit processes, when in reality additional effects such as cross-talk and leakage may also be present [23]. Simultaneous randomized benchmarking partially addresses effects arising from concurrent
operations [24]. However, such methods still evaluate performance using sets of circuits that are largely disjoint from those of FTQC. Metrics of device noise derived primarily from QEC-related circuits are more ideal indicators of progress made towards FTQC as they are sensitive precisely to the characteristics of noise relevant for FTQC.
Recently, Debroy et al. attempted to address the impact of context-related errors with the Context Aware Fidelity Estimation (CAFE) framework [25]. This was shown to be able to obtain knowledge of qubit error characteristics present specifically during stabilizer measurement which was not visible in individual gate characterization. There has been persistent effort towards the idea of using measurements produced by QEC circuits themselves as data for characterization protocols. The foundational results on QEC-based noise characterization initially investigated protocols theoretically and made arguments that relevant error characteristics can be extracted exclusively using (sometimes modified) stabilizer measurement circuits [26, 27, 28]. Later, similar arguments were made when investigating the accurate tracking of time-dependent error characteristics [29, 30]. Wagner et al. continued this effort by providing additional clarification as to the characteristics that can be learned using exclusively syndrome data [31, 32]. With the use of repetition codes, Wootton demonstrated experimentally that error rates can be found for superconducting transmon qubits based on syndrome change events of repetition codes [33]. With quantum device sizes and available control techniques continuing to progress, more of these techniques will likely see further development and testing in the near future.
Superconducting quantum devices can now contain up to hundreds of individually addressable physical qubits [34, 35]. However, still in the NISQ era, the performance evaluation of these devices has focused primarily on metrics of individual circuit operations [36, 37], general-purpose quantum device metrics [38], demonstrations of quantum properties [39] and optimized instances of individual NISQ algorithms [40]. Multiple groups have also characterized the performance of implementations of small QEC codes, including details of operator measurement behaviour [41, 42, 43, 44, 45, 46, 47, 48, 49]. The conclusions drawn from these results all have the benefit of being naturally sensitive to device details that are relevant to FTQC. Recent results include work from groups that benchmark the logical error rates of codes at different distances [41, 50], perform logical qubit operations with lattice surgery [51, 44] and verify topological properties [52, 53]. These act as effectively system-scale benchmarks but often use operator measurement performance to accurately characterize device performance in the intermediate regime between individual qubits and gates and complete quantum algorithms. Previous experimental literature featuring heavy-hexagon codes has focused on characterizing logical qubit preparation, measurement, and decoding for heavy-hexagon codes of distance-2 and distance-3 using superconducting transmon qubits [47, 48]. These investigations, based on data after the decoding process, revealed instances of leakage, cross-talk and unwanted interactions which can be present in real QEC operation, highlighting the inadequacy of the uniform depolarizing noise model at the foundation of the field. Here, we have conducted a study focused on what the statistics of quantum error correction operator measurements alone can reveal about a quantum device's noise characteristics at different circuit scales. The methodology we use allows an investigation of device characteristics relevant for quantum error correction circuits without requiring focus on the additional details of decoding, particularly relevant to the above threshold regime.
In this work, we investigated the performance of transmon superconducting devices [54] performing syndrome measurement circuits of heavy-hexagon codes [20, 47, 48]. The performance across different scales of circuits relevant to heavy-hexagon codes is examined by applying techniques previously introduced for other QEC codes [49, 41, 33, 42]. We found that circuits of all scales investigated in our work required noise models beyond uniform depolarizing noise in order to be consistent with experimental results. Effects consistent with features of inhomogeneity, biased measurement errors, amplitude damping noise and a bias towards \(Z\) errors are present in the noise of the system. Additionally, performance of larger circuits is not readily explicable by the results of performance of smaller circuits. When modelling these circuits with standard QEC error models, our results show that the effective error rates necessary to describe results
tend to increase with respect to circuit size. We conclude by discussing the severity of these effects, possible approaches to their management, and the implications they would have on FTQC.
The remainder of this paper is organized as follows. In Sec. 2.1 we briefly review error models of quantum circuits. In Sec. 2.2 we give a brief overview of measurements in QEC. In Sec. 2.3 we investigate the performance of individual gauge operator measurement circuits. In Sec. 2.4 we investigate the performance of simultaneous measurement of multiple gauge operators to infer measurements of stabilizer generators. Repeated measurement of stabilizer operators is investigated in Sec. 2.5 and Sec. 2.6, where we focus on individual measurements and correlations among measurements respectively. Throughout Sec. 2, we compare experimental results with simulation to investigate whether the observed behaviour is explicable by standard error models used in QEC literature and whether the performance scales according to expectations. Finally, in Sec. 3 we provide a summary of our work.
## 2 Results
### Error Models of Quantum Circuits
Noise is pervasive in quantum systems, and quickly drowns out quantum effects if left unchecked. Examples of contributions to noise include qubit frequency drift, environmental interactions, thermal relaxation and stochastic/systematic gate errors. In quantum circuits, these phenomena are often modelled by applying noise channels to qubits after every intended operation applied to them. Due to their efficient simulation, Pauli noise models, and in particular the uniform depolarizing noise model, are a popular category of error models and underlie results regarding threshold theorems in QEC [55]. Many distinct noise models have been developed, however, which requires care to be taken when comparing different results.
Figure 1 (a) shows one particular example of a quantum circuit preparing and measuring a GHZ state with noise channels shown explicitly, which are similar to those used in the rest of this text. The distinct channels shown correspond to reset noise after every instance of qubit preparation, delay noise after every instance where a qubit is idle, two-qubit noise after every two-qubit gate and readout noise after every measurement operation. Single qubit depolarizing noise corresponds to the operation,
\[\rho\rightarrow(1-p)\rho+\frac{p}{4}\sum_{i=0}^{3}\sigma_{i}\rho\sigma_{i}, \tag{1}\]
where \(p\) is the depolarization parameter, \(\sigma_{i}\) (\(i=1,2,3\)) are the single-qubit Pauli operators and \(\sigma_{0}\) is the identity [56]. Similarly, two-qubit depolarizing noise corresponds to the operation,
\[\rho\rightarrow(1-p)\rho+\frac{p}{16}\sum_{i=0}^{3}\sum_{j=0}^{3}\sigma_{i} \otimes\sigma_{j}\rho\sigma_{i}\otimes\sigma_{j}. \tag{2}\]
State preparation and measurement fails with probability \(p/2\), consistent with the probability of depolarizing noise events causing such processes to fail. Modifications to these channels can be made in order to better fit experimental device characteristics. For example, biased noise models increase the probability of \(Z\)-type errors occurring compared to other errors [57]. When parameterized with a bias parameter \(\eta\) and error rate \(p\), single qubit errors perform the operation,
\[\rho\rightarrow(1-p)\rho+p\sum_{i=1}^{3}r_{i}\sigma_{i}\rho\sigma_{i}, \tag{3}\]
where \(r_{1}=r_{2}=1/[2(\eta+1)]\) and \(r_{3}=\eta/(\eta+1)\). Lastly, the inhomogeneous noise model modifies uniform depolarizing noise by allowing varying noise intensity across different qubits (see Appendix Sec. A.1 for further descriptions of some other channels and the nuances involved with parameterization).
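The channels of Eqs. (1)-(3) can be written down directly as maps on density matrices. The following is a minimal numpy sketch of our own (not code from this work) that implements them and illustrates the \(Z\) bias by comparing how \(|0\rangle\) and \(|+\rangle\) are affected as \(\eta\) grows:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)
PAULIS = [I2, X, Y, Z]

def depolarize1(rho, p):
    """Single-qubit uniform depolarizing channel, Eq. (1); p is the depolarization parameter."""
    return (1 - p) * rho + (p / 4) * sum(s @ rho @ s for s in PAULIS)

def depolarize2(rho, p):
    """Two-qubit uniform depolarizing channel, Eq. (2)."""
    return (1 - p) * rho + (p / 16) * sum(
        np.kron(a, b) @ rho @ np.kron(a, b) for a in PAULIS for b in PAULIS)

def biased1(rho, p, eta):
    """Single-qubit biased Pauli channel, Eq. (3); p is the error rate, eta the Z bias."""
    r = [1 / (2 * (eta + 1)), 1 / (2 * (eta + 1)), eta / (eta + 1)]
    return (1 - p) * rho + p * sum(ri * s @ rho @ s for ri, s in zip(r, [X, Y, Z]))

bell = np.zeros((4, 4), dtype=complex)
bell[0, 0] = bell[0, 3] = bell[3, 0] = bell[3, 3] = 0.5   # |Phi+><Phi+|
assert np.isclose(depolarize2(bell, 0.05).trace().real, 1.0)  # channel is trace preserving

zero = np.diag([1.0, 0.0]).astype(complex)        # |0><0|
plus = np.full((2, 2), 0.5, dtype=complex)        # |+><+|
for eta in (1, 10, 100):
    f0 = (zero @ biased1(zero, 0.1, eta)).trace().real
    fp = (plus @ biased1(plus, 0.1, eta)).trace().real
    print(f"eta = {eta:>3}: fidelity of |0> = {f0:.4f}, fidelity of |+> = {fp:.4f}")
```

As \(\eta\) increases, the \(|0\rangle\) fidelity approaches \(1\) (only \(X\) or \(Y\) errors flip it, with probability \(p/(\eta+1)\)), while the \(|+\rangle\) fidelity is reduced by \(p(2\eta+1)/(2\eta+2)\), the same combination that appears later in the discussion of state preparation and measurement errors.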
Multiple additional choices/assumptions are made when constructing noise models. For example, a choice must be made on whether each individual single qubit gate is modelled to take a single time step, with an associated error channel, or be treated differently. As single-qubit gates are often the fastest and lowest error rate gates in superconducting systems, they are often neglected in regimes where other gate errors dominate. Another nuance is whether to use \(p\) to represent the error rate, corresponding to the probability that a nontrivial Pauli error is applied, or to represent the depolarization probability, corresponding to the probability that qubits become maximally mixed. In this work, we have parameterized uniform depolarizing noise using a depolarization parameter and biased noise using an error
rate. In general, a particular noise model is fully specified only when each distinct noise channel being used is provided.
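As a concrete illustration of this parameterization nuance: for the uniform channels of Eqs. (1) and (2), the identity term is included in the sums, so the two conventions are related by

\[p_{\rm err}^{(1)}=\tfrac{3}{4}\,p,\qquad p_{\rm err}^{(2)}=\tfrac{15}{16}\,p,\]

for single- and two-qubit depolarizing noise respectively; quoted numbers should therefore always be read together with the convention in use.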
Figure 1(b) shows how measurement probabilities differ for a pristine circuit compared to uniform depolarizing noise, \(Z\)-biased noise and inhomogeneous error models. We can see that some detail of the noise model is imprinted in the noise statistics of \(Z\)-basis measurements. Measurements in other bases would also need to be performed to capture all available information.
While the measurement statistics of quantum circuits are a function of the noise present in the system, the converse is not true in general, with distinct noise models being able to give identical measurement statistics. The simplest manifestation of this phenomenon occurs for the task of distinguishing reset errors from measurement errors in a simple prepare and measure circuit for the \(\ket{0}\) state. This generalizes to larger systems and other errors, even if access is given to all possible circuits [58]. Nevertheless, measurement statistics can still help rule out particular classes of noise models, in favour of others, which can help develop understanding regarding the noise present in circuits and assist the design of QEC codes.
### Quantum Error Correction Measurements
Operator measurement circuits are expected to constitute a large portion of the instructions given to quantum devices executing fault-tolerant quantum algorithms [13, 14, 15]. Figure 2 shows examples of these circuits for heavy-hexagon codes. Ideally, multiple instances of these circuits are performed in parallel across a quantum device for codes of any size, using circuits of constant depth. The measurement results of these circuits reveal information about errors which may be present at the start of the circuit or which occur throughout the circuit [59]. Importantly, measurements made in this way do not reveal any information about the encoded quantum information itself. The accurate execution of these circuits is one of the most important contributions to the performance of a quantum error correction implementation.
In this work, we investigate methods of evaluating the performance of stabilizer measurement in quantum error correction at multiple system scales. We begin our investigation by examining the performance of the smallest elements of heavy-hexagon quantum error correction circuits. This corresponds to evaluating qubit, gate and low depth circuit performance.
### Gauge Operator Measurement
We begin our main investigation by considering the measurement of individual operators. The individual operators able to be measured for heavy-hexagon codes are the gauge operators of the code. Their simultaneous measurement can be used to infer the result of stabilizer operator measurements, which is discussed later in Sec. 2.4. Measurement of quantum operators corresponds to projection operations applied to the wavefunction of a quantum state [56]. The applied projection is in general stochastic, but is heralded by the obtained eigenvalue of the measured observable. As these measurements are expected to comprise a significant portion of the circuit operations applied to a fault-tolerant quantum computer, characterization and optimized execution of these circuits are of paramount importance.
A performance metric for these individual operator measurement circuits should capture the degree to which these circuits faithfully measure the required operator. This corresponds to the simultaneous process of applying a correct projection
Figure 1: Simulation of quantum circuits under the influence of error models. (a) shows a quantum circuit for the preparation of a GHZ state, with errors indicated after every circuit operation. Reset, measurement, identity and two-qubit errors are represented by red, pink, yellow and cyan markers respectively. (b) shows the \(Z\)-basis measurement statistics obtained under no noise (dotted outline) compared with \(p=0.05\) uniform depolarizing noise, \((p,\eta)=(0.0375,10)\) biased noise and inhomogeneous noise corresponding to \(p=0.01\) everywhere except qubit 2 which was given \(p=0.2\) for single qubit gates.
to the quantum state and returning the correct corresponding eigenvalue. Here, we investigated the accuracy of the returned eigenvalues with the use of prepare and measure circuits [42]. Using 27-qubit processors, we can prepare \(2^{n}\) computational basis states to measure \(Z\) operators and \(2^{n}\) of related product states in the \(X\) basis in order to measure \(X\) operators. Here \(n\) is the number of data qubits corresponding to the operator being measured. We assign a measurement change rate for each initial state by recording the fraction of shots for which the measured stabilizer value was inconsistent with the initially prepared state. Results are shown using ibmq_montreal in Figure 3.
Figure 3 shows the change rates obtained when measuring gauge operators which correspond to two data qubit \(ZZ\) gauge operators, four data qubit flagged \(XXXX\) gauge operators and two data qubit flagged \(XX\) gauge operators. To resolve two-qubit gate connectivity requirements, the \(X\) operators make use of additional ancilla qubits referred to as flag qubits [20]. Tile representations of these operators are shown in the first column of Figure 3. Uniform time-step circuits in the second column of Figure 3 perform prepare and measure experiments on product state inputs. Lines between the control and target of each CNOT gate are sometimes partially covered to allow simultaneously executing gates to be drawn on the same time step.
The third column of Figure 3 shows exact change rates which occur in simulation for each operator as a function of the depolarizing noise parameter, \(p\) (see Appendix Sec. A.1 for the relation to the depolarizing error rate), as well as the lowest order dependence (dotted line). The error model used applies depolarizing noise of strength determined by parameter \(p\) after every time step, with two-qubit depolarizing noise applied after each CNOT. We note that the coefficient of the linear dependence of each operator change rate function corresponds to the number of unique time steps where a depolarizing error can change the relevant ancilla qubit's measurement result, divided by two (see Appendix Sec. C.1.1). Under this error model, performance of \(XXXX\) flagged operator measurements diminishes most rapidly, followed by \(XX\) flagged operator measurements and lastly \(ZZ\) operator measurements, which is as expected given the depth and number of qubits in each circuit. The relation between operator change rate and noise intensity supports the intuition that each quantity should be readily
Figure 2: A heavy-hexagon code and the associated gauge operators. (a) shows qubits of a distance-5 heavy-hexagon code on a heavy-hexagon lattice, together with two-qubit \(Z\) gauge operators (blue) and four-qubit \(X\) flagged gauge operators (red), which also act on two qubits at boundaries. Labelled examples of a two-qubit \(Z\) gauge operator and a four-qubit \(X\) gauge operator are shown adjacent. (b) and (c) show circuits to measure two-qubit \(Z\) gauge operators and four-qubit \(X\) flagged gauge operators, respectively.
estimated when given the other. This is especially true in the approximately linear regime corresponding to \(p<1\%\), where FTQC is expected to be below threshold and hence viable.
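These change-rate-versus-\(p\) curves can also be estimated by a simple Monte Carlo over Pauli faults. The sketch below is our own simplification of the \(ZZ\) gauge measurement circuit of Fig. 2(b) with data prepared in \(|00\rangle\); the assumed fault locations (reset flips, a two-qubit depolarizing channel after each CNOT, one idle error, and a measurement flip) are illustrative, so the linear coefficient it produces need not match Appendix Sec. C.1.1 exactly.

```python
import numpy as np

rng = np.random.default_rng(0)

def pauli2(p):
    """Two-qubit depolarizing fault: with probability p draw one of the 15 non-identity Paulis."""
    if rng.random() >= p:
        return 0, 0
    r = int(rng.integers(1, 16))
    return r // 4, r % 4                       # Pauli codes per qubit: 0=I, 1=X, 2=Y, 3=Z

def pauli1(p):
    return int(rng.integers(1, 4)) if rng.random() < p else 0

def flips_z(code):
    return code in (1, 2)                      # X and Y components flip a Z-basis outcome

def zz_change_rate(p, shots=100_000):
    changed = 0
    for _ in range(shots):
        x = [rng.random() < p / 2 for _ in range(3)]   # X frame on data0, data1, ancilla (reset flips)
        x[2] ^= x[0]                                   # CNOT data0 -> ancilla propagates X errors
        a, b = pauli2(p); x[0] ^= flips_z(a); x[2] ^= flips_z(b)
        x[1] ^= flips_z(pauli1(p))                     # idle error on the waiting data qubit
        x[2] ^= x[1]                                   # CNOT data1 -> ancilla
        a, b = pauli2(p); x[1] ^= flips_z(a); x[2] ^= flips_z(b)
        if x[2] ^ (rng.random() < p / 2):              # ancilla readout with measurement flip
            changed += 1
    return changed / shots

for p in (0.005, 0.01, 0.02, 0.04):
    print(f"p = {p}: estimated ZZ change rate = {zz_change_rate(p):.4f}")
```

Because only \(X\)-type components reaching the ancilla affect its \(Z\)-basis readout, tracking the \(X\) frame alone is exact for this fault set; in the small-\(p\) regime the printed rates grow approximately linearly, as in the third column of Figure 3.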
Lastly, the fourth column of Figure 3 shows experimental results for the change rates observed on ibmq_montreal when different product states were prepared as input states to transpiled versions of the circuits shown in the second column of Figure 3. Bar heights correspond to averages observed across 34 device calibrations and error bars correspond to the standard deviation observed across these calibrations.
We find that on average the highest accuracy is obtained when measuring \(ZZ\) operators on \(Z\) basis product states. The lowest average change rate occurs for \(|00\rangle\), followed by \(|01\rangle\), \(|11\rangle\) and finally \(|10\rangle\). When simulated with a uniform depolarizing error model, input states do not affect the change rate of the gauge operators. If the error rates of the \(X\) gates required to prepare \(|1\rangle\) states are not neglected, then, considering typical results from randomized benchmarking errors, the experimental preparation of \(|1\rangle\) states would have an error rate increased by approximately \(2\times 10^{-4}\). This is negligible compared to the typical preparation and readout assignment error rates on the order of \(10^{-2}\). The asymmetry present in the experimental data can be understood as arising from asymmetric measurement and thermal relaxation errors. The lowest change rate occurring for \(|00\rangle\) is consistent with it being the least negatively affected by the phenomena of relaxation errors and biased measurement, due to \(|0\rangle\) states being lower energy than \(|1\rangle\) states and the \(ZZ\) eigenvalues taking a \(+1\) value.
For the \(X\) operator measurements of Figure 3, we find that the average change rate of states with \(+1\) eigenvalues is significantly lower than the average change rate of states with \(-1\) eigenvalues. Again, such a dependence on the input state does not occur during uniform de
Figure 3: Gauge operator measurement circuit evaluation for the ibmq_montreal device. The three rows (a), (b) and (c) correspond to the benchmarking of \(ZZ\) gauge operators, \(XX\) flagged gauge operators and \(XXXX\) flagged gauge operators. The first column shows the diagram tile representation of each operator. The second column shows circuits which were used for theoretical simulation. The third column shows theoretical operator change rates, \(R\), as a function of depolarizing error parameter, \(p\), for each operator measurement circuit. A dotted line shows the linear component which corresponds to the contribution from single error events. The fourth column shows the experimentally measured operator change rates for each separable eigenstate of the operator under investigation. Values are calculated by averaging over 34 calibrations and all error bars represent one standard deviation.
polarizing noise simulations, but can be better explained by including noise structure such as asymmetric measurement (see Appendix Figure 12). The average change rates for the \(ZZ\), \(XX\) flagged and \(XXXX\) flagged operators observed experimentally were approximately 0.08, 0.18 and 0.17 respectively. When compared with uniform depolarizing noise simulation, this corresponds to error models with depolarizing noise parameters, \(p\), of 0.03, 0.04, and 0.03 respectively. Finally, for this sub-investigation, we take note of the large standard deviation observed in the results across different calibrations, consistent with effective error rate parameters changing by as much as \(\pm 50\%\). Similar results are not uncommon among error rates found with randomized benchmarking experiments due to processes such as drift [60]. When calculating binomial confidence intervals for the mean across calibrations much smaller error bars can be achieved (see Figure 15). This suggests that differences in change rates that depend on data qubit input state on average are quite statistically significant, but can also vary significantly across different calibrations and device locations.
Our results found that two-qubit \(ZZ\) gauge operators had the lowest average change rates, with the change rates of the \(XX\) flagged gauge operators and \(XXXX\) flagged gauge operators being slightly higher on average. Experimentally, operator change rates were found to depend on the initial state of the circuit, which does not occur for uniform depolarizing noise simulations. Additional results when gauge operator types are reversed can be found in Appendix Figure 19 and give results consistent with the presence of noise beyond uniform depolarizing noise.
### Stabilizer Generator Measurement
We continue our investigation by examining measurements of stabilizer operators of the heavy-hexagon code on superconducting devices. For the heavy-hexagon code, stabilizer operators correspond to products of gauge operators, with the eigenvalue of the stabilizer operator corresponding to the product of the eigenvalues of the gauge operators which compose it. For example, the eigenvalue of the six-qubit Bacon-Shor type stabilizer operator of the distance-3 heavy-hexagon code \(X_{0}X_{1}X_{3}X_{4}X_{6}X_{7}\), numbering data qubits top left to bottom right (see Figure 4), can be calculated by taking the product of the results of gauge operator measurements of \(X_{0}X_{1}\) and \(X_{3}X_{4}X_{6}X_{7}\) [20]. The number of qubits the Bacon-Shor type stabilizers act upon increases with code distance, and for general code distance, \(d\), is equal to \(2d\) qubits. The surface-code type stabilizers, however, always act on either two or four qubits. An example of a two-qubit surface-code type stabilizer operator corresponding to a single gauge operator is \(Z_{2}Z_{5}\), and a four-qubit surface-code type stabilizer operator is \(Z_{0}Z_{1}Z_{3}Z_{4}\), which has its eigenvalue calculated from the product of the gauge operator eigenvalues of \(Z_{0}Z_{3}\) and \(Z_{1}Z_{4}\). The commutation of the gauge operators with the stabilizer operators and logical operators is what allows the eigenvalues of stabilizer operators to be calculated using measurements of the smaller gauge operators, without affecting subsequent operator measurements of interest [61].
We measured individual stabilizer operators of the distance-3 code on the ibmq_montreal device and calculated individual stabilizer operator change rates. This was done by performing prepare and measure circuits similar to those
Figure 4: Stabilizer operator evaluation for the ibmq_montreal device. Stabilizer change rates, \(R\), for \(Z\) stabilizers and \(X\) stabilizers are shown in (a) and (b) respectively. Theoretical change rates of each of the four-qubit \(Z\) stabilizers and the six-qubit \(X\) stabilizers are shown as a function of \(p\), the depolarizing noise parameter. A dotted line shows the linear component of the dependence.
used for gauge operator characterization, however with the simultaneous measurement of the gauge operators needed for each stabilizer operator. Results are shown in Figure 4. Experimentally, we find average change rates of 0.21 for four-qubit surface code type operators and 0.37 for Bacon-Shor type operators. The uniform depolarizing noise parameter, \(p\), consistent with these average experimental stabilizer change rates is 0.044 for four-qubit surface code type \(Z\) operators and 0.052 for six-qubit Bacon-Shor type \(X\) operators. This is noticeably larger than the depolarizing noise parameters found from gauge operator change rates. Using \(p_{\mathrm{min}}=0.03\) and \(p_{\mathrm{max}}=0.04\), the least and greatest depolarizing noise parameters fitted from the gauge measurement results, consistency would require change rates between 0.15 and 0.19 for the four-qubit surface code type operators and between 0.27 and 0.32 for the six-qubit Bacon-Shor type operators. Experimentally, we measured larger average change rates for both stabilizer operators, with Bacon-Shor type operators increasing the most significantly. This increase suggests that the larger circuits required to measure stabilizer operators have additional sources of error, which cannot be explained by a uniform depolarizing noise model. As the gauge operators which correspond to each stabilizer operator share no support, this increase in depolarizing noise parameter suggests increased effective error rates due to effects such as cross-talk.
For the heavy-hexagon code, stabilizer change rates can also be calculated for the stabilizer operator measurements based on gauge operator measurement experimental data. As the measurement of individual \(X\) and \(Z\) stabilizers can be decomposed into the independent measurement of \(XXXX\) flagged gauge operators, \(XX\) flagged gauge operators and \(ZZ\) operators, their theoretical change rates can be calculated from the change rates of smaller operators using the formula,
\[P(A\oplus B)=P(A)(1-P(B))+P(B)(1-P(A)), \tag{4}\]
where \(A\) and \(B\) are independent events and \(P(A\oplus B)\) is the probability that exactly one has happened. This can be applied to the stabilizers composed of two gauge operators in the distance-3 heavy-hexagon code under the assumption that gauge operator changes are independent events. Under the uniform depolarizing noise model this assumption is trivially true, as the sets of qubits comprising each gauge operator measurement circuit are disjoint.
As a final comparison, we compare gauge operator measurement with stabilizer operator measurement using only experimental change rate data and Equation 4. We note that this requires the assumption that there are no significant error correlations occurring in the combined system. With this assumption, the expected change rates are 0.15 and 0.29 for the surface code type stabilizers and Bacon-Shor type stabilizers respectively, which are consistent with the theoretical predictions and lower than what is realized in experiment.
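For reference, these numbers follow in one line from Equation 4 applied to the average gauge change rates quoted in Sec. 2.3 (a quick check of our own):

```python
def combine(pa, pb):
    """Probability that exactly one of two independent gauge-operator changes occurs (Equation 4)."""
    return pa * (1 - pb) + pb * (1 - pa)

print(combine(0.08, 0.08))   # two ZZ gauge operators -> four-qubit surface-code-type Z stabilizer, ~0.15
print(combine(0.18, 0.17))   # XX and XXXX flagged gauge operators -> Bacon-Shor-type X stabilizer, ~0.29
```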
As stabilizer operators are higher weight and measured as a product of gauge operator measurements, the operator change rates are generally expected to increase compared to individual gauge operator change rates. However, experimental results show that change rates increase more than what would be expected based on models which ignored the effects of correlated errors. This is consistent with past observations of cross-talk in superconducting transmon devices in the context of gate benchmarking and heavy-hexagon code decoding in the presence of multiple simultaneous operations including mid-circuit measurement [24, 47, 48].
### Repeated Operator Measurements
We now turn our attention to simultaneous operator measurements, as these will need to be performed repeatedly in order to be able to correct circuit errors in two-dimensional QEC codes. Results are shown in Figure 5, with parts (a-c) showing change rates of circuits consisting of 16 repetitions of \(Z\) operator measurements, \(X\) operator measurements, and full heavy-hexagon syndrome measurement respectively. When fitting experimental results to a uniform depolarizing noise model, using the mean squared error in predicted results as a cost function, the optimal simultaneous fit was found with a depolarization parameter of \(p=0.048\). This is consistent with values suggested by the stabilizer generator measurement characterization of Sec. 2.4. However, for each experiment, multiple distinct phenomena arise which warrant further discussion.
In Figure 5 (a) we show the behaviour of
gauge operator change rates across 16-cycle circuits, with data qubits prepared in the \(|0\rangle^{\otimes 9}\) state (see Appendix Figure 9). Experimentally, we find a significant spread in change rates across different operators and different cycles. Under the uniform depolarizing noise model this does not occur, due to all \(Z\) gauge operator measurement circuits being close to equivalent, up to small differences associated with the order of measurement. In the experimental data we observe that operator change rates are low for the first cycle, tend to increase as more cycles are performed, and decrease for the final cycle. The reductions in change rates observed for \(Z\) gauge operators in the first and final cycle are consistent with the uniform depolarizing noise model and arise because these cycles correspond to comparisons between operator values inferred from data qubit preparation and measurement [41, 50]. However, the tendency for change rates to increase across the intermediate cycles cannot be explained by a uniform depolarizing noise model, which has constant change rates across the intermediate cycles. The phenomenon of increasing change rates has been attributed to leakage in the past but can also have contributions from measurement error asymmetry (when \(-1\) eigenvalue measurements experience greater errors than \(+1\) eigenvalue measurements) when all the eigenvalues are set to \(+1\) [41].
In Figure 5 (b) we show the behaviour of \(X\) gauge operator change rates across 16 cycle circuits, measuring only the \(X\) gauge operators of a distance-3 heavy-hexagon code and preparing data qubits in the \(|+\rangle^{\otimes 9}\) state (see Appendix Figure 10). Experimentally, we find that the change rates of each operator remain close to 50%, which corresponds to the high error rate regime. The blue two-qubit \(X\) gauge operator has a change rate slightly beneath 50% for approximately half of the intermediate cycles. However, other than this, there is no consistent distinction in change rate between four- and two-qubit \(X\) gauge operators. There is also no distinct reduction in experimental operator change rates at the initial and final cycle as expected from theory. In order to achieve similar effects under a uniform depolarizing noise model, a depolarizing parameter significantly larger than what is consistent with results in Figure 5 part (a) is required. These results suggest that the increased circuit size of repeated measurements significantly increases the
Figure 5: Operator change rates, \(R\), as a function of measurement cycle for repeated operator measurement experiments on ibmq_montreal and in simulation under uniform depolarizing, biased and inhomogeneous noise models. Error bars are calculated as 95% confidence intervals. Results when only \(Z\) gauge operators are measured are shown in (a). Results when only \(X\) gauge operators are measured are shown in (b). Results when both \(X\) and \(Z\) stabilizer operators are measured are shown in (c). The uniform depolarizing noise simulation had a depolarization parameter \(p=0.048\). The biased noise simulation had error rate \(p=0.055\) and bias parameter \(\eta=8\). The inhomogeneous noise simulation had a mean depolarizing noise parameter of \(\bar{p}=0.044\). The legend on the right indicates which operator each coloured line corresponds to for each row. Overlapping simulation lines are indicated by boxes.
effective error rate compared to individual stabilizer measurement shown in Figure 4 part (a). In the context of quantum circuits, this is consistent with the presence of cross-talk. The increased change rate of \(X\) operators is consistent with mid-circuit measurement-induced phase rotations previously observed in heavy-hexagon code decoding [47]. Accurate modelling of cross-talk requires noise models beyond the independent depolarizing noise model, for example with the use of noise operators which apply noise channels to qubits beyond those where a gate was performed [50, 62]. Further study is required to develop accurate measures of the significance of these effects on different classes of quantum circuits.
Lastly, we investigate the performance of 16 cycles of repeated full syndrome measurement circuits for both \(X\) and \(Z\) stabilizers of the distance-3 heavy-hexagon code. Data qubits were prepared in the \(|0\rangle^{\otimes 9}\) state and \(X\) stabilizers were measured first, as shown in Appendix Figure 11. Figure 5 (c) shows the rates at which stabilizer operators changed values each cycle over a circuit of 16 cycles. Theoretically, under uniform depolarizing noise, we expect change rate curves to arrange into three groups corresponding to (in order from highest to lowest expected change rate) \(X\) stabilizers, four-qubit \(Z\) stabilizers and two-qubit \(Z\) stabilizers. Experimentally, we found that both of the \(X\) stabilizers maintained change rates close to 50% for the majority of the cycles. The \(Z\) stabilizers retained more varied change rates amongst each other and across different cycle numbers. Compared to \(Z\) gauge operator measurement circuits, most operator change rates are significantly higher. The exception is one operator which corresponds both to the two-qubit stabilizer coloured purple in (c) and the gauge operator coloured red in (a). Under a uniform depolarizing noise model, this should not be the case. Other, less readily explicable, features were also present, such as decreasing stabilizer change rates within the first four cycles and increasing stabilizer change rates within the last four cycles. These effects do not occur in simulations using only time-independent depolarizing noise.
Difficulty was experienced in fitting the change rate curves to the experimental data in Figure 5 using the uniform depolarizing noise model. Increasing the depolarization parameter increases both \(X\) and \(Z\) operator change rates, however an improved fit required \(X\) operator change rates to increase while decreasing \(Z\) operator change rates. A phenomenon which can cause this effect is the presence of noise biased towards \(Z\) errors. To test whether this is sufficient to explain the results observed experimentally, we modelled the system with biased uniform depolarizing noise described by error rate \(p\) and bias parameter \(\eta\) (see Appendix Sec. A.1). We found an improved fit by using \(p=0.06\) and \(\eta=16\), which indicates that the experimental results are better described by an error model in which \(Z\) errors occur with sixteen times the probability of other possible errors (those which include only \(X\) or \(Y\) components). This is consistent with other results shown in the literature, where the noise of superconducting qubits is also found to be biased towards \(Z\) errors [63].
While the use of biased noise slightly improves the fit to experimental change rates compared to isotropic depolarizing noise, adding noise bias was unable to split the change rate curves of equivalent operators. This was expected to be caused by the common simplifying assumption that all qubits are described by identical, homogeneous error characteristics. To investigate whether dropping this assumption can be another alternative to achieve a better fit, we fit the experimental data to an inhomogeneous noise model, where each qubit has an independent depolarizing noise parameter. To keep the parameter space small, the depolarizing parameter associated with two-qubit operations was set to the arithmetic mean of the depolarizing parameters of the two participating qubits (see Appendix Figure 18). Results are shown in the final data column of Figure 5. Using the mean squared error of predicted change rates as a cost function, difficulty was experienced in finding a simultaneously good fit to all three experiments due to the increased parameter space and the phenomenon of local minima in the cost landscape. Fitting parameters to one experiment at a time leads to poor generalization across other experiments (see Appendix Figure 17). This is also likely influenced by theoretical limits to the information provided by operator measurements in fitting noise models [31]. An average depolarizing noise parameter of \(\bar{p}=0.044\) was found after simultaneous optimization. Similar qualities of fit are likely possible with modified parameters, due to information theoretical limits
as well as local minima. While the use of inhomogeneous noise models allows many of the orderings in the experimental change rates to be modelled correctly, important discrepancies remain which are indicative of additional features beyond bias and inhomogeneity.
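For concreteness, the fitting loop can be sketched as below; this is only an illustration of the cost-function idea, not the pipeline used here, and `simulate_change_rates` is a placeholder for a noise-model simulator that maps per-qubit depolarizing parameters to predicted operator change rates.

```python
import numpy as np
from scipy.optimize import minimize

def mse_cost(params, simulate_change_rates, experimental_rates):
    params = np.clip(params, 1e-4, 0.5)                # keep the error rates physical
    predicted = simulate_change_rates(params)          # same shape as the measured rates
    return float(np.mean((predicted - experimental_rates) ** 2))

def fit_inhomogeneous_noise(simulate_change_rates, experimental_rates, n_qubits):
    p0 = np.full(n_qubits, 0.04)                       # initial guess for every qubit
    result = minimize(mse_cost, p0,
                      args=(simulate_change_rates, experimental_rates),
                      method="Nelder-Mead")            # derivative-free, tolerates simulator noise
    return np.clip(result.x, 1e-4, 0.5)                # fitted per-qubit depolarizing parameters
```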
We finalize our discussion of Figure 5 by noting the remaining discrepancies between experiment and theory which cannot be resolved by inhomogeneous noise models or biased noise models as they are often described in the literature. The absence of dips in change rates at the first and last cycle of repeated \(X\) gauge measurements was unable to be modelled accurately by either technique. Theoretically, these dips appear due to the reduced number of time steps between a stabilizer measurement and the time steps where data qubits have been prepared or measured. Experimentally, this does not appear to be the case for \(X\) gauge operators, where initial and final change rates remain at 0.5. Further increasing the bias towards \(Z\) errors would ideally improve the fit; however, we found that increasing the bias also reduced the change rate dips of the \(Z\) operators, which clearly did occur in experiment. This relation between the bias parameter \(\eta\) and the \(Z\) operator initial and final change rates follows from the state preparation and measurement error rate of \(p(2\eta+1)/(2\eta+2)\), which corresponds to the probability that either an \(X\) or \(Z\) error occurs [57]. This can be understood as a conservative upper bound in the high \(\eta\) regime, where state preparation and measurement error rates will approach \(p\), and it avoids the need to introduce another parameter. However, in our case this definition also restricts our ability to fit the experimental data. Regarding inhomogeneous error models, multiple phenomena may occur which can cause inhomogeneous depolarizing noise to be an insufficient explanation. One example is the purple \(Z\) stabilizer of Figure 5(a) having a lower change rate than the red \(Z\) stabilizer of Figure 5(c). As these are the same operators, one would expect the larger circuit of part (c) to cause more errors to occur and hence increase the operator change rate in (c) compared to (a). This is indeed what occurs in simulations. However, experimentally this did not occur, suggesting that additional important details of the noise model of the experimental results remain, beyond bias and inhomogeneity.
### Spatial-Temporal Correlations
Finally, we investigated the properties of the relatively infrequent \(Z\) operator detection events by examining their spatial and temporal correlations with the formula,
\[p_{ij}=\frac{\left\langle x_{i}x_{j}\right\rangle-\left\langle x_{i}\right\rangle \left\langle x_{j}\right\rangle}{(1-2\left\langle x_{i}\right\rangle)(1-2 \left\langle x_{j}\right\rangle)}, \tag{5}\]
which is a measure of the correlation of detection events \(x_{i}\) and \(x_{j}\) (with \(p_{ij}\) set to 0 for \(i=j\)) [41]. Figure 6 shows this matrix, where the top left corner corresponds to a uniform depolarizing noise simulation with depolarization parameter \(p=0.035\) and the bottom right corner corresponds to measurements on ibmq_montreal. From simulation, we expect three different classes of large correlations to be present. Space-like errors cause large correlations between simultaneous measurement changes of operators which both act on at least one shared qubit. Time-like errors cause large correlations between subsequent measurement changes of the same operator (one change due to the measurement error and another change back if the subsequent cycle is measured correctly). Space-time-like errors cause large correlations among operators which share a qubit, but between two subsequent measurement cycles. These are the weakest correlations in simulation due to the low number of time steps for which space-time-like errors can occur. Other correlation matrix elements are expected to be small, as they correspond to the simultaneous occurrence of more than one error. The ibmq_montreal device shows signs of all three classes of errors occurring during the \(Z\) operator measurement circuits. However, their relative magnitudes vary significantly across different pairs of subsequent time steps and adjacent ancilla qubits. The operator that ancilla qubit 4 measures shows much lower time-like correlations than any of the other operators (with numbering consistent with Appendix Figure 8). Operator 4 also has much lower space-like and space-time-like correlations with operator 1 than expected in simulation. This suggests that other, more significant, error processes are present for operator 4 which act to decorrelate detection events that are expected to be strongly correlated. Finally, we also observed significant additional correlations outside of the regions which correspond to the three main error classes. Contributions to
these additional correlations can include leakage, cross-talk and asymmetric readout errors. Additional results showing correlations present in different backends and larger cycle differences can be found in Appendix Figures 20 and 21.
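Eq. (5) is straightforward to evaluate from a binary detection-event record; the sketch below (our illustration, assuming a matrix `x` of shape (shots, events) whose column \(i\) records whether detection event \(i\) fired in a given shot) implements the estimator of Eq. (5).

```python
import numpy as np

def correlation_matrix(x: np.ndarray) -> np.ndarray:
    x = x.astype(float)
    mean = x.mean(axis=0)
    cov = (x.T @ x) / x.shape[0] - np.outer(mean, mean)     # <x_i x_j> - <x_i><x_j>
    denom = np.outer(1.0 - 2.0 * mean, 1.0 - 2.0 * mean)
    p = cov / denom
    np.fill_diagonal(p, 0.0)    # p_ij is defined as 0 for i = j
    return p
```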
## 3 Discussion and Conclusion
We investigated the performance of heavy-hexagon code stabilizer measurement circuits on IBM quantum devices and found that uniform uncorrelated depolarizing noise models fail to explain many phenomena observed. These include a dependence of operator change rates on initially prepared states and effective error rates increasing significantly as larger circuits are run. In particular, operator change rates of individual stabilizer measurement circuits are greater than expected from the behaviour of individual gauge operator measurement circuits. When investigating the temporal characteristics of circuit noise, we found increasing change rates, again indicative of noise present beyond uniform depolarizing noise. The temporal data curves also corresponded much better to simulations with noise biased towards \(Z\) errors, or inhomogeneous noise. Finally, observing correlations in operator changes leads to the conclusion that significant sources of non-depolarizing noise events are present in the system.
While uniform depolarizing noise is a convenient noise model for classical simulation, as it can be efficiently simulated, can intuitively be understood as random symmetric Pauli operations, and can be used to study worst-case scenario bounds, it was not found to adequately model stabilizer statistics in transmon superconducting devices. As QEC investigations predominantly focus on depolarizing noise, this may lead to uncertainty when considering the viability of FTQC as devices scale up. One way to address this is by applying noise tailoring with techniques such as randomized compiling, which would map device noise to a Pauli noise model [64]. As non-local correlations may yet remain (distinct from those introduced by syndrome measurement circuits under circuit noise), care must be taken that FTQC protocols remain robust when scaling to larger distances [62]. At the same time, techniques can continue to develop which address more general noise models, such as those featuring inhomogeneous noise [65], asymmetric coherent rotations [66], biased noise [57], amplitude and phase damping [67], and correlated errors [68].
There are multiple ways to extend our investigation. In the results shown, we did not utilize dynamical decoupling, which can be expected to combat coherent errors as well as cross-talk present in the device [69]. Dynamical decoupling pulses can be implemented in any idle times between gates, and the efficacy of dynamical decoupling schemes can be investigated in terms of their reduction in operator change rates or correlation matrix elements [70]. We also did not optimize scheduling beyond an 'as late as possible' scheduler, which may have a significant effect in the presence of mid-circuit measurement. More accurate noise models may also be pursued to fit the experimental data. These may include additional effects beyond depolarization such as leakage and cross-talk [50]. It would also be very enlightening to investigate how readily
Figure 6: Correlation matrix for repeated measurements of \(Z\) gauge operators of a distance-3 heavy-hexagon code on ibmq_montreal (bottom right) and when simulated with \(p=0.035\) depolarizing error parameter noise (upper left). Black and grey lines separate different \(Z\) operator ancilla qubits and different measurement cycles (\(\Delta c\)) respectively. The initial state of all data qubits was \(|0\rangle\). Symbols S, T, ST and C provide examples of matrix entries indicating the presence of space-like, time-like, space-time-like and correlated errors respectively. Matrix values with magnitude above \(0.05\) have been clipped. See Appendix Figures 20 and 21 for results on other devices and across larger cycle differences respectively.
randomized compiling can be used to map characteristics of noise to the well studied depolarizing noise model for various QEC codes [64]. Alternatively, the behaviour of larger operator measurement circuits can be investigated, such as the larger Bacon-Shor stabilizers of larger distance codes or investigating evidence of logical error correlations in systems of multiple logical qubits.
Our results support the need to further develop techniques to tailor quantum error correction protocols to particular characteristics of noise expected to be present in future devices running circuits of fault-tolerant quantum computing.
**Data Availability:** The data that support the findings of this study are available within the article and the Appendix. Further information can be provided upon reasonable request to the corresponding author.
**Code Availability:** The source code used to generate figures in this work can be provided upon reasonable request to the corresponding author.
**Acknowledgements:** The research was supported by the University of Melbourne through the establishment of the IBM Quantum Network Hub at the University. Computational resources were provided by the National Computing Infrastructure (NCI) and the Pawsey Supercomputing Research Center through the National Computational Merit Allocation Scheme (NCMAS). This research was supported by The University of Melbourne's Research Computing Services and the Peta-scale Campus Initiative.
**Author Contributions:** M.U. and L.C.L.H. planned and supervised the project. S.G. carried out the simulations and ran the circuits on IBM devices. All authors contributed to the analysis of data. S.G wrote the manuscript with input from M.U. and L.C.L.H.
**Competing Financial Interests:** The authors declare no competing financial or non-financial interests.
|
2308.10015 | DyFFPAD: Dynamic Fusion of Convolutional and Handcrafted Features for
Fingerprint Presentation Attack Detection | Automatic fingerprint recognition systems suffer from the threat of
presentation attacks due to their wide range of deployment in areas including
national borders and commercial applications. A presentation attack can be
performed by creating a spoof of a user's fingerprint with or without their
consent. This paper presents a dynamic ensemble of deep CNN and handcrafted
features to detect presentation attacks in known-material and unknown-material
protocols of the liveness detection competition. The proposed presentation
attack detection model, in this way, utilizes the capabilities of both deep CNN
and handcrafted features techniques and exhibits better performance than their
individual performances. We have validated our proposed method on benchmark
databases from the Liveness Detection Competition in 2015, 2017, and 2019,
yielding overall accuracy of 96.10\%, 96.49\%, and 94.99\% on them,
respectively. The proposed method outperforms state-of-the-art methods in terms
of classification accuracy. | Anuj Rai, Parsheel Kumar Tiwari, Jyotishna Baishya, Ram Prakash Sharma, Somnath Dey | 2023-08-19T13:46:49Z | http://arxiv.org/abs/2308.10015v4 | DyFFPAD: Dynamic Fusion of Convolutional and Handcrafted Features for Fingerprint Presentation Attack Detection
###### Abstract
Automatic fingerprint recognition systems suffer from the threat of presentation attacks due to their wide deployment in areas including national borders and commercial applications. Presentation attacks can be performed by fabricating a fake fingerprint of a user with or without the consent of the subject. This paper presents a dynamic ensemble of deep learning and handcrafted features to detect presentation attacks in known-material and unknown-material protocols. The proposed model is a dynamic ensemble of a deep CNN and a handcrafted-feature-empowered deep neural network, both of which learn their parameters together. The proposed presentation attack detection model, in this way, utilizes the capabilities of both classification techniques and exhibits better performance than their individual results. The proposed model's performance is validated using benchmark LivDet 2015, 2017, and 2019 databases, with an overall accuracy of 96.10%, 96.49%, and 95.99% attained on them, respectively. The proposed model outperforms state-of-the-art methods in benchmark protocols of presentation attack detection in terms of classification accuracy.
Fingerprint Biometrics, Presentation Attack Detection, Hybrid Architecture, Handcrafted Features, Deep CNN.
## 1 Introduction
Automatic Fingerprint Recognition Systems (AFRS) are widely used for person authentication and verification. Their user-friendliness and cost-effectiveness make them popular for validating the identity of persons at airports, at national borders, and in the distribution of government-funded aid. The utilization of these systems in a wide range of applications also makes them vulnerable to internal and external security threats. A Presentation Attack (PA) is an external attack performed by presenting an artificial artifact of a user's finger to the sensor of an AFRS. PAs or spoofs can be created either by a cooperative method or a non-cooperative method. Fabrication materials such as latex, woodglue, gelatine, silicone, ecoflex, etc. are available at a reasonable cost to create the spoof of a fingerprint. Fingerprint Presentation Attack Detection (FPAD) is a countermeasure to prevent these attacks and to empower an AFRS to detect PAs. The recent FPAD methods suggested by various researchers are categorized into two broad classes: hardware-based and software-based methods. Hardware-based methods are quite expensive due to the utilization of additional sensing devices that measure the natural properties of a finger such as heart rate, odor, temperature, etc. The utilization of these sensors makes hardware-based methods less user-friendly and quite expensive for an organization to adopt. On the other hand, software-based methods require only fingerprint samples, which makes them user-friendly to the end user and cost-effective to the organization as compared to hardware-based methods.
State-of-the-art software-based methods can be further classified as perspiration and pore-based methods [7, 2], statistical and handcrafted feature-based methods [21, 20, 25], and deep learning-based methods [4, 3]. Perspiration-based methods are sensitive to humidity and temperature, which sometimes causes the rejection of genuine fingerprint samples. Similarly, pore-based methods require the sensing device to capture a high-definition image of the fingertip. This requirement impacts the overall cost of perspiration and pore-based methods. Statistical and handcrafted feature-based methods extract some predefined features from the input fingerprint sample to classify it as live or spoof. These methods are affected by the quality of the fingerprint samples. However, some of these methods, including Sharma et al. [21], Sharma et al. [20] and Rattani et al. [19], have shown good FPAD capability when the spoof sample is created using known materials but could not achieve the same performance when tested on spoofs fabricated with unknown materials.
In this paper, we propose an end-to-end model that exhibits a dynamic ensemble of handcrafted and deep features. The proposed model consists of two sub-models that work together to classify live and spoof fingerprint samples. The first sub-model is a Deep Neural Network (DNN) empowered with an image descriptor, namely Local Phase Quantization (LPQ), and a set of handcrafted features. The handcrafted features include Ridge Valley Clarity (RVC), Frequency Domain Analysis (FDA), Gabor quality, and Orientation Flow (OFL). On the other hand, the second sub-model is a DenseNet classifier, which has shown remarkable performance while being tested on some well-known image classification problems including MNIST [6], CIFAR [24], and imagenet [5]. The proposed method, Dynamic Fusion of convolutional and handcrafted Features for Fingerprint Presentation Attack Detection (DyFFPAD), is an end-to-end approach that dynamically incorporates deep and handcrafted features. The proposed model is validated
using benchmark Liveness Detection competition (LivDet) databases and the proposed method outperforms the state-of-the-art methods in intra-sensor, same-material and cross-material protocols. A detailed comparison with the state-of-the-art methods in benchmark protocols indicates the supremacy of the proposed method.
The main contributions of this paper are highlighted below.
1. A detailed study is presented that shows the impact of PAs on DenseNet and DNNs trained with handcrafted features and LPQ.
2. A novel end-to-end architecture is proposed that embodies the capabilities of convolutional filters along with LPQ and handcrafted features to detect PAs created using known and unknown materials.
3. An exhaustive comparison of the proposed method's performance with state-of-the-art methods has been done in intra-sensor, known-material and unknown-material paradigms and the proposed method outperforms others in terms of classification accuracy.
The remainder of this paper is organized as follows. Section 2 discusses the state-of-the-art methods suggested by researchers for the detection of PAs along with their advantages and limitations. Section 3 describes the design and working of the proposed architecture. In Section 4, experimental results are given, and a comparative analysis is discussed in Section 5.3. The conclusion is discussed in Section 6.
## 2 Related Work
PAs are the most concerning security threat to the AFRS. The applications of AFRS in security and commercial areas brought the attention of researchers toward the development of cost-effective and user-friendly solutions to this problem. In this section, several methods put forth by researchers to keep the AFRS safe against PAs are discussed. These methods are further classified as perspiration and pore based-methods, statistical and handcrafted feature-based methods, and deep learning-based methods as per the resources utilized. The methods that fall into these categories are discussed in this section, along with their advantages and disadvantages.
### _Perspiration and pore based-methods_
Perspiration is caused by the presence of tiny holes or pores in the finger skin. Because this inherent property is not present in spoofs created with spoofing materials, it is utilized by researchers to develop methods that distinguish between live and spoof fingerprints. In this attempt, Abhyankar et al. [2] proposed a wavelet-based approach to detect PAs using the fingerprint's perspiration characteristics. Because pores are difficult to reproduce in spoofs at the moment of fabrication, the number of pores in a live fingerprint and its spoofs made with different materials may differ. Similarly, Espinoza [7] utilized this dissimilarity as a distinguishing feature to detect PAs. The proposed approach is validated using a custom-made fingerprint database. Marcialis et al. [14] also proposed a pore-based technique for the detection of PAs. The system detects the number of pores present in the fingerprint impressions which are captured at an interval of five seconds. The number of pores present in both impressions is used as a feature for detecting PAs. The suggested approach is validated using a custom-made fingerprint database that contains 8960 live and 7760 spoof fingerprint samples. Regardless of how the sweat pattern is employed to detect PAs, its presence is dependent on the temperature of the surrounding environment. Sometimes, even a live finger does not display this attribute in a dry environment, which occasionally prompts an FPAD technique operating on this property to discard the live sample from the authentication process. Similarly, the pore extraction process is expensive since it requires fingerprint-sensing equipment that can obtain high-definition samples (\(\geq\)1000 pixels per inch). Perspiration and pore-based approaches are less cost-effective and user-friendly due to the reasons discussed above.
### _Statistical and handcrafted feature based-methods_
The skin of the finger and the fabricated spoofs have distinct natural properties such as color, and moisture, which are reflected in the quality of fingerprint samples. This phenomenon motivated the researchers to utilize the quality of fingerprints as a distinguishing feature. This section describes some of the approaches that fall within this category. Park et al. [18] used statistical characteristics including deviation, variance, skewness, kurtosis, hyper-skewness, hyper-flatness, average brightness, standard deviation, and differential image to train SVM to distinguish live samples from spoofs. The proposed method is validated using the ATVSFFp database which has 272 live and 270 spoof fingerprint samples. Xia et al. [25] proposed a novel image descriptor that collects intensity variance as well as gradient features of fingerprints to generate a feature vector. This feature vector is then utilized to train the SVM classifier to distinguish between live and spoof samples. The validation of the proposed work is performed using the LivDet 2011 and 2013 databases. Kim et al. [13] introduced a unique image descriptor that uses the fingerprint sample's local coherence as a feature for SVM training. The suggested technique is validated using the ATVSFFp and LivDet databases from 2009, 2011, 2013, and 2015. Yuan et al. [27] proposed a method for detecting PAs that makes use of the gradient property. It creates image gradient matrices for different quantization operators by computing two co-occurrence matrices using the laplacian operator. Furthermore, the matrices are used as a feature vector for training the backpropagation neural network. The proposed method is validated using LivDet 2013 database. Similar to the properties discussed above, the live finger and spoof have different levels of elasticity. The uneven width of the ridges and valleys, as well as the other characteristics of the input fingerprint samples, reflect this irregularity. In an attempt to develop an FPAD model, Sharma et al. [21] utilized various quality-based features. They extracted Ridge and Valley Smoothness (RVS), Ridge and Valley Clarity (RVC), Orientation Certainty Level (OCL), and Frequency Domain Analysis (FDA), as quality features that are combined together to form a feature vector. After this, the feature vector is utilized to train the Random-Forest classifier. The proposed method is validated using LivDet 2009, 2011, 2013, and 2015 databases.
Similarly, Sharma et al. [20] presented a novel feature called Local Adaptive Binary Pattern (LABP), which is a modification to the existing Local Binary Pattern (LBP) local image descriptor. They employed this feature in conjunction with existing Binary Statistical Image Features (BSIF) and Complete Local Binary Pattern (CLBP) to train the SVM classifier. The proposed method is validated using LivDet 2009, 2011, 2013, and 2015 databases. The effectiveness of these methods is determined by the quality of the input fingerprint sample, which is further determined by the sensing equipment. Some of the aforementioned approaches, such as [21, 20], and [13], have demonstrated good performance against PAs generated using known fabrication materials, but not against spoofs created using unknown fabrication materials.
### _Deep learning-based-methods_
The possession of convolutional layers in deep CNNs empowers them to extract minute features from the input samples. These models have shown greater classification capability while being tested on imagenet [5], CIFAR [24], and MNIST [6] databases. Their advantages over traditional methods, attracted researchers to involve CNNs in the detection of PAs also. In this section, the state-of-the-art methods that use deep learning for classification are discussed. Uliyan et al. [23] proposed deep features-based methods for the detection of PAs. It utilizes a Deep Boltzmann Machine for the extraction of features and to find the complex relationship among them. The proposed method exhibits better performance while being compared with statistical and hand-crafted feature-based methods. Chugh et al. [4] suggested a deep learning-based method that uses minutiae-centered fingerprint images to train and test a MobileNet classifier. A fingerprint is cropped into a finite number of patches based on the number of minutiae points and then cropped patches are fed to a CNN model which generates a liveness score for every patch. The global liveness score for an input sample is computed by the fusion of the liveness score generated for all the patches. The proposed method is tested on LivDet 2011, 2013, 2015, and Michigan State University's (MSU) FPAD database. Because new fabrication materials are being discovered nowadays, it is difficult to generalize an FPAD model to perform FPAD in an open-set paradigm. Chugh et al. [3], in addition to their prior work, proposed another approach for detecting spoofs made using unknown components. They suggested an image synthesis technique for generating fresh fingerprint patches, which contributes to better training of the CNN classifier against the spoofs fabricated with unknown materials. The proposed method is validated using LivDet 2017, ATVSFP, and MSU-FPAD databases. Zhang et al. [28] proposed a CNN architecture comprised of a series of improved residual connected blocks. This redesigned architecture detects PAs with less over-fitting and less processing time. The proposed technique is validated using Livdet 2013 and 2015 databases. The Deep learning-based methods are proven quite effective when being applied in the area of image classification problems but still, they suffer from low classification accuracy in the area of FPAD. The fingerprint samples and their spoofs have a limited amount of discriminating features, hence, a lot of work is required to be done in this area.
## 3 Proposed Work
In this paper, an ensemble of deep and handcrafted features is proposed for the detection of spoofs in intra-sensor same materials and cross-material protocols of FPAD. It consists of two modules working together for the accomplishment of the task. The first module is a DNN that is fed with the features extracted by LPQ and existing handcrafted features. The second module, on other side, is DenseNet CNN classifier which has shown remarkable performance while being validated on various image databases including imagenet [5]. Both of the modules are fused dynamically which results in better learning of them as compared with their individual performances. The details of the utilized LPQ, handcrafted features, DenseNet, and DyFFPAD is given in the following subsections.
### _Local Phase Quantization (LPQ)_
LPQ [8] is an image descriptor that works on the blur-insensitive property of the Fourier transform. It is robust against the blur and redundant information present in the input sample. We have utilized it as a prominent feature due to its capability of exploiting the minute information which goes missing in the fabrication process of a spoof. The formulation of the LPQ descriptor is denoted as Eq. 1.
\[f_{x}(u)=\sum f(y)w(y-x)e^{-j2\pi uy} \tag{1}\]
Here \(f_{x}(u)\) denotes the output, i.e. the short-term Fourier transform of the input image \(f(\cdot)\) computed over a local neighborhood centred at pixel \(x\), evaluated as local Fourier coefficients at four different frequency values, and \(w(\cdot)\) is a window function defining the neighborhood.
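For illustration, a minimal LPQ sketch is given below; the \(7\times 7\) window, the four standard low frequencies, and the 256-bin histogram output follow common LPQ practice and are assumptions on our part rather than settings reported in this paper.

```python
import numpy as np
from scipy.signal import convolve2d

def lpq_histogram(img: np.ndarray, win: int = 7) -> np.ndarray:
    """LPQ-style descriptor built from Eq. (1): signs of local Fourier coefficients."""
    img = img.astype(float)
    r = (win - 1) // 2
    x = np.arange(-r, r + 1)
    a = 1.0 / win                                    # lowest non-zero frequency
    w0 = np.ones_like(x, dtype=complex)              # constant (DC) window
    w1 = np.exp(-2j * np.pi * a * x)                 # complex exponential at frequency a
    # Separable filters realising the frequencies (a,0), (0,a), (a,a), (a,-a)
    filters = [(w0, w1), (w1, w0), (w1, w1), (w1, np.conj(w1))]
    code = np.zeros(img.shape, dtype=int)
    for k, (wy, wx) in enumerate(filters):
        resp = convolve2d(convolve2d(img, wx[np.newaxis, :], mode="same"),
                          wy[:, np.newaxis], mode="same")
        code += (resp.real > 0).astype(int) << (2 * k)      # sign of the real part
        code += (resp.imag > 0).astype(int) << (2 * k + 1)  # sign of the imaginary part
    hist, _ = np.histogram(code, bins=256, range=(0, 256))
    return hist / hist.sum()                         # 256-bin LPQ feature vector
```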
### _Handcrafted Features_
The fabrication process impacts the quality of the spoofs created with fabrication materials. According to recent research [21], handcrafted features have an impact on identifying PAs. In this work, we have utilized some existing handcrafted features including ridge-valley clarity, ridge-valley smoothness, frequency domain analysis, number of abnormal ridges and valleys, orientation certainty level, and Gabor quality to estimate the quality of the input fingerprint sample. The details of the aforementioned features are described in the following subsections.
#### 3.2.1 Frequency Domain Analysis (FDA) [30]
FDA of a local block is determined by extracting the ridge-valley structure's 1D signature. The frequency of the sinusoidal ridge-valley structure is calculated using the discrete Fourier transform of this 1D signature. Live fingerprints exhibit a consistent frequency of sinusoidal ridge-valley patterns, whereas fake fingerprints vary. Eq. 2 denotes the calculation of the local FDA quality (\(FDA^{l}\)).
\[FDA=\frac{A(F_{max})+C\left(A(F_{max}-1)+A(F_{max}+1)\right)}{\sum_{F=1}^{N/2}A(F)} \tag{2}\]
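The sketch below evaluates this block-wise quality; here \(A(F)\) is taken to be the DFT amplitude of the 1D ridge-valley signature at frequency index \(F\), and the weighting constant \(C=0.3\) is an illustrative choice of ours, since the paper does not state its value.

```python
import numpy as np

def fda_quality(signature: np.ndarray, C: float = 0.3) -> float:
    """Block-wise FDA quality of a 1D ridge-valley signature (Eq. 2)."""
    N = len(signature)
    amp = np.abs(np.fft.fft(signature - signature.mean()))
    A = amp[1:N // 2 + 1]                    # amplitudes at F = 1 ... N/2
    f = int(np.argmax(A))                    # index of the dominant frequency
    lo = A[f - 1] if f - 1 >= 0 else 0.0
    hi = A[f + 1] if f + 1 < len(A) else 0.0
    return float((A[f] + C * (lo + hi)) / A.sum())   # energy concentration in [0, 1]
```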
#### 3.2.2 Orientation Certainty Level (OCL) [30]
The block-wise intensity of the energy concentration along the dominant ridge direction is measured as OCL. The covariance matrix \(M_{cov}\) for intensity gradients of a block is computed as Eq. 3
\[M_{cov}=\frac{1}{m\times n}\sum_{m\times n}\left\{\begin{bmatrix}dx\\ dy\end{bmatrix}\begin{bmatrix}dx&dy\end{bmatrix}\right\}=\begin{bmatrix}a&c\\ c&b\end{bmatrix} \tag{3}\]
The eigenvalues \(\lambda_{min}\), \(\lambda_{max}\) of \(M_{cov}\) are computed as in Eqs. 4 and 5.
\[\lambda_{min}=\frac{a+b-\sqrt{(a-b)^{2}+4c^{2}}}{2} \tag{4}\]
\[\lambda_{max}=\frac{a+b+\sqrt{(a-b)^{2}+4c^{2}}}{2} \tag{5}\]
The eigenvalues are further utilized to compute the OCL as follows.
\[\text{OCL}=\begin{cases}1-\dfrac{\lambda_{min}}{\lambda_{max}},&\text{if }\lambda_{max}>0\\ 0,&\text{otherwise}\end{cases}\]
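A compact sketch of the block-wise computation is shown below; the ratio is written as \(\lambda_{min}/\lambda_{max}\) so that the value stays in \([0,1]\), and simple finite differences stand in for whatever gradient operator the authors actually used.

```python
import numpy as np

def ocl(block: np.ndarray) -> float:
    """Orientation certainty level of a grey-level block (Eqs. 3-5)."""
    dy, dx = np.gradient(block.astype(float))        # finite-difference gradients
    a = np.mean(dx * dx)
    b = np.mean(dy * dy)
    c = np.mean(dx * dy)
    root = np.sqrt((a - b) ** 2 + 4.0 * c ** 2)
    lam_min = (a + b - root) / 2.0
    lam_max = (a + b + root) / 2.0
    return 1.0 - lam_min / lam_max if lam_max > 0 else 0.0
```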
#### 3.2.3 Gabor Quality [30]
To compute the Gabor quality at the local level, a Gabor filter bank with multiple orientations is applied to each pixel in the block. The Gabor response for a fingerprint block with a normal ridge-valley pattern will be high for one or a few filters with orientations similar to the block orientation, whereas it will be low and stable for a corresponding block with an incorrect ridge-valley structure. Finally, the Gabor quality (G) of the block is determined as the standard deviation of the Gabor filter bank's output.
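A minimal sketch of this quantity follows; the kernel size, spatial frequency, bandwidth, and number of orientations are illustrative assumptions, not values taken from the paper.

```python
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(theta, freq=0.12, sigma=4.0, size=15):
    """Even-symmetric Gabor kernel at orientation theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2)) * np.cos(2 * np.pi * freq * xr)

def gabor_quality(block: np.ndarray, n_orientations: int = 8) -> float:
    """Standard deviation of the Gabor filter-bank responses of a block."""
    responses = []
    for k in range(n_orientations):
        kern = gabor_kernel(theta=np.pi * k / n_orientations)
        resp = convolve2d(block.astype(float), kern, mode="same")
        responses.append(np.mean(np.abs(resp)))      # mean magnitude per orientation
    return float(np.std(responses))                  # high for a clear ridge pattern
```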
#### 3.2.4 Ridge-Valley Clarity (RVC) [21]
The separation between two successive ridges and valleys is found to be essentially constant in live fingerprints. This separation, on the other hand, might vary in spoofs due to the variation in the sizes of ridges and valleys in a block. The variation in the size of ridges and valleys is caused by the varying elasticity of the skin and the spoofing materials. The ridge-valley clarity is computed by finding the average ridge and valley widths of a block. The local ridge valley clarity \(RVC^{l}\) is calculated using Eq. 6
\[RVC^{l}=\frac{(rw-\overline{rw})+(vw-\overline{vw})}{rw_{sum}+vw_{sum}} \tag{6}\]
#### 3.2.5 Number of abnormal ridges and valleys [21]
The ridge and valley width is found to be in a range of 5 to 10 pixels in samples of human fingers captured from a sensing device with a resolution of 500 dpi. Since the human skin and spoofing materials have different levels of elasticity, the width of ridges and valleys differs in live and spoof fingerprints. A ridge or a valley present in a local block is considered abnormal if the deviation of its width in different rows of the block is above a certain threshold. The threshold value is decided by performing several experiments. The formulation of \(R^{l}_{ab}\) and \(V^{l}_{ab}\) is given as Eq. 7 and Eq. 8 respectively.
\[R^{l}_{ab}=\sum_{c=1}^{|R|}\begin{cases}1,&\text{if, }std(rw_{c})>t_{w}\\ 0&\text{Otherwise}\end{cases} \tag{7}\]
\[V^{l}_{ab}=\sum_{c=1}^{|V|}\begin{cases}1,&\text{if, }std(vw_{c})>t_{w}\\ 0&\text{Otherwise}\end{cases} \tag{8}\]
#### 3.2.6 Ridge-Valley Smoothness (RVS) [21]
It denotes the smoothness of the ridge width and valley width, which is exhibited by the live sample but not the spoof. This irregularity in the spoofs is caused by the varying elasticity levels of human skin and fabrication materials. The pressure applied on the fingertip at the time of sample collection is also one of the reasons behind this. This feature is computed block-wise by first cropping a rotated block with a vertical ridge-valley structure. The resulting block is then binarized and pixels are labeled as ridge or valley with the help of a linear regression algorithm. After this, the width of ridges and valleys is computed for each horizontal line of the block that has an alternating pattern of ridges and valleys. Finally, the RWS and VWS of that block are calculated by averaging the standard deviations of the widths of each ridge and valley.
#### 3.2.7 Feature vector from quality features
The final feature vector which is represented as Eq. (9) is generated using the mean and standard deviation of the above features after they have been computed for all blocks of a fingerprint image.
\[Q=\{RWS^{\mu},RWS^{\sigma},VWS^{\mu},VWS^{\sigma},R^{\mu}_{ab},V^{\mu}_{ab},RVC^{\mu},\] \[RVC^{\sigma},FDA^{\mu},FDA^{\sigma},OCL^{\mu},OCL^{\sigma},Gabor^{\mu}\} \tag{9}\]
### _DenseNet-121_
DenseNet [10] is a CNN model that consists of layers connected to their previous layers in a feed-forward manner. Most of the CNN architectures suffer from the problem of vanishing gradient that appears in deep CNNs while ReLU is utilized as an activation function. Due to the use of a large number of ReLU activation functions, the gradient of the loss function approaches zero which results in overfitting in the model. As fingerprint images have limited texture and color information as compared with the images in imagenet datasets, CNNs face the problem of vanishing gradient. The dense connections among the layers inside convolutional blocks as well as batch-normalization operation in the transition block reduce the problem of vanishing gradient.
The DenseNet model (version-121) is composed of four dense blocks, each with six, twelve, twenty-four, and sixteen convolution layers. Figure 1 depicts the internal architecture of densenet. Each dense layer consists of three operations including batch-Normalization, rectified Linear Unit (ReLU), and convolution. The size of the convolutional filter is kept as \(3\times 3\). Each dense layer collects feature maps from its preceding layers. The output of a dense block can be formulated as Eq. (10).
\[X_{n}=F_{n}[A_{0},A_{1},A_{2},A_{3},\ldots,A_{n-1}] \tag{10}\]
where \(A_{0},A_{1},\ldots,A_{n-1}\) represents the concatenation of all feature maps from layers 0, 1, 2, ..., n-1, and \(F_{n}\) denotes a function that performs batch normalization followed by a convolution operation. In addition, a transition block is used after each dense block to perform a convolution operation with kernel size \(1\times 1\). This convolution operation is followed by a pooling operator that reduces the size of the feature maps after each dense block. The internal architecture of the dense layer and transition layer is depicted in Fig. 2.
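A minimal Keras sketch of a dense block in the sense of Eq. (10) is shown below; the growth rate and depth are illustrative, and the \(1\times 1\) bottleneck convolution of the stock DenseNet-121 is omitted for brevity.

```python
import tensorflow as tf

def dense_block(x, num_layers: int = 6, growth_rate: int = 32):
    """Each layer receives the concatenation of all preceding feature maps."""
    for _ in range(num_layers):
        h = tf.keras.layers.BatchNormalization()(x)
        h = tf.keras.layers.ReLU()(h)
        h = tf.keras.layers.Conv2D(growth_rate, kernel_size=3, padding="same")(h)
        x = tf.keras.layers.Concatenate()([x, h])   # feature-map reuse across layers
    return x
```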
### _Deep Neural Network (DNN)_
DNN is an artificial neural network with multiple fully connected layers between the input and output layers. The working of DNNs is inspired by the human brain as their neurons share similar characteristics. We have used DNN to test the individual performance of the handcrafted features and LPQ as well as designing the proposed architecture.
### _Proposed DyFFPAD Model_
#### 3.5.1 Designing DyFFPAD
The architecture of the proposed DyFFPAD is depicted in Fig. 3. It is a dynamic ensemble of a Deep Neural Network (DNN) and a DenseNet classifier. The DNN works on the feature vectors generated from handcrafted features and LPQ image descriptors. Both feature extraction methods provide a set of feature values combined to form a feature vector. The feature vector is fed as an input to DNN. The output of the DNN is a set of 32 values. The second part of the model is a DenseNet CNN's convolutional base which provides 2048 feature maps of size \(7\times 7\) each. The pooling operation is applied on the feature maps and further, a DNN with three fully connected layers having 512, 256, and 32 neurons is added to the output of the pool layer. The last layer having 32 neurons is concatenated with the first DNN and fed as input to the third DNN which has a single neuron in the last for binary classification. In the forward pass of the training process, a confidence score is calculated by the model. The loss between the output score and the expected output is calculated which is back-propagated to the CNN as well as DNNs for the learning of their parameters. The proposed model works in an end-to-end manner for the forward as well as backward pass of the model and reduces the complexities of static collaborations of handcrafted features and CNNs.
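The dynamic fusion can be sketched in Keras as follows (the paper reports a TensorFlow-Keras implementation); the \(224\times 224\) input size, the 269-dimensional handcrafted vector (256 LPQ bins plus 13 quality features), and the 128-unit hidden layer of the handcrafted branch are assumptions of ours, while the 32-, 512-, 256-, and single-neuron layer sizes follow the description above.

```python
import tensorflow as tf

def build_dyffpad(img_shape=(224, 224, 3), n_handcrafted=269):
    # Branch 1: DNN over the LPQ + handcrafted quality feature vector
    feat_in = tf.keras.Input(shape=(n_handcrafted,))
    h = tf.keras.layers.Dense(128, activation="relu")(feat_in)
    h = tf.keras.layers.Dense(32, activation="relu")(h)

    # Branch 2: DenseNet-121 convolutional base on the fingerprint image
    img_in = tf.keras.Input(shape=img_shape)
    base = tf.keras.applications.DenseNet121(include_top=False, weights="imagenet")
    z = base(img_in)
    z = tf.keras.layers.GlobalAveragePooling2D()(z)
    z = tf.keras.layers.Dense(512, activation="relu")(z)
    z = tf.keras.layers.Dense(256, activation="relu")(z)
    z = tf.keras.layers.Dense(32, activation="relu")(z)

    # Dynamic fusion: one graph, one loss, so the backward pass updates both branches
    merged = tf.keras.layers.Concatenate()([h, z])
    out = tf.keras.layers.Dense(1, activation="sigmoid")(merged)
    model = tf.keras.Model(inputs=[feat_in, img_in], outputs=out)
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model
```

A model built this way would be trained on paired inputs, e.g. `model.fit([features, images], labels, ...)`, so that a single confidence score and a single loss drive both sub-models, as described above.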
#### 3.5.2 Pre-processing of input samples
The fingerprint samples captured with the sensing devices have white space around the fingertip impression. This white space is meaningless and is required to be removed for the extraction of the desired features from fingerprint samples. The input fingerprint undergoes the pre-processing operation for the extraction of the region of interest from the input fingerprint sample.
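A minimal sketch of this cropping step is given below; the foreground threshold is an illustrative assumption.

```python
import numpy as np

def crop_roi(gray: np.ndarray, threshold: float = 0.95) -> np.ndarray:
    """Crop away the blank background around the fingertip impression."""
    g = gray.astype(float) / max(float(gray.max()), 1.0)
    mask = g < threshold                  # foreground: darker ridge pixels
    rows, cols = np.any(mask, axis=1), np.any(mask, axis=0)
    r0, r1 = np.where(rows)[0][[0, -1]]
    c0, c1 = np.where(cols)[0][[0, -1]]
    return gray[r0:r1 + 1, c0:c1 + 1]
```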
#### 3.5.3 Feature Extraction
The LPQ and handcrafted features are extracted from the pre-processed fingerprints to form a combined feature vector. The details of the utilized features are described in section 3.2.
#### 3.5.4 Training of DyFFPAD
The proposed end-to-end DyFFPAD architecture is trained from scratch on benchmark LivDet databases. For faster training of the model, we used imagenet weights to initialize the parameters of DenseNet's convolution base rather than using random values. The findings of the proposed model in intra-sensor, same-material and cross-material protocols are discussed in section 4.
Fig. 1: Internal architecture of DenseNet-121 classifier
Fig. 2: Composition of dense and transition block
## 4 Experimental Setup
### _Database_
To validate the performance of the proposed model, experiments are performed on LivDet 2015, 2017, and 2019 databases. Each database is prepared with multiple sensing devices. For training and testing, fingerprint samples are arranged in a separate dataset. The details of all the utilized databases are mentioned in Table I.
### _Performance Metrics_
The proposed model's performance is evaluated using the ISO/IEC IS 30107 criteria [1]. The Attack Presentation Classification Error Rate (APCER) shows the proportion of misclassified spoof fingerprint samples, while the Bonafide Presentation Classification Error Rate (BPCER) shows the percentage of misclassified live fingerprint samples. Equations (11) and (12) denote APCER and BPCER, respectively.
\[APCER=\frac{\text{Number of mis-classified fake samples}}{\text{Total fake samples}}\times 100 \tag{11}\]
\[BPCER=\frac{\text{Number of mis-classified live samples}}{\text{Total live samples}}\times 100 \tag{12}\]
The Average classification error (ACE) is calculated by averaging APCER and BPCER and is used to assess the overall performance of the system. The formulation of ACE is represented by Eq. (13).
\[ACE=\frac{APCER+BPCER}{2} \tag{13}\]
The ACE is further used to calculate the accuracy of the proposed model, which is written as Eq. (14).
\[Accuracy=100-ACE \tag{14}\]
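The four metrics can be computed directly from binary decisions, as in the following sketch (labels: 1 = live/bona fide, 0 = spoof/attack).

```python
import numpy as np

def pad_metrics(y_true: np.ndarray, y_pred: np.ndarray) -> dict:
    """APCER, BPCER, ACE and accuracy as defined in Eqs. (11)-(14)."""
    live, spoof = (y_true == 1), (y_true == 0)
    apcer = 100.0 * np.sum(spoof & (y_pred == 1)) / np.sum(spoof)  # spoofs accepted as live
    bpcer = 100.0 * np.sum(live & (y_pred == 0)) / np.sum(live)    # live rejected as spoof
    ace = (apcer + bpcer) / 2.0
    return {"APCER": apcer, "BPCER": bpcer, "ACE": ace, "Accuracy": 100.0 - ace}
```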
### _Implementation Details_
The suggested model is implemented in Python and employs the Tensorflow-Keras library. All training and testing were carried out using an NVIDIA TESLA P100 GPU. Each model is trained from scratch for 300 epochs, which took about 3-4 hours to converge.
### _Ablation Study_
In order to compare the performance of the proposed model with handcrafted features, LPQ image descriptor, and existing DenseNet classifier we have trained the corresponding DNN or CNN models. The handcrafted features and LPQ are used for the training of a DNN while DenseNet-121 is trained on original fingerprint samples. The performance of the individual models and the proposed model is validated using LivDet 2017 which is a challenging database. The findings of this experiment are reported in Table II. Table II clearly indicates the supremacy of the DyFFPAD over the DNNs trained with LPQ, Handcrafted features along with LPQ, and DenseNet classifier. The proposed DyFFPAD models attain an overall classification accuracy of 96.55% as compared with 72.21% (LPQ+DNN), 90.91% (LPQ+Handcrafted+DNN) and 93.64% (DenseNet-121 classifier). The performance is also compared using Receiver Operating Characteristic (ROC) curve which is depicted in Fig. 4.
## 5 Experimental Results and Comparative Analysis
### _Experimental Results_
Based on the arrangement of training and testing live and spoof samples captured, the proposed model's performance is validated in two separate benchmark scenarios: intra-sensor and known spoof material and intra-sensor and unknown spoof material. Apart from this, we have reported a study that shows a comparison between the performances of LPQ, handcrafted features, DenseNet, and the proposed method. The following subsections consist of the findings of this study along with a description of the above-mentioned scenarios and the findings of the proposed method.
#### 5.1.1 **Intra-Sensor and Known Spoof Material**
The training and testing fingerprint samples are acquired using the same sensing device in this experimental setup. The spoof samples from both the training and testing
Fig. 3: Block Diagram of DyFFPAD Architecture
datasets are fabricated using the same spoofing materials. LivDet 2015 partially falls into this category because two-thirds of its spoof testing samples are captured with known spoof materials. The findings of the proposed model on the LivDet 2015 database are shown in Table III, which demonstrates that the proposed model achieves an average BPCER of 5.79% and APCER of 2.62%, as stated by the column "APCER (Known)".
#### 5.1.2 **Intra-Sensor and Unknown Spoof Material**
In this experimental setup, the fingerprint samples from the training and testing datasets are acquired using the same sensing device, but the spoof samples in both datasets are fabricated using different materials. Validation in this protocol assesses the FPAD system's ability to protect the AFRS in a real-world scenario, as an attacker may present an artifact of a user's fingerprint fabricated with newly discovered fabrication materials that the FPAD model has not seen. LivDet 2017 and 2019 follow this arrangement, as their training and testing spoof samples are fabricated from different materials. The findings of the proposed method on the aforementioned databases are reported in Table IV. Table IV shows that the proposed model achieves a BPCER of 4.53%, APCER of 2.64% and ACE of 3.51% on the LivDet 2017 database. Similarly, the proposed model classifies the live and spoof samples with an error of 5.91% and 2.39% respectively on LivDet 2019. The proposed method also detects the unknown-material spoof samples present in the LivDet 2015 database with an average APCER of 4.13%, as mentioned by the column "APCER (unknown)" in Table III.
### _Discussion_
The live and spoof fingerprint samples have different textures, and ridge valley widths due to different elasticity levels of the finger skin and spoofing materials. The possession of convolutional layers enables the CNNs to classify the input samples by extracting the discriminating features from input fingerprint samples. On the other hand, the dynamic ensemble of DNN and CNN enables both to learn their parameters in a better way and reduces the need for separate training of them. The proposed method's findings are compared to existing methods tested on benchmark databases, which are discussed in the subsections below.
### _Comparative Analysis_
In this section, the proposed method's findings are compared with state-of-the-art methods in benchmark protocols as per the arrangement of spoofs fabricated with known and unknown materials. A detailed comparative analysis is given in the following subsections.
#### 5.3.1 **Comparison with existing methods on LivDet 2015 database**
The LivDet 2015 database consists of spoof samples captured with known and unknown spoofing materials. A detailed comparison of the proposed method's performance with state-of-the-art methods is mentioned in Table V. From Table V, it is clearly evident that the classification performance of the proposed method is better than that of the methods discussed in [17, 21, 12, 15, 23, 20], and [13], while the performance is comparable with the methods discussed in [22] and [28].
Fig. 4: Receiver Operating Characteristic (ROC) curve for LivDet 2017 Digital Persona, Greenbit, and Orcanthus sensors
\begin{table}
\begin{tabular}{|l|l|c|c|c|} \hline
**Method** & **Accuracy (Orcanthus)** & **Accuracy (Digital Persona)** & **Accuracy (Greenbit)** & **Average** \\ \hline
**LPQ+DNN** & 84.91 & 77.06 & 54.66 & 72.21 \\ \hline
**Handcrafted+LPQ+DNN** & 88.89 & 90.23 & 93.62 & 90.91 \\ \hline
**DenseNet-121** & 92.79 & 94.19 & 93.55 & 93.64 \\ \hline
**DyFFPAD** & **96.34** & **97.14** & **96.00** & **96.49** \\ \hline \end{tabular}
\end{table} TABLE II: Findings of the ablation study performed on LivDet 2017 database
#### 5.3.2 Comparison with existing methods on LivDet 2017 database
The performance of the proposed method is also compared with state-of-the-art methods tested on the LivDet 2017 database. The training and testing spoof samples in this database are fabricated utilizing different spoofing materials, making it more difficult for an FPAD model to classify. The proposed method is able to classify the live and spoof samples with better accuracy. Table VI shows that the proposed method outperforms the methods presented in [4, 3, 29], and [9] when evaluated on the fingerprint samples captured with orcanthus and digital persona sensors. The proposed method also outperforms the methods mentioned above with an average classification accuracy of 96.49%. This comparison reveals that the dynamic ensemble of handcrafted and deep features is able to detect spoofs regardless of the material used for fabrication.
#### 5.3.3 Comparison with existing methods on LivDet 2019 database
Table VII compares the proposed model's findings to state-of-the-art methods tested on the LivDet 2019 database and it is evident that the proposed method outperforms the method proposed in [3, 11] and the participating FPAD algorithms, namely JungCNN, JWL LivDet, and ZJUT DET while being tested on the samples captured with orcanthus and digital persona sensors.
The comparative analysis mentioned above concludes that the proposed method consistently outperforms state-of-the-art methods in the intra-sensor paradigm of FPAD, regardless of whether the spoof samples are made of known or unknown materials. When compared to standard CNN-based techniques, the dynamic ensemble of DNN and CNN enables them to learn their parameters more effectively.
### _Evaluation of DyFFPAD in High-Security Systems_
An FPAD model must be assessed for performance in high-security systems as well, as its primary goal is not only to attain the lowest possible APCER, BPCER, and ACE. We provide the results of the suggested model utilizing the Detection Error Trade-off (DET) curve in this study. A DET curve is a graphical representation of the error rates attained by a binary classification system by varying the classification threshold value. The DET curves for all datasets from the LivDet 2015, 2017, and 2019 databases are shown in Fig. 5. From Fig. 5, it is evident that the proposed model achieves a BPCER of less than 1% to achieve an APCER of 1% when evaluated on crossmatch, and it is in the range of 15% - 40% for the biometrika, greenbit and digital persona sensors of the LivDet 2015 database. On the LivDet 2017 database, the model is able to keep the BPCER in the range of 7% - 18% when testing spoof samples are obtained using unknown spoof materials. Similarly, the model maintains a BPCER of less than 5% for the greenbit and orcanthus sensors of the LivDet 2019 database.
\begin{table}
\begin{tabular}{|l|c|c|c|c|} \hline
**Method** & \begin{tabular}{c} **Accuracy** \\ **(Orcanthus)** \\ \end{tabular} & \begin{tabular}{c} **Accuracy** \\ **(Digital Persona)** \\ \end{tabular} &
\begin{tabular}{c} **Accuracy** \\ **(Greenbit)** \\ \end{tabular} & **Average** \\ \hline JungCNN [16] & 99.13 & 81.23 & 99.06 & 93.14 \\ \hline Chugh et al. [3] & 97.50 & 83.64 & 99.73 & 93.62 \\ \hline JWL LivDet [16] & 97.45 & 88.86 & 99.20 & 93.17 \\ \hline ZJUT DET [16] & 97.50 & 88.77 & 99.20 & 95.16 \\ \hline
**DyFFPAD** & **98.32** & **91.57** & **98.08** & **95.99** \\ \hline \end{tabular}
\end{table} TABLE VII: Comparison with state-of-the-art methods on LivDet 2019 database in intra-sensor paradigm
\begin{table}
\begin{tabular}{|l|l|c|c|c|c|} \hline
**Database** & **Sensor** & **BPCER** & **APCER** & **ACE (\%)** \\ \hline \multirow{4}{*}{**LivDet 2017**} & **Digital Persona** & 3.34 & 2.46 & 2.87 \\ \cline{2-6} & **Orcanthus** & 5.59 & 2.04 & 3.66 \\ \cline{2-6} & **Greenbit** & 4.68 & 3.44 & 4.00 \\ \cline{2-6} & **Average** & **4.53** & **2.64** & **3.51** \\ \hline \multirow{4}{*}{**LivDet 2019**} & **Digital Persona** & 12.86 & 4.74 & 8.43 \\ \cline{2-6} & **Greenbit** & 2.06 & 1.80 & 1.92 \\ \cline{1-1} \cline{2-6} & **Orcanthus** & 2.83 & 0.65 & 1.69 \\ \cline{1-1} \cline{2-6} & **Average** & **5.91** & **2.39** & **4.01** \\ \hline \end{tabular}
\end{table} TABLE IV: Intra-Sensor performance on LivDet 2017 and 2019 databases
## 6 Conclusion
The deployment of AFRS in security and commercial applications makes them vulnerable to being deceived through PAs. In this paper, an FPAD mechanism is presented which has shown the capability of detecting spoofs when they are created using known and unknown fabrication materials. The method is suitable for real-life applications where an attacker can utilize novel materials for the fabrication of the spoof. Existing handcrafted as well as deep learning-based methods are found to be insufficient in detecting PAs while being tested in the aforementioned scenarios. In this paper, a novel end-to-end model is presented which utilizes the handcrafted and convolutional features together for the detection of spoofs. Additionally, it suggests an ensemble of both in an end-to-end manner, making it appropriate for real-time use in conjunction with an AFRS. The proposed method is tested on benchmark databases in different experimental protocols. The efficacy of the proposed model is compared with state-of-the-art methods including statistical and handcrafted feature-based methods, perspiration and pore-based methods, and deep learning-based methods. In the future, we will investigate the proposed model's capabilities for cross-sensor validation using benchmark fingerprint databases.
|
2301.13017 | Einstein's equations and the pseudo-entropy of pseudo-Riemannian
information manifolds | Motivated by the corrected form of the entropy-area law, and with the help of
von Neumann entropy of quantum matter, we construct an emergent spacetime by
the virtue of the geometric language of statistical information manifolds. We
discuss the link between Wald and Jacobson approaches of thermodynamic/gravity
correspondence and Fisher pseudo-Riemannian metric of information manifold. We
derive in detail Einstein's field equations in statistical information
geometric forms. This results in finding a quantum origin of a positive
cosmological constant that is founded on Fisher metric. This cosmological
constant resembles those found in Lovelock's theories in a de Sitter background
as a result of using the complex extension of spacetime and the Gaussian
exponential families of probability distributions, and we find a time varying
dynamical gravitational constant as a function of Fisher metric together with
the corresponding Ryu-Takayanagi formula of such system. Consequently, we
obtain a dynamical equation for the entropy in information manifold using
Liouville-von Neumann equation from the Hamiltonian of the system. This
Hamiltonian is suggested to be non-Hermitian, which corroborates the approaches
that relate non-unitary conformal field theories to information manifolds. This
provides some insights on resolving "the problem of time". | Hassan Alshal | 2023-01-30T15:52:34Z | http://arxiv.org/abs/2301.13017v2 | # Einstein's Equations and the Entropy of pseudo-Riemannian Information Manifolds
###### Abstract
Motivated by the corrected form of the entropy-area law, and with the help of von Neumann entropy of quantum matter, we construct an emergent spacetime by the virtue of the geometric language of statistical information manifolds. We discuss the link between Wald-Jacobson approaches of thermodynamic/gravity correspondence and Fisher _pseudo_-Riemannian metric of information manifold. We derive in detail Einstein's field equations in statistical information geometric forms. This results in finding a quantum origin of a positive cosmological constant that is founded on Fisher metric. This cosmological constant resembles those found in Lovelock's theories in a de Sitter background as a result of using the complex extension of spacetime and the Gaussian exponential families of probability distributions, and we find a time varying dynamical gravitational constant as a function of Fisher metric together with the corresponding Ryu-Takayanagi formula of such system. Consequently, we obtain a dynamical equation for the entropy in information manifold using Liouville-von Neumann equation from the Hamiltonian of the system. This Hamiltonian is suggested to be non-Hermitian, which corroborates the approaches that relate non-unitary conformal field theories to information manifolds. This provides some insights on resolving "the problem of time".
###### Contents
* 1 Introduction
* 2 Information and Spacetime Thermodynamics
* 3 Entropy and Riemannian Geometry
* 3.1 Vector space construction
* 3.2 Density manifold
* 3.3 Manifold metric
* 3.4 Euclidean structure of the space of observables
* 3.5 Fisher metric and Kullback-Leibler divergence
* 3.6 Hessian structure and Einstein tensor
* 3.7 Obtaining a pseudo-Riemannian information manifold
* 4 Entropy of the Information Manifold
* 5 Discussions and Conclusions
* 6 Appendix
## 1 Introduction
One of the main challenges in physics is to find a fundamental dynamics between geometry/gravity and quantum matter. Many approaches, such as string theory, gravity/field correspondence, and loop quantum gravity, try to tackle this problem in different ways and frameworks; see [1] for a detailed review. In this paper, we approach the problem from the perspective of information manifolds and entropy. In the last few years, information geometry has earned great interest in fields like machine learning [2] and deep learning in physics [3]. The information manifold and entropy concepts, particularly the _relative entropy_, are extremely useful in understanding many physical patterns, including, but not limited to, quantum computers [4], chemistry [5], biological systems [6], and even economics [7]. The entropy-area law, corrected and generalized by the outside von Neumann entropy [8, 9, 10], is introduced to the quantum information geometry in order to check that geometry obeys the second law of thermodynamics and preserves information [11]. Our proposal to approach the gravity/quantum problem arises intuitively from looking at the information paradox in black hole physics from the entanglement entropy perspective [12, 13], along with the formally established second law of thermodynamics for black hole Noetherian charges [14, 15, 16] and the quantum origin of spacetime and the Einstein equations [17].
The outline of this work is organized as follows. Right after this passage, and within the introduction section, we summarize the reasons behind correlating the holographic principle with relevant entropy, coarse-grained entropy, Kullback-Leibler divergence, and
Fisher information metric. We then investigate in section (2) the corrected entropy-area law by von Neumann entropy in order to both satisfy the second law of thermodynamics and preserve information. We compute a form of corrected entropy-area law in the light of Liouville-von Neumann equation so that we get a dynamical relation between the quantum Hamiltonian and the time variation of both entropy and area. Later in the same section, we briefly review the gravity/thermodynamics correspondence, developed by Jacobson [17] and by Wald [14] since we will use that later in the following sections. We comment on the relation between the quantum origin of spacetime and the dynamical equation of expansion rate. In section (3), we provide a detailed study of the geometric interpretation of coarse-grained entropy. We cover the essential structures of the information geometry, and we apply the thermodynamics/geometry correspondence to derive a quantum information geometry form of Einstein field equation. The Fisher information metric will be introduced to measure how far the _cumulant probability density functions_ are away from each others after being varied with respect to the microscopic variables. In other words, the Fisher metric tells us about the quota of information the microscopic variables carry in the statistical manifolds [18, 19]. Thus, the cumulant probability density functions become good candidates to count on developing the Fisher metric, instead of using moments of distributions, when we study the correlations between variables. For example, think of the mean square error as a second moment of the error. When it is differentiated with respect to the microscopic value, the varied cumulant probability density measures the level of proficiency in the model-data fitting1. This also appears in Kullback-Leibler (KL) divergence that measures the difference between probability distributions. The last concept plays a fundamental role in relating Fisher metric to Shannon/von Neumann entropy, which is what we show later. And that concept can be seen mathematically as a second derivative with respect to the microscopic variables2 acting on the KL divergence, which is nothing but the inverse of the probability distribution of Shannon/von Neumann entropy. Such structure is very similar to the mathematical definitions of curvatures in Riemannian manifolds, which is also what we introduce in the same section. Thus, the Fisher metric is in fact a metric from which the curvature structure in the information manifold is obtained [21] once proved to be endowed with other properties of Riemannian manifolds. This will lead to reformulate Einstein field equations in the corresponding information manifold. Additionally, a positive cosmological term in the Einstein equations is obtained in the information manifold. Later, we relate the components of the field equations to commutant functions and get a more detailed informatic description to the gravitational constant. In section (4), we introduce a dynamical equation of entropy in the information manifold using only quantum information geometry without using any classical components. It is a new combination of von Neumann master equation, the von Neumann entropy, and the black hole entropy formula, and its generalization to statistical manifolds. Then we use the
RT formula for the statistical manifold to describe its corresponding entropy. In section (5), we discuss and comment on the findings.
The developments achieved so far in entropic information theory have led to conceptually rich ideas like entanglement entropy [22]. This line of thought can be traced back to the relation discovered between the area and the black hole entropy in the Bekenstein-Hawking formula [9, 23]. Then, the holographic principle, developed by 't Hooft [24] and Susskind [25], suggests finding a correspondence between the \(3D\) volume and the \(2D\) area, which leads to the known gauge/gravity correspondence, or the AdS/CFT as a quantum field theory with local degrees of freedom [26], and the sufficiently described entropy at the _microscopic level_ in contrast to the Bekenstein-Hawking law. Thus, we are allowed to address the entanglement entropy [27, 28], which describes the quantum information load in the quantum states, as stored information encoded in the geometric features of the space [29]. But since different \(3D\) surface geometries are indeed associated with different entanglement entropies, the area in the Bekenstein-Hawking law is suggested to be replaced by another area law for the _extremal surfaces_, known as Ryu-Takayanagi (RT) surfaces in holography models [12, 13], which provides a clue that we can rethink the entropy, which is built of microscopic variables, as the fundamental underpinning of the classical spacetime geometry upon reintroducing RT surfaces to Wald's formula for the entropy of black holes [30].
But the complicated relations among the microscopic variables, together with the difficulties associated with measuring those variables, are the reasons of why it is always more convenient to express the statistical phenomena corresponding to the microscopic variables in terms of the stochastic variables that are ruled by more fundamental and relatively easier-to-measure laws. Such reductionism in the description automatically will lead to select some microscopic variables to be _relevant_ and other microscopic variables to be _irrelevant_[19]. Relevant variables are all parameterized as functions in time or any other affine parameter. We can think of the relation between the microscopic and macroscopic variables like the relation between the speed of gas molecules and the temperature of the whole sample, both can be used to describe different types of thermodynamical energies. These microscopic relevant variables should be averaged, using the relevant density function, such that they define the macroscopic or the stochastic variables. And the averaging process could be done using any density matrix as we discuss in details in subsection (3.1). Additionally, any two different densities, constructed from those relevant variables, are favored over each other according to which of them is capable of defining maximum entropy that corresponds to least amount of data loss due to the unfavorable unavoidable effects coming from irrelevant information. Such density is known as the _canonical coarse-grained density_, and its corresponding entropy is known as the _relevant entropy_. This is the candidate entropy to describe RT formula of statistical manifolds as discussed in section (3).
Due to the mathematical difficulties a person might encounter while trying to find the exact states and their corresponding densities, it is suggested to replace the general relevant entropy related to holography with other more specified entropies: the _coarse-grained
entropy_ and _the relative entropy_. Also, we emphasize that the coarse-grained entropy is "lossy but true" entropy [31] as it depends on the macroscopic variables of the system, see Ref. [32] for more information. It is argued that we can coarse-grain any type of quantum entropy, such as entanglement entropy, using _observational entropy_[33]. The technique of coarse-graining obtained with help of the projectors acting on Hilbert space has been previously used in information geometry3[19]. It is worth noting that observational entropy is a quantum analogue of the classical Boltzmann entropy. More importantly, observational entropy shows the measurement limitations when one tries to get more precise information even if the density state is more precisely known. Thus, the process of coarse-graining is inevitable because even pure states span over more than one macrostate in the phase space due to superposition. Yet, observational entropy is bounded from below by von Neumann entropy, and equals to the later if the former satisfies the coarse-graining conditions in Ref. [33], i.e. after several consecutive coarse-graining processes, one can end up having a fine-grained entropy. Generally for finite-dimensional systems, the observational entropy can be expressed as a _relative entropy_. More precisely, observational entropy can take the form of Kullback-Leibler divergence, which is a type of relative entropy, from which we obtain Fisher information metric in statistical information manifolds4.
Footnote 3: Balian _et al._[19] did not restrict the physical observables to be the Hilbert space projectors but Safranek _et al._ did [33].
Footnote 4: It is argued that semiclassical coarse-graining of holographic states, as realized in _tensor networks_, results in a flow in spacetimes approaching RT surface [34].
## 2 Information and Spacetime Thermodynamics
In order to obtain an entropy-area law that respects the second law of thermodynamics and preserves information, we need the full entropy of a black hole to contain both the entropy that represents what is inside the horizon and the entropy of the quantum matter outside the horizon [35, 11, 36]. This means the Bekenstein generalized entropy law [8, 9, 10] would take the following form
\[S_{\rm BH}=\frac{A_{\rm H}}{4G\hbar}+S_{\rm matter}, \tag{1}\]
where \(S_{\rm BH}\) is the full entropy of the black hole, \(A_{\rm H}\) is the area of black hole horizon and \(S_{\rm matter}\) is the von Neumann entropy of the matter outside the black hole. The constants are Planck constant \(\hbar\) and Newton's gravitational constant \(G\). On one side, von Neumann entropy for a quantum-mechanical system described by a density matrix \(\rho\) is given by
\[S_{\rm matter}=-\operatorname{Tr}(\rho\ln\rho). \tag{2}\]
On the other side, the time-evolution equation of the density matrix \(\rho\) is given by Liouville-von Neumann equation [37],
\[\frac{d\rho}{dt}=\frac{1}{i\hbar}[H,\rho], \tag{3}\]
where \(H\) is the Hamiltonian of the considered quantum system. Since the trace operator commutes with the differential time operator, we can in principle write a time evolution equation for the von Neumann entropy. For that purpose, we use the straightforward mathematical trick
\[\frac{d\rho}{dt}=\frac{d}{dt}(\rho\ln\rho)-\frac{d\rho}{dt}\ln\rho. \tag{4}\]
Using Eq. (4) in Eq. (3), we get the quantum time evolution equation as follows
\[i\hbar\frac{d}{dt}(\rho\ln\rho)-i\hbar\frac{d\rho}{dt}\ln\rho=[H,\rho]. \tag{5}\]
We take the trace of both sides and use the fact that trace operator commutes with the differential time operator. Then, we substitute Eq. (2) in Eq. (5) to get
\[-i\hbar\ \frac{d}{dt}\ S_{\rm matter}={\rm Tr}\left[i\hbar\frac{d\rho}{dt}\ln \rho+[H,\rho]\right]. \tag{6}\]
Thus, an equation for the time evolution of von Neumann entropy is obtained. For a black hole system, the von Neumann entropy satisfies the second law of thermodynamics only through introducing the coarse-grained entropy as we mentioned in the introduction. Therefore, we use Eq. (1) to rewrite the time evolution of entropy as follows
\[-i\hbar\frac{dS_{\rm BH}}{dt}+i\frac{1}{4G}\frac{dA_{\rm H}}{dt}={\rm Tr} \left[i\hbar\frac{d\rho}{dt}\ln\rho+[H,\rho]\right]. \tag{7}\]
The last equation introduces a relation between the black hole full entropy, the black hole horizon area, and the density matrix of quantum states. The equation depends on the fundamental constants \(\hbar\) and \(G\) that are suggested to connect gravity and quantum matter. As we observe here, we did not make any additional assumptions to get to Eq. (7). It does follow from the Liouville-von Neumann equation and direct mathematical manipulations. It appears that Eq. (7) is a quantum/semi-classical form of the entropy-area law for the black hole. It introduces a relation between geometry (Area) and quantum matter (Density Matrix). We will reconsider this relation after we connect entropy as a macroscopic quantum quantity to the area as a geometric quantity. But before that, it is worth noting that the two concepts are related in general within the _non-dissipative_ systems, i.e. systems with \(dS_{\rm BH}/dt=0\). This assumption is valid as \(S_{\rm BH}\) is the _total_ corrected entropy of the black hole, and the RT extremal surface, or "island", is realized [13]. Such extremal surface realization is related to the existence of a flat plateau in the corresponding Page curve [38]. Therefore, Eq. (7) becomes
\[i\frac{1}{4G}\frac{dA_{\rm H}}{dt}={\rm Tr}\left[i\hbar\frac{d\rho}{dt}\ln \rho+[H,\rho]\right]. \tag{8}\]
Upon solving the previous equation, either the horizon area would have the fine-grained entropic definition, which is expected as the horizon should contain the information of the entangled particles that fall inside it, or the fine-grained quantum part of the black hole entropy would have a geometric meaning. Before we discuss the second meaning, which is what we do in section (4), we emphasize that all previous equations are derived assuming that Liouvillian mechanics stems from \(d\rho/dt=0\). Also, the Hamilton-Jacobi equation shows that \(H=-\partial\mathcal{A}/\partial t\), where \(\mathcal{A}\) is the action. Wald [14] noticed that all one needs to do is express the entropy as a function of the density of the state and then apply Liouvillian mechanics to solve the Hamilton-Jacobi equation in order to generate the action that will be extremized to get the conserved quantities together with the Euler-Lagrange equations. This is why we swiftly review the gravity/thermodynamics correspondence developed by Jacobson [17, 39] based on Wald's approach [14].
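As an illustrative numerical aside (not part of the original derivation), the sketch below evolves an arbitrary two-level mixed state under a toy Hamiltonian according to the Liouville-von Neumann equation (3) and checks that the von Neumann entropy (2) stays constant, so that the right-hand side of Eq. (6) vanishes for purely unitary dynamics; the matrices `H` and `rho0` are arbitrary illustrative choices, not taken from the text.

```python
import numpy as np
from scipy.linalg import expm, logm

hbar = 1.0
# Toy two-level Hamiltonian and an arbitrary mixed initial state (illustrative choices only)
H = np.array([[1.0, 0.3], [0.3, -1.0]])
rho0 = np.array([[0.7, 0.2], [0.2, 0.3]])

def von_neumann_entropy(rho):
    # S = -Tr(rho ln rho), Eq. (2)
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return -np.sum(evals * np.log(evals))

def evolve(rho, t):
    # Unitary solution of the Liouville-von Neumann equation, Eq. (3)
    U = expm(-1j * H * t / hbar)
    return U @ rho @ U.conj().T

for t in (0.0, 0.5, 1.0, 2.0):
    rho = evolve(rho0, t)
    drho_dt = (H @ rho - rho @ H) / (1j * hbar)          # d(rho)/dt from Eq. (3)
    rhs = np.trace(1j * hbar * drho_dt @ logm(rho) + (H @ rho - rho @ H))
    print(f"t={t:3.1f}  S={von_neumann_entropy(rho):.6f}  RHS of Eq.(6)={rhs.real:+.2e}")
```

A nonzero right-hand side of Eq. (6), and hence a time-dependent entropy, would require non-unitary (coarse-grained or dissipative) dynamics, which is the situation addressed by the island discussion above.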
Assuming a Rindler frame of reference, Jacobson found that the Einstein equations can be obtained from the entropy-horizon area relation together with the laws of thermodynamics [17]. According to Unruh's radiation [40], the radiation temperature detected by a Rindler observer is directly proportional to the uniform acceleration "\(a\)" of that observer. Applying the equivalence principle to the Rindler transformation affirms the local flatness condition [41, 42]. Thus, each point in the spacetime has its own local Rindler horizons, both past and future, with Killing fields in the null directions of the horizons. The relations between heat flux and the black hole hairs, i.e. the mass and the angular momentum 5, are discussed in [14, 43, 44, 45] through the Hamiltonian formulation of the first law of thermodynamics. In brief, the heat flow of such a system could be defined to obey the averaged null energy condition along a time-like geodesic
Footnote 5: Electromagnetic energy is better represented separately inside the density matrix \(\rho\).
\[\delta Q=-a\int T_{\mu\nu}k^{\mu}k^{\nu}\,t\,dt\,\delta A, \tag{9}\]
where \(T_{\mu\nu}\) is the energy momentum tensor, \(k^{\mu}\) is the null vector, \(\delta A=\sqrt{\gamma}dA\) is the variation in the congruence cross sectional area of the horizon, and \(\sqrt{\gamma}\) is the determinant of the induced metric of the \(2D\) spatial area element of the horizon element \(dA\). The last equation stems originally from Wald's formula [39, 46]. And for the null geodesic of \(k^{\mu}\) that generates the horizon, and upon cancelling the higher order contributions, the Raychaudhuri equation gives
\[\theta=-tR_{\mu\nu}k^{\mu}k^{\nu}. \tag{10}\]
where the affine parameter \(\theta\) is the rate of change of \(\sqrt{\gamma}\)[47], i.e.
\[\theta=\frac{1}{\sqrt{\gamma}}\frac{d}{dt}\left(\sqrt{\gamma}\right) \tag{11}\]
which also describes the expansion of \(\delta A\). The last two equations can be combined to give
\[\theta A\frac{d\theta}{dA}=-R_{\mu\nu}k^{\mu}k^{\nu}. \tag{12}\]
As in [17], Eq. (9) and Eq. (10) yield the relations
\[\frac{\delta Q}{\delta A}=\frac{a\hbar}{2\pi\ell_{P}^{2}}, \tag{13}\]
which demands that the Einstein equations become
\[G_{\mu\nu}=\frac{2\pi\ell_{P}^{2}}{\hbar}T_{\mu\nu}. \tag{14}\]
where the \(G_{\mu\nu}\) is the Einstein tensor and \(\ell_{P}\) is the Planck length. This establishes that Einstein equation is indeed an equation of state as Jacobson emphasizes.
It is worth mentioning that another approach to understanding gravity as an entropic force is suggested by Verlinde in [48, 49], but we will not review it here. Rather, one might say that there is nothing new in this section so far. But Eq. (12) and Eq. (13) entice us to consider the existence of a new relation between \(dS=\delta Q/T\) and \(\delta\theta\) at the microscopic level of \(\rho\) considering the von Neumann definition of entropy. Such an \((S,\theta)\) relation is indeed a dynamical relation, and we will study that relation in the context of information geometry in section (4). But first, we need to study the meaning of the geometric quantities \(G_{\mu\nu}\) and \(\theta\) in the classical spacetime using the Fisher Riemannian metric of the information manifold. In the next section, we adopt the Fisher metric as it guarantees extending the geometric quantities to any quantum theory. Moreover, the observational entropy is realized as the Kullback-Leibler divergence, which is another information quantity from which we can obtain the Fisher metric.
## 3 Entropy and Riemannian Geometry
With the help of _Lie group thermodynamics_ in the framework of the covariant formalization of geometrized thermodynamics [50, 51], we can expand the analysis that resulted in Eq. (8) in the speculative direction of the geometric interpretation of the fine-grained quantum part of the black hole entropy. The Balian _et al._ gauge theory of thermodynamics [19], as an extension of the Fisher Riemannian metric of the information manifold, guarantees extending the geometric interpretation to any quantum theory. We introduce the Balian _et al._ Riemann metric in the density matrix space as the Hessian of the fine-grained entropy. This metric would help identify the information loss, and hence, could resolve the information loss paradox. The metric is situated as a canonical function in between the space of states and the space of observables. This involves Legendre transforms just like those in Liouvillian mechanics.
### Vector space construction
According to Balian _et al._[19], it is useful to focus only on the space of the density states as the density state is more suitable for information gathering in comparison with observables
themselves. Instead of using the density state as an explicit parameter to describe the density state space metric, we would rather use the fine-grained entropy to define such metric as the density state could be an _incomplete description_ for the information relative to the observables, i.e. the quantum relation of the average values of any observable \(\hat{\mathcal{O}}\)
\[\langle\hat{\mathcal{O}}\rangle=\mathrm{Tr}\left[\hat{\mathcal{O}} \ \hat{\rho}\right], \tag{15}\]
does not define a unique \(\rho\) as there are different \(\rho\)'s that sufficiently define the observable average value. Besides that the fine-grained entropy function has a _global maximum_, as we infer from Eq. (2) and Fig. (1), the entropy makes the densities satisfying Eq. (15) equivalent, and hence, the loss in the information of the observables, due to the different \(\rho\)'s, is irrelevant. But the density corresponding to maximum entropy \(\rho_{0}\) is more favorable to describe the density space metric as, by construction, it has the minimum information to calculate different types of observables. Even for the irrelevant information, \(\rho_{0}\) contains the least of them. This is why the entropy \(S[\rho_{0}]\), known as the relevant entropy, is one assigned to describe the macroscopic thermal phenomena such as the thermodynamically gravitational quantities in the dissipative systems. But for non-dissipative systems we can use the usual \(S[\rho]\). To elaborate the above discussion, we focus in Eq. (15) on the \(\hat{\rho}\hat{\mathcal{O}}\) components. It is natural to choose the density \(\hat{\rho}_{\hat{\mathcal{O}}}\) to be the density proportional to the eigenprojectors of \(\hat{\mathcal{O}}\) in the corresponding sub-Hilbert space. Therefore, all the off-diagonal elements in the general \(\hat{\rho}\) will be disregarded, and the left information is stored in the \(\hat{\rho}_{\hat{\mathcal{O}}}\). By defining the \(\hat{\rho_{0}}\) as the density that has all the relevant information of all observables similar to \(\hat{\mathcal{O}}\), it becomes clearer why \(S[\rho_{0}]>S[\rho]\).
We construct a geometric interpretation to the previously mentioned thoughts such that the "variation of the relevant entropy represents a transfer of information between the relevant and the irrelevant variables." and "dissipation appears as a leakage of the relevant information" [19]. With the help of the algebraic structure endowed in the observable space, the geometric construction in the language of manifolds6 would help us in explaining Eq. (7-8) in a purely quantum information way. The observable \(\hat{\mathcal{O}}\) has eigenvectors \(|\alpha^{k}\rangle\) in the space states, and therefore, it can be written as
Footnote 6: In that sense, the density matrix \(\rho\) should be seen as a _pre-probability_[52].
\[\langle\hat{\mathcal{O}}\rangle=\sum_{k}\langle\alpha^{k}|\rho| \alpha^{k}\rangle\mathrm{Tr}\Big{[}|\alpha^{k}\rangle\langle\alpha^{k}| \mathcal{O}\Big{]}. \tag{16}\]
To define the vector space of the observable \(\hat{\mathcal{O}}\), we set the components and the bases to be
\[\mathfrak{o}_{\mu} :=\langle\alpha^{k}|\rho|\alpha^{k}\rangle, \tag{17a}\] \[f^{\mu} :=|\alpha^{k}\rangle\langle\alpha^{k}|\mathcal{O}, \tag{17b}\]
respectively as the components and the bases of the _Liouville vector_\(\vec{\mathcal{O}}=\mathfrak{o}_{\mu}f^{\mu}\).
As the set of all observables \(\hat{\mathcal{O}}\) is characterized by \(\rho\), and \(\rho\) itself is an operator that can be written in terms of any set of orthonormal bases \(\{|i\rangle\}\), we can rewrite Eq. (15-17) such that we define the density operator as a vector \(\vec{\rho}\) with components
\[\rho^{\mu}:=\langle f^{\mu}\rangle=\operatorname{Tr}\left[f^{\mu}\rho\right]. \tag{18}\]
Then, when \(\mathcal{O}\) becomes \(\rho\), Eq. (15) becomes
\[\overrightarrow{\langle\hat{\rho}\rangle}:=\vec{\rho}=e_{\mu}\rho^{\mu}, \tag{19}\]
where \(\rho^{\mu}\) act as the components of the averaged observable \(\langle\hat{\rho}\rangle\) in its _Liouville vector representation_\(\overrightarrow{\langle\hat{\rho}\rangle}\), and the basis components for such vector are defined as \(e_{\mu}:=|i\rangle\langle i|\) that are dual to \(f^{\mu}\), i.e. \(e_{\mu}\cdot f^{\nu}=\delta_{\mu}^{\ \nu}\). In such representation, Eq. (15) can be seen as a _bilinear relation_
\[||\vec{\mathcal{O}}||=\langle\vec{\mathcal{O}},\vec{\rho}\rangle:=\mathfrak{o} _{\mu}\rho^{\mu}. \tag{20}\]
Figure 1: Density manifold and the evolution of entropy surfaces away from reduced state \(\rho_{0}\).
To make the above construction more elaborate, let's say that \(\hat{\mathcal{O}}\) commutes with the spatial observable \(\hat{x}\), i.e. the \(\vec{\mathcal{O}}\) is a function in the components of \(\hat{x}\). Then, the density \(\rho^{\mu}\) would correspond to the averaged components \(\langle x^{\mu}\rangle\). As the density space is made of all the \(\langle x^{\mu}\rangle\) including the irrelevant ones, this means it is valid to embed the function \(\vec{\mathcal{O}}\) as a hypersurface in the space spanned by \(\rho^{\mu}\) after being expressed in terms of the dual basis of \(\vec{\rho}\). This surface is called _the surface of reduced states_, and it is extremized at \(\rho_{0}\). At the same time, it is much easier to describe the density in terms of the microscopic variables \(\xi^{\mu}\) and the stochastic variables \(\mathbb{X}^{\mu}\), both as functions parameterized by the spacetime observables \(x^{\nu}\) or their averages, and we will do that later in subsection (3.5). But for now, we have just finished setting the stage for the debut of the density space. Next, we show how this space can be promoted to a density manifold.
### Density manifold
In the density space, \(\rho\) is guaranteed to be a function in time such that Eq. (3) is satisfied. Geometrically this means that \(\rho(t)\), as a point in such space, evolves from the point \(P(\rho_{0})\) on the surface \(\Sigma_{P}\), where \(S[\rho]=S[\rho_{0}]\), to another point on the same \(\Sigma_{P}\) with \(S[\rho]\) along some trajectory \(\lambda\)[53]. This is defined as an exponential function just like how we relate, in Riemannian geometry, a vector to the points belonging to the trajectory that the vector is tangent to. Thus
\[\exp_{P}:t\cdot\rho^{\mu}\rightarrow\lambda_{\rho^{\mu}}(t). \tag{21}\]
As \(t\in[0,1]\), then \(\exp(1.\rho^{\mu})=\lambda_{\rho^{\mu}}(1)\) is the final point \(P(\rho)(t)\) at the trajectory. Meanwhile \(\exp(0.\rho^{\mu})=\lambda_{\rho^{\mu}}(0)\) is the initial point \(P(\rho_{0})\). The \(\rho_{0}^{\mu}\) can take any direction. Therefore, its \(i^{th}\) component \(\rho^{i}\) along the tangent of the trajectory is given by
\[\rho_{0}^{i}\equiv\frac{d\lambda(t)}{dt}\bigg{|}_{t=0}. \tag{22}\]
Moreover, the \(i^{th}\) component of \(\rho_{0}^{i}\) is allowed to be in the direction of any irrelevant basis \(\rho^{\mu}\) as long as it is tangent to the trajectory \(\lambda\). This motivates considering the exponential function (21) as a diffeomorphism between the neighborhood of point \(P\), which belongs to the state space, and the vector \(\rho_{0}^{\mu}\), which belongs to the tangent space \(\mathrm{T}_{P}\Sigma\) at point \(P\). If we normalize \(\rho_{0}^{i}\forall i\), then the orthonormal basis \(\{\rho_{0}^{\mu}\}\) of \(\mathrm{T}_{P}\Sigma\) provides the isomorphism
\[E:\mathbb{R}^{n}\xrightarrow{\sim}\mathrm{T}_{P}\Sigma, \tag{23}\]
such that \(E(\xi^{1},\cdots,\xi^{n})|_{P}=c_{i}\rho_{0}^{i}\).
The previous manifold-related definitions are valid regardless of whether the surface is extremized or not. Therefore, there exist _charts_ \(\psi\) on a manifold \(\mathcal{M}\) containing all \(\lambda\)'s and \(\mathrm{T}_{Q}\Sigma\), for all \(Q\in\mathcal{M}\), such that
\[\psi:=(\exp\cdot E)^{-1}:\mathcal{M}\rightarrow\mathbb{R}^{n}. \tag{24}\]
Also, the exponential map of the density manifold acts similarly to how the well-known exponential map \(\exp(tV)=\lambda_{V}(t)\) works between any Lie group and its Lie algebra, where \(V\in\) the algebra, \(\lambda_{V}(t)\in\) the group, and \(t\in\mathbb{R}\). Then, there exists an analytic diffeomorphism in a neighborhood \(U\) of \(V=0\) such that, for the coordinates \(\xi^{\mu}\in\mathbb{R}^{n}\) defined by the isomorphism \(E\) in Eq. (23), we find that \(\exp(\xi_{i}V^{i})\in\exp(U)\). This would define the necessary canonical chart.
Moreover, the entropy \(S[\rho]\) never loses information as long as the initial point \(P(\rho_{0})\) evolves in a Hamiltonian trajectory. However, when we disregard the irrelevant information, we do something similar to the _shift and lapse_ such that the entropy \(S[\rho]=S[\rho_{0}]\) evolves in time through the extremized and non-extremized surfaces, i.e. we can practice _push-forward_, \(f_{*}:\Sigma_{Q}\to\Sigma_{P}\), (or _pull-back_) transformations between \(\Sigma_{P}\) and \(\Sigma_{Q}\), together with applying the rules of Lie derivatives in order to relate the surfaces to each other. Now we are ready to introduce a metric on this _information manifold_.
### Manifold metric
We notice that the exponential function in Eq. (21) transfers \(\rho^{\mu}\) to a tangent space with the same properties but at a different point. Since the logarithmic function is the inverse of the exponential, then, in light of Eq. (20), we can safely say that \(\ln\rho\) does the same to \(\rho\) except that the domain here becomes the _dual tangent space_. This implies that the entropy \(S[\rho]\) acts as a bilinear map between the density \(\rho\) and the _information content_\((-\ln\rho)\). As both \(\rho\) and \((-\ln\rho)\) are unique geometric vectors, the entropy provides us with a linear isomorphic relationship between the tangent and the dual tangent spaces, i.e. the duality
\[\rho\equiv\widetilde{(\ln\rho)}. \tag{25}\]
is legitimate. Therefore, there exists a map \(\,\mathcal{G}\) on the \(\rho\) space such that
\[\mathcal{G}:(\ln\rho)_{\mu} \mapsto\mathcal{G}_{\mu\nu}\rho^{\nu}, \tag{26a}\] \[\mathcal{G}:(\rho)^{\mu} \mapsto\mathcal{G}^{\mu\nu}(\ln\rho)_{\nu}. \tag{26b}\]
The map \(\,\mathcal{G}\) is shown to be symmetric and real, and to have positive eigenvalues in the Liouville vector representation [19]. It is the best function to play the role of the metric in the \(\rho\) space. Then, we can use Eq. (26) to demonstrate the entropy as the bilinear function
\[S\langle\rho,\ln\rho\rangle:=-\,\mathcal{G}_{\mu\nu}(\rho(\xi))\rho^{\mu}\rho ^{\nu}. \tag{27}\]
The \(\rho\) space is dense enough (satisfying the topological features of manifolds) that we can introduce the infinitesimal change \(d\rho\). Therefore, Eq. (27) can be redefined infinitesimally such that the second differential in the entropy becomes the metric itself. And the \(\rho\) space becomes eligible for a promotion to be a Riemannian manifold endowed with the invariant distance
\[-ds^{2}:=d^{2}S\langle\rho,\ln\rho\rangle=-\,\mathcal{G}_{\mu\nu}(\rho(\xi))d \rho^{\mu}d\rho^{\nu}, \tag{28}\]
where the map (26) is explicitly defined as
\[\mathcal{G}(e_{\mu},e_{\nu}):=\mathcal{G}_{\mu\nu}(\rho(\xi))=-\frac{d^{2}S}{d \rho^{\mu}d\rho^{\nu}}. \tag{29}\]
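As a small symbolic aside (not from Ref. [19]), one can check Eq. (29) explicitly for a diagonal (classical) density: with \(S=-\sum_{\mu}\rho^{\mu}\ln\rho^{\mu}\), the negative Hessian is \(\mathrm{diag}(1/\rho^{\mu})\), so the line element (28) reduces to \(\sum_{\mu}(d\rho^{\mu})^{2}/\rho^{\mu}\), the Fisher form that reappears as Eq. (54) below. The three-component density and the symbol names are illustrative assumptions.

```python
import sympy as sp

# Diagonal (classical) density with three probabilities; normalization is not imposed,
# so the Hessian is taken with respect to the unconstrained components rho^mu.
r1, r2, r3 = sp.symbols('rho1 rho2 rho3', positive=True)
rho = [r1, r2, r3]

S = -sum(r * sp.log(r) for r in rho)                      # von Neumann/Shannon entropy, Eq. (2)

# Metric of Eq. (29): G_{mu nu} = - d^2 S / (d rho^mu d rho^nu)
G = sp.Matrix(3, 3, lambda i, j: -sp.diff(S, rho[i], rho[j]))
print(G)          # diag(1/rho1, 1/rho2, 1/rho3)

# Line element of Eq. (28): -d^2 S = sum_i (d rho_i)^2 / rho_i, the Fisher form of Eq. (54)
d = sp.symbols('drho1 drho2 drho3')
ds2 = sum(G[i, i] * d[i]**2 for i in range(3))
print(sp.simplify(ds2))
```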
We now can introduce the Hamiltonian in Eq. (7-8) as a _superoperator_7\(\mathcal{H}\) such that
Footnote 7: The prefix _super_ has no Grassmann rings, i.e. has nothing to do with Supersymmetry or graded algebra in general.
\[\mathcal{H} :=\mathcal{H}_{\nu}{}^{\mu}e_{\mu}\otimes f^{\nu}, \tag{30a}\] \[\mathcal{H}_{\mu}{}^{\nu} =\langle\mathcal{H}e_{\mu},f^{\nu}\rangle. \tag{30b}\]
If we set \(\mathcal{O}=\mathcal{H}\), then Eq. (3), or the commutator in Eq. (15), becomes the Liouville superoperator
\[\frac{d\rho}{dt}=\mathscr{L}\rho=-i[\mathcal{H},\rho]. \tag{31}\]
Eq. (30) helps defining the components of \(\mathscr{L}\) as
\[\mathscr{L}_{\mu}{}^{\nu}=-i\mathrm{Tr}f^{\nu}[\mathcal{H},e_{\mu}]=-i \mathrm{Tr}[f^{\nu},\mathcal{H}]e_{\mu}. \tag{32}\]
Reintroducing the Liouville operator as a superoperator excavates its "super power" such that it manifestly plays the role of the Lie derivatives on the Riemannian \(\rho\) manifold. The Jacobian of transformations between \(\Sigma_{P}\) and \(\Sigma_{Q}\) is given by
\[J:=\det\left[\frac{\partial\rho(t+\delta t)}{\partial\rho(t)}\right]=1-i \mathscr{L}\delta t \tag{33}\]
or
\[i\mathscr{L}_{\nu}{}^{\mu}\rho^{\nu}:=\lim_{\delta t\to 0}\frac{\rho^{\mu}(Q)(t+ \delta t)-\rho_{0}^{\mu}(P)(t)}{\delta t}. \tag{34}\]
It is obvious that, with the above geometric interpretation, we also can introduce the evolution superoperator
\[\mathscr{U}:=\exp[-i\mathscr{L}(\delta t)], \tag{35}\]
which plays a role similar to that of the Lie algebra, or the Killing fields, over the usual Riemannian manifolds. We find Eq. (29) leads us to define the _dual vector_\((-\ln\rho)\) as
\[-(\ln\rho) :=\varrho_{\mu}f^{\mu}, \tag{36a}\] \[\varrho_{\mu} :=(-\ln\rho)_{\mu}\equiv\frac{\partial S}{\partial\rho^{\mu}}. \tag{36b}\]
Then, we introduce the Legendre transformation
\[\mathfrak{S}(\varrho_{\mu}) = S-\langle\ln(\rho)\rangle\] \[= S+\varrho_{\mu}\rho^{\mu}.\]
This transform reintroduces the \(\rho\) components to be
\[\rho^{\mu}:=\frac{\partial\mathfrak{S}}{\partial\varrho_{\mu}}. \tag{38}\]
Therefore, the metric in Eq. (29) can be contravariantized as
\[\mathcal{G}^{\mu\nu}(\rho(\xi)) := \frac{\partial^{2}\mathfrak{S}}{\partial\varrho_{\mu}\varrho_{\nu}} \tag{39}\] \[= \frac{\partial\rho^{\kappa}}{\partial\varrho_{\mu}}\frac{\partial \rho^{\lambda}}{\partial\varrho_{\nu}}\frac{\partial^{2}\mathfrak{S}}{\partial \rho^{\kappa}\partial\rho^{\lambda}}.\]
Consequently, Eq. (28) is transformed into
\[d^{2}\mathfrak{S}=d^{2}S-\left[d^{2}\varrho_{\mu}\rho_{\mu}+2d\varrho_{\mu}d \rho_{\mu}+\varrho_{\mu}d^{2}\rho_{\mu}\right]. \tag{40}\]
The metricity \(\nabla\mathcal{G}=0\), or the parallel transport along geodesics \(\nabla_{\rho}\rho=0\), implies that there exists a set of connection coefficients \(\{\}\) on the \(\rho\) manifold similar to the Levi-Civita connections on the Einstein manifold. This means that both metric and connections are related through
\[\mathcal{G}(\nabla_{e^{\lambda}}e_{\mu},e_{\nu}):=\left\{{}^{\kappa}_{\lambda\mu}\right\}\mathcal{G}_{\kappa\nu}(\rho(\xi)), \tag{41a}\] \[\left\{{}^{\kappa}_{\lambda\mu}\right\}=\frac{1}{2}\mathcal{G}^{\kappa\iota}(\rho(\xi))\bigg{[}\partial_{\mu}\mathcal{G}_{\iota\lambda}(\rho(\xi))+\partial_{\lambda}\mathcal{G}_{\iota\mu}(\rho(\xi))-\partial_{\iota}\mathcal{G}_{\mu\lambda}(\rho(\xi))\bigg{]}, \tag{41b}\]
which is enough to introduce geodesic equations, Riemann curvature tensor and related other tensors. Additionally, Eq. (29) and Eq. (39) reveal a Hessian structure on the density manifold such that Eq. (41a) can be rearranged to get the corresponding connection coefficients of the first kind
\[\left\{{}_{\lambda\mu\nu}\right\} =-\frac{1}{2}\frac{\partial^{3}S}{\partial\rho^{\lambda}\partial \rho^{\mu}\partial\rho^{\nu}}\] \[=\frac{1}{2}\frac{\partial\ \mathcal{G}_{\lambda\mu}(\rho(\xi))}{ \partial\rho^{\nu}}. \tag{42}\]
In light of Balian _et al._ metric, Eq. (31) and Eq. (36a) yield
\[\mathscr{L}_{\mu}^{\ \nu}\varrho_{\nu}=\mathcal{G}_{\mu\lambda}\mathscr{L}_{ \nu}^{\ \lambda}\rho^{\nu}. \tag{43}\]
In light of Eq. (26), the symmetric property of Balian _et al_ metric, and the covariant form of the Liouville superoperator \(\mathscr{L}_{\mu\nu}=\mathcal{G}_{\mu\lambda}(\rho(\xi))\mathscr{L}_{\nu}^{\lambda}\), we differentiate Eq. (43) such that
\[0=\frac{1}{2}\frac{\partial\ \mathcal{G}_{\mu\lambda}(\rho(\xi))}{ \partial\rho^{\kappa}}\mathscr{L}_{\nu}^{\ \lambda}\rho^{\nu}+\frac{1}{2}\left(\mathscr{L}_{\mu\kappa}+\mathscr{L}_{\kappa \mu}\right). \tag{44}\]
Then, we substitute Eq. (42) and Eq. (31) in Eq. (44) such that
\[-i\frac{d\mathcal{G}_{\mu\nu}(\rho(\xi))}{dt}=\mathscr{L}_{\mu\nu}+\mathscr{L} _{\nu\mu}\, \tag{45}\]
which is the reason why we said before that \(\mathscr{U}\), as defined in Eq. (35), plays a role similar to that of Lie algebra or the Killing fields over the usual Riemannian manifolds.
### Euclidean structure of the space of observables
The density manifold reveals the Euclidean structure in the example we mentioned in subsection _vector space construction._ It means for an observable \(\vec{\mathcal{W}}\) there exists a component8 such that the orthogonal projection \(\mathscr{P}\) defines the components \(\upomega^{j},j\neq i\), as
Footnote 8: The components are those of the measured \(\upomega\) of the observable \(\vec{\mathcal{W}}\).
\[\left\langle\vec{\mathcal{W}}-\mathscr{P}\vec{\mathcal{W}},\vec{\mathcal{W}} \right\rangle=\left\langle\left(\vec{\mathcal{W}}-\upomega\right)^{j},\upomega _{i}\right\rangle=\delta_{i}{}^{j}\, \tag{46}\]
where the bilinear form is defined according to the map (26), i.e. we can infer that \(\mathscr{P}\) is a superoperator, with Greek indices, such that
\[\mathscr{P}\vec{\mathcal{W}}=\mathscr{P}_{\nu}{}^{\mu}\upomega_{\mu}f^{\nu}. \tag{47}\]
Therefore, we can connect this Euclidean space to the Riemannian density manifold by introducing a _vielbein_ structure9\(e_{i}{}^{\mu}\equiv\left(\upomega\right)_{i}{}^{\mu}\) such that the Euclidean flat metric corresponding to this structure is defined as
Footnote 9: Vielbein structure in information manifold is similar to those of the _pseudo_-Riemannian manifolds.
\[\upg_{ij}(\rho(\xi)):=\mathcal{G}_{\mu\nu}(\rho(\xi))e_{i}{}^{ \mu}e_{j}{}^{\nu}, \tag{48a}\] \[\upg^{jk}(\rho(\xi))\upg_{ki}(\rho(\xi))=\delta_{i}{}^{j}. \tag{48b}\]
In an information manifold this metric plays the same role the spatial spacetime metric \(\gamma_{ij}\) does in defining the expansion parameter in Eq. (11). It is worth noting that the vielbein acts on the density vector as a projection operator to yield the components of the density vector in the Liouville space, which is another way to define the axes in Fig. (1), i.e. we could start from the projection operator and the vielbein structure and work backward until we reach the Liouville vector representation of the density operator; both approaches are therefore equivalent. And if we get back to Fig. (1) and choose a point \(P^{\prime}(t_{0})\in\Sigma_{Q}\) along the curve \(t_{0}\), then
\[\mathscr{P}\rho(t_{0})=\rho_{0}(t_{0}), \tag{49a}\] \[\rho^{i}:=\left\langle\upomega^{i}\right\rangle=\left\langle \upomega^{i},\rho\right\rangle=\left\langle\upomega^{i},\rho_{0}\right\rangle,\ \ \forall i. \tag{49b}\]
Now it is safe to infer that the distances on the surfaces \(\Sigma\) in Fig. (1) are given by
\[ds_{\Sigma}^{2} = \upg_{ij}(\rho(\xi))d\rho^{i}d\rho^{j} \tag{50}\] \[= \upg^{ij}(\rho(\xi))d\rho_{i}d\rho_{j}\,\]
where \(\upg_{ij}(\rho(\xi))\) need not equal \(\upg^{ij}(\rho(\xi))\) in general, i.e. \(d\rho_{i}\) does not necessarily equal \(\upg_{ij}(\rho(\xi))d\rho^{j}\), as \(d\rho_{i}\) is more like the component \(\mathfrak{o}_{i}\) of \(\vec{\mathcal{O}}\) as defined previously. The last important manifold structure can be obtained by combining Eq. (22) and Eq. (49) such that
\[\left\langle\upomega^{i},\,\upomega\right\rangle=0\, \tag{51}\]
where the vector \(\upomega=(\rho-\rho_{0}^{i})\) is the tangent along the curve \(t_{0}\), see Fig. (1). Thus, we have a _vector bundle_ structure, where the _base_ is the surface \(\Sigma_{P}\) and the _fibres_ are the curves \(t_{0}+n\delta t,n\in\mathbb{N}\). See Ref. [54] for more about the relation with the _blurred space_.
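A minimal numerical sketch of the contraction in Eq. (48) (an illustrative aside; the metric `G` and the vielbein `e` below are arbitrary choices, not objects defined in the text): project a symmetric positive-definite Liouville-space metric onto two "relevant" directions and verify the duality relation \(\upg^{jk}\upg_{ki}=\delta_{i}{}^{j}\).

```python
import numpy as np

rng = np.random.default_rng(0)

# A symmetric positive-definite metric G on a 4-dimensional Liouville space (arbitrary example)
A = rng.normal(size=(4, 4))
G = A @ A.T + 4 * np.eye(4)

# A vielbein e_i^mu projecting onto two "relevant" directions (columns chosen arbitrarily)
e = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.3, -0.2],
              [0.1, 0.5]])

# Induced metric of Eq. (48a): g_ij = G_{mu nu} e_i^mu e_j^nu
g = e.T @ G @ e
g_inv = np.linalg.inv(g)

# Eq. (48b): g^{jk} g_{ki} = delta_i^j
print(np.allclose(g_inv @ g, np.eye(2)))   # True
```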
### Fisher metric and Kullback-Leibler divergence
The previously constructed density manifold has Euclidean signature. So, we target constructing an information manifold with a _Lorentzian signature_\(\text{diag}(-1,+1,+1,\cdots)\), and we achieve this goal in subsection (3.7). For now, we focus on relating Balian _et al_ metric to Fisher metric. As the vacuum spacetime is in a "continuous" experience of quantum fluctuations, it can be optimized stochastically such that the expectation values of the operators over spacetime, i.e. the stochastic variables 10, are invariant under coordinate transformations between different frames of references [56]. This means we can define the density vectors \(\rho^{\mu}\), which is a function in the classical variables \(x^{\mu}\) characterizing the spacetime itself, as function in the stochastic variables \(\mathbb{X}^{\mu}\equiv\mathbb{X}^{\mu}(\langle x^{\mu}\rangle,\sigma_{x^{\mu}})\) that are functions in a \(2D\) space of averages \(\mathbb{X}^{\mu}(\langle x^{\mu}\rangle)\) and standard deviations \(\sigma_{x^{\mu}}\)11. Then, we may guess that Balian _et al._ metric \(\mathcal{G}_{\mu\nu}\left(\rho(\xi^{\mu})\right)\) in Eq. (28) could be expressed as an explicit function of \(\xi^{\mu}\) and \(\mathbb{X}^{\mu}\) variables, i.e. \(\mathcal{G}_{\mu\nu}\equiv\mathcal{G}_{\mu\nu}(\xi^{\mu};\mathbb{X}^{\mu})\). In order to check the validity of this guess, we need to check that the probability density stays the same when coordinate transformations are considered. The best candidate to test this requirement is the Kullback-Leibler divergence
Footnote 10: For more on the properties of the stochastic variables, see Ref. [55].
Footnote 11: See Ref. [57] for more on the \(2D\) manifold of averages and standard deviations in the exponential families of normal distribution.
\[D_{\text{KL}}=\sum_{\xi^{i};\mathbb{X}^{i}}\rho(\xi^{\mu};\mathbb{X}^{\mu})\ln \frac{\rho(\xi^{\mu};\mathbb{X}^{\mu})}{\rho(\xi^{\mu}+d\xi^{\mu};\mathbb{X}^{ \mu})}\Delta\mathbb{X}^{i}. \tag{52}\]
As the spacetime variables are infinitesimally changing and \(\rho\to d\rho\), then Eq. (52) becomes
\[D_{\text{KL}}=-\int\limits_{\mathcal{M}}\!\rho(\xi;\mathbb{X})\Big{[}\frac{1} {\rho(\xi;\mathbb{X})}\frac{\partial\rho(\xi;\mathbb{X})}{d\xi^{\mu}}d\xi^{ \mu}-\frac{1}{2}\frac{1}{\rho^{2}(\xi;\mathbb{X})}\frac{\partial\rho(\xi; \mathbb{X})}{d\xi^{\mu}}\frac{\partial\rho(\xi;\mathbb{X})}{d\xi^{\nu}}d\xi^{ \mu}d\xi^{\nu}\Big{]}d\mathbb{X}, \tag{53}\]
where \(\rho(\xi;\mathbb{X})\equiv\rho(\xi^{\mu};\mathbb{X}^{\mu})\). The second term in Eq. (53) is nothing but the _Fisher metric_
\[ds^{2}:=\sum_{i}\frac{(d\rho_{i})^{2}}{\rho_{i}}\, \tag{54}\]
which is the same metric in Eq. (28) but as an explicit function in \(\xi^{\mu}\) and \(\mathbb{X}^{\mu}\)[19]. Applying Fisher metric, chain rule, and the completeness relation of the density to Eq. (52) yields
\[D_{\text{KL}}=\int\limits_{\mathcal{M}}\frac{1}{2}\mathcal{G}_{\mu\nu}(\xi; \mathbb{X})d\xi^{\mu}d\xi^{\nu}d\mathbb{X}. \tag{55}\]
Eq. (52) can be seen as
\[D_{\text{KL}}\sim-d^{2}S=d\bigg{[}\frac{\partial S}{\partial\rho_{\mu}}d\rho^ {\mu}\bigg{]}=\frac{\partial^{2}S}{\partial\rho^{\mu}\partial\rho^{\nu}}\frac {\partial\rho^{\mu}}{\partial\xi^{\kappa}}\frac{\partial\rho^{\nu}}{\partial \xi^{\lambda}}d\xi^{\kappa}d\xi^{\lambda}. \tag{56}\]
By comparing Eq. (39), Eq. (55), and Eq. (56), and as the Kullback-Leibler divergence is nothing but a modified Shannon entropy, the covariant form of Eq. (39) says that
\[\mathcal{G}_{\mu\nu}(\varrho(\xi;\mathbb{X}))=\frac{\partial\varrho}{\partial \xi^{\mu}}\frac{\partial\varrho}{\partial\xi^{\nu}}. \tag{57}\]
As we notice, the density is no longer a vector, it is just a function, and the vectors of the new manifold are \(\partial_{\mu}\equiv\partial/\partial\xi^{\mu}\). If we restrict ourselves to expressing all densities as functions in the stochastic variables \(\mathbb{X}^{\mu}\) rather than the canonical ones \(\xi^{\mu}\), i.e. \(\rho\equiv\rho(\mathbb{X}^{\mu})\), then, with the help of Eq. (15-20), we suppress the \(\xi^{\mu}\)-dependence of the Fisher metric density, i.e. from now on the Balian _et al._ metric12\(\mathcal{G}_{\mu\nu}\) is not the metric we use. And Eq. (57) should be improved as
Footnote 12: We move from the information manifold of Balian _et al._[19] to the information manifold of Amari [58].
\[\langle\mathcal{G}_{\mu\nu}(\xi;\mathbb{X})\rangle=\langle\frac{\partial \varrho}{\partial\xi^{\mu}}\frac{\partial\varrho}{\partial\xi^{\nu}}\rangle. \tag{58}\]
In order to understand the last result, we need to get back to the Fisher metric. Without loss of generality, Eq. (53) describes the relation between the entropy at the reduced state \(\rho_{0}\) and any other state \(\rho\), see the Points \(P\) and \(Q\) in Fig. (1). Then, _in information manifold the probabilistic average of the stochastic Fisher metric \(\mathtt{g}_{\mu\nu}\) plays the same role the spacetime metric does in the Riemannian manifolds_, i.e. \(g_{\mu\nu}\simeq\mathtt{g}_{\mu\nu}\). Thus,
\[\mathtt{g}_{\mu\nu}:=\langle\mathcal{G}_{\mu\nu}(\xi,\mathbb{X})\rangle. \tag{59}\]
This is crucial for defining Einstein tensor and the analogue of the gravitational constant in the information manifold as we will see by the end of the next subsection (3.6).
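As a purely illustrative numerical check of the expansion (52)-(55) (not part of the original text), consider the exponential family of one-dimensional Gaussians with coordinates \(\xi=(\mu,\sigma)\), whose Fisher metric is \(\mathtt{g}=\mathrm{diag}(1/\sigma^{2},2/\sigma^{2})\). The exact Kullback-Leibler divergence between two nearby members should approach \(\frac{1}{2}\mathtt{g}_{\mu\nu}d\xi^{\mu}d\xi^{\nu}\) as the separation shrinks; the base point and displacements below are arbitrary.

```python
import numpy as np

def kl_gauss(mu_p, sig_p, mu_q, sig_q):
    # Exact KL divergence D_KL(p || q) between two 1D Gaussians
    return (np.log(sig_q / sig_p)
            + (sig_p**2 + (mu_p - mu_q)**2) / (2.0 * sig_q**2) - 0.5)

def fisher_quadratic(sig, dmu, dsig):
    # (1/2) g_{mu nu} dxi^mu dxi^nu with the Gaussian Fisher metric g = diag(1/sig^2, 2/sig^2)
    return 0.5 * (dmu**2 / sig**2 + 2.0 * dsig**2 / sig**2)

mu, sig = 0.3, 1.2          # arbitrary base point on the statistical manifold
for eps in (1e-1, 1e-2, 1e-3):
    dmu, dsig = 0.7 * eps, -0.4 * eps
    exact = kl_gauss(mu, sig, mu + dmu, sig + dsig)
    quad = fisher_quadratic(sig, dmu, dsig)
    print(f"eps={eps:.0e}  D_KL={exact:.3e}  (1/2) g dxi dxi={quad:.3e}  ratio={exact/quad:.4f}")
```

The ratio of the exact divergence to the quadratic form tends to unity as the displacement shrinks, which is the content of Eq. (55).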
### Hessian structure and Einstein tensor
For an information manifold endowed with exponential family of distribution, there exists a potential function \(\phi\) such that its Hessian defines the metric of that manifold [59]. Non-exponential families do the same but in a less straightforward way13, for more on that see Ref. [62, 63] and references there. For simplicity we discuss the Hessian structure of exponential families, but our discussion is applicable to non-exponential ones too. We share with Ref. [64] constructing Einstein tensor from the Fisher metric in information manifold. Also, we find Einstein tensor to be endowed with relevant information to construct the energy-momentum tensor from the varying cumulant partition function defined as a scalar field. This is guaranteed naturally since the entropy data of the system underpin the field strength of the system. Thus, we can consider Einstein equations in information manifold as the equations of coarse-grained states for the original microscopic system of
quantum field theories behind the classical ones. The major difference between this work and the endeavor of Ref. [64] is that, without assuming any family of exponentials, we find a _positive-definite cosmological-like background term_ in the coarse-grained Einstein equations, particularly in the terms containing the derivatives of the Christoffel connections. This suggests redefining the Einstein tensor in the information manifold to become a Lovelock tensor. Consequently, coarse-graining the states could reveal extra disguised higher-order curvature terms in the theory. Additionally, contrary to the choice in Ref. [64] of real probability distributions that suit AdS/CFT, we impose the family of exponentials to be defined as complex probability distributions such that we construct the Einstein tensor in an information manifold with Lorentzian signature. This does not change the result that the gravitational constant is indeed dynamical. However, it changes the sign of the extremal area and the entropy. This result is consistent with the fact that complex probabilities are associated with non-Hermitian Hamiltonians and their non-unitary transformations. Ref. [64] admits the problem of controlling the energy scale of the quantum field theory in the information geometric approach. In order to resolve that, we suggest repeating the process of coarse-graining until fine-graining is achieved [33]. Consequently, the information approach would have a renormalization process, and the von Neumann equation will be fine-grained as suggested in Ref. [11]. Thus, our modifications may introduce a quantum information geometry approach to dS/CFT [65, 66].
As shown in Appendix 2.7 of Ref. [61], the variational Bayesian inferences limited to exponential families, like the KL divergence, can be generalized through the technique of _parameter separation parameterization_. The technique is applicable to both exponential and non-exponential families of distributions by _linearly_ relating \(\rho\) to a _real valued potential function_14. The potential function \(\phi\) would help defining the metric of the information manifold as shown in the Appendix. Without loss of generality, the equality \(\partial_{\mu}\partial_{\nu}\phi=\mathtt{g}_{\mu\nu}\) in Eq. (A20) is very similar to the bilinear relation \(\mathrm{Hess}(\phi)=\nabla\nabla\phi\), or the _Hessian_, in Riemannian geometry [58]. We are allowed to do this comparison because of the bundle structure in the density manifold we referred to at the end of subsection (3.4). Remember that \(\varrho\in\mathrm{T}^{*}_{P}\Sigma\), i.e. the Hessian acts on the potential function \(\phi\) to get an element in the sections of tangent bundles \(\mathrm{Hess}(\phi)\in\Gamma(\mathrm{T}^{*}\Sigma\otimes\mathrm{T}^{*}\Sigma)\). Also, the density manifold and the canonical variable manifold, which could be the conventional spacetime, are related through transforming \(\varrho\) into \(\partial_{\mu}\varrho\). Then, we can apply a derivative on Eq. (A21) to get
Footnote 14: See function \(h(y)\) defined in Eq. 2.14 in Ref. [61]
\[\partial_{\lambda}\mathtt{g}_{\mu\nu}=\partial_{\lambda}\partial_{\mu} \partial_{\nu}\phi=\langle\partial_{\lambda}(\partial_{\mu}\partial_{\nu} \varrho)\rangle=\langle\partial_{\lambda}(\partial_{\mu}\varrho\partial_{\nu} \varrho)\rangle=-\langle\partial_{\lambda}\varrho\partial_{\mu}\varrho \partial_{\nu}\varrho\rangle, \tag{60}\]
where the tensor relation between the vectors \(\partial_{\mu}\varrho\) is suppressed, the last equality comes from \(\langle\partial_{\mu}(\partial_{\nu}\varrho)\rangle=\langle\partial_{\mu}\varrho\partial_{\nu}\varrho\rangle\), and the negative sign in the last equality comes from the metric definition in Eq. (27). More obviously from the behavior of the derivatives, we have
\[\partial_{\lambda}\mathtt{g}_{\mu\nu}=\langle\partial_{\lambda}\partial_{\mu} \varrho\partial_{\nu}\varrho\rangle+\langle\partial_{\mu}\varrho\partial_{ \lambda}\partial_{\nu}\varrho\rangle. \tag{61}\]
Using the symmetric property of \(\mathfrak{g}_{\mu\nu}\), the last two equations lead us to
\[\partial_{\lambda}\mathfrak{g}_{\mu\nu}=\frac{1}{2}\Big{[}\langle \partial_{\lambda}\partial_{\mu}\varrho\partial_{\nu}\varrho\rangle+\langle \partial_{\mu}\varrho\partial_{\lambda}\partial_{\nu}\varrho\rangle-\langle \partial_{\lambda}\varrho\partial_{\mu}\varrho\partial_{\nu}\varrho\rangle\Big{]}. \tag{62}\]
In light of Eq. (41b), the last result is very enticing to define the Christoffel connection corresponding to the canonical variable manifold as [57, 58]
\[\mathbb{I}^{\lambda}_{\ \mu\nu} =-\frac{1}{2}\mathfrak{g}^{\lambda\kappa}\partial_{\kappa} \partial_{\mu}\partial_{\nu}\phi\] \[=\mathfrak{g}^{\lambda\kappa}\Big{[}\langle\partial_{\kappa} \varrho\partial_{\mu}\partial_{\nu}\varrho\rangle-\frac{1}{2}\langle \partial_{\kappa}\varrho\partial_{\mu}\varrho\partial_{\nu}\varrho\rangle \Big{]}. \tag{63}\]
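To make the Hessian structure behind Eqs. (60)-(63) concrete, the following symbolic sketch (an illustrative aside assuming the one-dimensional Gaussian exponential family in its natural parameters \(\theta_{1}=\mu/\sigma^{2}\), \(\theta_{2}=-1/(2\sigma^{2})\)) computes the cumulant potential \(\phi\), its Hessian (the Fisher metric, equal to the covariance matrix of the sufficient statistics \((x,x^{2})\)), and the third-derivative tensor that generates the connection coefficients, up to the overall sign convention adopted in Eqs. (42) and (63).

```python
import sympy as sp

# Natural parameters of the 1D Gaussian exponential family:
# theta1 = mu/sigma^2, theta2 = -1/(2 sigma^2)  (theta2 < 0)
t1, t2 = sp.symbols('theta1 theta2', real=True)

# Cumulant (log-partition) potential phi(theta) of the Gaussian family
phi = -t1**2 / (4 * t2) + sp.Rational(1, 2) * sp.log(-sp.pi / t2)

theta = [t1, t2]
# Hessian structure: g_{mu nu} = d^2 phi / (d theta^mu d theta^nu)
g = sp.Matrix(2, 2, lambda i, j: sp.diff(phi, theta[i], theta[j]))
print(sp.simplify(g))

# Third-derivative tensor d^3 phi, which generates the connection coefficients
# (up to the overall sign convention adopted in Eqs. (42) and (63))
C = [[[sp.simplify(sp.diff(phi, theta[k], theta[i], theta[j]))
       for j in range(2)] for i in range(2)] for k in range(2)]
print(C)
```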
Next, we use Eq. (63) to calculate the Ricci tensor and the Ricci scalar as functions in the dual density variables and their derivatives. The Ricci tensor is obtained as usual from the contracted Riemann tensor
\[\mathbb{R}_{\mu\nu}=\mathfrak{g}^{\alpha\beta}\mathbb{R}_{\alpha \mu\beta\nu}=\partial_{\alpha}\mathbb{I}^{\alpha}_{\ \mu\nu}-\partial_{\nu}\mathbb{I}^{\alpha}_{\ \mu\alpha}+\mathbb{I}^{\alpha}_{\ \beta\alpha}\mathbb{I}^{\beta}_{\ \mu\nu}-\mathbb{I}^{\alpha}_{\beta\nu} \mathbb{I}^{\beta}_{\ \mu\alpha}. \tag{64}\]
Meanwhile the Ricci scalar also is obtained as usual for the contacted Ricci tensor
\[\mathbb{R}=\mathfrak{g}^{\mu\nu}\mathbb{R}_{\mu\nu}. \tag{65}\]
Then, the Einstein equation in information manifold becomes
\[\mathbb{G}_{\mu\nu}=\mathbb{R}_{\mu\nu}-\frac{1}{2}\mathfrak{g}_ {\mu\nu}\mathbb{R}. \tag{66}\]
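As an illustrative symbolic computation (not taken from the text), the sketch below evaluates Eqs. (64)-(66) for the two-parameter Gaussian Fisher metric \(\mathrm{diag}(1/\sigma^{2},2/\sigma^{2})\) in the coordinates \((\mu,\sigma)\). It returns the well-known constant negative curvature of the Gaussian statistical manifold (Ricci scalar \(\mathbb{R}=-1\)) and an identically vanishing Einstein tensor, as expected in \(D=2\) and consistent with the \(D(D/2-1)\) factor that appears in Eq. (68) below.

```python
import sympy as sp

mu, sig = sp.symbols('mu sigma', positive=True)
x = [mu, sig]
dim = 2

# Fisher metric of the (mu, sigma) Gaussian family (illustrative choice of statistical manifold)
g = sp.diag(1 / sig**2, 2 / sig**2)
ginv = g.inv()

# Christoffel symbols Gamma^l_{mn} (Levi-Civita form, cf. Eq. (41b))
def gamma(l, m, n):
    return sp.Rational(1, 2) * sum(
        ginv[l, k] * (sp.diff(g[k, m], x[n]) + sp.diff(g[k, n], x[m]) - sp.diff(g[m, n], x[k]))
        for k in range(dim))

# Ricci tensor, Eq. (64)
def ricci(m, n):
    expr = sum(sp.diff(gamma(a, m, n), x[a]) - sp.diff(gamma(a, m, a), x[n])
               + sum(gamma(a, b, a) * gamma(b, m, n) - gamma(a, b, n) * gamma(b, m, a)
                     for b in range(dim))
               for a in range(dim))
    return sp.simplify(expr)

Ric = sp.Matrix(dim, dim, lambda m, n: ricci(m, n))
R = sp.simplify(sum(ginv[m, n] * Ric[m, n] for m in range(dim) for n in range(dim)))  # Eq. (65)
G = sp.simplify(Ric - sp.Rational(1, 2) * g * R)                                      # Eq. (66)

print("Ricci  =", Ric)    # -(1/2) g_{mu nu}: constant curvature -1/2
print("R      =", R)      # -1
print("G_{mn} =", G)      # identically zero in D = 2
```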
As shown in the Appendix, we deconstruct Eq. (66) into two main pieces. We analyze these pieces using the definitions of the metric and Christoffel connections in information manifolds such that the Hessian structure is realized. So, we can reintroduce Eq. (66) as
\[\mathbb{G}_{\mu\nu} =\Lambda\mathfrak{g}_{\mu\nu}+(\tilde{\mathbb{R}}_{\mu\nu}-\frac {1}{2}\mathfrak{g}_{\mu\nu}\tilde{\mathbb{R}})+\frac{1}{2}\mathfrak{g}_{\mu \nu}\tilde{\mathbb{R}}, \tag{67a}\] \[\text{or}\ \ \tilde{\mathbb{G}}_{\mu\nu}+\Lambda\mathfrak{g}_{\mu \nu}=\mathbb{G}_{\mu\nu}-\frac{1}{2}\mathfrak{g}_{\mu\nu}\tilde{\mathbb{R}}, \tag{67b}\]
where \(\tilde{\mathbb{R}}_{\mu\nu}\) is explained in Eq. (69), and \(\tilde{\mathbb{G}}_{\mu\nu}+\Lambda\mathfrak{g}_{\mu\nu}\) plays a role similar to that of the Lovelock tensor \(A_{\mu\nu}=G_{\mu\nu}+\Lambda g_{\mu\nu}\) in the Einstein spacetime manifold [67].
The first new piece obtained from rearranging then deconstructing Eq. (66), which is Eq. (A9-A9\({}^{\prime}\)), gives a cosmological-like term in the information manifold as
\[\Big{(}\partial_{\alpha}\mathbb{I}^{\alpha}_{\ \mu\nu}-\partial_{\nu} \mathbb{I}^{\alpha}_{\ \mu\alpha}\Big{)}-\frac{1}{2}\mathfrak{g}_{\mu\nu} \mathfrak{g}^{\kappa\lambda}\Big{(}\partial_{\alpha}\mathbb{I}^{\alpha}_{\ \kappa\lambda}-\partial_{\kappa}\mathbb{I}^{\alpha}_{\ \lambda\alpha}\Big{)}=\frac{1}{2}D(\frac{D}{2}-1)\mathfrak{g}_{\mu\nu}\equiv \frac{1}{2}\Lambda\mathfrak{g}_{\mu\nu}\, \tag{68}\]
where \(\Lambda=D(D/2-1)\) is defined for a \(D\geqslant 2\) dimensional manifold as in Lovelock theory of gravity [68]. This helps our information manifold pass a necessary condition, though not a sufficient one on its own, to develop an additional Gauss-Bonnet term for the corresponding Riemann tensor that is expressed in its information form. The appearance of the \(D(D/2-1)\) term in the theory makes it very tempting to study the effects of having higher-curvature terms in the context of holography. We leave that to be discussed hopefully in a future study. We notice that the cosmological constant we obtained does not depend on the manifold parameters. It only depends on the number of dimensions, and it vanishes when we consider \(D=2\). This may point to a background symmetry behind the cosmological constant, and it may have a relation with the vanishing cosmological constant in the context of some M/string theory [69]. More importantly, this piece in the \(\tilde{\mathbb{G}}_{\mu\nu}\) tensor should exist for all kinds of exponential or non-exponential probability distributions. It neither demands \(\rho\) to be an exponential, Gaussian, or complex one, nor to be in dS or AdS spaces. And this is the major difference between this work and the endeavors in Ref. [64].
The second piece obtained from deconstructing Eq. (66) is written in Eq. (A14) in the Appendix. This piece, built from the non-differentiated Christoffel terms of both \(\mathbb{R}_{\mu\nu}\) and \(\mathbb{g}_{\mu\nu}\mathbb{R}\), defines a new tensor
\[\widetilde{\mathbb{R}}_{\mu\nu} =\Big{(}\mathbb{I}_{\beta\alpha}^{\alpha}\mathbb{I}_{\mu\nu}^{\beta}-\mathbb{I}_{\beta\nu}^{\alpha}\mathbb{I}_{\mu\alpha}^{\beta}\Big{)}-\frac{1}{2}\mathbb{g}_{\mu\nu}\mathbb{g}^{\kappa\lambda}\Big{(}\mathbb{I}_{\beta\alpha}^{\alpha}\mathbb{I}_{\kappa\lambda}^{\beta}-\mathbb{I}_{\beta\kappa}^{\alpha}\mathbb{I}_{\lambda\alpha}^{\beta}\Big{)}\] \[=\frac{1}{4}\mathbb{g}^{\alpha\kappa}\mathbb{g}^{\beta\lambda}\Big{[}\langle\partial_{\alpha}\varrho\partial_{\beta}\varrho\partial_{\kappa}\varrho\rangle\langle\partial_{\mu}\varrho\partial_{\nu}\varrho\partial_{\lambda}\varrho\rangle-\langle\partial_{\nu}\varrho\partial_{\beta}\varrho\partial_{\kappa}\varrho\rangle\langle\partial_{\mu}\varrho\partial_{\alpha}\varrho\partial_{\lambda}\varrho\rangle\Big{]}. \tag{69}\]
Without loss of generality15, we follow the exponential-family example in Ref. [64]. Using Eq. (A15) and the relations that follow it, Eq. (69) becomes
Footnote 15: We do not lose generality because the crucial steps in Eqs. (A20-A21) are guaranteed to work even with non-exponential families by the previously mentioned parameter-separation parametrization technique [61].
\[\widetilde{\mathbb{R}}_{\mu\nu}=\frac{1}{4}\mathbb{g}^{\alpha \kappa}\mathbb{g}^{\beta\lambda}\Bigg{\{} \Big{[}\mathcal{G}_{\alpha\beta}\mathcal{G}_{\kappa\lambda}- \mathbb{g}_{\alpha\beta}\mathbb{g}_{\kappa\lambda}\Big{]}+\mathcal{G}_{\kappa \lambda}\Big{[}\dot{\mathcal{G}}_{\alpha\beta}\langle\mathbb{X}-\langle x \rangle\rangle+\frac{1}{2}\ddot{\mathcal{G}}_{\alpha\beta}\langle(\mathbb{X}- \langle x\rangle)^{2}\rangle+\cdots\Big{]}\] \[+\mathcal{G}_{\alpha\beta}\Big{[}\dot{\mathcal{G}}_{\kappa \lambda}\langle\mathbb{X}-\langle x\rangle\rangle+\frac{1}{2}\ddot{\mathcal{G }}_{\kappa\lambda}\langle(\mathbb{X}-\langle x\rangle)^{2}\rangle+\cdots \Big{]}\Bigg{\}}\times\partial_{\mu}\phi\partial_{\nu}\phi. \tag{70}\]
Or
\[\widetilde{\mathbb{R}}_{\mu\nu}=\frac{1}{2}\mathbb{O}_{D}\ \partial_{\mu}\phi \partial_{\nu}\phi. \tag{71}\]
where the dynamical entity \(\mathbb{O}_{D}\) is defined as
\[\mathbb{O}_{D}=\frac{1}{2}\mathbb{g}^{\alpha\kappa}\mathbb{g}^{ \beta\lambda}\Bigg{\{} \Big{[}\mathcal{G}_{\alpha\beta}\mathcal{G}_{\kappa\lambda}- \mathbb{g}_{\alpha\beta}\mathbb{g}_{\kappa\lambda}\Big{]}+\mathcal{G}_{\kappa \lambda}\Big{[}\dot{\mathcal{G}}_{\alpha\beta}\langle\mathbb{X}-\langle x \rangle\rangle+\frac{1}{2}\ddot{\mathcal{G}}_{\alpha\beta}\langle(\mathbb{X}- \langle x\rangle)^{2}\rangle+\cdots\Big{]}\] \[+\mathcal{G}_{\alpha\beta}\Big{[}\dot{\mathcal{G}}_{\kappa \lambda}\langle\mathbb{X}-\langle x\rangle\rangle+\frac{1}{2}\ddot{\mathcal{ G}}_{\kappa\lambda}\langle(\mathbb{X}-\langle x\rangle)^{2}\rangle+\cdots\Big{]} \Bigg{\}}. \tag{72}\]
Using the cumulant partition function \(\phi\) as a classical scalar field in the information manifold, we can define a Lagrangian in \(D=4\) for an effective field theory as
\[\mathbb{L}=\mathfrak{g}^{\mu\nu}\partial_{\mu}\phi\partial_{\nu}\phi, \tag{73}\]
and the corresponding energy-momentum tensor is
\[\mathbb{T}_{\mu\nu}=\mathfrak{g}_{\mu\lambda}\frac{\partial\mathbb{L}}{ \partial(\partial_{\lambda}\phi)}\partial_{\nu}\phi-\mathfrak{g}_{\mu\nu} \mathbb{L}=\frac{1}{\mathbb{G}_{4}}\Bigg{(}\widetilde{\mathbb{R}}_{\mu\nu}- \frac{1}{2}\mathfrak{g}_{\mu\nu}\widetilde{\mathbb{R}}\Bigg{)}=\frac{1}{ \mathbb{G}_{4}}\widetilde{\mathbb{G}}_{\mu\nu}\, \tag{74}\]
where the last equality is evidently the Einstein equation with its _reduced_ tensors \(\widetilde{\mathbb{G}}_{\mu\nu}\) and \(\widetilde{\mathbb{R}}_{\mu\nu}\), written as functions of the stochastic variables \(\mathbb{X}\) in such an information geometry. We can also add the cosmological constant term we obtained before in Eq. (68). Comparing Eq. (74) with Eq. (14), we get
\[\mathbb{G}_{4}\simeq\frac{2\pi\ell_{P}^{2}}{\hbar}, \tag{75}\]
which is nothing but the gravitational constant in its information form for such a \(4D\) geometry. This indicates that quantum information geometry induces gravitational phenomena, and that the gravitational constant, besides its \(\hbar\) dependence, is no longer constant. This may support studies that imply gravity is induced from quantum mechanics [70, 71, 72, 73, 74]. It could also be connected with realizing space and time as approximate macroscopic concepts that stem fundamentally from quantum field theories [75]. Additionally, a varying gravitational constant may give a clue to Dirac's large numbers hypothesis [76], which also implies a varying gravitational constant based on a simple analysis of the dimensionless constants provided by nature.
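For orientation only, here is a rough numerical sketch of Eq. (75) (our own illustration; the SI values of the Planck length and of \(\hbar\) are assumed and are not quoted in the manuscript, and we use \(\ell_{P}^{2}=\hbar G/c^{3}\), so that \(2\pi\ell_{P}^{2}/\hbar=2\pi G/c^{3}\)):

```python
import math

# Assumed SI inputs (not taken from the manuscript)
l_P = 1.616e-35      # Planck length [m]
hbar = 1.055e-34     # reduced Planck constant [J s]

G4_info = 2*math.pi*l_P**2/hbar    # Eq. (75): gravitational "constant" in information form
print(f"{G4_info:.3e} m^2/(J s)")  # ~1.6e-35, numerically equal to 2*pi*G/c^3 in SI units
```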
The results obtained in Eqs. (63-68) are discussed in detail in the Appendix of the manuscript. There, the family of exponentials is not assumed to be either complex, which is what we discuss in the next subsection, or real as in Ref. [64], since we keep everything in the appendix in terms of an arbitrary \(\varrho\). Additionally, in Ref. [64], after the Gaussian exponential was imposed, the "whole" Einstein tensor was equated to a "negative" cosmological constant. This is not what we obtain. Rather, we say that the Einstein tensor \(\mathbb{G}_{\mu\nu}\) should be split into \(\partial\mathbb{I}\) terms that correspond to a "positive" cosmological constant defined in terms of the dimension \(D\) as in Eq. (68), and \(\mathbb{I}\mathbb{I}\) terms that introduce a modified Ricci tensor \(\tilde{\mathbb{R}}_{\mu\nu}\) as the kinetic term \(\partial_{\mu}\phi\partial_{\nu}\phi\) in Eq. (70). Notice that the Lagrangian \(\mathbb{L}\sim\mathfrak{g}^{\mu\nu}\tilde{\mathbb{R}}_{\mu\nu}\sim\Box\phi\) renders the reduced Einstein tensor \(\tilde{\mathbb{G}}_{\mu\nu}\) upon applying the variational principle with respect to \(\phi\), while the \(\Lambda\mathfrak{g}_{\mu\nu}\) piece is produced from varying \(\mathbb{L}\) with respect to \(\mathfrak{g}_{\mu\nu}=\partial_{\mu}\partial_{\nu}\phi\). This reminds us of the \(Y\) piece in the modified Lagrangian of gravity in the framework of superstrings [77]; both share the same Hessian structure. In brief, the Einstein tensor \(G_{\mu\nu}\) in Ref. [64] should rather be treated as a Lovelock tensor \(A_{\mu\nu}=G_{\mu\nu}+\Lambda g_{\mu\nu}\). Such a tensor should be split into a cosmological term \(\Lambda\mathfrak{g}_{\mu\nu}\) and a modified Ricci tensor \(\tilde{\mathbb{R}}_{\mu\nu}\), and the latter can be used to introduce a new modified Einstein tensor \(\tilde{\mathbb{G}}_{\mu\nu}\) that contains no cosmological terms. All of this can be obtained without assuming the densities to be expressed as a family of exponentials or any other family.
### Obtaining a pseudo-Riemannian information manifold
A question remains about how to construct an arbitrary \((D+1)\)-dimensional information geometry with a _Lorentzian signature_\(\mathrm{diag}(-1,+1,\cdots)\). And without loss of generality, we follow the complex16 Gaussian ansatz in Ref. [78] to get a \((1+1)\)-dimensional information manifold with a Lorentzian signature \(\mathrm{diag}(-1,+1)\)
Footnote 16: We comment on the consequences of this ansatz in the discussion section.
\[\rho(t)=\frac{1}{\sqrt{2\pi}\sigma}\exp\left[-\frac{(t-i\langle t\rangle)^{2}}{2\sigma^{2}}\right]=\exp\left[-\ln(\sqrt{2\pi}\sigma)-\frac{t^{2}}{2\sigma^{2}}+\frac{it\langle t\rangle}{\sigma^{2}}+\frac{\langle t\rangle^{2}}{2\sigma^{2}}\right], \tag{76}\]
which requires the components of the corresponding Fisher metric to be
\[\mathfrak{g}_{00} =\int d^{2}\xi\frac{1}{\rho(\xi)}\left(\frac{\partial\rho(\xi)}{ \partial\xi^{0}}\right)^{2}=-1, \tag{77a}\] \[\mathfrak{g}_{11} =\int d^{2}\xi\frac{1}{\rho(\xi)}\left(\frac{\partial\rho(\xi)}{ \partial\xi^{1}}\right)^{2}=1,\] (77b) \[\mathfrak{g}_{01} =\int d^{2}\xi\frac{1}{\rho(\xi)}\left(\frac{\partial\rho(\xi)}{ \partial\xi^{0}}\right)\left(\frac{\partial\rho(\xi)}{\partial\xi^{1}}\right)=0. \tag{77c}\]
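To make the sign structure of Eq. (77) concrete, the following minimal symbolic sketch (our own illustration, not part of the manuscript) computes the Fisher metric of the ordinary real Gaussian parameterized by its mean \(m\) and width \(\sigma\). The familiar result is \(\mathrm{diag}(1/\sigma^{2},2/\sigma^{2})\) with a vanishing off-diagonal component; replacing the mean by an imaginary one, as in Eq. (76), flips the sign of the mean-mean component, which is the mechanism behind the Lorentzian signature quoted above (up to the normalization chosen there).

```python
import sympy as sp

t, m = sp.symbols('t m', real=True)
s = sp.symbols('sigma', positive=True)

# Ordinary real Gaussian, parameterized by its mean m and width sigma
rho = sp.exp(-(t - m)**2/(2*s**2)) / (sp.sqrt(2*sp.pi)*s)

def fisher(a, b):
    """Fisher metric component: integral of (1/rho)*(d rho/d a)*(d rho/d b) over t."""
    integrand = sp.diff(rho, a)*sp.diff(rho, b)/rho
    return sp.simplify(sp.integrate(integrand, (t, -sp.oo, sp.oo)))

print(fisher(m, m), fisher(s, s), fisher(m, s))   # 1/sigma**2, 2/sigma**2, 0
```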
This Gaussian ansatz requires defining a _complex exponential family_ through the following distribution
\[\rho(\xi^{\mu})=\exp\left[\xi^{\mu}\mathbb{E}_{\mu}(\mathbb{X}^{\nu})-\phi( \xi^{\nu})\right], \tag{78}\]
where the function \(\mathbb{E}_{\mu}(\mathbb{X}^{\nu})\) plays the role of any physical property related to the corresponding canonical or intensive variables \(\xi^{\nu}\), and \(\phi(\xi^{\nu})\) is the _cumulant partition function_ of the \(\mathbb{E}_{\mu}(\mathbb{X}^{\nu})\) states [58], see Eq. (A15) and after in the Appendix. By comparing Eq. (76) to Eq. (78), it is easy to notice that
\[\mathbb{E}_{\mu} :=(\mathbb{E}_{0},\mathbb{E}_{1})=(t,t^{2}), \tag{79a}\] \[\xi^{\mu} :=(\xi^{0},\xi^{1})=\left(\frac{i\langle t\rangle}{\sigma^{2}},-\frac{1}{2\sigma^{2}}\right),\] (79b) \[\phi(\xi^{\nu}) :=\frac{1}{2}\ln\left(-\frac{\pi}{\xi^{1}}\right)-\frac{(\xi^{0})^{2}}{4\xi^{1}}=\ln\left(\sqrt{2\pi}\sigma\right)-\frac{\langle t\rangle^{2}}{2\sigma^{2}}, \tag{79c}\]
which defines the cumulant distribution as
\[\varrho:=-\ln(\rho(\xi)) \equiv\phi(\xi^{\nu})-\xi^{\mu}\mathbb{E}_{\mu}(\mathbb{X}^{\nu})\] \[=\ln(\sqrt{2\pi}\sigma)+\frac{t^{2}}{2\sigma^{2}}-\frac{it\langle t \rangle}{\sigma^{2}}-\frac{\langle t\rangle^{2}}{2\sigma^{2}}. \tag{80}\]
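As a quick consistency check of Eqs. (78)-(80) (a sketch of our own; the symbol `t_avg` stands for \(\langle t\rangle\) and is not a quantity defined in the manuscript), one can verify symbolically that the canonical parameters and statistics of Eq. (79) reproduce the exponent of the complex Gaussian in Eq. (76), i.e. that \(\varrho=\phi-\xi^{\mu}\mathbb{E}_{\mu}\) coincides with \(-\ln\rho\):

```python
import sympy as sp

t, t_avg = sp.symbols('t t_avg', real=True)
s = sp.symbols('sigma', positive=True)

# Canonical parameters and sufficient statistics of Eq. (79)
xi0, xi1 = sp.I*t_avg/s**2, -sp.Rational(1, 2)/s**2
E0, E1 = t, t**2
phi = sp.Rational(1, 2)*sp.log(-sp.pi/xi1) - xi0**2/(4*xi1)

# Exponent of the complex Gaussian of Eq. (76); ln(sqrt(2*pi)*sigma) = log(2*pi*sigma**2)/2
exponent = -(t - sp.I*t_avg)**2/(2*s**2) - sp.log(2*sp.pi*s**2)/2

# Eq. (78) demands xi.E - phi to equal that exponent, i.e. varrho = phi - xi.E = -ln(rho)
print(sp.simplify(sp.expand(xi0*E0 + xi1*E1 - phi - exponent)))   # -> 0
```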
Notice that \(\mathbb{E}_{\mu}(\mathbb{X}^{\nu})\) is a function of \(t\), which is one of the components \(x^{\mu}\) of the classical spacetime; that is, the stochastic variables \(\mathbb{X}^{\nu}\) can in general be parameterized by the classical \(x^{\mu}\) components of the spacetime, as we emphasized at the beginning of subsection (3.5) when we mentioned coarse-graining the properties of the manifold. Also, since the \(\xi^{\mu}\) variables are functions of the averages and the standard deviations of \(x^{\nu}\), i.e. \(\xi^{\mu}\) are functions of statistical variables, we can suppress the variable \(\xi\) in the metric, such that \(\mathcal{G}_{\mu\nu}\left(\xi,\mathbb{X}\right)\equiv\mathcal{G}_{\mu\nu}\left(\mathbb{X}\right)\). Moreover, defining the exponential family of probability distributions to be either complex, as we have just done, or real, as in Ref. [64], does not change the definition of the cumulant probability distribution17 \(\varrho=\phi(\xi^{\nu})-\xi^{\mu}\mathbb{E}_{\mu}(\mathbb{X}^{\nu})\). Therefore, the results of the previous subsection hold in both approaches, where our approach is suitable for dS spaces while that of Ref. [64] fits AdS. Constructing an arbitrary \(D\)-dimensional pseudo-Riemannian information manifold from a classical spacetime is left as an exercise to the reader18.
Footnote 17: Notice \(\rho=\exp(-\beta H)/Z\) gives \(\phi-\varrho=-\beta H\) where the energy \(H\) is related to the entropy \(S\)[15].
Footnote 18: This can be obtained in a similar fashion of how AdS is obtained from complexifying dimensions of CFT, see Eq. (9-12) in Ref. [79], or see the \((u,v)\)_thermal_ coordinates in Ref. [80]
## 4 Entropy of the Information Manifold
As we have seen in section (2) for non-dissipative systems, the time rate of change of the entropy is directly related to the content of information expressed in the Liouville-von Neumann equation. In the previous section, we found that there is a correspondence between the spacetime and the statistical information manifold; this correspondence makes the spacetime metric correspond to the Fisher metric, even when we compare the _Lorentzian_ signatures of both [81, 82, 78]. As the spatial entropic area \(A\) is part of the spacetime, we can construct an area \(\mathbb{A}\) in the information manifold that corresponds to \(A\), and the spatial metric of that information area exists in light of our discussion in subsection (3.4). More about the area \(\mathbb{A}\) of such a _blurred spatial space from quantum entanglement contours_ can be found in Ref. [54], or obtained after complexifying the exponential distribution families 19. If we accept the existence of such a correspondence, then we can find an equation for the spatial-like expansion rate \(\vartheta\) in the information manifold that corresponds to \(\theta\), as mentioned before in section (2), i.e.
Footnote 19: \(\mathbb{A}\) could be developed for AdS as in Ref. [83].
\[\vartheta=\frac{1}{\mathbb{A}}\frac{d}{dt}\left(\mathbb{A}\right), \tag{81}\]
where, in comparison to Eq. (11) and Eq. (48a), the spatial-like expansion rate \(\vartheta\) in the information manifold is defined using the determinant of the averaged spatial components of the Fisher metric \(\boldsymbol{\gamma}=\det(\boldsymbol{\gamma}_{ij})=\det(\langle\mathfrak{g}_{ij}\rangle)\), i.e.
\[\vartheta=\frac{1}{\sqrt{\gamma}}\frac{d}{dt}\left(\sqrt{\gamma}\right). \tag{82}\]
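Equation (82) is the familiar statement that the expansion rate is half the trace of \(\boldsymbol{\gamma}^{-1}\,d\boldsymbol{\gamma}/dt\) (Jacobi's formula); a quick symbolic check of that identity on a generic \(2\times 2\) spatial block (our own illustration, not taken from the text):

```python
import sympy as sp

tt = sp.symbols('t')
a, b, c = (sp.Function(name)(tt) for name in ('a', 'b', 'c'))
gamma = sp.Matrix([[a, b], [b, c]])       # generic symmetric 2x2 spatial metric gamma_ij(t)

det = gamma.det()
lhs = sp.diff(sp.sqrt(det), tt)/sp.sqrt(det)                   # (1/sqrt(gamma)) d sqrt(gamma)/dt
rhs = sp.Rational(1, 2)*(gamma.inv()*gamma.diff(tt)).trace()   # (1/2) Tr(gamma^{-1} d gamma/dt)
print(sp.simplify(lhs - rhs))                                  # -> 0 (Jacobi's formula)
```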
Since the exponential family is assumed to be either real or complex, the Hamiltonian is expected to be non-Hermitian, with non-unitary evolution, and to be a stochastic Hamiltonian in comparison with regular Hamiltonians on regular phase spaces. Thus, the Hamiltonian should be modified to obey the Lindblad master equation [84], which is the most general form of the Liouville equation; we comment on this in the discussion. Then, for a stochastic Hamiltonian \(\mathbb{H}\) as a function of \(\mathbb{X}\) and its conjugate momentum, Eq. (8) in the information manifold becomes
\[i\frac{1}{4\mathbb{G}_{4}}\frac{d\mathbb{A}}{dt}=\text{Tr}\left[i\hbar\frac{d\rho}{dt}\ln\rho+[\mathbb{H},\rho]\right]. \tag{83}\]
And for dissipative systems, Eq. (7) becomes
\[-i\hbar\frac{dS_{\text{BH}}}{dt}=-i\frac{1}{4\mathbb{G}_{4}}\frac{d\mathbb{A}}{dt}+\text{Tr}\left[i\hbar\frac{d\rho}{dt}\ln\rho+[\mathbb{H},\rho]_{\text{Lb}}\right], \tag{84}\]
which is the entropy of the black hole in the information manifold, with no classical components from the spacetime itself, just information geometry. Based on the Lindblad master equation, the \([\mathbb{H},\rho]_{\text{Lb}}\) term gives the time evolution of the density under the influence of the interaction Hamiltonian \(\mathbb{H}_{i}\) in an open dissipative system [85]
\[\frac{d\rho(t)}{dt}\equiv[\mathbb{H},\rho]_{\text{Lb}}:=\frac{1}{i\hbar}[ \mathbb{H}_{i}(t),\rho(0)]-\frac{1}{\hbar^{2}}\int_{0}^{t}dt^{\prime}[\mathbb{ H}_{i}(t),[\mathbb{H}_{i}(t^{\prime}),\rho(t)]]. \tag{85}\]
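To illustrate the non-unitary character invoked here, the following minimal numerical sketch (our own illustration; it uses the standard GKSL form of the Lindblad generator, not the integro-differential form of Eq. (85), and the qubit Hamiltonian, decay operator, and rate are arbitrary choices) evolves a single qubit with a decay channel and shows that an initially pure state acquires a nonzero von Neumann entropy:

```python
import numpy as np

# Evolve a qubit density matrix under the standard GKSL/Lindblad form
#   d(rho)/dt = -i[H, rho] + gamma * (L rho L^+ - (1/2){L^+ L, rho})      (hbar = 1)
# and watch the von Neumann entropy grow, i.e. the evolution is non-unitary.
H = 0.5*np.array([[1, 0], [0, -1]], dtype=complex)        # H = sigma_z / 2
L = np.array([[0, 1], [0, 0]], dtype=complex)             # decay operator sigma_-
gamma, dt, steps = 0.3, 0.01, 500

def vn_entropy(rho):
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-np.sum(w*np.log(w)))

rho = 0.5*np.array([[1, 1], [1, 1]], dtype=complex)       # pure state |+><+|
print("initial entropy:", vn_entropy(rho))                # ~0
for _ in range(steps):                                    # simple Euler integration
    comm = H @ rho - rho @ H
    diss = L @ rho @ L.conj().T - 0.5*(L.conj().T @ L @ rho + rho @ L.conj().T @ L)
    rho = rho + dt*(-1j*comm + gamma*diss)
print("final entropy:  ", vn_entropy(rho))                # > 0: the state has become mixed
```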
The question now is: what is the meaning of the area \(\mathbb{A}\) in the information manifold, and how is it related to the spatial extremal area \(A\) in the spacetime? The answer lies in the definition of the extremal surface in dS space, which can be compared with the RT surface in AdS space [86, 87]. In order to get the extremal surfaces in dS space, we must complexify time such that the average speed on the timelike surfaces, defined by the ratio of the shortest spatial angular length \(l\) of a dS space to the shortest time \(\epsilon\), becomes the determining factor for the size of the extremal surface, i.e.
\[A_{dS}=-\pi R^{2}\left(\frac{l}{c\epsilon}-1\right). \tag{86}\]
The last equation shows that in some cases, \(l/c\epsilon>1\), the area could be negative20, which would lead to negative or even complex-valued entropy! Before this comment gets "frowned upon", note that it could be necessary in order to avoid the disappearance of the spatial surfaces of the dS Rindler wedge [88]. This might not be well appreciated, as it says there could be non-unitary states in the CFT, i.e., the corresponding Hamiltonian \(\mathbb{H}\) could be non-Hermitian. Very recently, it has been proved that the non-Hermiticity stems from the fact that the non-unitary CFT, dual to dS, lives on a space-like surface, with the time coordinate emerging from a Euclidean CFT [89, 90] related to the previously mentioned blurred space. When we shift to the language of information manifolds, we surprisingly find some "untimely meditations" about the necessity of complexifying the spacetime and the probability distributions so that we get a Fisher metric, as an averaged metric over spacetime fluctuations, with a Lorentzian signature [78]. Consequently, claiming the necessity of a non-Hermitian Hamiltonian corresponding to the Einstein-Hilbert formulation of GR in the information manifold suggests that we can study the dynamics of a Wheeler-deWitt Hamiltonian [91, 92] described by the spatial metric \(\mathbf{\gamma}\) on the information manifold. An example of a pseudo-Hermitian Wheeler-deWitt Hamiltonian is discussed in detail in [93], and detailed calculations of such a Hamiltonian in the information manifold could follow Refs. [94, 95, 96]; they are left for a future study.
One last thing should be said about the RT formula. By comparison with the approach21 followed in [97], the RT formula in \(4D\) dS becomes
Footnote 21: See Ref. [87] for the analysis of RT formula in \(2D\) information manifolds.
\[S_{\rm RT}=\frac{A_{dS}}{4G_{4}}=-\frac{\pi R^{2}}{4G_{4}}\ln(\frac{l}{c\epsilon })\sim-\frac{\pi R^{2}}{4G_{4}}\left(\frac{l}{c\epsilon}-1\right), \tag{87}\]
which is the formula for the entropy of the black hole when the horizon coincides with the RT extremal surface. To get the expression for the entropy of the extremal surface in the information manifold, simply replace \(A\to\mathbb{A}\) and \(G_{4}\to\mathbb{G}_{4}\), where \(G_{4}\) is the gravitational constant in \(4D\) spacetime and \(\mathbb{G}_{4}\) is its information form as given in Eq. (75). For more details on the leading divergent term in the previous equation and its relation to the holographic entanglement entropy, see Ref. [98].
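The "\(\sim\)" in the last step of Eq. (87) is just the first-order expansion of the logarithm around \(l/(c\epsilon)=1\), consistent with the linearized area of Eq. (86); a one-line symbolic check (our own illustration, with \(x\) standing for \(l/(c\epsilon)\)):

```python
import sympy as sp

x = sp.symbols('x', positive=True)                # x = l/(c*epsilon)
lead = sp.series(-sp.log(x), x, 1, 2).removeO()   # first-order expansion about x = 1
print(sp.simplify(lead - (1 - x)))                # -> 0, i.e.  -ln(x) ~ -(x - 1)
```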
## 5 Discussions and Conclusions
As we have seen, our study suggests reducing geometrical properties, including spacetime itself, to an information-geometric language in a way that could deepen the insight into the connection between physics and information. In this work, we studied in detail the coarse-grained entropy of the black hole that obeys the second law of thermodynamics. We analyzed the entropy-area law corrected by the von Neumann entropy of the quantum matter outside its event horizon, in order to obey the second law of thermodynamics and to preserve information. We constructed the corresponding form of this corrected entropy-area law in quantum information geometric language. Consequently, a corresponding spacetime emerges from the quantum information. We discussed the link between the Wald-Jacobson approaches to the thermodynamic/gravity correspondence and the Fisher pseudo-Riemannian metric of the information manifold, which guarantees extending the geometric interpretation to any quantum theory. We formulated Einstein's field equations in information geometry form, and we obtained a modified Ricci tensor that helped construct a Lagrangian for such a theory. We also used the modified Ricci tensor to introduce the reduced Einstein tensor \(\widetilde{\mathbb{G}}_{\mu\nu}\), which is directly related to the energy-momentum tensor in such a manifold. The formulated Einstein's field equations led to two interesting outcomes stemming fundamentally from information geometry. The first result is finding a quantum origin of a positive cosmological constant founded on the Fisher metric. This cosmological constant resembles those found in Lovelock's theories in a de Sitter background, due to complexifying time and the Gaussian exponential families of probability distributions. The second result is a time-varying gravitational constant that resembles the idea of Dirac's large numbers hypothesis, which predicts a varying gravitational constant based on a simple analysis of the constants of nature. We extended our analysis into the information manifold and wrote down a dynamical equation for the entropy in the quantum information manifold using the Liouville-von Neumann equation for the quantum system Hamiltonian. According to our results, the Hamiltonian in the information manifold could be non-Hermitian. The resulting dynamical equation provides a clue to a direction that could ameliorate the problem of time.
It is worth noting that relating Jacobson's endeavors to information geometry requires considering the _non-equilibrium thermodynamics of spacetime_ and its associated _dissipative gravity_ [99, 100]. This relation comes from the coarse-graining process and the suggested quantum thermodynamical origin of spacetime. To achieve fine-graining, the observational entropy will match the von Neumann entropy after several consecutive coarse-graining processes [33]. For finite-dimensional systems like spacetime itself, where the \(x^{\mu}\) are obviously finite, the observational entropy can be expressed as a _relative entropy_. Moreover, a non-local heat flux has been proven to take place in non-equilibrium thermodynamical gravitational systems, for both Einstein's general relativity and its scalar-tensor modifications, due to their dissipative characters [100]. When applied to Rindler spacetime, the thermal character of such a heat flux extends the vacuum thermal state to include the whole Rindler wedge, not just the single observer. So, an accelerated observer would access information, due to entanglement entropy, on spacelike slices. Consequently, restricting the neighborhood of the Rindler wedge origin, as a spacelike slice around it, determines the expansion coefficient. Since the time rate of change of the expansion coefficient is related to the dissipative energy coupled to the bulk and shear viscosity, it relates the entropy, both internal at the irreversible level and exchanged at the reversible level, to the Equivalence Principle in such dissipative systems. Moreover, the time rate of change of the expansion coefficient states a universal relation between the viscosity and the internal viscous part of the entropy density, as found in AdS/CFT [26].
We know that the collapse of a pure state results in a mixed state, a process that unitary transformations cannot achieve in irreversible processes due to the requirement of preserving the norm of the wave functions. Moreover, there is no _realistic_ quantum system that could be described by a pure state [101, 102]. So, we are left with non-unitary transformations. Since the quantum time-reversible processes governed by the Schrodinger evolution equation are always unitary, there is an obvious contradiction in assuming that those processes comprise the non-reversible macroscopic physics, such as the second law of thermodynamics. There is a long debate on the optimal epistemic and/or ontological way to resolve this contradiction; it is usually discussed under the umbrella of wave function jumps and dissipative systems, see Ref. [103] for more details. One of the remarkable suggestions is that the microscopic variables indeed evolve according to non-unitary processes [104].
So, based on von Neumann's non-unitary, irreversible _measurement transitions_ that render mixed states [105], Ref. [106] provides an account of the assumption that wave function collapse takes place spontaneously and randomly in space and time. We know that the von Neumann entropy is a function of basis-independent density operators, while the _reversible_ Shannon entropy, related to the Gibbs and Boltzmann entropies, is a function of the probability density matrices. If the von Neumann and Shannon entropies are related, then there must be some stochastic variables that correspond to _coarse-graining_ the Liouville equation. Classically, this is associated with a loss of information, which is discussed in the endeavor of Balian _et al._ [19]. At the quantum level, this is associated with a loss of phase coherence in quantum states. This is suggested to happen in a non-unitary transformation from a pure state to a mixed one [106]. This guarantees the validity of the second law of thermodynamics despite the Liouville equation being governed by phase-space Hamiltonians. However, this does not demand any change to the Schrodinger equation; rather, it says that the Liouville equation is physically broken at the quantum level. The price could be replacing the regular Liouville equation with the more general Lindblad master equation of open quantum systems [84], which allows the corresponding Hamiltonian to have non-Hermitian parts [85]. These non-unitary quantum processes are found in many high-energy systems, such as CFTs with zero and negative central charges that exhibit entanglement entropy [107, 108], in condensed matter systems [109], and even in quantum electronics [110].
A final comment on complexifying time is related to what is mentioned in Isham's report on the problem of time [111]. In that example, if we focus only on the properties of an open quantum system, then the relevant state becomes that of the reduced density matrix obtained by summing over, in other words by tracing out, the states of the surroundings. If the states of the surroundings are approximately orthogonal (Balian _et al._ assume that too), then the density matrix of the quantum system exhibits decoherence. This is guaranteed to be true, as the reduced density matrix has been proven to be governed by the Lindblad master equation in different examples [112], such as spatial decoherence [113] and quantum Brownian motion [114]. Isham emphasizes that the inability to find a satisfactory unitary Hamiltonian for the Wheeler-deWitt equation should not necessarily be considered a disaster. Rather, this "_might reflect something of genuine physical significance_". For example, in the endeavors of Hartle and Hawking [115] and Vilenkin [116], time becomes _complex_ due to the _non-unitary_ evolution.
## Acknowledgement
The author would like to thank A. F. Ali for discussions and comments during the preparation of this work.
## Appendix
This appendix is dedicated to the detailed calculations of the connection terms and their derivatives in the Einstein equation in the information manifold.
If we substitute Eq. (63) in the derivative terms of Eq. (64) we get
\[\partial_{\alpha}\mathbb{I}_{\,\mu\nu}^{\,\alpha}-\partial_{\nu} \mathbb{I}_{\,\mu\alpha}^{\,\alpha} =\mathfrak{g}^{\alpha\beta}\partial_{\alpha}\left[\langle \partial_{\mu}\partial_{\nu}\varrho\partial_{\beta}\varrho\rangle-\frac{1}{2} \langle\partial_{\mu}\varrho\partial_{\nu}\varrho\partial_{\beta}\varrho \rangle\right]-\mathfrak{g}^{\alpha\beta}\partial_{\mu}\left[\langle \partial_{\alpha}\partial_{\nu}\varrho\partial_{\beta}\varrho\rangle-\frac{1} {2}\langle\partial_{\alpha}\varrho\partial_{\nu}\varrho\partial_{\beta} \varrho\rangle\right]\] \[+\partial_{\alpha}(\mathfrak{g}^{\alpha\beta})\left[\langle \partial_{\mu}\partial_{\nu}\varrho\partial_{\beta}\varrho\rangle-\frac{1}{2} \langle\partial_{\mu}\varrho\partial_{\nu}\varrho\partial_{\beta}\varrho \rangle\right]-\partial_{\mu}(\mathfrak{g}^{\alpha\beta})\left[\langle \partial_{\alpha}\partial_{\nu}\varrho\partial_{\beta}\varrho\rangle-\frac{1} {2}\langle\partial_{\alpha}\varrho\partial_{\nu}\varrho\partial_{\beta} \varrho\rangle\right]\] (A1)
Also, if we substitute Eq. (63) in the multiplicative terms of Eq. (64) we get
\[\begin{split}\mathbb{I}_{\,\beta\alpha}^{\,\alpha}\mathbb{I}_{\, \mu\nu}^{\beta}-\mathbb{I}_{\,\beta\nu}^{\,\alpha}\mathbb{I}_{\,\mu\alpha}^{ \,\beta}&=\mathfrak{g}^{\alpha\kappa}\mathfrak{g}^{\beta\lambda} \Bigg{\{}\left[\langle\partial_{\alpha}\partial_{\beta}\varrho\partial_{ \kappa}\varrho\rangle-\frac{1}{2}\langle\partial_{\alpha}\varrho\partial_{ \beta}\varrho\partial_{\kappa}\varrho\rangle\right]\times\left[\langle \partial_{\mu}\partial_{\nu}\varrho\partial_{\lambda}\varrho\rangle-\frac{1} {2}\langle\partial_{\mu}\varrho\partial_{\nu}\varrho\partial_{\lambda} \varrho\rangle\right]\\ &\qquad\qquad-\left[\langle\partial_{\nu}\partial_{\beta}\varrho \partial_{\kappa}\varrho\rangle-\frac{1}{2}\langle\partial_{\nu}\varrho \partial_{\beta}\varrho\partial_{\kappa}\varrho\rangle\right]\times\left[ \langle\partial_{\mu}\partial_{\alpha}\varrho\partial_{\lambda}\varrho \rangle-\frac{1}{2}\langle\partial_{\mu}\varrho\partial_{\alpha}\varrho \partial_{\lambda}\varrho\rangle\right]\Bigg{\}}\end{split}\] (A2)
We distribute the outer derivative applied to the first line of Eq. (A1), and exploit the properties in Eq. (60), to get
\[\begin{split}\mathfrak{g}^{\alpha\beta}\partial_{\alpha}\left[ \langle\partial_{\mu}\partial_{\nu}\varrho\partial_{\beta}\varrho\rangle- \frac{1}{2}\langle\partial_{\mu}\varrho\partial_{\nu}\varrho\partial_{\beta} \varrho\rangle\right]-\mathfrak{g}^{\alpha\beta}\partial_{\mu}\left[\langle \partial_{\alpha}\partial_{\nu}\varrho\partial_{\beta}\varrho\rangle-\frac{1 }{2}\langle\partial_{\alpha}\varrho\partial_{\nu}\varrho\partial_{\beta} \varrho\rangle\right]=\\ \mathfrak{g}^{\alpha\beta}\Bigg{\{}\langle\partial_{\mu} \partial_{\nu}\varrho\partial_{\alpha}\partial_{\beta}\varrho\rangle-\langle \partial_{\alpha}\varrho\partial_{\mu}\partial_{\nu}\varrho\partial_{\beta} \varrho\rangle-\frac{1}{2}\Big{[}\langle\partial_{\mu}\partial_{\alpha} \varrho\partial_{\nu}\varrho\partial_{\beta}\varrho\rangle+\langle\partial_ {\mu}\varrho\partial_{\alpha}\partial_{\nu}\varrho\partial_{\beta}\varrho \rangle+\langle\partial_{\mu}\varrho\partial_{\nu}\varrho\partial_{\alpha} \partial_{\beta}\varrho\rangle\Big{]}\\ -\langle\partial_{\alpha}\partial_{\nu}\varrho\partial_{\mu} \partial_{\beta}\varrho\rangle+\langle\partial_{\mu}\varrho\partial_{\alpha} \partial_{\nu}\varrho\partial_{\beta}\varrho\rangle+\frac{1}{2}\Big{[}\langle \partial_{\alpha}\partial_{\mu}\varrho\partial_{\nu}\varrho\partial_{\beta} \varrho\rangle+\langle\partial_{\alpha}\varrho\partial_{\mu}\partial_{\nu} \varrho\partial_{\beta}\varrho\rangle+\langle\partial_{\alpha}\varrho \partial_{\nu}\varrho\partial_{\mu}\partial_{\beta}\varrho\rangle\Big{]} \Bigg{\}}=\\ \mathfrak{g}^{\alpha\beta}\Big{[}\langle\partial_{\mu}\partial_{ \nu}\varrho\partial_{\alpha}\partial_{\beta}\varrho\rangle+\frac{1}{2}\langle \partial_{\mu}\varrho\partial_{\alpha}\partial_{\nu}\varrho\partial_{\beta} \varrho\rangle+\frac{1}{2}\langle\partial_{\mu}\varrho\partial_{\beta} \partial_{\nu}\varrho\partial_{\alpha}\varrho\rangle\\ -\langle\partial_{\mu}\partial_{\beta}\varrho\partial_{\nu} \partial_{\alpha}\varrho\rangle-\frac{1}{2}\langle\partial_{\alpha}\varrho \partial_{\mu}\partial_{\nu}\varrho\partial_{\beta}\varrho\rangle-\frac{1}{2} \langle\partial_{\mu}\varrho\partial_{\alpha}\partial_{\beta}\varrho\partial_{ \nu}\varrho\rangle\Big{]}\end{split}\] (A3)
We contract Eq. (A3) using \(\mathfrak{g}_{\mu\nu}\) to get
\[\mathfrak{g}^{\alpha\beta}\partial_{\alpha}\left[\langle\partial_{ \kappa}\partial_{\kappa}\varrho\partial_{\beta}\varrho\rangle-\frac{1}{2} \langle\partial_{\kappa}\varrho\partial_{\kappa}\varrho\partial_{\beta} \varrho\rangle\right]-\mathfrak{g}^{\alpha\beta}\partial_{\kappa}\left[ \langle\partial_{\alpha}\partial_{\kappa}\varrho\partial_{\beta}\varrho \rangle-\frac{1}{2}\langle\partial_{\alpha}\varrho\partial_{\kappa}\varrho \partial_{\beta}\varrho\rangle\right]=\\ \mathfrak{g}^{\alpha\beta}\Big{[}\langle\partial_{\kappa} \partial_{\kappa}\varrho\partial_{\alpha}\partial_{\beta}\varrho\rangle+\frac{1 }{2}\langle\partial_{\kappa}\varrho\partial_{\alpha}\partial_{\kappa}\varrho \partial_{\beta}\varrho\rangle+\frac{1}{2}\langle\partial_{\kappa}\varrho \partial_{\beta}\partial_{\kappa}\varrho\partial_{\alpha}\varrho\rangle\\ -\langle\partial_{\kappa}\partial_{\beta}\varrho\partial_{ \kappa}\partial_{\alpha}\varrho\rangle-\frac{1}{2}\langle\partial_{\alpha} \varrho\partial_{\kappa}\partial_{\kappa}\varrho\partial_{\beta}\varrho \rangle-\frac{1}{2}\langle\partial_{\kappa}\varrho\partial_{\alpha}\partial _{\beta}\varrho\partial_{\kappa}\varrho\rangle\Big{]}\] (A4)
Now, we multiply Eq. (A4) by \(-\frac{1}{2}\mathfrak{g}_{\mu\nu}\) and then pick the terms from the result that match the terms in Eq. (A3), in order to show that Eq. (A1), i.e. the derivative parts of Eq. (64), corresponds to the cosmological coupling constant term in the Lovelock theory of gravity [68]. The intermediate steps use the definition of \(\mathcal{G}_{\mu\nu}\) in Eq. (A20) and the definition of \(\mathfrak{g}_{\mu\nu}\) in Eq. (A21), together with \(\mathfrak{g}^{\mu\nu}\mathfrak{g}_{\mu\nu}=\mathcal{G}^{\mu\nu}\mathcal{G}_{\mu\nu}=D\), where \(D\) is the dimension of the manifold. Collect the fourth term in the last two lines of Eq. (A3) with \(-\frac{1}{2}\mathfrak{g}_{\mu\nu}\times\) the fourth term in the last two lines of Eq. (A4) to get
\[-\mathfrak{g}^{\alpha\beta}\langle\partial_{\mu}\partial_{\alpha}\varrho \partial_{\nu}\partial_{\beta}\varrho-\frac{1}{2}\mathfrak{g}_{\mu\nu} \partial_{\kappa}\partial_{\alpha}\varrho\partial_{\kappa}\partial_{\beta} \varrho\rangle=-(1-D/2)\mathfrak{g}_{\mu\nu}\] (A5)
Collect the second and the third terms in the last two lines of Eq. (A3) with \(-\frac{1}{2}\mathfrak{g}_{\mu\nu}\times\)the third term in the last two lines of Eq. (A4) to get
\[\frac{1}{2}\mathfrak{g}^{\alpha\beta}\langle\partial_{\alpha}\partial_{\nu} \varrho\partial_{\beta}\partial_{\mu}\varrho+\partial_{\alpha}\partial_{\mu} \varrho\partial_{\beta}\partial_{\nu}\varrho-\mathfrak{g}_{\mu\nu}\partial_{ \kappa}\partial_{\alpha}\varrho\partial_{\kappa}\partial_{\beta}\varrho\rangle =(1-D/2)\mathfrak{g}_{\mu\nu}\] (A6)
It is obvious that the last two results, Eq. (A5) and Eq. (A6), cancel each other. Collect the last term in the last two lines of Eq. (A3) with \(-\frac{1}{2}\mathfrak{g}_{\mu\nu}\times\)the last term in the last two lines of Eq. (A4) to get
\[-\frac{1}{2}\mathfrak{g}^{\alpha\beta}\langle\partial_{\alpha}\partial_{\beta} \varrho(\partial_{\mu}\partial_{\nu}\varrho-\frac{1}{2}\mathfrak{g}_{\mu\nu} \partial_{\kappa}\partial_{\kappa}\varrho)\rangle=-(D/2-D^{2}/4)\mathfrak{g} _{\mu\nu}\] (A7)
Collect the last term in the last two lines of Eq. (A3) with \(-\frac{1}{2}\mathfrak{g}_{\mu\nu}\times\)the last term in the last two lines of Eq. (A4) to get
\[\frac{1}{2}\mathfrak{g}^{\alpha\beta}\langle\partial_{\alpha}\partial_{\beta} \varrho(\partial_{\mu}\partial_{\nu}\varrho-\frac{1}{2}\mathfrak{g}_{\mu\nu} \partial_{\kappa}\partial_{\kappa}\varrho)\rangle=(D/2-D^{2}/4)\mathfrak{g} _{\mu\nu}\] (A8)
It is obvious that the last two results, Eq. (A7) and Eq. (A8), cancel each other. We are left with the fifth term in the last two lines of Eq. (A3) and the \(-\frac{1}{2}\mathfrak{g}_{\mu\nu}\times\)fifth term in the last two lines of Eq. (A4). We combine both to get
\[-\frac{1}{2}\mathfrak{g}^{\alpha\beta}\langle\partial_{\alpha}\varrho\partial_{ \beta}\varrho(\partial_{\mu}\partial_{\nu}\varrho-\frac{1}{2}\mathfrak{g}_{ \mu\nu}\partial_{\kappa}\partial_{\kappa}\varrho)\rangle=-(D/2-D^{2}/4) \mathfrak{g}_{\mu\nu}\] (A9)
Therefore, Eq. (A9) is the only part that contributes to Eq. (A1)
\[\Big{(}\partial_{\alpha}\mathbb{I}^{\alpha}_{\,\,\mu\nu}-\partial_{\nu}\mathbb{I} ^{\alpha}_{\,\,\mu\alpha}\Big{)}-\frac{1}{2}\mathsf{g}_{\mu\nu}\mathsf{g}^{ \kappa\lambda}\Big{(}\partial_{\alpha}\mathbb{I}^{\alpha}_{\,\,\kappa\lambda}- \partial_{\kappa}\mathbb{I}^{\alpha}_{\,\,\lambda\alpha}\Big{)}=\frac{1}{2}D( \frac{D}{2}-1)\mathsf{g}_{\mu\nu}\] (A9 \[\mathsf{A}\mathsf{9}^{\prime}\] )
where \(\Lambda=D(D/2-1)\) is defined for \(D\geqslant 2\) dimensional manifold.
For the second line in Eq. (A1), we use
\[\partial_{\mu}(\mathsf{g}^{\alpha\beta})=-\mathsf{g}^{\alpha\kappa}\mathsf{g}^ {\beta\lambda}\partial_{\mu}\mathsf{g}_{\kappa\lambda}=\mathsf{g}^{\alpha \kappa}\mathsf{g}^{\beta\lambda}\langle\partial_{\mu}\varrho\partial_{\kappa }\varrho\partial_{\lambda}\varrho\rangle\] (A10)
so that the second line in Eq. (A1) is
\[\mathsf{g}^{\alpha\kappa}\mathsf{g}^{\beta\lambda}\Bigg{\{} \langle\partial_{\alpha}\varrho\partial_{\kappa}\varrho\partial_{\lambda} \varrho\rangle\left[\langle\partial_{\mu}\partial_{\nu}\varrho\partial_{ \beta}\varrho\rangle-\frac{1}{2}\langle\partial_{\mu}\varrho\partial_{\nu} \varrho\partial_{\beta}\varrho\rangle\right]-\langle\partial_{\mu}\varrho \partial_{\kappa}\varrho\partial_{\lambda}\varrho\rangle\left[\langle \partial_{\alpha}\partial_{\nu}\varrho\partial_{\beta}\varrho\rangle-\frac{1 }{2}\langle\partial_{\alpha}\varrho\partial_{\nu}\varrho\partial_{\beta} \varrho\rangle\right]\Bigg{\}}\] (A11)
Next, we expand Eq. (A2). The terms with no \(1/4\) resulting from such expansion are
\[\mathsf{g}^{\alpha\kappa}\mathsf{g}^{\beta\lambda}\Big{[} \langle\partial_{\alpha}\partial_{\beta}\varrho\partial_{\kappa} \varrho\rangle\langle\partial_{\mu}\partial_{\nu}\varrho\partial_{\lambda} \varrho\rangle-\frac{1}{2}\langle\partial_{\alpha}\partial_{\beta}\varrho \partial_{\kappa}\varrho\rangle\langle\partial_{\mu}\varrho\partial_{\nu} \varrho\partial_{\lambda}\varrho\rangle-\frac{1}{2}\langle\partial_{\alpha} \varrho\partial_{\beta}\varrho\partial_{\kappa}\varrho\rangle\langle \partial_{\mu}\partial_{\nu}\varrho\partial_{\lambda}\varrho\rangle\] \[-\langle\partial_{\nu}\partial_{\beta}\varrho\partial_{\kappa} \varrho\rangle\langle\partial_{\mu}\partial_{\alpha}\varrho\partial_{ \lambda}\varrho\rangle+\frac{1}{2}\langle\partial_{\nu}\partial_{\beta} \varrho\partial_{\kappa}\varrho\rangle\langle\partial_{\mu}\varrho\partial_{ \alpha}\varrho\partial_{\lambda}\varrho\rangle+\frac{1}{2}\langle\partial_{\nu }\varrho\partial_{\beta}\varrho\partial_{\kappa}\varrho\rangle\langle \partial_{\mu}\partial_{\alpha}\varrho\partial_{\lambda}\varrho\rangle\Big{]}\] (A12)
For the first line in Eq. (A12), exchange \(\beta\) and \(\lambda\) in the first term, change the sign in that term according to Eq. (60). Then, collect that term with the third term in the same line. And for the second line in Eq. (A12), exchange \(\kappa\) and \(\alpha\) in the first term, change the sign in that term according to Eq. (60). Then, collect that term with the third term in the same line. Also for the second and fifth term in Eq.(A12), change the sign in those terms according to Eq. (60). Thus, Eq. (A12) becomes
\[-\mathsf{g}^{\alpha\kappa}\mathsf{g}^{\beta\lambda}\Bigg{\{} \langle\partial_{\alpha}\varrho\partial_{\kappa}\varrho\partial_{\lambda} \varrho\rangle\left[\langle\partial_{\mu}\partial_{\nu}\varrho\partial_{ \beta}\varrho\rangle-\frac{1}{2}\langle\partial_{\mu}\varrho\partial_{\nu} \varrho\partial_{\beta}\varrho\rangle\right]-\langle\partial_{\mu}\varrho \partial_{\kappa}\varrho\partial_{\lambda}\varrho\rangle\left[\langle \partial_{\alpha}\partial_{\nu}\varrho\partial_{\beta}\varrho\rangle-\frac{1 }{2}\langle\partial_{\alpha}\varrho\partial_{\nu}\varrho\partial_{\beta} \varrho\rangle\right]\Bigg{\}}\] \[+\frac{1}{2}\mathsf{g}^{\alpha\kappa}\mathsf{g}^{\beta\lambda} \Bigg{[}\langle\partial_{\alpha}\varrho\partial_{\beta}\varrho\partial_{ \kappa}\varrho\rangle\langle\partial_{\mu}\varrho\partial_{\nu}\varrho\partial_{ \lambda}\varrho\rangle-\langle\partial_{\nu}\varrho\partial_{\beta}\varrho \partial_{\kappa}\varrho\rangle\langle\partial_{\mu}\varrho\partial_{\alpha} \varrho\partial_{\lambda}\varrho\rangle\Bigg{]}\] (A13)
We see that the first line in Eq. (A13) cancels with Eq. (A11), which is obtained from the second line in Eq.(A1). Moreover, we add the second line in Eq. (A13) to the terms with \(1/4\) in Eq. (A2). Then, we contract the \(\mu\nu\) indices of the result of such addition, multiply it with \(-\frac{1}{2}\mathsf{g}_{\mu\nu}\), and add it to the original terms before the \(\mu\nu\) contraction such that we introduce a new tensor
\[\widetilde{\mathsf{R}}_{\mu\nu} =\Big{(}\mathbb{I}^{\alpha}_{\,\,\beta\alpha}\mathbb{I}^{\beta}_{\, \,\mu\nu}-\mathbb{I}^{\alpha}_{\,\,\beta\nu}\mathbb{I}^{\beta}_{\,\,\mu\alpha }\Big{)}-\frac{1}{2}\mathsf{g}_{\mu\nu}\mathsf{g}^{\kappa\lambda}\Big{(} \mathbb{I}^{\alpha}_{\,\,\beta\alpha}\mathbb{I}^{\beta}_{\,\,\kappa\lambda}- \mathbb{I}^{\alpha}_{\,\,\beta\kappa}\mathbb{I}^{\beta}_{\,\,\lambda\alpha}\Big{)}\] \[=\frac{1}{4}\mathsf{g}^{\alpha\kappa}\mathsf{g}^{\beta\lambda} \Big{[}\langle\partial_{\alpha}\varrho\partial_{\beta}\varrho\partial_{\kappa} \varrho\rangle\langle\partial_{\mu}\varrho\partial_{\nu}\varrho\partial_{\lambda} \varrho\rangle-\langle\partial_{\nu}\varrho\partial_{\beta}\varrho\partial_{ \kappa}\varrho\rangle\langle\partial_{\mu}\varrho\partial_{\alpha}\varrho \partial_{\lambda}\varrho\rangle\Big{]}\] (A14)
Here we follow Ref. [64]. In order to relate the Fisher metric to the entropy defined as in Eq. (29), or Eq. (39), we recall that the density also represents the relative share of a certain energy state \(E(\mathbb{X}^{\mu})\) in the total collection of all energy states in the partition function \(Z\) [57], i.e.
\[\rho(\xi^{\mu};\mathbb{X}^{\mu})=\exp\left[-\beta E(\mathbb{X}^{\mu})-\ln Z(\xi^ {\mu})\right]\] (A15)
which reintroduces the probability distributions to the family of exponentials. Then, we can define the density generally as
\[\rho(\xi^{\mu})=\exp\left[\xi^{\mu}\mathbb{E}_{\mu}(\mathbb{X}^{\nu})-\phi(\xi ^{\nu})\right]\] (A16)
which is Eq. (78). The corresponding dual density becomes
\[\varrho=-\ln\rho=\phi(\xi^{\nu})-\xi^{\mu}\mathbb{E}_{\mu}\] (A17)
Applying the first and the second derivative with respect to \(\xi^{\mu}\) on Eq. (A17) yields
\[\partial_{\mu}\varrho =\partial_{\mu}\phi-\mathbb{E}_{\mu}(\mathbb{X}^{\nu})\] (A18) \[\partial_{\mu}\partial_{\nu}\varrho =\partial_{\mu}\partial_{\nu}\phi\] (A19)
In light of Eq. (29), Eq. (39) and Eq. (58), the last Eq. (A18-A19) can be rearranged to get
\[\langle\mathcal{G}_{\mu\nu}\rangle=\langle\partial_{\mu}\partial_{\nu} \varrho\rangle=\partial_{\mu}\partial_{\nu}\phi=\mathtt{g}_{\mu\nu}\] (A20)
Since \(\langle\partial_{\mu}\varrho\rangle=0\), we apply another differentiation and use Eq. (A20) to get
\[\mathtt{g}_{\mu\nu}=\langle\partial_{\mu}\partial_{\nu}\varrho\rangle=\langle \partial_{\mu}\varrho\partial_{\nu}\varrho\rangle\] (A21)
despite that \(\partial_{\mu}\partial_{\nu}\varrho\neq\partial_{\mu}\varrho\partial_{\nu}\varrho\). Moreover, since
\[\langle\mathbb{E}_{\mu}\rangle=\partial_{\mu}\phi\,\] (A22)
then the last three equations give
\[\mathtt{g}_{\mu\nu} =\langle\mathbb{E}_{\mu}\mathbb{E}_{\nu}\rangle-\langle\mathbb{E }_{\mu}\rangle\langle\mathbb{E}_{\nu}\rangle\] (A23) \[=\langle\mathbb{E}_{\mu}\mathbb{E}_{\nu}\rangle-\partial_{\mu} \phi\langle\mathbb{E}_{\nu}\rangle=-\langle\partial_{\mu}\varrho\mathbb{E}_{ \nu}\rangle\] (A24)
The last three relations will help us construct the Christoffel symbols [57, 58], from the connections in Eq. (41b), and consequently the Riemann curvature tensor, as functions of the density vectors, as we will see shortly.
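These relations are easy to verify on a one-parameter toy example (our own illustration, not from the manuscript): for a Bernoulli family written in exponential form, the first derivative of the cumulant function is the mean of the sufficient statistic and the second derivative is its variance, which is exactly the content of Eqs. (A20), (A22) and (A23):

```python
import sympy as sp

xi = sp.symbols('xi', real=True)

# Bernoulli family in exponential form: P(X=1) = exp(xi - phi), P(X=0) = exp(-phi)
phi = sp.log(1 + sp.exp(xi))        # cumulant (log-partition) function
p = sp.exp(xi - phi)                # mean of the sufficient statistic E = X
variance = p*(1 - p)                # <E E> - <E><E>

print(sp.simplify(sp.diff(phi, xi) - p))              # -> 0   (Eq. (A22))
print(sp.simplify(sp.diff(phi, xi, 2) - variance))    # -> 0   (Eqs. (A20), (A23))
```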
Now, we substitute Eq. (57) and Eq. (A18) into Eq. (A14), then expand, we obtain
\[\widetilde{\mathbb{R}}_{\mu\nu} =\frac{1}{4}\Bigg{\{}D\partial_{\mu}\phi\partial_{\nu}\phi- \mathtt{g}^{\beta\kappa}\partial_{\mu}\phi\langle\mathbb{E}_{\nu}\mathcal{G} _{\beta\kappa}\rangle-\mathtt{g}^{\alpha\lambda}\partial_{\nu}\phi\langle \mathbb{E}_{\mu}\mathcal{G}_{\alpha\lambda}\rangle\] \[\qquad+\mathtt{g}^{\alpha\kappa}\mathtt{g}^{\beta\lambda}\Big{[} \langle\mathbb{E}_{\mu}\mathcal{G}_{\alpha\lambda}\rangle\langle\mathbb{E}_{ \nu}\mathcal{G}_{\beta\kappa}\rangle-\langle\partial_{\alpha}\varrho\partial _{\beta}\varrho\rangle\langle\mathbb{E}_{\mu}\mathbb{E}_{\nu}\partial_{ \lambda}\varrho\rangle\Big{]}\] \[\qquad+\mathtt{g}^{\alpha\kappa}\mathtt{g}^{\beta\lambda}\langle \partial_{\alpha}\varrho\partial_{\beta}\varrho\partial_{\kappa}\varrho \rangle\Big{[}\partial_{\mu}\phi\langle\mathbb{E}_{\nu}\partial_{\lambda} \varrho\rangle+\partial_{\nu}\phi\langle\mathbb{E}_{\mu}\partial_{\lambda} \varrho\rangle\Big{]}\Bigg{\}}\] (A25)
We focus on the last line of Eq. (A25). Eq. (A18) and Eq. (A20) yield \(\langle\partial_{\alpha}\varrho\partial_{\beta}\varrho\partial_{\kappa}\varrho \rangle=\langle\partial_{\beta}\varrho\mathcal{G}_{\alpha\kappa}\rangle= \partial_{\beta}\phi\mathfrak{g}_{\alpha\kappa}-\langle\mathbb{E}_{\beta} \,\mathcal{G}_{\alpha\kappa}\rangle\). And the terms \(\langle\partial_{\mu}\varrho\mathbb{E}_{\nu}\rangle=-\mathfrak{g}_{\mu\nu}\) as we infer from Eq. (A24). Then, we expand Eq. (A25) to get
\[\widetilde{\mathbb{R}}_{\mu\nu}=\frac{1}{4}\mathfrak{g}^{\alpha\kappa}\mathfrak{ g}^{\beta\lambda}\Big{[}\langle\mathbb{E}_{\mu}\,\mathcal{G}_{\alpha\beta} \rangle\langle\mathbb{E}_{\nu}\,\mathcal{G}_{\kappa\lambda}\rangle-\langle \mathbb{E}_{\mu}\rangle\langle\mathbb{E}_{\nu}\rangle\mathfrak{g}_{\alpha\beta }\mathfrak{g}_{\kappa\lambda}-\langle\partial_{\alpha}\varrho\partial_{\beta} \varrho\partial_{\kappa}\varrho\rangle\langle\mathbb{E}_{\mu}\mathbb{E}_{\nu} \partial_{\lambda}\varrho\rangle\Big{]}\] (A26)
The last term in Eq. (A26) is negligible as \(\langle\mathbb{E}_{\mu}\mathbb{E}_{\nu}\partial_{\lambda}\varrho\rangle\sim-\langle\partial_{\lambda}(\mathbb{E}_{\mu}\mathbb{E}_{\nu})\rangle=-2\langle\partial_{\lambda}[\mathbb{E}_{(\mu}\mathbb{E}_{\nu)}]\rangle\), and Eq. (A19) says that \(\partial_{\mu}\mathbb{E}_{\nu}=0\). Therefore, Eq. (A26) becomes
\[\widetilde{\mathbb{R}}_{\mu\nu}=\frac{1}{4}\mathfrak{g}^{\alpha\kappa}g^{ \beta\lambda}\Big{[}\langle\mathbb{E}_{\mu}\,\mathcal{G}_{\alpha\beta}\rangle \langle\mathbb{E}_{\nu}\,\mathcal{G}_{\kappa\lambda}\rangle-\langle\mathbb{E} _{\mu}\rangle\langle\mathbb{E}_{\nu}\rangle\mathfrak{g}_{\alpha\beta} \mathfrak{g}_{\kappa\lambda}\Big{]}\] (A27)
As we defined the stochastic variables \(\mathbb{X}^{\mu}\equiv\mathbb{X}^{\mu}(\langle x^{\nu}\rangle,\sigma_{x^{\nu}})\) at the beginning of subsection (3.5), the same can be done for the stochastic metric \(\,\mathcal{G}_{\mu\nu}(\xi,\mathbb{X})\), expanding it around \(\langle x^{\mu}\rangle\) while keeping \(\sigma_{x^{\mu}}\) as it is. So \(\mathbb{X}\equiv\mathbb{X}^{\mu}(\langle x\rangle)\), and the metric becomes
\[\mathcal{G}_{\mu\nu}(\mathbb{X})=\,\mathcal{G}_{\mu\nu}(\langle x\rangle)+ \,\dot{\mathcal{G}}_{\mu\nu}(\langle x\rangle)\Big{(}\mathbb{X}-\langle x \rangle\Big{)}+\frac{1}{2}\ddot{\mathcal{G}}_{\mu\nu}(\langle x\rangle)\Big{(} \mathbb{X}-\langle x\rangle\Big{)}^{2}+\cdots\] (A28)
where
\[\dot{\mathcal{G}}_{\mu\nu}(\langle x\rangle)=\lim_{\mathbb{X}\to\langle x \rangle}\frac{\partial}{\partial\mathbb{X}}\,\mathcal{G}_{\mu\nu}(\mathbb{X})\] (A29)
and \(\ddot{\mathcal{G}}_{\mu\nu}(\langle x\rangle)\) is the corresponding second derivative. Defining \(\,\mathcal{G}_{\mu\nu}\) as a function of \(\langle x\rangle\) allows us to write \(\langle\mathcal{G}_{\mu\nu}(\langle x\rangle)\rangle=\,\mathcal{G}_{\mu\nu}(\langle x\rangle)=\,\mathcal{G}_{\mu\nu}\), since averaging the average is a redundant process. Now we substitute Eq. (A28) into Eq. (A27), with the help of Eqs. (A18-A22) and the approximation \(\langle\mathbb{E}_{\mu}(\mathbb{X}-\langle x\rangle)^{n}\rangle\sim\partial_{\mu}\phi\langle(\mathbb{X}-\langle x\rangle)^{n}\rangle\), to obtain
\[\widetilde{\mathbb{R}}_{\mu\nu}=\frac{1}{4}\mathfrak{g}^{\alpha\kappa}\mathfrak{g}^{\beta\lambda}\Bigg{\{}\Big{[}\,\mathcal{G}_{\alpha\beta}\,\mathcal{G}_{\kappa\lambda}-\mathfrak{g}_{\alpha\beta}\mathfrak{g}_{\kappa\lambda}\Big{]}+\,\mathcal{G}_{\kappa\lambda}\Big{[}\,\dot{\mathcal{G}}_{\alpha\beta}\langle\mathbb{X}-\langle x\rangle\rangle+\frac{1}{2}\ddot{\mathcal{G}}_{\alpha\beta}\langle(\mathbb{X}-\langle x\rangle)^{2}\rangle+\cdots\Big{]}\] \[+\,\mathcal{G}_{\alpha\beta}\Big{[}\dot{\mathcal{G}}_{\kappa\lambda}\langle\mathbb{X}-\langle x\rangle\rangle+\frac{1}{2}\ddot{\mathcal{G}}_{\kappa\lambda}\langle(\mathbb{X}-\langle x\rangle)^{2}\rangle+\cdots\Big{]}\Bigg{\}}\times\partial_{\mu}\phi\partial_{\nu}\phi\] (A30)
## Data Availability Statement
No data is associated with the manuscript.
|
2308.00206 | Synthetic Skull CT Generation with Generative Adversarial Networks to
Train Deep Learning Models for Clinical Transcranial Ultrasound | Deep learning offers potential for various healthcare applications, yet
requires extensive datasets of curated medical images where data privacy, cost,
and distribution mismatch across various acquisition centers could become major
problems. To overcome these challenges, we propose a generative adversarial
network (SkullGAN) to create large datasets of synthetic skull CT slices,
geared towards training models for transcranial ultrasound. With wide ranging
applications in treatment of essential tremor, Parkinson's, and Alzheimer's
disease, transcranial ultrasound clinical pipelines can be significantly
optimized via integration of deep learning. The main roadblock is the lack of
sufficient skull CT slices for the purposes of training, which SkullGAN aims to
address. Actual CT slices of 38 healthy subjects were used for training. The
generated synthetic skull images were then evaluated based on skull density
ratio, mean thickness, and mean intensity. Their fidelity was further analyzed
using t-distributed stochastic neighbor embedding (t-SNE), Fr\'echet inception
distance (FID) score, and visual Turing test (VTT) taken by four staff clinical
radiologists. SkullGAN-generated images demonstrated similar quantitative
radiological features to real skulls. t-SNE failed to separate real and
synthetic samples from one another, and the FID score was 49. Expert
radiologists achieved a 60\% mean accuracy on the VTT. SkullGAN makes it
possible for researchers to generate large numbers of synthetic skull CT
segments, necessary for training neural networks for medical applications
involving the human skull, such as transcranial focused ultrasound, mitigating
challenges with access, privacy, capital, time, and the need for domain
expertise. | Kasra Naftchi-Ardebili, Karanpartap Singh, Reza Pourabolghasem, Pejman Ghanouni, Gerald R. Popelka, Kim Butts Pauly | 2023-08-01T00:05:02Z | http://arxiv.org/abs/2308.00206v3 | # SkullGAN: Synthetic Skull CT Generation with Generative Adversarial Networks
## Summary
**SkullGAN, a deep generative adversarial network, can generate large numbers of synthetic skull CT segments that are visually and quantitatively indistinguishable from real skull CT segments for training deep learning models with applications in healthcare.**
Key Points
* To address limitations in accessing large numbers of real, curated, and anonymized skull CTs to train deep learning models involving the human skull, SkullGAN was trained on 2,414 normal, real skull CT segments from 38 subjects to generate highly varied synthetic skull images.
* Synthetic CT images generated by SkullGAN were indistinguishable from a test set of real skull CTs, both in quantitative radiological metrics and when subject to the SkullGAN discriminator.
* Radiological metrics such as skull density ratio (SDR) can easily be fooled if used for statistical comparison between real and synthetic skulls. Many-parameter nonlinear classifiers are better suited for separating low quality slices from realistic ones.
## Abstract
**Purpose:** Deep learning offers potential for various healthcare applications involving the human skull, yet requires extensive datasets of curated medical images. To overcome this challenge, we propose SkullGAN, a generative adversarial network (GAN), to create large datasets of synthetic skull CT slices, thus reducing reliance on real images and accelerating the integration of machine learning into healthcare.
**Materials and Methods:** CT slices of 38 subjects were fed to SkullGAN, a neural network comprising over 200 million parameters. The generated synthetic skull images were then evaluated based on three quantitative radiological features: skull density ratio (SDR), mean thickness, and mean intensity. They were further analyzed using t-distributed stochastic neighbor embedding (t-SNE) and by applying the SkullGAN discriminator as a classifier.
**Results:** SkullGAN-generated images demonstrated similar key quantitative radiological features to real skulls. Additionally, more definitive analysis was undertaken by applying the discriminator of SkullGAN. The SkullGAN discriminator classified 56.5% of a test set of real skull images and 55.9% of the SkullGAN-generated images as reals (the theoretical optimum being 50%), demonstrating that the SkullGAN-generated skull set is indistinguishable from the real skull set - within the limits of our nonlinear classifier.
**Conclusion:** SkullGAN makes it possible for researchers to generate large numbers of synthetic skull CT segments, necessary for training neural networks for medical applications involving the human skull. By doing so, it mitigates challenges associated with preparing large, high-quality training datasets, such as access, capital, time, and the need for domain expertise.
## Introduction
The adequate training of a neural network requires very large and often difficult to obtain quantities of standardized data. This problem becomes even more pronounced in medical applications where data preparation steps such as anonymizing, slicing, segmentation, preprocessing, and labeling cannot be easily crowdsourced and require domain expertise [1, 2, 3]. To address this data scarcity problem in medical applications involving human skull imaging, we propose SkullGAN, a deep Generative Adversarial Network (GAN) that generates large numbers of synthetic skull CT segments for training deep learning models and for simulation software. We will investigate whether SkullGAN-generated skull CT segments display visual similarities to real skull CTs without being exact replicas of the training set, and if they are quantitatively indistinguishable from a test set of real skull CTs.
### Related Work
The limited availability of sufficiently large datasets, on the order of thousands of samples, of preprocessed and segmented human skull CTs has never been addressed via generative methods. Insofar as other systems and organs are concerned, prior research in exploring GANs for creating synthetic medical images such as cardiac MRI, liver CT, and retina images showed that their generated samples did not have the same richness as real images [4]. More than being a critique of the power of GANs, this prior work was a critique of the reliability of traditional evaluation metrics, such as the Frechet inception distance score [5], in assessing the quality of GAN-generated samples. In a separate work where GAN-generated liver lesions were used to augment available real medical images for training purposes, their convolutional neural network classification performance improved from 78.6% to 85.5% in sensitivity and from 88.4% to 92.4% in specificity [6]. These findings point to the potential GANs hold in generating synthetic medical images that could effectively replace real medical images for training neural networks, if enough capacity is given to the networks and adequate curated data is used to train them.
## Materials and Methods
### Model
SkullGAN is based on the Generative Adversarial Network (GAN) [7] and the deep-convolutional GAN (DC-GAN) [8]. We augment the DC-GAN architecture with two noise-injection layers. This simple modification helps SkullGAN produce high-quality \(128\times 128\) skull CT segments.
Following the original implementation, we use binary cross entropy loss with the following loss function:
\[\ell=\log D(x)+\log(1-D(G(z))), \tag{1}\]
where \(x\) is a real skull segment sample and \(z\) is a random noise vector sampled from a uniform distribution, with \(D(\cdot)\) and \(G(\cdot)\) denoting the discriminator and generator, respectively.
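A minimal PyTorch sketch of how this objective could be evaluated for one batch; the generator `G`, discriminator `D`, and real batch `x_real` are assumed to be defined elsewhere, and the non-saturating generator term and hard labels used here are standard GAN conventions rather than details stated above.

```
import torch
import torch.nn as nn

bce = nn.BCELoss()

def gan_losses(D, G, x_real, latent_dim=200, device="cpu"):
    """Evaluate the discriminator and generator terms of Eq. (1) for one batch."""
    n = x_real.size(0)
    z = torch.rand(n, latent_dim, 1, 1, device=device)   # z drawn from a uniform distribution
    x_fake = G(z)
    ones = torch.ones(n, device=device)
    zeros = torch.zeros(n, device=device)

    # Discriminator: log D(x) + log(1 - D(G(z))), written as two BCE terms.
    d_loss = bce(D(x_real).view(-1), ones) + bce(D(x_fake.detach()).view(-1), zeros)
    # Generator: non-saturating variant, i.e., maximize log D(G(z)).
    g_loss = bce(D(x_fake).view(-1), ones)
    return d_loss, g_loss
```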
### Network Architecture
The generator, shown in Figure 1, takes a latent vector of size 200 as input and passes it through 6 sequential 2D convolutional layers. The output of the first layer has 4,096 channels, with each subsequent layer downscaling the channels by a factor of two and upscaling the features by a factor of two to yield an intermediate output of \(128\times 128\times 128\). Subsequent convolutional layers reduce the channel dimension to yield a final output of size \(1\times 128\times 128\). External information in the form of Gaussian noise is injected into two layers of the generator to improve the quality of the pore structures in the generated skulls. A tanh layer constrains the final output to \([-1,1]\), to match the normalization used for the training set. The discriminator has a mirrored
architecture without Gaussian noise injection. These conditions result in 192 million parameters in the generator and 11.2 million parameters in the discriminator. Layer details are presented in Table 1.
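The PyTorch sketch below reproduces the channel and resolution progression described above (200-dimensional latent vector, a first 4,096-channel block, channel halving with spatial doubling per block, Gaussian noise injection in two layers, and a tanh output). The kernel sizes, the exact placement of the noise-injection layers, and the final channel-reducing convolution are assumptions, so this is an illustration rather than the exact SkullGAN generator.

```
import torch
import torch.nn as nn

class NoiseInjection(nn.Module):
    """Add per-pixel Gaussian noise scaled by a learned factor."""
    def __init__(self):
        super().__init__()
        self.weight = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        return x + self.weight * torch.randn_like(x)

def up_block(in_ch, out_ch, first=False):
    """Transposed convolution -> batch norm -> ReLU; doubles spatial size except in the first block."""
    stride, pad = (1, 0) if first else (2, 1)
    return nn.Sequential(
        nn.ConvTranspose2d(in_ch, out_ch, 4, stride, pad, bias=False),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

class Generator(nn.Module):
    def __init__(self, latent_dim=200):
        super().__init__()
        self.blocks = nn.ModuleList([
            up_block(latent_dim, 4096, first=True),  # 200x1x1 -> 4096x4x4
            up_block(4096, 2048),                    # -> 2048x8x8
            up_block(2048, 1024),                    # -> 1024x16x16
            up_block(1024, 512),                     # -> 512x32x32
            up_block(512, 256),                      # -> 256x64x64
            up_block(256, 128),                      # -> 128x128x128
        ])
        self.noise = nn.ModuleList([NoiseInjection(), NoiseInjection()])
        self.to_image = nn.Sequential(               # reduce channels to a single CT slice
            nn.Conv2d(128, 1, 3, 1, 1),
            nn.Tanh(),                               # output constrained to [-1, 1]
        )

    def forward(self, z):
        x = z.view(z.size(0), -1, 1, 1)
        for i, block in enumerate(self.blocks):
            x = block(x)
            if i == 3:                               # assumed injection point
                x = self.noise[0](x)
            elif i == 4:                             # assumed injection point
                x = self.noise[1](x)
        return self.to_image(x)
```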
### CT Imaging and Segmentation
This study included 38 anonymized skull CT scans from healthy subjects, with 28 from the University of Virginia's Department of Radiology and 10 from Stanford's Department of Radiology. All CT scans were taken at 120 keV on GE scanners with axial slicing, a 0.625 mm slice thickness, and the Bone Plus kernel.
The skull CTs were segmented using ImageJ and Slicer [9, 10] to remove the brain and artifacts outside the skull. Slices were selected at an interval of 3.1 mm (every 5 slices) in the axial, coronal, and sagittal planes. As such, our training set consisted of slices from the temporal and parietal bones. Given the high variability in skull structure within subjects, both the left and the right temporal bones were extracted from every axial plane, without fear of biasing the dataset. Conversely, irregularities and sutures present in the frontal and occipital bones, combined with the small size of our training dataset, diminished the quality of the SkullGAN-generated skull segments. As such, we excluded frontal and occipital bones from the training set. Slices were masked in MATLAB [11] to produce a final dataset of 4,828 2D skull segments, half of which were used for training and half for testing (Figure 2).
Figure 1: \(|\) SkullGAN generator and training pipeline. SkullGAN was first pre-trained on the Celeb-A dataset, and then trained on human skull CTs. In contrast to random initialization of the weights for training on the human skull CTs, pre-training yielded layers with fine-tuned weights for detecting edges and resulted in better quality skull segment images, with finer definition both in contour and interior bone structure.
Figure 2: \(|\) SkullGAN training set preparation. After segmentation, the slices were masked and rotated where necessary. To account for both the left and the right temporal bones, two segments were taken from each axial slice. This resulted in a training set of 2,414 2D horizontal skull segments.
### Datasets
The training and evaluation of SkullGAN involve multiple sets of images, each catering to unique stages in the training and assessment of the model. The first step involves pre-training SkullGAN on a large dataset of human faces to familiarize the model with facial features and structures. The main training process then trains SkullGAN on real human skull images. The output of SkullGAN is referred to as the synthetic set, which is a collection of generated skull CT images. The quality of these synthetic images is evaluated through comparison with various other image sets, each providing a unique perspective for evaluation. Example images from each set are shown in Figure 3.
**pre-training set**: 100,000 cropped and rescaled \(128\times 128\) celebrity images (Celeb-A dataset) [12]. This dataset, well-known in computer vision literature [13, 14], was used during pre-training to allow the model to learn fundamental facial structures and features, which facilitated its subsequent learning of human skulls during the main training phase.
**training set**: 2,414 real human skull CT segments used to train SkullGAN.
**test set**: 2,414 real human skull CT segments used for analysis and comparison to the synthetic set. These samples are distinct from the training set and were not seen by SkullGAN during training.
**synthetic set**: 1,000 2D synthetic skull segments generated by SkullGAN.
**control set**: 1,000 2D synthetic skull segments generated by a less powerful iteration of SkullGAN that did not incorporate pre-training or other enhancements in the final model. This set serves as a baseline for performance comparison, allowing us to evaluate the improvements achieved with the current iteration of SkullGAN.
**artificial set**: 500 _idealized_ skull segments, engineered to represent the simplest model of the skull, and 500 _unrealistic_ skull segments, purposefully engineered to look ostensibly unreal. Although visually distinct from real skull images, these artificial images are designed to fool quantitative radiological metrics such as SDR, mean thickness, and mean intensity, illustrating the potential limitations of these common metrics when assessing synthetic skulls.
### Training
SkullGAN was pre-trained on 100,000 Celeb-A samples [12] for 10 epochs and then trained on the training set for 1,000 epochs. Pre-training greatly improved the resolution of the synthetic set, while reducing the number of iterations required for convergence during training. The training set was normalized to a range of [-1, 1]. Both the generator and discriminator networks used mini-batch training with a batch size of 64. Optimization was performed using the Adam optimizer [15] with no weight decay and \(\beta=0.5\).
To stabilize the networks and improve the synthetic skulls, several techniques were applied. For the discriminator, we used label smoothing by assigning soft labels of 0.9 (instead of 1.0) for real samples to encourage stabilization. A dynamic learning rate reduction on plateau [16] was used for both networks: For the generator, a decay factor of 0.5 and patience of 1,000 iterations was determined through our parameter search, while for the discriminator, a decay factor of 0.8 and patience of 1,000 iterations were used. Each network was allowed to update only if its loss in the last batch was lower than a heuristically-determined value of 90%. Gaussian-blurred real samples were gradually introduced to the discriminator as fake samples (0 labeled) past the \(15^{th}\) epoch, until a ratio of \(\nicefrac{{1}}{{2}}\) blurred real samples and \(\nicefrac{{1}}{{2}}\) fake samples was reached. This ratio was then kept constant. Figure 4 illustrates the comprehensive training and inference workflows of SkullGAN, providing an explanation for the various datasets utilized in each step.
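A sketch of how these stabilization techniques could look in PyTorch for a single discriminator update; the base learning rate, the second Adam beta, and the blur kernel/sigma are assumptions, and the 90% loss-gating rule and the gradual blur-ratio schedule are omitted for brevity.

```
import torch
import torch.nn as nn
import torchvision.transforms.functional as TF

def make_optimizers(G, D, lr=2e-4):
    # Adam with beta1 = 0.5 and no weight decay; the base learning rate is an assumption.
    opt_G = torch.optim.Adam(G.parameters(), lr=lr, betas=(0.5, 0.999))
    opt_D = torch.optim.Adam(D.parameters(), lr=lr, betas=(0.5, 0.999))
    # Learning-rate reduction on plateau: factor 0.5 / patience 1,000 for G,
    # factor 0.8 / patience 1,000 for D.
    sched_G = torch.optim.lr_scheduler.ReduceLROnPlateau(opt_G, factor=0.5, patience=1000)
    sched_D = torch.optim.lr_scheduler.ReduceLROnPlateau(opt_D, factor=0.8, patience=1000)
    return opt_G, opt_D, sched_G, sched_D

def discriminator_step(D, G, opt_D, sched_D, x_real, z, epoch):
    """One discriminator update with label smoothing and blurred-real negatives."""
    bce = nn.BCELoss()
    n = x_real.size(0)
    real_labels = torch.full((n,), 0.9)                  # soft labels for real samples
    fake_labels = torch.zeros(n)

    loss = bce(D(x_real).view(-1), real_labels) \
         + bce(D(G(z).detach()).view(-1), fake_labels)

    if epoch > 15:
        # Past the 15th epoch, Gaussian-blurred reals are shown to D as fakes (label 0).
        x_blur = TF.gaussian_blur(x_real, kernel_size=9, sigma=2.0)
        loss = loss + bce(D(x_blur).view(-1), fake_labels)

    opt_D.zero_grad()
    loss.backward()
    opt_D.step()
    sched_D.step(loss.item())                            # reduce LR on plateau
    return loss.item()
```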
\begin{table}
Table 1: Layer details of the SkullGAN generator and discriminator. The generator stacks six blocks of transposed convolution, batch normalization, and ReLU that expand the 200-dimensional latent vector from \(4{,}096\times 4\times 4\) to \(128\times 128\times 128\), followed by a channel-reducing convolution and a tanh output of size \(1\times 128\times 128\); the discriminator mirrors this structure with strided \(4\times 4\) convolutions.
\end{table}
Figure 4: Training and Inference Workflows. **Training the Discriminator:** In each iteration, a batch of synthetic CT images (labeled 0), generated by the Generator, and a batch of real CT scans (labeled 0.9) are presented to the Discriminator. To facilitate the production of high-resolution CT scans by the Generator, we aim to enhance the Discriminator’s classification ability. Thus, at every iteration, we also introduce a batch of Gaussian-blurred real CT scans, but label them as fakes (labeled 0). **Training the Generator:** The Generator creates synthetic CT scans, which are then evaluated by the Discriminator for authenticity. The weights of the generator are updated based on these evaluations. **Inference (CT Generation with the Generator):** After training, the Generator is employed to produce the synthetic set. **Inference (Classification with the Discriminator):** The trained Discriminator serves as a powerful classifier, determining whether its input is real or fake. Six distinct datasets are presented with their expected class labels in brackets. The training CTs are expected to be labeled as real, but if trained effectively, the Generator should produce synthetic CTs that the Discriminator cannot confidently classify. At this stage, a random (50/50) classification is anticipated. This logic also applies to the unseen test CT dataset, which is also expected to yield a 50/50 classification.
Figure 3: \(|\) Five cropped examples from each dataset. **a.** Training Set: real skull CT segments. **b.** Test Set: real skull CT segments unseen by SkullGAN. **c.** Synthetic Set: skull CT segments generated by SkullGAN. **d.** Control Set: poor-quality skull CT segments generated by a less powerful iteration of SkullGAN that was not pretrained on the Celeb-A dataset. **e.** Artificial Set: idealized and unrealistic fake skull CT segments deliberately engineered to look unlike any real skull segments and yet fool quantitative radiological assessment metrics.
### Quantitative Radiological Metrics
#### Skull Density Ratio (SDR)
SDR [17] was calculated for each skull segment by taking 32 vertical cross-sections down the segment, spaced approximately 1.8 mm apart, and then computing the mean ratio of the minimum to maximum pixel intensities:
\[\text{SDR}_{\text{j}}=\frac{1}{32}\sum_{i=1}^{32}\frac{\text{min}(S_{ji})}{ \text{max}(S_{ji})}, \tag{2}\]
where \(\text{SDR}_{\text{j}}\) denotes the SDR for skull \(j\), and \(S_{ji}\) refers to the \(i^{th}\) vertical cross-section for the \(j^{th}\) skull segment.
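A minimal NumPy sketch of Eq. (2), assuming the segment is a 2D array of HU values with the skull running horizontally; restricting each cross-section to bone pixels (> 10 HU) is an assumption carried over from the mean-intensity threshold below, not a detail stated for SDR.

```
import numpy as np

def skull_density_ratio(segment, n_sections=32):
    """Eq. (2): mean over 32 vertical cross-sections of min/max pixel intensity."""
    cols = np.linspace(0, segment.shape[1] - 1, n_sections).astype(int)
    ratios = []
    for c in cols:
        section = segment[:, c]
        section = section[section > 10]        # keep bone pixels only (assumed threshold)
        if section.size:
            ratios.append(section.min() / section.max())
    return float(np.mean(ratios))
```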
#### Mean Thickness
The mean thickness of each skull segment was calculated by averaging the thickness through 32 cross-sections down the image, spaced approximately 1.8 mm apart:
\[\text{MT}_{\text{j}}=\frac{1}{32}\sum_{i=1}^{32}\rho\times T_{i}, \tag{3}\]
where \(\text{T}_{\text{i}}\) denotes the thickness in pixels of a cross-section \(i\), and \(\rho\) denotes the CT resolution in \(\nicefrac{{mm}}{{pixel}}\).
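A matching sketch for Eq. (3); counting pixels above a 10 HU background threshold as bone within each cross-section is an assumption about how \(T_i\) is measured.

```
import numpy as np

def mean_thickness(segment, resolution_mm, n_sections=32, bone_threshold=10):
    """Eq. (3): mean bone thickness in mm over 32 vertical cross-sections."""
    cols = np.linspace(0, segment.shape[1] - 1, n_sections).astype(int)
    thickness_px = [int((segment[:, c] > bone_threshold).sum()) for c in cols]
    return resolution_mm * float(np.mean(thickness_px))
```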
#### Mean Intensity
The mean intensity for each skull segment was calculated by first thresholding the image to ignore Hounsfield Unit (HU) values of 10 or less (the background). The intensity of each pixel was then averaged to obtain the mean intensity of the skull segment:
\[\text{MI}_{\text{j}}=\frac{1}{N_{j}}\sum_{\text{x,y}}I(x,y)\cdot\mathbf{1}_{I (x,y)>10\text{ HU}} \tag{4}\]
where \(\text{MI}_{\text{j}}\) denotes the mean intensity for skull segment \(j\), and \(N_{j}\) is the total number of pixels in segment \(j\) that meet the thresholding requirement of \(>\) 10 HU.
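And a sketch for Eq. (4), which simply averages all pixels above the 10 HU background threshold.

```
import numpy as np

def mean_intensity(segment, threshold_hu=10):
    """Eq. (4): mean HU over pixels exceeding the background threshold."""
    bone = segment[segment > threshold_hu]
    return float(bone.mean()) if bone.size else 0.0
```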
### Separability of the Datasets
We used t-distributed stochastic neighbor embedding (t-SNE) [18] and the discriminator of SkullGAN to assess the similarity or separability of our datasets.
The t-SNE algorithm constructs a probability distribution for pairs of objects based on their similarity, both in the original high-dimensional space and in a lower dimensional representation, and then iteratively solves for a mapping between the two distributions, thus representing the high-dimensional objects in a lower-dimensional, visually interpretable manner.
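A sketch of this procedure with scikit-learn, assuming each image set is a NumPy array of shape (n, 128, 128) that is unrolled into 16,384-dimensional feature vectors; the perplexity and PCA initialization are default-style choices, not settings reported above.

```
import numpy as np
from sklearn.manifold import TSNE

def embed_with_tsne(image_sets, labels):
    """Project 128x128 skull segments to 2D with t-SNE.

    `image_sets` is a list of arrays of shape (n_i, 128, 128), e.g., the training,
    control, synthetic, and artificial sets; `labels` gives one tag per set.
    """
    X = np.concatenate([s.reshape(len(s), -1) for s in image_sets])          # unroll pixels
    y = np.concatenate([np.full(len(s), lab) for s, lab in zip(image_sets, labels)])
    emb = TSNE(n_components=2, perplexity=30, init="pca").fit_transform(X)   # 2D embedding
    return emb, y
```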
The discriminator of SkullGAN was trained to convergence until it could no longer decisively label the generator output as real or fake. The theoretical global minimum of this minimax game is when the Discriminator reaches a 50% accuracy [7]. Therefore, if the discriminator classifies half of the test set as real and the other half as fake, and shows a similar performance on newly generated synthetic samples by SkullGAN, we can conclude that SkullGAN generates skull segments that are quantitatively indistinguishable from test skull segments, within the limits of our discriminator's capacity as a classifier.
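Applying the converged discriminator as a classifier can then be as simple as thresholding its output at 0.5 and counting the fraction labeled real, as in the hypothetical helper below (assuming `D` and a tensor of preprocessed segments).

```
import torch

@torch.no_grad()
def fraction_classified_real(D, images, threshold=0.5, batch_size=64):
    """Fraction of `images` that the trained discriminator labels as real (score >= 0.5)."""
    D.eval()
    n_real = 0
    for i in range(0, len(images), batch_size):
        scores = D(images[i:i + batch_size]).view(-1)
        n_real += int((scores >= threshold).sum())
    return n_real / len(images)
```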
### Memory GAN
A common challenge in training GANs is "Memory GAN," in which the network simply memorizes the training set [19]. To test for this failure mode and verify the uniqueness of our SkullGAN-generated segments, we compared all of our training set to an equally sized batch of our synthetic set. To find the closest real counterparts to the synthetic segments, we searched for the minimum distance between synthetic and real samples, computed once via scale-invariant feature transform (SIFT) [20], and once with simple pixel-wise mean squared error (MSE).
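A sketch of the pixel-wise MSE variant of this nearest-neighbor search (the SIFT variant is omitted); the array names and shapes are illustrative.

```
import numpy as np

def closest_real_by_mse(synthetic, reals):
    """For each synthetic segment, find its closest real counterpart by pixel-wise MSE.

    `synthetic` has shape (m, 128, 128) and `reals` has shape (n, 128, 128);
    returns the index of the nearest real image and the corresponding MSE.
    """
    syn = synthetic.reshape(len(synthetic), -1).astype(np.float64)
    real = reals.reshape(len(reals), -1).astype(np.float64)
    idx, dists = [], []
    for s in syn:
        mse = ((real - s) ** 2).mean(axis=1)     # MSE against every real segment
        j = int(np.argmin(mse))
        idx.append(j)
        dists.append(float(mse[j]))
    return np.array(idx), np.array(dists)
```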
## Results
Figure 3 displays example images from the training set, the test set, the synthetic set generated by SkullGAN, the poor quality control set generated by a less powerful version of SkullGAN that was not pre-trained on the Celeb-A dataset, and the artificial set, separated into idealized and unrealistic varieties. The total training time for SkullGAN was approximately 2 hours on 2 NVIDIA A100 40GB GPUs, running on a machine-learning optimized Google Cloud Platform [22] instance. Generation time on this setup averaged 2.95 seconds for 2,500 images and 9.6 minutes for 100,000 images.
### Memory GAN
We employed two methods in identifying whether the generator in SkullGAN was overfitting the training set:
Figure 5: Example random SkullGAN samples, their closest real counterparts, and the difference maps between them (images cropped for visual purposes). Hounsfield unit values of 0 correspond to water, and above 700 to bone [21].
pixel-wise MSE and SIFT. We found the candidates identified through pixel-wise MSE to be visually more similar to one another, compared to candidates identified through SIFT. Example of differences in pixel intensities for both metrics are shown in Figure 5. None of the randomly generated samples were replicas of their closest real counterparts, allowing the conclusion that the SkullGAN network did not memorize the training set.
### Susceptibility of the Quantitative Radiological Metrics to Failure
While quantitative radiological metrics such as SDR, mean thickness, and mean intensity are viable measures to compare real skull CTs for the purpose of clinical evaluation, they can fail when assessing synthetic skull CTs. As shown in Figure 6, artificial skull CT segments engineered to match the SDR, mean thickness, and mean intensity distributions of real skull CT segments can easily fool these metrics, even though they are visually clearly fake. Therefore, more powerful methods were employed to analyze the separability of these datasets.
### Visual Clustering of Skull Data Sets
When applying t-SNE to the distributions of radiologic metrics for all of the data points (Figure 6), we observed no separability, confirming the inadequacy of these features in distinguishing the real skull CT segments from the synthetic and artificial sets (Figure 7a). However, once we break free from these limiting radiological metrics and instead unroll every sample into a vector of length \(128\times 128\), t-SNE treats every entry (pixel) in this vector as a feature. Applying t-SNE to the \(4,000\times 16,384\) matrix where the rows represent training, control, synthetic, and artificial skull sets in batches of 1,000, we observed clearly separate clustering of the artificial set. Interestingly, within the artificial set, the unrealistic segments formed a clearly separate cluster from the idealized segments (Figure 7b). Despite this, t-SNE still fell short of separating the training set from the control set and the synthetic (SkullGAN-generated) set. A more complex, highly nonlinear classifier, in this case the SkullGAN discriminator, was needed to separate these similar distributions.
### Classification by the Discriminator
Unlike the typical radiological metrics or t-SNE, the discriminator successfully separated the datasets into groups anticipated _a priori_. As demonstrated in Figure 7c, the discriminator classified 97.1% of the training set, 56.5% of the test set, and 55.9% of the SkullGAN-generated synthetic set as real. 100% classification of the training set as reals was not expected, especially because Gaussian-blurred reals were introduced as fakes during training. While t-SNE failed to separate the control set from the training and synthetic sets, the discriminator correctly classified 100% of the control set as fakes (Figure 7b,c). Labeling the artificial set as fake was no difficult task for the discriminator as evidenced by the clustering of the artificial set in the leftmost end of the classification spectrum.
## Discussion
In this work, we demonstrated the ability of SkullGAN to generate large numbers of synthetic skull CT segments that are visually and quantitatively indistinguishable from real skull CTs. One of the main advantages of using SkullGAN is its ability to overcome some of the challenges associated with obtaining real CT scans. Large datasets of anonymized, curated, and preprocessed medical images often are limited by factors such as time, capital, and access. In contrast, SkullGAN can generate an infinite number of highly varied skull CT segments quickly and at a very low cost. This makes it possible for any researcher to generate
Figure 6: Violin plots of the quantitative radiological metrics for the training, synthetic, control, and artificial sets. What is noteworthy is that we can engineer artificial skulls that are ostensibly unrealistic and still match the training set (real skull CTs) across the three radiological metrics of skull density ratio, mean intensity, and mean thickness. In fact, we can go as far as matching the shapes of the distributions: the bimodal distribution of mean intensity for the artificial set resembles that of the training set.
large datasets of skull CT segments for the purpose of training deep-learning models with applications involving the human skull, geared towards medical diagnosis and treatment planning.
One such example is the field of transcranial ultrasound stimulation (TUS), where convergent, high-frequency sound waves sonicate a target deep within the brain, transcranially and noninvasively [23, 24, 25]. TUS holds the potential for treatment of a wide range of neurological diseases [26, 27, 28, 29, 30]. Even though accounting and correcting for beam aberrations as the waves pass through the skull and converge on a target inside the brain [31, 32, 33] lends itself to being cast as an end-to-end machine learning problem, there have been few to no such attempts to date [34, 35]. Such a machine learning model would be able to plan ultrasound treatments much faster than conventional methods while offering higher accuracy. The main reason such a model hasn't been attempted is the lack of a sufficiently large dataset of preprocessed real human skull CTs for the purposes of training neural networks. When large samples of skull CTs were needed, researchers have either manually preprocessed human skull scans [36] or simulated idealized models [34, 37]. However, we showed that idealized skulls can fool quantitative radiological metrics but cannot deceive a powerful nonlinear classifier. As such, these artificial skulls would be unlikely to yield high performance for supervised learning models. Instead of resorting to these idealized representations of skulls, researchers can now use SkullGAN to test their algorithms on large quantities of realistic synthetic skull CT segments.
There are some challenges associated with evaluating the quality of the synthetic CT scans generated by SkullGAN. As mentioned earlier, quantitative radiological metrics such as skull density ratio (SDR), mean thickness, and mean intensity are susceptible to spurious results if used for statistical comparison between real and synthetic skulls. Instead, many-parameter nonlinear classifiers are better suited for separating low-quality slices from realistic ones. Here, we demonstrated the inseparability of our synthetic set from a test set by applying our discriminator as a classifier.
One potential limitation of this work is that we have trained SkullGAN on a relatively small dataset of 38 subjects. While the results are promising, it would be useful to test this technique on a larger and more diverse dataset to ensure that it generalizes well to other populations. A larger dataset, on the order of hundreds of real human skull CTs, would further improve the quality of the synthetic skull images.
Additionally, our primary application for developing SkullGAN was TUS. In TUS, ultrasound transducers are typically placed either on temporal bone or parietal bone. For that reason, we focused on training SkullGAN to generate realistic temporal and parietal bones. With the rise of application-specific demand for other parts of the skull, retraining of SkullGAN over a dataset that includes occipital and frontal lobes will be necessary.
Another limitation of the present study is that SkullGAN only generates 2D slices of skull CT segments. While this is suitable for some applications, it is not yet ideal for clinical deep-learning algorithms. Additionally, the 2D nature of SkullGAN means that it may not capture certain features of the skull that are only visible in three dimensions, such as complex bone structures or variations in bone density. Future work could
Figure 7: Separability of the datasets. **a.** Visual representation of t-SNE applied to the radiological features shown in Figure 6. No discernible clustering is seen, and the datasets appear inseparable with this method. **b.** Visual representation of t-SNE applied to the unrolled skulls, where each image is represented as a vector of size 16,384. The artificial set is clearly separated into clusters by t-SNE (one for the unrealistic models and another for the idealized models), while the other distributions remain inseparable. **c.** Classifications of each dataset by the SkullGAN discriminator. The dotted line represents the 0.5 mark. Data points labeled \(\geq\) 0.5 are classified as real, and data points labeled \(<\) 0.5 are classified as fake. A large proportion of the training dataset is classified as real (97.1%), while the entirety of the control and artificial sets are confidently classified as fakes. Results from SkullGAN and a test set of 2,414 real skulls yield a near 50/50, or random guess by the model. (Note that **a** and **b** did not have a test set, hence the absence of test set in the horizontal legend.)
explore the possibility of extending SkullGAN to generate 3D volumetric skull CT scans. Another potential avenue could be exploring the use of a 2.5D model for generating pseudo-volumetric images [38, 39]. This approach involves generating 2D slices of the object of interest and then stacking them together to create a 3D image. While not a true volumetric image, this approach can provide some of the benefits of 3D imaging while still leveraging the strengths of SkullGAN's 2D approach. This could be a promising direction for researchers looking to generate synthetic medical images for use in deep-learning models that require 3D data, but for whom obtaining sufficient volumetric data is not feasible.
In conclusion, our work presents a novel approach that uses GANs to address data scarcity problems in healthcare by generating large numbers of synthetic human skull CT segments. The results demonstrate that SkullGAN is capable of generating synthetic skull CT segments that are indistinguishable from real skull CT segments. Future work should investigate the performance of SkullGAN on larger and more diverse datasets, and extend SkullGAN to generate volumetric skull models. Much like ImageNet [40] played a pivotal role in the development of advanced deep learning algorithms in computer vision, by providing a very large labeled dataset for training, SkullGAN and its variants trained on other systems and organs of the human body [6, 41, 4] may play a similar role. The preponderance of such valid, high-quality, and preprocessed medical images readily available to any researcher will usher in a new wave of advanced deep learning models in healthcare that go beyond classification and segmentation.
## Acknowledgements
We would like to thank Jeremy Irvin and Eric Luxenberg for their helpful discussions, Ningrui Li and Farni Fu for advice on skull CT segmentation and pre-processing, and Jeff Elias at The University of Virginia for graciously providing 28 of the 38 human skull CTs for training SkullGAN. This work was generously supported by NIH R01 Grant EB032743.
## Code Availability
SkullGAN was written in Python v3.9.2 using PyTorch v1.9.0. All of the source code and training data are available at [https://github.com/kbp-lab/SkullGAN](https://github.com/kbp-lab/SkullGAN).
|
2307.10690 | Bridging Intelligence and Instinct: A New Control Paradigm for
Autonomous Robots | As the advent of artificial general intelligence (AGI) progresses at a
breathtaking pace, the application of large language models (LLMs) as AI Agents
in robotics remains in its nascent stage. A significant concern that hampers
the seamless integration of these AI Agents into robotics is the
unpredictability of the content they generate, a phenomena known as
``hallucination''. Drawing inspiration from biological neural systems, we
propose a novel, layered architecture for autonomous robotics, bridging AI
agent intelligence and robot instinct. In this context, we define Robot
Instinct as the innate or learned set of responses and priorities in an
autonomous robotic system that ensures survival-essential tasks, such as safety
assurance and obstacle avoidance, are carried out in a timely and effective
manner. This paradigm harmoniously combines the intelligence of LLMs with the
instinct of robotic behaviors, contributing to a more safe and versatile
autonomous robotic system. As a case study, we illustrate this paradigm within
the context of a mobile robot, demonstrating its potential to significantly
enhance autonomous robotics and enabling a future where robots can operate
independently and safely across diverse environments. | Shimian Zhang, Qiuhong Lu | 2023-07-20T08:35:13Z | http://arxiv.org/abs/2307.10690v2 | # Bridging Intelligence and Instinct: A New Control Paradigm for Autonomous Robots
###### Abstract
As the advent of artificial general intelligence (AGI) progresses at a breathtaking pace, the application of large language models (LLMs) as AI Agents in robotics remains in its nascent stage. A significant concern that hampers the seamless integration of these AI Agents into robotics is the unpredictability of the content they generate, a phenomenon known as "hallucination". Drawing inspiration from biological neural systems, we propose a novel, layered architecture for autonomous robotics, bridging AI agent intelligence and robot instinct. In this context, we define Robot Instinct as the innate or learned set of responses and priorities in an autonomous robotic system that ensures survival-essential tasks, such as safety assurance and obstacle avoidance, are carried out in a timely and effective manner. This paradigm harmoniously combines the intelligence of LLMs with the instinct of robotic behaviors, contributing to a safer and more versatile autonomous robotic system. As a case study, we illustrate this paradigm within the context of a mobile robot, demonstrating its potential to significantly enhance autonomous robotics and enabling a future where robots can operate independently and safely across diverse environments.
## I Introduction
The rapid evolution of artificial general intelligence (AGI) technologies, particularly large language models (LLMs) like GPT-4 [15] and LLaMA [19], has catalyzed a new wave of potential applications within the realm of robotics [20]. As AI agents, LLMs can generate high-level decisions and instructions that guide a robot's behavior, demonstrating potential in tool utilization [11, 17], task planning [13], and task creation [4]. However, despite the promises, there remains a dearth of substantive applications within robotics, largely due to the unpredictability of LLMs' outputs, a phenomenon often referred to as "hallucination" [15, 2].
Mitigation strategies such as Chain-of-Thoughts [8] and Self-Reflection [18] have been proposed to manage this issue, but these interventions target the LLMs themselves rather than considering potential architectural solutions within the robotic system. Traditionally, robotic control systems have adhered to either cognitive models [9], which mimic human cognitive processes, or behavior-based models that produce direct responses to sensory inputs [3]. Yet, none of these traditional architectures have fully considered the incorporation and interaction of AI agents.
In response to these challenges, we introduce a novel architecture for robotic control systems designed to bridge high-level intelligence with low-level instinctual protocols, as Fig. 1 shows. Inspired by the human nervous system's 'brain and brainstem' paradigm, this architecture proposes four distinct layers: External, Decision, Instinct, and Device. This new paradigm treats the AI agent as the 'brain,' handling advanced decision-making, while a 'Robot Instinct' module acts as the 'brainstem,' overseeing basic survival-essential tasks.
Our architectural design presents a systemic solution to the hallucination risks posed by LLM-based AI agents. By integrating high-level decision-making AGIs with robust low-level safety mechanisms, we limit the harm of potential incorrect decisions. Moreover, we emphasize the need for robots, even as they gain sophisticated AGI capabilities, to retain robust instinctual reactions akin to human survival instincts, ensuring they can effectively serve in various tasks and environments.
The primary contribution of this paper is the introduction of this innovative four-layered architecture for robotic control systems. We demonstrate its efficacy and versatility through a specific case study, marking a significant stride towards a new era of robotics. We envision this approach not only propelling the development of robotic control systems but also fostering a tighter synergy between AI and robotics, empowering more nuanced interaction between robots and
Fig. 1: Layered Hierarchy Design. The pyramid structure of our proposed architecture, comprising four layers: External, Decision, Instinct, and Device. The External layer represents high-level entities interacting with the system, the Decision layer acts as the ‘brain’ making high-level decisions, the Instinct layer continuously maintains safety acting as the ‘brainstem,’ and the Device layer executes the commands by controlling the robot’s physical actions.
their environment.
## II Related Works
### _AI Agents for Robotics_
The application of AI agents in robotics has become a burgeoning field of research with the advancement of machine learning techniques, especially Large Language Models (LLMs).
#### Ii-A1 LLMs as AI Agents
Recent research findings indicate that LLMs can be effectively used as AI agents to address complex tasks in robotics [4, 11, 17, 13]. Such capabilities enable LLMs to interact with both structured and unstructured environments, making them valuable for the development of versatile and adaptable robotic systems.
#### Ii-A2 AI Agents as the Robot Brain
Despite these advancements, the application of AI agents as the primary "brain" of a robot is still in the early stages [12], largely limited by the complexities of integrating AI with physical systems. Some work has integrated models like Chat-GPT into a robot's operational flow [20], albeit in a simulated environment with the need for human oversight to ensure accuracy and appropriateness of the AI's outputs.
In light of these challenges and opportunities, our proposed design framework represents a pioneering step towards fully integrating AI agents into robotic systems. We view this as an inevitable trend in the evolution of robotics, given the increasing demand for intelligent and autonomous operation in real-world environments. Our approach aims to create a truly anthropomorphic robotic entity, capable of sophisticated interaction with the world while ensuring the safety and reliability of operations.
### _Robot System Architectures_
Historically, the Sense-Plan-Act (SPA) architecture was proposed as a tiered model with distinct modules [16], each responsible for a specific function or task. While this top-down architecture was effective in dividing labor, it was also inherently sequential, resulting in difficulties in rapidly adapting to dynamic environments.
An alternative approach was put forth by [3], who proposed the Subsumption Architecture. This model popularized Behavior-Based Robotics (BBR), which emphasized decentralized control and modularity. Complex behaviors were seen as emergent from the interaction of simple, concurrently operating modules. However, the practical implementation of this architecture has proven to be complex, particularly in intelligent robots where the decoupling of high-level and low-level behaviors can be challenging.
The Hybrid Architecture was then introduced to bridge the strengths of the SPA and BBR architectures. In a hybrid model, a robot's behavior can be generated either by a central plan (akin to SPA) or by a set of independent behavior modules (akin to BBR). [5] proposed a notable instance of this paradigm, a three-tiered architecture consisting of a reactive Controller layer, a reactive Sequencer layer, and a Deliberator layer.
Our proposed paradigm control architecture could be seen as a new form of Hybrid Architecture, introducing unique components designed to maximize the benefits of recent advancements in AI. It innovatively blends the deliberative power of AI Agent decision-making capabilities with the reactivity of an Instinct layer with low-level safety mechanisms, coupled with a clear definition of human-machine interactions within the External Layer.
### _Safety Mechanisms_
Safety mechanisms are crucial to the design and implementation of any robotic system, ensuring safe interactions with the environment and reliable execution of tasks. Various techniques have been developed over the years.
#### Ii-C1 Model-Based Safety Mechanisms
Techniques such as Control Lyapunov Function (CLF) and Control Barrier Function (CBF) have been extensively used to enforce safety in robotic systems [1, 14, 6]. These techniques are highly effective in systems where precise mathematical models of the system dynamics and environment are known. However, the performance of these methods can be significantly degraded if the model is inaccurate, or in the presence of unknown system dynamics.
#### Ii-C2 Model-Free Safety Control
With the advent of powerful Reinforcement Learning (RL) algorithms, model-free safety control mechanisms have emerged as promising solutions for complex and high-dimensional systems [7, 10]. These mechanisms are particularly useful when dealing with unknown environments or unmodeled system dynamics. However, RL-based mechanisms typically require a large amount of sample data for training and can be time-consuming. Such mechanisms are usually not easy to migrate from one robot platform to another.
#### Ii-C3 AI Agents Safety Concerns
Given the rapidly growing capabilities of LLMs, it is imperative to address the potential safety concerns that come with their integration into robotic systems. Techniques such as Chain of Thoughts [8], Self-Reflection [18], and human-in-the-loop mechanisms [20] have been used to prevent or reduce the rate of "hallucination". However, these methods do not entirely eliminate the propensity of LLMs to generate incorrect responses due to their inherent nature.
Our architecture distinctly addresses these safety concerns through two primary mechanisms. First, our Instinct Layer, by operating independent, uninterrupted safety mechanisms, effectively caters to safety requirements; the implementation of these safety protocols can leverage established safety control methods such as CBF and RL-based learning. Second, our Decision Layer has a multi-tiered interaction mechanism, including a feedback loop with the Instinct Layer for continual self-reflection and interaction with the External Layer for incorporating a human in the loop, to prevent incorrect planning from an LLM as much as possible.
## III Proposed Framework
### _Layered Hierarchy Design_
Our proposed control framework revolutionizes the conventional robotic architecture by incorporating a hierarchically layered design based on the level of intelligence, as Fig. 1 shows. This hierarchical model emulates the human cognitive process, enabling more human-like behavior and interaction in robotic systems.
#### Iii-A1 External Layer
At the top of this hierarchy, as an external layer, we have the (Artificial) General Intelligence, represented by humans and other high-level intelligent agents. This level provides high-level goals and instructions and serves as a means of interpreting the robot's behavior and feedback in the broader context of the world.
#### Iii-A2 Decision Layer
Below this, we have the Decision Layer, the highest level within the robot's architecture. This layer is composed of multiple AI Agents as the 'brain' of the robot, each potentially an instance of high-level generative models such as GPT-4, LLaMA, etc. The tasks carried out by these AI Agents encompass a range of high-level goals that are fundamental for a fully autonomous robot. Such tasks include complex interaction with humans or other agents, autonomous decision making based on the environment or the tasks at hand, goal-directed behavior based on both short-term and long-term objectives, and even tool use and learning.
Realizing these high-level tasks is often achieved by converting the tasks into a text form that the AI models understand [citation]. The models can then generate a chain of thoughts or decisions [citation], similar to how humans would reason. This makes use of AI Planners and AI Executors [citation], which respectively deal with deciding what to do and implementing the decision.
#### Iii-A3 Instinct Layer
Following the Decision Layer is the Instinct Layer. This level of our framework represents the low-level intelligence within the robotic system as the 'brain-stem' and is composed of a collection of Robot Instinct modules. Unlike the AI Agents that make strategic decisions, the role of the Robot Instinct modules is to ensure the robot's safety and operability in the face of immediate environmental challenges. Each of these modules can be powered by a Discriminative Model, or a more traditional control scheme depending on the specific task.
These **survival-essential** tasks are abilities intrinsic to the robot's survival and functionality, such as obstacle avoidance, random roaming, and overload protection. These functions remain operational even in the absence or failure of the AI Agents, whether due to hallucination phenomena, signal interference, or other unexpected scenarios. Consequently, even when higher-level decision-making capabilities are compromised, the robot can still provide basic services and ensure its own safety and that of its surroundings.
#### Iii-A4 Device Layer
Finally, at the base of the hierarchy, we have the Device Layer. This layer is devoid of any form of intelligence and comprises the fundamental hardware components of the robot, such as the motor, camera, laser, and DRAM. These devices serve as the robot's sensors, executors, and memory, forming the 'physical body' of the robot that interacts with the environment.
By adopting such a layered, hierarchical design, our framework allows for the distribution of control and decision-making processes at different levels of intelligence. This not only enables robots to exhibit more human-like behavior but also opens new avenues for safer and more reliable robot operation.
### _Modules and Flows_
After having delved into the detailed discussion on the layered design of the architecture, we now turn to delineating the functionalities of the various modules within the architecture and the data, control, and feedback flows between them, as Fig. 2 shows. This modular design and flow analysis are key to realizing our layered architecture, jointly forming the backbone of the architecture.
#### Iii-B1 Data Flow
All real-time data (such as image and audio signals and motor encoder information) are first acquired by the Sensor module and then delivered to the Robot Instinct module. The Robot Instinct is responsible for processing this raw sensory data, executing survival-essential tasks, and producing a simplified version for the AI Agent. This design eases the computational burden of the AI Agent and allows it to focus more on high-level decision-making, similar to how higher cognitive functions in humans rely on processed sensory inputs.
The processed data from Robot Instinct is then transferred to the AI Agent for advanced decision-making processes. Both the Robot Instinct and AI Agent generate data based on their operation, which is stored in the Memory module for future recall and learning. The real-time nature and reliability
Fig. 2: Modules and Flows. Drawing from our layered hierarchy, we can distill our framework into a few key modules: the (Artificial) General Intelligence module, which includes Human and/or Other Agents, the AI Agent module, the Robot Instinct module, and the Device Layer modules. The Device Layer is further subdivided into the Sensor, Executor, and Memory modules. This modular breakdown allows for targeted interactions and efficient data and control flows within the robotic system.
of the Robot Instinct processing are critical to the successful operation of this framework.
#### Iii-B2 Control Flow
The AI Agent, using generative AI, generates high-level commands, often as function API calls in high-level languages, even pseudocode in complex conditions or loops. These commands, which encapsulate complex tasks or behaviors, are then sent to the Robot Instinct module.
The Robot Instinct module is responsible for the execution of these high-level commands. To do so, it translates high-level commands into a series of low-level API calls such as specific motor movements or stops, in languages closer to the hardware, like C 1.
Footnote 1: The Robot Instinct’s safety check should ensure low latency, allowing the high-level commands to “penetrate” through the Robot Instinct and reach the Device Layer directly while maintaining safety.
Importantly, the Robot Instinct module provides a standardized interface for the AI Agent, abstracting the details of the underlying Executor devices. This abstraction allows the AI Agent to focus on high-level decision-making without worrying about the specific characteristics of the individual devices, which greatly enhances the scalability and versatility of the AI system.
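As an illustration of this standardized interface, the Python sketch below shows one way the Instinct layer could accept a high-level command, translate it into device-specific calls, gate each call through a safety check, and return feedback; all class and function names here are hypothetical, and a deployed Instinct layer would run on low-latency, real-time hardware rather than Python.

```
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class LowLevelCall:
    device: str        # e.g., "motor" or "lidar"
    api: str           # device-specific low-level API name
    args: tuple

class InstinctInterface:
    """Standardized interface the AI Agent calls; device details stay hidden here."""

    def __init__(self, translate: Callable[[str], List[LowLevelCall]],
                 safety_check: Callable[[LowLevelCall], bool],
                 drivers: Dict[str, Callable]):
        self.translate = translate          # high-level command -> low-level calls
        self.safety_check = safety_check    # discriminative model or rule-based check
        self.drivers = drivers              # device name -> callable driver

    def execute(self, high_level_command: str) -> dict:
        feedback = {"command": high_level_command, "executed": [], "refused": []}
        for call in self.translate(high_level_command):
            if self.safety_check(call):
                self.drivers[call.device](call.api, *call.args)
                feedback["executed"].append(call)
            else:
                # Instinctive refusal: survival-essential constraints take priority.
                feedback["refused"].append(call)
        return feedback    # returned to the AI Agent for self-reflection
```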
#### Iii-B3 Feedback Flow
Feedback data from the Robot Instinct module is returned to the AI Agent, promoting a process of self-reflection and adjustment. This feedback mechanism allows the AI Agent to make informed next-step decisions, thus enabling more effective learning and adaptation to dynamic environments.
### _Feedback Mechanism_
In our proposed robotic control framework, the AI Agent and the Robot Instinct interact with each other not merely through one-way command issuing but also through bidirectional feedback communication, as Fig. 3 shows. This closed-loop structure aids in refining the decision-making process and ensures the safety of the robot's operations.
#### Iii-C1 Feedback to the AI Agent
The AI Agent, after issuing high-level commands, receives feedback from the Robot Instinct. This feedback includes the status of the commands executed and possibly additional sensor data. By incorporating modern techniques such as chain of thoughts/decision, self-reflection via in-context learning, the AI Agent can optimize subsequent commands without the need for retraining the model. This is a critical feature for the robot to learn and adapt to new environments and situations.
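One lightweight way to realize this in-context self-reflection is to fold the Instinct layer's feedback back into the AI Agent's prompt, as in the hypothetical sketch below; the prompt wording and feedback schema are illustrative assumptions, not a prescribed format.

```
def build_reflection_prompt(previous_command: str, feedback: dict, goal: str) -> str:
    """Fold execution feedback back into the AI Agent's context for in-context learning."""
    return (
        f"Goal: {goal}\n"
        f"Last command issued: {previous_command}\n"
        f"Executed low-level calls: {feedback.get('executed', [])}\n"
        f"Refused by the Instinct layer: {feedback.get('refused', [])}\n"
        "Reflect on why any calls were refused and propose the next high-level command."
    )
```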
#### Iii-C2 Safety Protocols and Instinctive Refusal
On the other side, the Robot Instinct module is not simply a recipient of high-level commands from the AI Agent. It has built-in safety protocols that allow it to check every incoming command. The safety check utilizes either a discriminative model or traditional methods to ensure the commands will not violate any **survival-essential** tasks. If a command is deemed unsafe, the Robot Instinct module has the authority to instinctively refuse it. This feature ensures that the robot's fundamental safety and self-preservation are always a priority.
### _Architecture Revisit: Intelligence and Necessity_
We now revisit our layered design from dual perspectives. We delve further into the framework's intelligence and necessity, providing a comprehensive understanding and evaluation of our design.
Firstly, our layered architecture presents a top-down distribution in terms of the level of intelligence. At the highest level, in the External Layer, we find humans or other advanced intelligent agents capable of sophisticated decision-making and high-level task planning. The Decision Layer houses the AI Agent, responsible for the robot's high-level decisions. Following is the Instinct Layer, accountable for some critical survival tasks such as obstacle avoidance and overload protection. Finally, at the base in the Device Layer, we have basic devices and systems such as sensors and actuators.
Secondly, our layered architecture demonstrates a bottom-up distribution in terms of necessity. At the base, the Device Layer is the necessary structural part of the robot, without which the robot cannot achieve basic perception and action. The Instinct Layer is the necessary control part of the robot, without which the robot is unsafe and incapable of performing basic survival tasks. The Decision Layer is a necessary part of the intelligent robot, without which the robot cannot perform advanced decision-making and task planning. Finally, at the highest level, the External Layer is a necessary part for group robots or human-robot interaction, without which the robot cannot perform cooperation and interaction.
## IV Case Study: Mobile Robot
### _The Traditional Behavior-based Structure_
In this section, we examine the traditional behavior-based structure by means of a mobile robot case study. This existing control framework is structured around distinct parallel behavior modules, each with its own direct input from sensors and direct output to actuators. These behavior modules, acting like individual 'mini-brains,' coordinate to control the robot. Key behavior modules include target tracking, map building (e.g., based on LiDAR SLAM), path planning, motor operation, and obstacle avoidance.
The behavior-based structure has its advantages. It stands out for its simplicity, which leads to efficiency in design
Fig. 3: Feedback Mechanism.
and testing stages. This design approach also ensures that individual modules work independently, minimizing mutual interference and allowing for independent optimization of each.
However, the main drawback of the behavior-based structure emerges in the context of system generalization and scalability. In this architecture, every behavioral module receives its input directly from sensors and sends its output directly to actuators. As a result, each module needs to be intimately familiar with the particular sensors and actuators of the robot it is implemented on. This specificity restricts the reusability of these modules across different robotic platforms and types, hindering the potential for broad-based solutions and extensions to various robotic platforms. Moreover, due to the tight coupling between modules and specific hardware, making adjustments for new sensors or actuators, or adding new behaviors, may require significant system rewrite, adding complexity to the system's maintenance and expansion.
### _Transition to the Proposed Framework_
Transitioning from traditional behavior-based structures, we elucidate our novel framework from bottom to top.
#### IV-B1 Device Layer
The Device Layer comprises the fundamental hardware necessary for the operation of the robot, including motors, actuators, sensors, and other physical components that directly interact with the environment.
#### IV-B2 Instinct Layer
The Instinct Layer acts as a critical hub, orchestrating survival-essential tasks such as obstacle avoidance, overload protection, and safety assurance to prevent harm to humans. Different modules within this layer, each running on a dedicated chip, handle these tasks in parallel, achieving both efficiency and robustness. This design enables the layer to adapt to escalating complexities as mobile robot tasks increase in their scope and intricacy.
To ensure the robust and timely execution of these tasks, the Instinct modules are implemented on low-latency, high-stability platforms such as micro-controllers or FPGAs. As depicted in Alg. 1, each instinct module has the highest thread priority. This priority assignment ensures that survival-essential tasks are always addressed promptly, even in the face of concurrent high-level command executions from the Decision Layer.
A significant feature of our architecture is the dual-layer safety mechanism incorporated into the Instinct Layer. Every high-level command received from the Decision Layer must pass a rigorous safety check before execution. This procedure guarantees that any command, whether survival-essential or derived from higher-level decision-making, conforms to the pre-defined safety protocol. Thus, our architecture harmonizes advanced decision-making capabilities with rigorous safety assurances, providing a reliable and safe control system for mobile robots.
```
Input: High-level commands from Agent, Devices (Motor, Lidar)
Output: Low-level command execution, Feedback to Agent

initialization;
while True do
    // Survival-essential tasks
    if device.status == "safe" then
        performSurvivalTasks();
    else
        enterSafeMode();
        sendFeedback();
        continue;
    end if
    // Handling high-level commands
    commands = getHighCommands();
    foreach command do
        lowCommands = convert(command);
        foreach lowCommand do
            if safetyCheck(lowCommand) then
                switch lowCommand.type do
                    case Motor do
                        motor.execute(lowCommand);
                    case Lidar do
                        lidar.acquire(lowCommand);
                end switch
                sendFeedback();
                sendData();
            else
                refusal();
                sendFeedback();
            end if
        end foreach
    end foreach
end while
```
**Algorithm 1** Instinct Module
#### IV-B3 Decision Layer
The Decision Layer serves as the 'brain' of the robot, where intricate decision-making tasks are carried out. As highlighted in Algorithm 2, this layer is not operated by a single LLM, but rather, it consists of multiple LLM modules that collaborate to perform the AI Agent's tasks. These tasks include interactions with humans or other agents (facilitated by the Interaction Agent), task planning and order arbitration (handled by the Task Planning Agent), tool utilization (enabled by the Tool Utilization Agent), and self-reflection based on feedback from the Instinct Layer (performed by the Self-Reflection Agent).
The Interaction Agent enables the AI to comprehend and respond to external instructions, facilitating cooperative tasks between the robot and humans or other agents. The Task Planning Agent, working closely with the Self-Reflection Agent, enables the AI to analyze dynamic environments and make autonomous decisions. By interpreting the Instinct Layer's API calls/documentation, the Tool Utilization Agent enhances the system's versatility.
Incorporated within the Decision Layer is a safety mechanism that reduces the risk of incorrect AI agent decisions. By assigning distinct roles to each LLM module and establishing feedback loops, the AI is less prone to making mistakes. The safety checks at the Decision Layer further reinforce the safety precautions implemented at the Instinct Layer, providing a comprehensive safety system for the robot.
```
Input: Tasks from External Layer, Feedback from Instinct Layer
Output: High-level commands to Instinct Layer

initialization;
while True do
    // LLM for interaction with External Layer
    tasks = getTasks();
    foreach task do
        while task not complete do
            // LLM for getting feedback from Instinct Layer
            feedback = getFeedback();
            status = getData();
            // LLM for self-reflection
            reflect = selfReflection(feedback, status);
            // LLM for task planning and order arbitration
            highCommands = plan(task, reflect, status);
            // LLM for tool utilization
            sendCommand(highCommands);
            task.update();
        end while
    end foreach
end while
```
**Algorithm 2** Agent Algorithm with Multiple LLMs
#### IV-B4 External Layer
The External Layer represents high-level (artificial) general intelligence, including humans and other AI agents that send out commands to the robot. In an era demanding advanced human-machine cooperation and swarm coordination, our External Layer is designed to meet these needs by facilitating complex and sophisticated interactions and commands.
The transition to this proposed framework presents significant advantages. Firstly, it improves the scalability and adaptability of the system. Algorithms in the AI Agent module can be designed and deployed across different types of robots without needing to understand the intricacies of each specific hardware component. Secondly, the framework encourages modularity, making it easier to add, remove, or upgrade specific functions. Lastly, the introduction of the External Layer opens up new possibilities for human-robot interaction and cooperative intelligence, fostering the integration of the robot into larger, more complex systems.
## V Conclusion
In this work, we have proposed a novel control paradigm for autonomous robots, unifying the intellectual capabilities of large language models (LLMs) with instinctual functionalities. Our layered design approach inherently caters to the evolving demand for intelligent, safe, and adaptive robotic systems.
The burgeoning development of AI agents, particularly LLMs, has been a game-changer in the realm of robotics, offering immense benefits such as intuitive task comprehension, adept human-robot interaction, and complex decision-making. However, with these opportunities come inherent risks, especially regarding safety. By integrating an Instinct Layer in our paradigm, we ensure the prompt execution of survival-essential tasks and maintain an upper hand in safety matters.
Despite the promising blueprint of our new architecture, there are avenues left unexplored in this paper. Currently, we are planning to conduct experimental validations of our paradigm on mobile robot platforms and multi-axis robotic arms. This will further substantiate the enhanced intelligence and safety aspects brought by our design. Furthermore, the inter-layer communication, although not covered extensively in this paper, is a crucial area of future research. We envision a medium akin to Robot OS between the Decision and Instinct Layers to bolster system transparency, reduce design complexity, and increase scalability.
In addition, we barely scratched the surface of long/short term memory in this paper. A more profound exploration is planned for future studies, elucidating how AI Agents and Robot Instincts can learn and optimize strategies from their memory.
While our work indeed poses challenges and has room for refinement, the proposed architecture unarguably opens up new frontiers in the landscape of autonomous robotics. By bridging intelligence and instinct, we hope to harness the immense potential of AI and robotics and anticipate a future where robots can operate independently, intelligently, and safely in a wide range of environments.
|
2307.01945 | Query-based Video Summarization with Pseudo Label Supervision | Existing datasets for manually labelled query-based video summarization are
costly and thus small, limiting the performance of supervised deep video
summarization models. Self-supervision can address the data sparsity challenge
by using a pretext task and defining a method to acquire extra data with pseudo
labels to pre-train a supervised deep model. In this work, we introduce
segment-level pseudo labels from input videos to properly model both the
relationship between a pretext task and a target task, and the implicit
relationship between the pseudo label and the human-defined label. The pseudo
labels are generated based on existing human-defined frame-level labels. To
create more accurate query-dependent video summaries, a semantics booster is
proposed to generate context-aware query representations. Furthermore, we
propose mutual attention to help capture the interactive information between
visual and textual modalities. Three commonly-used video summarization
benchmarks are used to thoroughly validate the proposed approach. Experimental
results show that the proposed video summarization algorithm achieves
state-of-the-art performance. | Jia-Hong Huang, Luka Murn, Marta Mrak, Marcel Worring | 2023-07-04T22:28:17Z | http://arxiv.org/abs/2307.01945v1 | # Query-Based Video Summarization with Pseudo Label Supervision
###### Abstract
Existing datasets for manually labelled query-based video summarization are costly and thus small, limiting the performance of supervised deep video summarization models. Self-supervision can address the data sparsity challenge by using a pretext task and defining a method to acquire extra data with pseudo labels to pre-train a supervised deep model. In this work, we introduce segment-level pseudo labels from input videos to properly model both the relationship between a pretext task and a target task, and the implicit relationship between the pseudo label and the human-defined label. The pseudo labels are generated based on existing human-defined frame-level labels. To create more accurate query-dependent video summaries, a semantics booster is proposed to generate context-aware query representations. Furthermore, we propose mutual attention to help capture the interactive information between visual and textual modalities. Three commonly-used video summarization benchmarks are used to thoroughly validate the proposed approach. Experimental results show that the proposed video summarization algorithm achieves state-of-the-art performance.
Jia-Hong Huang\({}^{1}\)1, Luka Murn\({}^{2}\), Marta Mrak\({}^{2}\), Marcel Worring\({}^{1}\)

\({}^{1}\)University of Amsterdam, Amsterdam, Netherlands; \({}^{2}\)BBC Research and Development, London, UK

Keywords: Query-based video summarization, semantics, self-supervision, weak supervision, pseudo labels
Footnote 1: Work done during an internship at BBC Research and Development, London, UK.
## 1 Introduction
Query-based video summarization automatically generates a short video clip to summarize the content of a given video by capturing its query-dependent parts, as shown in Fig. 1. Such a task can be modeled as a fully-supervised machine learning problem [1, 2, 3]. However, creating a large-scale manually-labeled video dataset for a fully-supervised task is costly. Hence, existing datasets, e.g., TVSum [4], SumMe [5], and QueryVS [2], are quite small.
The lack of larger human-annotated datasets is common in fully-supervised deep learning tasks. Self-supervised learning is one of the most successful ways to alleviate this challenge [6, 7, 8, 9]. According to [7, 10], self-supervision is an effective method to balance the cost of data labelling and the performance gain of a fully-supervised deep model. The main idea of self-supervised learning is defining a pretext task and introducing a way to acquire extra data with reliable pseudo labels to pre-train a fully-supervised deep model for performing a target task [6, 7].
Existing self-supervision methods assume that the relation between a target task with human-defined labels and an introduced pretext task with pseudo labels does not exist or exists in a very limited way [7, 10]. However, this assumption may not be accurate for query-based video summarization, where frame-level human-defined labels can be considered as supervision signals of a target task. Segment-level pseudo labels can be considered as supervision signals of a pretext task. Since a video segment is composed of frames, there is an implicit relation between the entire segment and the corresponding frames. The improvement in model performance can hit a bottleneck without modelling these implicit relations.
In this work, a segment-based video summarization pretext task with specially designed pseudo labels is introduced to address this challenge, detailed in Fig. 2. Pseudo labels are generated based on existing human-defined annotations, helping to model the implicit relations between the pretext task and the target task, i.e., frame-based video summarization [2, 4, 5]. In query-based video summarization, we observe that generating accurate query-dependent video summaries can be challenging in practice due to ineffective semantics embedding of textual queries. We address this issue by proposing a semantics booster that generates context-aware query representations which are capable of efficiently capturing the semantics. Furthermore, we noticed that the query input does not always help model performance, most likely due to the interactions between textual and visual modalities not being properly modelled. We address this challenge by introducing mutual attention that helps capture the interactive information between different modalities.

Figure 1: Query-based video summarization. A video is summarized based on textual queries. The summarization algorithm runs independently for each query.
These novel design choices enable us to improve the model performance of query-based video summarization with self-supervision. Extensive experiments show that the proposed method is effective and achieves state-of-the-art performance. If we examine the problem from the perspective of frame-level label vs. segment-level label, the proposed method can also be considered as a weakly-supervised video summarization approach. Hence, existing weakly-supervised methods are also considered as baselines in this work.
## 2 Related Work
### Fully-supervised video summarization
Fully-supervised learning is a common way to model video summarization [5, 11, 12, 13, 14]. In fully-supervised video summarization, labels defined by human experts are used to supervise a model in the training phase. In [5], a video summarization approach is proposed to automatically summarize user videos that contain a set of interesting events. The authors start by dividing a video based on a superframe segmentation, tailored to raw videos. Then, various levels of features are used to predict the score of visual interesting per-frame. Finally, a video summary is produced by selecting a set of superframes in an optimized way. In [12, 13], a Recurrent Neural Network (RNN) is used in a hierarchical way to model the temporal structure in video data. The authors of [11] consider video summarization as a problem of structured prediction. A deep-learning-based method is proposed to estimate the importance of video frames based on modelling their temporal dependency. The authors of [14] propose an importance propagation-based collaborative teaching network (iPTNet) for video summarization by transferring samples from a video moment localization correlated task equipped with a lot of training data. In [2, 3, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34], the model learning process expands beyond solely utilizing visual inputs and incorporates an additional modality, such as viewers' comments, video captions, or any other contextual data available.
The aforementioned fully-supervised methods exploit a full set of human expert annotations to supervise the model in the training phase. Although such methods perform well, collecting these annotations is costly. Therefore, a better solution should be developed for video summarization.
### Weakly-supervised video summarization
In [35, 36, 37, 38, 39], video summarization is considered as a weakly-supervised learning task. Weakly-supervised learning can mitigate the need for extensive datasets with human expert annotations. Instead of using a full set of data with human expert labels, such as frame-level annotations, weakly-supervised approaches exploit less-expensive weak labels, such as video-level annotations from human experts. Although weak labels are imperfect compared to a full set of human expert annotations, they still can be used to train video summarization models effectively.
### Self-supervision in video summarization
In [40, 41], image pretext tasks [7] are extended to video for self-supervision in video summarization. In [40], the keyframes of a video are defined as those which are very different in their optical flow features and appearance from the rest of the frames of the video. The authors of [41] claim that a good video sequence encoder should have the ability to model the correct order of video segments. Segments are selected from a given video based on a fixed proportion before feeding it into a neural network. They are randomly shuffled and used to train the neural network and distinguish the odd-position segments to control the difficulty of the auxiliary self-supervision task.
Existing work related to self-supervision in video summarization is very limited, and it does not focus on query-based video summarization. To the best of our knowledge, our proposed method is one of the pioneer works of self-supervision in query-based video summarization.
### Word embedding methods
According to [42], static word embeddings and contextualized word representations are commonly used to encode textual data. Both of them are more effective than the Bag of Words (BoW) method. Skip-gram with negative sampling (SGNS) [43] and GloVe [44] are well-known models for generating static word embeddings. According to [45, 46], these models learn word embeddings iteratively in practice. However, it has been proven that both of them implicitly factorize a word-context matrix containing a co-occurrence statistic.
The authors of [42] mention that in static word embeddings methods, all meanings of a polysemous word must share a single vector because a single representation for each word is created. Hence, the contextualized word representations method is more effective than static word embeddings because of its context-sensitive word representations. In [47, 48, 49], the proposed neural language models are fine-tuned to create deep learning-based models for a wide range of downstream natural language processing tasks.
In this work, a contextualized word representation-based method is used to encode the text-based input query.
## 3 Methodology
In this section, the proposed query-based video summarization method is described in detail, and illustrated in Fig. 2. The approach is based on contextualized query representations, attentive convolutional 2D and 3D features, interactive
attention mechanism, mean-based pseudo shot label generation, and video summary generation.
### Semantics Booster
Generating an accurate query-dependent video summary is challenging because of the ineffective semantics embedding of input textual queries. In this work, a semantics booster is introduced to capture the semantics of the input query effectively. The transformer-based model architecture has been firmly established as one of the state-of-the-art approaches in language modeling and machine translation [50]. Hence, the proposed semantics booster is built on top of the transformer architecture to generate context-aware query representations, described as follows.
For an input token \(k_{n}\), its embedding \(x_{n}\) is defined as: \(x_{n}=W_{e}*k_{n}+P_{k_{n}},n\in\{1,...,N\}\), where \(W_{e}\in\mathbb{R}^{E_{s}\times V_{s}}\) is the input text-based query token embedding matrix with the vocabulary size \(V_{s}\) and the word embedding size \(E_{s}\), the positional encoding of \(k_{n}\) is \(P_{k_{n}}\), and \(N\) denotes the number of input tokens. The subscripts \(s\) and \(e\) denote size and embedding, respectively. The representation of the current word \(Q\) is generated by one linear layer defined as: \(Q=W_{q}*x_{n}+b_{q}\), where \(b_{q}\) and \(W_{q}\in\mathbb{R}^{H_{s}\times E_{s}}\) are learnable parameters of the linear layer, the output size of the linear layer is \(H_{s}\) and the subscript \(q\) denotes query. The key vector \(K\) is calculated by the other linear layer defined as: \(K=W_{k}*x_{n}+b_{k}\), where \(b_{k}\) and \(W_{k}\in\mathbb{R}^{H_{s}\times E_{s}}\) are learnable parameters of the linear layer. The subscript \(k\) denotes key. The value vector \(V\) is generated by another linear layer defined as: \(V=W_{v}*x_{n}+b_{v}\), where \(b_{v}\) and \(W_{v}\in\mathbb{R}^{H_{s}\times E_{s}}\) are learnable parameters of the linear layer. The subscript \(v\) denotes value.
After \(Q\), \(K\), and \(V\) are calculated, the masked self-attention is generated as: \(\text{MaskAtten}(Q,K,V)=\text{softmax}(m(\frac{QK^{T}}{\sqrt{d_{k}}}))V\), where \(m(\cdot)\) and \(d_{k}\) denote a masking function and a scaling factor, respectively. The layer normalization is calculated as: \(Z_{\text{Norm}}=\text{LayerNorm}(\text{MaskAtten}(Q,K,V))\), where \(\text{LayerNorm}(\cdot)\) denotes a layer normalization function. Then, the introduced context-aware representation \(\mathcal{R}_{\text{context}}\) of the input text-based query is derived as: \(\mathcal{R}_{\text{context}}=\sigma(W_{1}Z_{\text{Norm}}+b_{1})W_{2}+b_{2}\), where \(\sigma\) is an activation function, \(W_{1}\), \(W_{2}\), \(b_{1}\), and \(b_{2}\) are learnable parameters of a position-wise feed-forward network. To have even better textual representations, a textual attention function \(\text{TextAtten}(\cdot)\) is introduced to reinforce the context-aware representation. The function takes \(\mathcal{R}_{\text{context}}\) as input and calculates the attention and textual representation in an element-wise way. The attentive context-aware representation is calculated as \(Z_{ta}=\text{TextAtten}(\mathcal{R}_{\text{context}})\), where \(ta\) indicates textual attention.
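To make the above concrete, a minimal PyTorch sketch of the semantics booster is given below. The module and variable names, the learnable positional embedding, the single attention head, and the sigmoid gate used for \(\text{TextAtten}(\cdot)\) are illustrative assumptions rather than the exact configuration of our implementation.

```python
import torch
import torch.nn as nn


class SemanticsBooster(nn.Module):
    """Minimal sketch of the semantics booster; sizes and gating are illustrative."""

    def __init__(self, vocab_size, embed_size, hidden_size, max_len=20):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_size)           # W_e
        self.pos = nn.Parameter(torch.zeros(max_len, embed_size))   # P_{k_n}
        self.q_proj = nn.Linear(embed_size, hidden_size)            # Q = W_q x_n + b_q
        self.k_proj = nn.Linear(embed_size, hidden_size)            # K = W_k x_n + b_k
        self.v_proj = nn.Linear(embed_size, hidden_size)            # V = W_v x_n + b_v
        self.norm = nn.LayerNorm(hidden_size)
        self.ffn = nn.Sequential(nn.Linear(hidden_size, hidden_size), nn.ReLU(),
                                 nn.Linear(hidden_size, hidden_size))
        self.text_atten = nn.Linear(hidden_size, hidden_size)       # TextAtten(.)

    def forward(self, tokens):                        # tokens: (batch, N), N <= max_len
        n = tokens.size(1)
        x = self.embed(tokens) + self.pos[:n]         # token embedding + positional encoding
        q, k, v = self.q_proj(x), self.k_proj(x), self.v_proj(x)
        scores = q @ k.transpose(-2, -1) / (q.size(-1) ** 0.5)          # QK^T / sqrt(d_k)
        causal = torch.triu(torch.ones(n, n, dtype=torch.bool, device=x.device), 1)
        scores = scores.masked_fill(causal, float('-inf'))              # masking function m(.)
        z_norm = self.norm(torch.softmax(scores, dim=-1) @ v)           # Z_Norm
        r_context = self.ffn(z_norm)                                    # R_context
        gate = torch.sigmoid(self.text_atten(r_context))                # element-wise attention
        return gate * r_context                                         # Z_ta
```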
### Visual Attention
A 2D ConvNet and a 3D ConvNet are exploited to distill the video frame and video segment information, respectively. To reinforce the generated 2D and 3D features, a visual attention function \(\text{AttenVisual}(\cdot)\) is introduced to improve the quality of features.
Let \(E\) and \(X\) be a feature generator and a set of video clips, respectively. A feature generator \(E\) maps an input \(x\in X\) to a feature vector \(f\in\mathbb{R}^{d}\). \(F=\{f=E(x)\in\mathbb{R}^{d}\mid x\in X\}\) denotes a set of features produced by the feature generator \(E\). Let \(F_{s}\) be the generated features from the video spatial feature generator \(E_{s}\). \(F_{st}\) denotes the generated features from the video spatio-temporal feature generator \(E_{st}\). Frame-level and segment-level data both are exploited to train the proposed query-based video summarization model, meaning \(F=F_{s}\cup F_{st}\). In the frame-level case, the attentive feature generator \(\text{AttenVisual}(\cdot)\) learns attention weights and produces attentive spatial features \(Z_{as}=\{f_{as}=\text{AttenVisual}(f)\in\mathbb{R}^{d}\mid f\in F_{s}\}\), i.e., attentive convolutional 2D features. In the segment-level case, the attentive feature generator learns attention weights and produces attentive spatio-temporal features \(Z_{ast}=\{f_{ast}=\text{AttenVisual}(f)\in\mathbb{R}^{d}\mid f\in F_{st}\}\), i.e., attentive convolutional 3D features.
Figure 2: Flowchart of the proposed self-supervision method for query-based video summarization. The model is pre-trained by the textual-spatial features from the Mutual Attention Mechanism and pseudo segment-level labels. The completely trained video summary generator exploits the fully-connected layer to produce a frame-level score vector for the given input video and outputs the final query-dependent video summary.
### Mutual Attention
We observe that textual queries do not always help the model performance due to the interactions between the video and query inputs not being modelled effectively. In this work, a mutual attention mechanism MutualAtten\((\cdot)\) is introduced to address this issue and model the interactive information between the video and query. The mutual attention \(Z_{ma}\) performs a one-by-one (\(1\times 1\)) convolution, i.e., convolutional attention. \(Z_{ma}=\text{MutualAtten}(Z_{ta}\odot Z_{as}\odot Z_{ast})\), where \(Z_{ta}\) indicates textual attention and \(\odot\) denotes the Hadamard product.
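A short sketch of this fusion step follows, assuming pre-computed attentive textual (\(Z_{ta}\)), spatial (\(Z_{as}\)), and spatio-temporal (\(Z_{ast}\)) features of a common dimensionality; the tensor layout and the `Conv1d` realization of the \(1\times 1\) convolution are our illustrative choices.

```python
import torch.nn as nn


class MutualAttention(nn.Module):
    """Sketch of mutual attention: Hadamard product of the three feature
    streams followed by a 1x1 (convolutional attention) layer."""

    def __init__(self, dim):
        super().__init__()
        self.conv1x1 = nn.Conv1d(dim, dim, kernel_size=1)

    def forward(self, z_ta, z_as, z_ast):            # each: (batch, frames, dim)
        fused = z_ta * z_as * z_ast                   # element-wise (Hadamard) product
        fused = fused.transpose(1, 2)                 # Conv1d expects (batch, dim, frames)
        return self.conv1x1(fused).transpose(1, 2)    # Z_ma
```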
### Pseudo Segment-level Label Generation
Let \(S_{f}\) be a set of human experts' frame-level score annotations and \(P\) a pseudo score annotation generator that maps frame-level human expert scores to a segment-level pseudo score.
In [4], the authors empirically find that a two-second segment is suitable for capturing local context of a video as it achieves good visual coherence. Based on this observation, in this work, the proposed pseudo label generator \(P\) is designed to generate a segment-level score every two seconds. In practice, since the generated pseudo score annotations are not validated by human experts, they might contain noisy or biased information. Based on [51], the Mean function is one of the effective ways to reduce the noise contained in the segment-level pseudo label. Hence, the Mean function is used to design the proposed pseudo label generator \(P\) to produce the mean score \(S_{\text{mean}}=P(S_{f})=\text{Mean}(S_{f})\), i.e., the two-second segment-level pseudo score label. In the training phase, compared with the frame-level label, the mean-based pseudo segment label \(S_{\text{mean}}\) is used not only for spatial supervision but also for temporal supervision. The temporal supervision with the segment-level pseudo annotations improves the query-based video summarization model performance.
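A sketch of the generator \(P\) is shown below; the frame-rate argument and the handling of a possibly shorter final segment are assumptions made for illustration.

```python
import numpy as np


def generate_segment_pseudo_labels(frame_scores, fps, segment_seconds=2.0):
    """Average human-defined frame-level scores over consecutive two-second
    segments to obtain the mean-based pseudo segment labels S_mean."""
    frames_per_segment = max(1, int(round(fps * segment_seconds)))
    scores = np.asarray(frame_scores, dtype=float)
    return np.array([
        scores[start:start + frames_per_segment].mean()   # Mean reduces label noise
        for start in range(0, len(scores), frames_per_segment)
    ])
```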
### Loss Function
According to [2], query-based video summarization can be modeled as a classification problem. Thus, in this work, the categorical cross-entropy loss function is adopted to build the proposed approach:
\[\text{Loss}=-\frac{1}{N}\sum_{i=1}^{N}\sum_{c=1}^{C}\mathbf{1}_{y_{i}\in C_{c }}\text{log}(P_{\text{model}}\left[y_{i}\in C_{c}\right]), \tag{1}\]
where \(N\) indicates the number of observations, \(C\) denotes the number of categories, \(\mathbf{1}_{y_{i}\in C_{c}}\) is an indicator function of the \(i\)-th observation belonging to the \(c\)-th category, and \(P_{\text{model}}[y_{i}\in C_{c}]\) is the probability predicted by the model for the \(i\)-th observation to belong to the \(c\)-th category.
## 4 Experiments and Analysis
### Datasets and evaluation metrics
**Datasets.** TVSum [4] is a commonly used dataset for traditional video summarization, containing only the video input. However, authors of [17, 18] consider TVSum metadata, e.g., video title, as a text-based query input to generate the query-dependent video summary. In our experiments, the TVSum dataset is randomly divided into 40/5/5 videos for training/validation/testing, respectively. The video length ranges from 2 to 10 minutes. The human expert score labels range from 1 to 5, and are annotated with 20 frame-level responses per video [18].

The SumMe [5] dataset is randomly divided into 19 videos for training, 3 videos for validation, and 3 videos for testing. The video duration in SumMe ranges from 1 to 6 minutes. In SumMe, the human expert annotation score ranges from 0 to 1. SumMe is not designed for query-based video summarization, so there is no query input when a model is evaluated on this dataset.

QueryVS [2] is an existing dataset designed for query-based video summarization. In our experiments, the QueryVS dataset is separated into 114/38/38 videos for training/validation/testing, respectively. The video length in QueryVS ranges from 2 to 3 minutes, and every video is retrieved based on a given text-based query.
To validate the proposed query-based video summarization method, three segment-level datasets are created based on the above frame-level datasets. Both the segment-level dataset, i.e., for pre-training, and the frame-level dataset, i.e., the target dataset, are used to conduct our experiments.
**Evaluation metric.** Based on [4, 5, 17, 18, 52], the \(F_{\beta}\)-score with the hyper-parameter \(\beta=1\) is a commonly used metric for assessing the performance of supervised video summarization approaches. It is based on measuring the agreement between the predicted score and ground truth score provided by the human expert. The \(F_{\beta}\)-score is defined as: \(F_{\beta}=\frac{1}{N}\sum_{i=1}^{N}\frac{(1+\beta^{2})\times p_{i}\times r_{i} }{(\beta^{2}\times p_{i})+r_{i}}\), where \(r_{i}\) indicates \(i\)-th recall, \(p_{i}\) indicates \(i\)-th precision, \(N\) indicates number of \((r_{i},p_{i})\) pairs, "\(\times\)" denotes scalar product, and \(\beta\) is used to balance the relative importance between recall and precision.
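A minimal sketch of this metric, assuming the per-annotator precision/recall pairs have already been computed, reads:

```python
import numpy as np


def f_beta_score(precisions, recalls, beta=1.0):
    """Average F_beta over N (precision, recall) pairs; beta=1 gives the F_1-score."""
    p = np.asarray(precisions, dtype=float)
    r = np.asarray(recalls, dtype=float)
    f = (1 + beta ** 2) * p * r / (beta ** 2 * p + r + 1e-12)  # small eps avoids 0/0
    return float(f.mean())
```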
### Experimental settings
In the experiments, a 2D ResNet-34 network pre-trained on the ImageNet database [55] is adopted to generate frame-level features for each input video. The \(512\) features are extracted from the visual layer one layer below the classification layer. A 3D ResNet-34 pre-trained on the Kinetics benchmark [56] is used in the experiments to generate segment-level features for each input video. The features with \(512\) dimensions are located in the visual layer which is right after the global average pooling layer.

Table 1: Ablation study of the pseudo segment-level label pre-training, semantics booster, and mutual attention mechanism using \(F_{1}\)-score.

| Pseudo label pre-training | Mutual attention | Semantics booster | **TVSum** | **QueryVS** |
|---|---|---|---|---|
| - | - | - | 47.5 | 50.8 |
| ✓ | - | - | 61.3 | 52.9 |
| - | ✓ | - | 58.9 | 52.0 |
| - | - | ✓ | 56.4 | 52.3 |
| ✓ | ✓ | ✓ | **68.4** | **55.3** |
The video lengths in the SumMe, TVSum and QueryVS datasets vary, with the maximum number of frames in a video being \(388\) for SumMe, \(199\) for QueryVS, and \(647\) for TVSum. A frame-repeating preprocessing technique [2] is followed to make all the videos in each dataset the same length.
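A sketch of one plausible reading of this preprocessing is given below: frame indices are stretched so that each video reaches the dataset's maximum length, which repeats frames of shorter videos. This is an assumption for illustration, not necessarily the exact scheme of [2].

```python
import numpy as np


def repeat_frames_to_length(frames, target_len):
    """Repeat/stretch frame indices so every video has `target_len` frames
    (e.g. 647 for TVSum)."""
    frames = np.asarray(frames)
    idx = np.floor(np.linspace(0, len(frames) - 1e-9, target_len)).astype(int)
    return frames[idx]
```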
The input size of the CNN is \(224\) by \(224\) with RGB channels. Every channel is normalized by standard deviation \(=(0.2737,0.2631,0.2601)\) and \(\text{mean}=(0.4280,0.4106,0.3589)\). PyTorch is used for the implementation and to train models for \(100\) epochs with \(1e-7\) learning rate. The Adam optimizer is used, with hyper-parameters set as \(\epsilon=1e-8\), \(\beta_{1}=0.9\), and \(\beta_{2}=0.999\).
### Ablation Study
The ablation study of the proposed method is presented in Table 1. The baseline model without pseudo segment-level label pre-training, the mutual attention mechanism, and the semantics booster performs significantly worse than approaches utilising any or all of the proposed improvements. Note that when the semantics booster is not adopted, the BoW embedding method is used.
The mutual attention mechanism helps capture the interaction between the input query and video more effectively. The pseudo segment-level label pre-training helps the proposed model have better initialization. The semantics booster captures the semantic meaning of the text-based query.
### Comparison with state-of-the-art models
The comparison with existing fully-supervised, weakly-supervised and query-based approaches is presented in Table 2. The results show the performance of our proposed method is the best on TVSum and QueryVS datasets, with a competitive performance on the SumMe dataset.
The correctness of the generated segment-level pseudo labels is not guaranteed by human experts, but it still contains useful information, e.g., better temporal information, to supervise the proposed model during pre-training. In weakly-supervised methods, although the correctness of the coarse labels, e.g., video-level label, is guaranteed by human experts, it is still not good enough to boost the model performance better than our proposed method. In query-based summarization methods, although the other modality is used to help the model performance, the effectiveness of the multi-modal feature fusion could limit the performance improvement.
Randomly selected qualitative results are shown in Fig. 3.
## 5 Conclusion
In this work, a new query-based video summarization approach is proposed. The method is based on the self-supervision of segment-level pseudo scores, semantics booster, and a mutual attention mechanism. Additionally, three segment-level video summarization datasets for self-supervision are proposed based on existing small-scale query-based video summarization datasets. Experimental results show the mean-based segment-level pseudo labels provide effective temporal supervision. The proposed approach achieves state-of-the-art performance in terms of the \(F_{1}\)-score. Nowadays, video content is growing at an ever-increasing speed and beyond the capacity of an individual for full comprehension. In such cases, the proposed query-based video summarization method has the potential to improve the efficiency of video exploration.
## 6 Acknowledgments
This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 765140.
Table 2: Comparison with state-of-the-art video summarization methods based on the \(F_{1}\)-score, best highlighted in bold. ‘-’ denotes unavailability from previous work.

| **Model** | **Method** | **TVSum** | **SumMe** | **QueryVS** |
|---|---|---|---|---|
| vsLSTM [11] | Fully supervised | 54.2 | 37.6 | - |
| H-RNN [12] | Fully supervised | 57.7 | 41.1 | - |
| HSA-RNN [13] | Fully supervised | 59.8 | 44.1 | - |
| iPTNet [14] | Fully supervised | 63.4 | 54.5 | - |
| SMLD [53] | Fully supervised | 61.0 | 47.6 | - |
| SMN [54] | Fully supervised | 64.5 | **58.3** | - |
| FPSVF [36] | Weakly supervised | - | 41.9 | - |
| WS-HRL [39] | Weakly supervised | 58.4 | 43.6 | - |
| DSSE [17] | Query based | 57.0 | - | - |
| DQSN [18] | Query based | 58.6 | - | - |
| QueryVS [2] | Query based | - | - | 41.4 |
| GPT2MVS [3] | Query based | - | - | 54.8 |
| Ours | Query based | **68.4** | 52.4 | **55.3** |
Figure 3: Randomly selected qualitative results of the proposed method. Selected frames from the ground truth frame-based score annotations for the input video are highlighted in gray, with red representing the frames not selected. Frames selected for the query-dependent video summary are highlighted in green. \(217\) denotes the video length before video preprocessing and \(647\) denotes the video length after the video preprocessing. |
2310.03545 | Distribution-free risk assessment of regression-based machine learning
algorithms | Machine learning algorithms have grown in sophistication over the years and
are increasingly deployed for real-life applications. However, when using
machine learning techniques in practical settings, particularly in high-risk
applications such as medicine and engineering, obtaining the failure
probability of the predictive model is critical. We refer to this problem as
the risk-assessment task. We focus on regression algorithms and the
risk-assessment task of computing the probability of the true label lying
inside an interval defined around the model's prediction. We solve the
risk-assessment problem using the conformal prediction approach, which provides
prediction intervals that are guaranteed to contain the true label with a given
probability. Using this coverage property, we prove that our approximated
failure probability is conservative in the sense that it is not lower than the
true failure probability of the ML algorithm. We conduct extensive experiments
to empirically study the accuracy of the proposed method for problems with and
without covariate shift. Our analysis focuses on different modeling regimes,
dataset sizes, and conformal prediction methodologies. | Sukrita Singh, Neeraj Sarna, Yuanyuan Li, Yang Li, Agni Orfanoudaki, Michael Berger | 2023-10-05T13:57:24Z | http://arxiv.org/abs/2310.03545v1 | # Distribution-free risk assessment of regression-based machine learning algorithms
###### Abstract
Machine learning algorithms have grown in sophistication over the years and are increasingly deployed for real-life applications. However, when using machine learning techniques in practical settings, particularly in high-risk applications such as medicine and engineering, obtaining the failure probability of the predictive model is critical. We refer to this problem as the risk-assessment task. We focus on regression algorithms and the risk-assessment task of computing the probability of the true label lying inside an interval defined around the model's prediction. We solve the risk-assessment problem using the conformal prediction approach, which provides prediction intervals that are guaranteed to contain the true label with a given probability. Using this coverage property, we prove that our approximated failure probability is conservative in the sense that it is not lower than the true failure probability of the ML algorithm. We conduct extensive experiments to empirically study the accuracy of the proposed method for problems with and without covariate shift. Our analysis focuses on different modeling regimes, dataset sizes, and conformal prediction methodologies.
## 1 Introduction
In safety-critical applications, it is crucial that the ML modelling errors stay within certain limits. One is then interested in computing the probability of the errors being larger than these limits. Consider, for instance, an ML model that predicts the health of a battery [1; 2]. A battery owner using such a model would then be interested in the probability of the modelling error being larger than a threshold. This would help the owner assess the extent to which a battery's health might be jeopardized, in case the model under-performs. Similar scenarios could also arise in safety-critical medical applications where a model predicting medicine dosage cannot be more than \(10\%\) inaccurate [3]. Solving this critical challenge of ML risk evaluation can have drastic implications on the degree of adoption of such algorithms in practice. We refer to this challenge as the risk-assessment problem.
### Risk-assessment: problem formulation
We formalize the above problem mathematically. Consider a given pre-defined interval \(\mathcal{I}(X)=[a_{-}(X),a_{+}(X)]\), where \(a_{\pm}(X)\in\mathbb{R}\) and \(a_{-}(X)<a_{+}(X)\). The risk-assessment problem seeks the miscoverage rate \(\alpha_{\mathcal{I}}\) that approximates \(\mathbb{P}(Y\not\in\mathcal{I}(X))\), where \(X\in\mathbb{R}^{d}\) and \(Y\in\mathbb{R}\) denote the input and output, respectively. Equivalently,
\[\text{\it Risk assessment task:}\qquad\text{Given }\mathcal{I}(X)\text{ find }\alpha_{\mathcal{I}}:\mathbb{P}(Y\in\mathcal{I}(X))\geq 1- \alpha_{\mathcal{I}}, \tag{1}\]
We require two properties from \(\alpha_{\mathcal{I}}\). Firstly, we require accurate risk-assessment. For a reliable risk-assessment, the coverage \(1-\alpha_{\mathcal{I}}\) should be an accurate approximation of \(\mathbb{P}(Y\in\mathcal{I}(X))\). Secondly, we require conservative risk-assessment. The probability \(\mathbb{P}(Y\in\mathcal{I}(X))\) should be bounded from below by \(1-c\alpha_{\mathcal{I}}\), which ensures that we do not over-estimate this probability.\({}^{1}\) Note that conservative risk-assessment does not necessarily provide an accurate risk-assessment or vice-versa. For example, \(\alpha_{\mathcal{I}}=1\) - i.e. a \(100\%\) failure rate - results in a conservative risk-assessment for any ML model but for most ML models, provides inaccurate risk-assessment.
Footnote 1: The value of \(c\) would depend on the prediction interval approach—details discussed later.
We seek a distribution-free solution to the risk-assessment problem. The following argument motivates our choice. For classical probabilistic models with assumptions about the data distribution, achieving the risk assessment task is straightforward. For example, consider a linear regression model fitted by the least-squares method. If we assume the prediction errors \(\hat{\epsilon}=Y-\hat{Y}\) given by the predictor \(\hat{Y}\in\mathbb{R}\) follow a normal distribution \(P_{\hat{\epsilon}}\) and are independent of \(X\), we immediately get \(\alpha_{I}=\int_{Y\not\in I(X)}dP_{\hat{\epsilon}}\). However, in many practical cases such assumptions do not hold. For solving the risk-assessment task across many use cases, we do not make distributional assumptions, i.e. we work in a distribution-free setting. To this end, we exploit distribution-free prediction interval generation.
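For contrast with the distribution-free setting pursued here, the sketch below shows how simple the risk-assessment task becomes under such a parametric assumption; the function name and the known noise level `sigma` are illustrative assumptions.

```python
from scipy.stats import norm


def normal_failure_rate(y_hat, a_minus, a_plus, sigma):
    """alpha_I = P(Y not in [a_-, a_+]) if Y - y_hat ~ N(0, sigma^2), independent of X."""
    coverage = norm.cdf(a_plus - y_hat, scale=sigma) - norm.cdf(a_minus - y_hat, scale=sigma)
    return 1.0 - coverage
```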
### Prediction interval generation
In prediction interval generation, we are given a miscoverage level \(\alpha\) and we compute a corresponding prediction interval \(\mathcal{T}(X;\alpha)\). We leverage this to solve our risk-assessment problem. The idea is to generate nested prediction intervals with varying miscoverage rates and select the largest prediction interval that fits inside the given interval \(\mathcal{I}(X)\). The miscoverage of this largest interval then provides a solution to the risk-assessment problem--see Subsection 3.1 for details. Furthermore, as discussed in Subsection 3.1, this nested property helps us recover a conservative risk-assessment.
The problem of prediction interval generation can be precisely posed as
\[\text{\it Prediction interval computation:}\qquad\text{Given }\alpha\text{ find }\mathcal{T}(X;\alpha):\mathbb{P}(Y\in\mathcal{T}(X;\alpha))\geq 1-c\alpha, \tag{2}\]
where \(X\in\mathbb{R}^{d}\) and \(Y\in\mathbb{R}\) represent the input and label, respectively, and the value of \(c\) depends upon the method under consideration. Note that the probability in the above expression is not conditional over a particular test point but instead, marginal--it captures the randomness over the entire test set. Indeed, in general, one could prove that conditional coverage cannot be achieved [4]. This point will be crucial when we later compare our method to previous works.
Our need for distribution-free risk-assessment motivates choosing conformal prediction (CP) as our prediction interval generation technique--Section 2 provides a brief overview of CP. CP relies on a (so-called) conformal score, which encapsulates the heuristic uncertainty of a model--an example being the residual for a regression algorithm. The prediction interval has provable coverage properties, i.e. it satisfies the lower bound on the probability given in Equation 2 [5; 6]. The quality of the prediction interval is largely determined by the choice of the score function [7; 8; 9; 10; 11]. The CP technique is model agnostic and can be applied to both regression [12; 13] and classification problems [14; 15]. Although the initial work was limited to exchangeable datasets [5; 16], the technique has been recently extended to non-exchangeable data [17; 18]. We briefly review the CP technique in Section 2.
### Previous work on distribution-free risk assessment
In a distribution-free setting, to the best of our knowledge, only the authors in [19] solve the risk assessment problem using CP. They refer to their CP algorithm as JAW, which provides the
marginal coverage property described in Equation 2. Subsection 3.1 provides the details. The main shortcomings are summarised here. Firstly, the JAW-based risk assessment approach - although claimed to be conservative in the sense defined in Section 1 - is not conservative theoretically. Secondly, the JAW-based approach disregards the randomness in the solution to the risk-assessment task and thereby, compared to our solution, provides a crude high-variance solution to the risk assessment problem - Subsection 3.2 elaborates further. Lastly, authors in [19] do not numerically analyse the accuracy of their solution to the risk-assessment problem. An AUC-type approach was explored and as yet, it is unclear how this relates to the accuracy or to how conservative the risk-assessment is.
### Contributions
Our contributions, which cater to the problems encountered by the JAW-based approach, are summarized as follows. Firstly, we formalize the risk-assessment task and we propose a general-purpose framework to solve the risk-assessment task for regression problems that leverages CP techniques. Using the coverage property of the CP technique, we prove that our risk-assessment is conservative in the sense described in Section 1. We also observe experimentally that our risk-assessment is conservative. Secondly, we identify the randomness in the solution to the risk-assessment task and capture it accurately via a hold-out set. Our hold-out set does not require label information and thus, could also leverage generative-ML techniques [20]. Lastly, to assess the accuracy of our risk-assessment algorithm, we propose a comprehensive set of computational experiments on problems with and without covariate shifts. We assess the influence of model-type, data size, and CP technique on the proposed algorithm.
## 2 Background
As mentioned in Section 1, we choose CP as our prediction interval generation technique. This section briefly summarises CP in the covariate shift setting. This setting is more general and more practically relevant than the i.i.d setting (or the weaker exchangeability setting) considered earlier by the CP literature. We start with introducing covariate shift.
### Covariate shift
The covariate shift problem has gained attention within the field of uncertainty quantification in recent studies [17; 18; 21; 22]. This topic has also been highlighted in various ML applications, specifically in the context of health-related applications [23; 24; 25]. Under covariate shift, the distribution for \(Y|X\) remains the same under training and testing. However, the distribution for \(X\) could change. Let \(P_{Y|X}\) and \(P_{X}\) represent the distribution for \(Y|X\) and \(X\), respectively, under training. Under testing, the distribution for \(X\) changes to \(\tilde{P}_{X}\). The training and the testing data points are sampled from their respective distributions independently. This data setting clearly violates the important data exchangeability assumption that CP is based on. To adapt the standard CP techniques for covariate shift, Tibshirani et al. [17] proposed the concept of _weight exchangeability_, and proved that if \(\tilde{P}_{X}\) is absolutely continuous with respect to \(P_{X}\), the data under the covariate shift are weighted exchangeable. By utilizing a weight function, i.e., the likelihood ratio of the testing covariate distribution over the training one \(w(x)=\mathrm{d}\tilde{P}_{X}(x)/\mathrm{d}P_{X}(x)\), the weighted CP intervals can provide a valid coverage guarantee under covariate shift [17]. We present more details in the next section.
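In practice \(w(x)\) is unknown; a common heuristic (one option among several, and an assumption on our part rather than a prescription of [17]) is to estimate the likelihood ratio with a probabilistic classifier that separates test from training covariates:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression


def estimate_covariate_shift_weights(x_train, x_test):
    """Approximate w(x) = dP~_X(x)/dP_X(x) via the odds of a train-vs-test classifier."""
    X = np.vstack([x_train, x_test])
    y = np.concatenate([np.zeros(len(x_train)), np.ones(len(x_test))])
    clf = LogisticRegression(max_iter=1000).fit(X, y)

    def w(x):
        p_test = clf.predict_proba(np.atleast_2d(x))[:, 1]
        odds = p_test / np.clip(1.0 - p_test, 1e-12, None)
        return odds * len(x_train) / len(x_test)   # rescale by sample sizes

    return w
```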
### Conformal Prediction (CP)
We restrict ourselves to the weighted split-CP [17] and JAW [19] approaches which are applicable to the co-variate shift setting. We first discuss the split-CP technique. Consider a score-function \(S:\mathbb{R}^{d}\times\mathbb{R}\rightarrow\mathbb{R}\), and \(S(x,y)=|y-\mu(x)|\), where \(\mu(x):\mathbb{R}^{d}\rightarrow\mathbb{R}\) represents our model's approximation at point \(x\). Let \(\mathcal{Z}=\left\{Z_{i}=\left(X_{i},Y_{i}\right)\right\}_{i=1,\ldots,n}\) represent a set of calibration points that are independent from the set used to train \(\mu\). For an input point \(x\), the weighted split-CP prediction interval is then defined as \(\mathcal{T}(x;\alpha)=\mu(x)\pm Q_{\alpha}^{+}\{p_{i}^{w}(x)\delta_{S(X_{i},Y _{i})}\}\). For notational simplicity, we
suppress the dependence of \(\mathcal{T}\) on \(\mathcal{Z}\). The weights \(p_{i}^{w}(x)\) and \(p_{n+1}^{w}(x)\) are defined as
\[p_{i}^{w}(x)=\frac{w(X_{i})}{\sum_{j=1}^{n}w(X_{j})+w(x)},i=1,\cdots,n,\quad p_{n +1}^{w}(x)=\frac{w(x)}{\sum_{j=1}^{n}w(X_{j})+w(x)}, \tag{3}\]
where \(w(X)\) is defined in Subsection 2.1. The term \(Q_{\alpha}^{+}\{v_{i}\}\) is the \(\lceil(1-\alpha)(n+1)\rceil\)-th smallest value of \(v_{1},v_{2},\ldots,v_{n}\), and \(Q\) is the quantile function under the empirical distribution of the values \(v_{1},\cdots,v_{n}\). The \(\delta_{v}\) represents the point mass distribution at point \(v\). The idea behind introducing the weight functions is weight exchangeability, allowing one to use exchangeability-based tools from standard CP[17].
The JAW approach collects samples of the score function differently. It considers a leave-one-out approach on the training data set. Furthermore, let \(\mu_{-i}\) represent a model trained on all training points other than the i-th one. Samples of the score function are then defined as \(S(X_{i},Y_{i})=|Y_{i}-\mu_{-i}(X_{i})|\). Furthermore, in the formulae for the prediction interval given above, JAW replaces \(\mu\) by \(\mu_{-i}\). The prediction interval for JAW then reads \(\mathcal{T}(x;\alpha)=\left[Q_{\alpha}^{-}\left\{p_{i}^{w}(x)\delta_{\mu_{-i}( x)-S(X_{i},Y_{i})}\right\},Q_{\alpha}^{+}\left\{p_{i}^{w}(x)\delta_{\mu_{-i}(x)+S(X _{i},Y_{i})}\right\}\right]\) where \(Q_{\alpha}^{-}\{v_{i}\}\) is the \(\lceil\alpha(n+1)\rceil\)-th smallest value of \(v_{1},v_{2},\ldots,v_{n}\). For simplicity, we collectively express the above two prediction intervals as
\[\mathcal{T}(x;\alpha)=\left[Q_{\alpha}^{-}\left\{p_{i}^{w}(x)\delta_{V_{i}^{- }(X)}\right\},Q_{\alpha}^{+}\left\{p_{i}^{w}(x)\delta_{V_{i}^{+}(X)}\right\} \right], \tag{4}\]
where \(V_{i}^{+}(X)\) and \(V_{i}^{-}(X)\) are the upper and lower point masses, respectively, and read
\[V_{i}^{\pm}(X):=\mu_{\square}(X)\pm S(X_{i},Y_{i}). \tag{5}\]
The quantity \((\square)\) is a placeholder which could either be empty or \(-i\) for split-CP and JAW, respectively. Recall that \(-i\) represents that the \(i\)-th training point was dropped while training \(\mu_{-i}\).
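A sketch of the weighted split-CP instance of Equation 4 is given below (for JAW, \(\mu\) would be replaced by the leave-one-out models \(\mu_{-i}\)); the helper names and the handling of the extra point mass at \(+\infty\) follow the usual weighted-quantile construction and are written here only as an illustration.

```python
import numpy as np


def weighted_split_cp_interval(x, mu, scores, x_calib, w, alpha):
    """Weighted split-CP interval mu(x) +/- weighted (1 - alpha) quantile of the
    calibration scores S(X_i, Y_i), with mass p_{n+1}^w(x) placed at +infinity."""
    w_cal = np.array([float(np.ravel(w(xi))[0]) for xi in x_calib])
    w_test = float(np.ravel(w(x))[0])
    p = np.append(w_cal, w_test) / (w_cal.sum() + w_test)   # p_i^w(x), p_{n+1}^w(x)

    order = np.argsort(scores)
    sorted_scores = np.append(np.asarray(scores, dtype=float)[order], np.inf)
    sorted_p = np.append(p[:-1][order], p[-1])
    cum = np.cumsum(sorted_p)
    idx = min(int(np.searchsorted(cum, 1.0 - alpha)), len(sorted_scores) - 1)
    q = sorted_scores[idx]                                   # weighted quantile

    center = float(np.ravel(mu(x))[0])
    return center - q, center + q
```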
The following properties are noteworthy. Firstly, for exchangeable datasets, substituting \(w=1\), the weighted split-CP and the JAW intervals reduce to prediction intervals that correspond to the standard split-CP and the Jackknife+ intervals, respectively. Secondly, one can show that the above prediction intervals are nested in the sense that
\[\mathcal{T}(X;\alpha_{1})\subseteq\mathcal{T}(X;\alpha_{2}),\quad\forall \alpha_{1}\geq\alpha_{2}. \tag{6}\]
The above property would be helpful later during our risk-assessment formulation. Thirdly, both the weighted split-CP and JAW have the coverage property, which, for later convenience, we summarize below.
**Theorem 2.1**.: _Under the assumptions: a) data under co-variate shift; and b)\(\tilde{P}_{X}\) is absolutely continuous with respect to \(P_{X}\), the interval predictions resulting from weighted split-CP and JAW satisfy \(\mathbb{P}(Y\in\mathcal{T}(X;\alpha))\geq 1-c\alpha,\) where \(c\) equals \(1\) and \(2\) for split-CP and JAW, respectively._
_Proof_: See [17; 19]. \(\square\)
## 3 Proposed approach: theoretical properties and algorithm
### Nested prediction interval generation
We discuss how nested prediction intervals help us solve the risk-assessment problem. We generate nested prediction intervals that are contained inside the pre-defined interval \(\mathcal{I}(X)\) and use the coverage of these prediction intervals to solve the risk-assessment problem. Building upon the idea from [19], we express the nested interval generation as follows.
For any test input \(X\) and a calibration set \(\mathcal{Z}\), we seek a miscoverage \(\alpha(X,\mathcal{Z})\) such that the prediction interval (given in Equation 4), at the test input, is contained inside the pre-defined interval \(\mathcal{I}(X)=[a_{-}(X),a_{+}(X)]\). Equivalently,
\[\alpha\left(X,\mathcal{Z}\right):=\min_{\alpha^{\prime}}\{\alpha^{\prime}: \mathcal{T}(X;\alpha^{\prime})\subseteq\mathcal{I}(X)\}. \tag{7}\]
where the prediction interval \(\mathcal{T}(x;\alpha^{\prime})\) is computed using CP, which uses the calibration set \(\mathcal{Z}\). Two points motivate the above definition of \(\alpha(X,\mathcal{Z})\). Firstly, since \(\mathcal{T}(X;\alpha(X,\mathcal{Z}))\) is included in \(\mathcal{I}(X)\), we conclude that
\[\mathbb{P}(Y\in\mathcal{I}(X)|\mathcal{Z}=\mathcal{Z}_{0},X=X^{*})\geq\mathbb{P }(Y\in\mathcal{T}(X;\alpha(X,\mathcal{Z}))|\mathcal{Z}=\mathcal{Z}_{0},X=X^{* }). \tag{8}\]
where \(\mathcal{Z}_{0}\) is a realisation of the calibration set, and the randomness is over \(Y|X\). The above property subsequently provides conservative risk-assessment--see Theorem 3.2. Secondly, the minimum over \(\alpha^{\prime}\) (owing to Equation 6) ensures the largest interval \(\mathcal{T}(X;\alpha^{\prime})\) contained inside \(\mathcal{I}(X)\). This ensures the optimality of risk-assessment. Otherwise, one could choose a huge \(\alpha^{\prime}\), resulting in a small \(\mathcal{T}(X;\alpha^{\prime})\) (that would be included in \(\mathcal{I}(X)\)) but in a grossly-overconservative and inaccurate failure probability \(\mathbb{P}(Y\in\mathcal{I}(X))\).
Owing to the explicit form of the CP-generated prediction interval \(\mathcal{T}(X;\alpha)\) given in Equation 4 and the nested property in Equation 6, solving for \(\alpha(X,\mathcal{Z})\) in Equation 7 involves a simple summing of the masses of those point-masses (\(p_{i}^{w}(x)\delta_{V_{i}^{-}(X)}\) given in Equation 4) whose locations lie inside \(\mathcal{I}(X)\). This is expressed as
\[\alpha\left(X,\mathcal{Z}\right)=\max(\alpha^{-}(X),\alpha^{+}(X )),\text{where}\] \[\alpha^{-}(X)=\sum_{i=1}^{n}p_{i}^{w}(X)\mathbb{1}\{V_{i}^{-}(X) \leq a_{-}^{*}(X)\},\alpha^{+}(X)=\sum_{i=1}^{n}p_{i}^{w}(X)\mathbb{1}\left\{ a_{+}^{*}(X)\leq V_{i}^{+}(X)\right\}, \tag{9}\]
where \(a_{-}^{*}(X)\) is the smallest \(V_{i}^{-}(X)\) that is greater than \(a_{-}(X)\), and \(a_{+}^{*}(X)\) is the largest \(V_{i}^{+}(X)\) that is smaller than \(a_{+}(X)\) for \(i=1,\cdots,n\).
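A sketch implementing Equation 9 is shown below; the inputs are the point-mass locations \(V_i^{\pm}(X)\) and weights \(p_i^w(X)\), and the handling of the degenerate case where no point mass falls inside \(\mathcal{I}(X)\) is an illustrative choice.

```python
import numpy as np


def alpha_for_interval(a_minus, a_plus, v_minus, v_plus, p_weights):
    """alpha(X, Z) from Eq. (9): mass left outside the largest prediction
    interval that still fits inside [a_-(X), a_+(X)]."""
    v_minus, v_plus, p = map(np.asarray, (v_minus, v_plus, p_weights))

    lo_candidates = v_minus[v_minus > a_minus]
    hi_candidates = v_plus[v_plus < a_plus]
    a_minus_star = lo_candidates.min() if lo_candidates.size else a_minus   # a_-^*(X)
    a_plus_star = hi_candidates.max() if hi_candidates.size else a_plus     # a_+^*(X)

    alpha_minus = p[v_minus <= a_minus_star].sum()   # alpha^-(X)
    alpha_plus = p[v_plus >= a_plus_star].sum()      # alpha^+(X)
    return max(alpha_minus, alpha_plus)              # alpha(X, Z)
```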
_Remark 3.1_ (Revisiting the bound in [19]).: Authors in [19] erroneously disregard the randomness in the miscoverage \(\alpha(X,\mathcal{Z})\) and the conditionality of the probabilities in 8. They bound \(\mathbb{P}(Y\in\mathcal{T}(X;\alpha(X,\mathcal{Z}))|\mathcal{Z}=\mathcal{Z}_{0 },X=X^{*})\) from below by \(1-c\alpha(X,\mathcal{Z})\). As already explained in Subsection 1.2, this bound is erroneous because the CP techniques used in [5] could only provide marginal coverage. Indeed, in a general setting, conditional coverage could never be achieved [4].
### Solution to risk-assessment
Following the previous discussion, to achieve a conservative risk-assessment, we wish to bound the conditional probabilities in (8) from below. Since CP provides us marginal coverage, we marginalise these conditional probabilities with respect to the calibration set \(\mathcal{Z}\) and the input \(X\). This provides
\[\mathbb{P}(Y\in\mathcal{I}(X))\geq\mathbb{P}(Y\in\mathcal{T}(X; \alpha_{\mathcal{I}}))\geq 1-c\alpha_{\mathcal{I}}, \tag{10}\]
where
\[\alpha_{\mathcal{I}}:=\mathbb{E}_{\mathcal{Z},X}[\alpha(X,\mathcal{Z})]. \tag{11}\]
The last inequality in the above expression, follows from the coverage property of CP (Theorem 2.1).
We consider an unbiased approximation to \(\alpha_{\mathcal{I}}\). This lets us replace \(\alpha_{\mathcal{I}}\) in the above inequality by the average of its approximation, which results in a conservative risk-assessment. For our unbiased estimator, we consider a hold-out set \(\mathcal{Z}_{0}^{\alpha}:=\{X_{i_{\alpha}}\}_{\{i_{\alpha}=1,\ldots,m\}}\), which is independent from the training and the calibration set \(\mathcal{Z}\). We note that this set only represents the testing input distribution and does not require label information.
Using the above, we approximate \(\alpha_{\mathcal{I}}\)--and thus \(\mathbb{P}(Y\not\in\mathcal{I}(X))\)--via
\[\alpha_{\mathcal{I}}\approx\alpha_{\mathcal{I}}^{m}:=\frac{1}{m} \sum_{X\in\mathcal{Z}_{0}^{\alpha}}\alpha(X,\mathcal{Z}). \tag{12}\]
Algorithm 1 summarises the computation of \(\alpha_{\mathcal{I}}^{m}\). We name this algorithm InvCP (Inverse Conformal Prediction), as it applies the inverse of CP to compute the coverage level instead of a prediction interval. We also discuss the specific cases of this algorithm, see the details in the Supplement S.1.1.
By taking an expectation on both sides of Equation 12 and applying the definition in Equation 11, we can prove that \(\alpha_{\mathcal{I}}^{m}\) is an unbiased estimator for \(\alpha_{\mathcal{I}}\). Replacing \(\alpha_{\mathcal{I}}\) by the expected value of its estimation in Equation 10, we find that
\[\mathbb{P}(Y\in\mathcal{I}(X))\geq 1-c\mathbb{E}_{\mathcal{Z},X}\left[ \alpha_{\mathcal{I}}^{m}\right]. \tag{13}\]
Furthermore, from the law of large numbers, as \(m\to\infty\) and \(\forall\mathcal{Z}\), we find \(\alpha_{\mathcal{I}}^{m}\overset{P}{\rightarrow}\mathbb{E}_{X}[\alpha(X, \mathcal{Z})]\). We collect our findings in the result below.
**Theorem 3.2**.: _Assume that the data has co-variate shift in the sense of Subsection 2.1. Furthermore, assume that the prediction interval in the definition of \(\alpha(X,\mathcal{Z})\) (given in Equation 7) has the coverage property given in Theorem 2.1. Then \(\alpha_{\mathcal{I}}^{m}\) is an unbiased estimator for \(\alpha_{\mathcal{I}}\), and the following lower bound holds for the probability \(\mathbb{P}(Y\in\mathcal{I}(X))\):_
\[\mathbb{P}(Y\in\mathcal{I}(X))\geq 1-c\alpha_{\mathcal{I}}=1-c\mathbb{E}_{ \mathcal{Z},X}\left[\alpha_{\mathcal{I}}^{m}\right], \tag{14}\]
_where \(c\) equals \(1\) and \(2\) for split-CP and JAW, respectively. As \(m\to\infty\) and for all \(\mathcal{Z}\), the estimator \(\alpha_{\mathcal{I}}^{m}\) converges, in probability, to the expected value \(\mathbb{E}_{X}[\alpha(X,\mathcal{Z})]\)._
Proof.: The derivations in Equations 8-13 complete the proof.
_Remark 3.3_ (Risk-assessment bound for JAW).: Since we estimate \(\mathbb{P}(Y\not\in\mathcal{I}(X))\) using \(\alpha_{\mathcal{I}}^{m}\), ideally, we expect the bound \(\mathbb{P}(Y\not\in\mathcal{I}(X))\leq\mathbb{E}[\alpha_{\mathcal{I}}^{m}]\). As Theorem 3.2 dictates, this is true for split-CP. However, for JAW, we get the conservative bound \(\mathbb{P}(Y\not\in\mathcal{I}(X))\leq 2\mathbb{E}[\alpha_{\mathcal{I}}^{m}]\). This is an artifact of the coverage property of JAW (and also Jackknife+), which reads \(\mathbb{P}(Y\in\mathcal{I}(X;\alpha))\geq 1-2\alpha\); see Theorem 2.1. Nonetheless, in experiments, also for JAW, one observes \(c\approx 1\)[19]. We make the same observation for our risk-assessment in Section 4.
_Remark 3.4_ (Connection to previously proposed bounds).: In light of the above theorem, we comment on the theoretical comparison between our work and the previous work discussed in Subsection 3.1. Firstly, the authors in [19] propose the bound \(\mathbb{P}(Y\in\mathcal{I}(X))\geq 1-c\alpha(X,\mathcal{Z})\). This bound, as we discussed earlier, is erroneous for CP techniques that do not provide conditional coverage. Our analysis corrects this bound to \(\mathbb{P}(Y\in\mathcal{I}(X))\geq 1-c\mathbb{E}_{\mathcal{Z},X}\left[\alpha(X, \mathcal{Z})\right]\). Secondly, for \(m=1\), the approximator \(\alpha_{\mathcal{I}}^{m}\) given in Equation 12 is a crude high-variance approximation for \(\alpha_{\mathcal{I}}\). Furthermore, this crude approximator is the same as the approximation \(\alpha(X,\mathcal{Z})\) proposed in [19]. Therefore, the estimator in [19] can be viewed as a special case of our estimator, which better captures the randomness of the coverage estimate (defined in Equation 7) via a hold-out set.
## 4 Experimental results
The goal of this section is to empirically test the performance of the proposed risk assessment algorithm, based on different conformal prediction methods for models with varying predictive performance and dataset sizes, under both exchangeable data and covariate shift settings.
### Experimental setup
We simulated a dataset such that the ground truth for the probability of the true label belonging to a specified interval could be estimated with a high degree of accuracy. We constructed the base model using only two covariates \(X_{1}\) and \(X_{2}\) (for simplicity of calculating the true probability), with a log-transformation to establish a nonlinear relationship. The model is defined as
\[Y=X_{1}\cdot|\log(|X_{2}/100|)|+X_{2}\cdot|\log(|X_{1}/100|)|+\epsilon \tag{15}\]
where \(X_{1}\sim N(\mu_{1},\sigma_{1})\), \(X_{2}\sim N(\mu_{2},\sigma_{2})\) and \(\epsilon\sim N(0,\sigma_{\epsilon})\). For the experiment, we take the parameters \([\mu_{1},\mu_{2},\sigma_{1},\sigma_{2},\sigma_{\epsilon}]\) as \([70,40,20,10,5]\). An illustration of the relationship between \(Y\) and \(X_{1}\) and \(X_{2}\) in the absence of any noise \(\epsilon\) is included in the supplementary material (Figure S.1).
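For concreteness, a minimal sketch of this data-generating process (Equation 15 with the stated parameters) could look as follows; the random seed and the function name are arbitrary choices.

```python
import numpy as np

def simulate_data(n, mu1=70.0, mu2=40.0, sigma1=20.0, sigma2=10.0, sigma_eps=5.0, seed=0):
    # Draw covariates and noise, then apply the nonlinear model of Equation 15.
    rng = np.random.default_rng(seed)
    x1 = rng.normal(mu1, sigma1, n)
    x2 = rng.normal(mu2, sigma2, n)
    eps = rng.normal(0.0, sigma_eps, n)
    y = x1 * np.abs(np.log(np.abs(x2 / 100.0))) + x2 * np.abs(np.log(np.abs(x1 / 100.0))) + eps
    return np.column_stack([x1, x2]), y
```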
We sampled from the underlying distributions of \(X_{1},X_{2},\epsilon\) to generate a dataset of a fixed size (\(n_{total}\)). We conducted a series of \(B=100\) trials with different training and test data splits from
\(n_{total}\) (where \(n_{train}=\frac{7}{10}n_{total},n_{test}=\frac{3}{10}n_{total}\)) and fit different models to compute the predicted values on the test set for each trial. For each trial, we then applied the InvCP methods to estimate the probability that the true outcome \(Y\) falls inside \(\mathcal{I}(X)=[\hat{\mu}(X)-\tau,\hat{\mu}(X)+\tau]\) over the test set.
For the given data, the target variable \(Y\) has a mean of around \(85\) and a standard deviation of around \(20\). Based on this, we set \(\tau=10\), i.e. the sensitivity threshold of the absolute difference between the predicted and the true value is set equal to \(10\). The supplementary material (Figure S.1) includes an illustrative example of the problem for a random set of 100 test points. For each of the \(B\) trials, we then computed the coverage estimates \(1-\alpha_{T}^{m}\) following the InvCP algorithm (Supplement S.1) for the _Split conformal[12, 13]_, _Jackknife+ [26]_, and _Cross Validation+ (CV+[26])_ CP techniques. A _True probability_ is also computed as a baseline, which is the empirical probability of the true outcome \(Y\) falling inside of the given interval \(\mathcal{I}(X)\) over the test sets in all trials.
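A sketch of how this baseline can be computed from the pooled test predictions is given below; it simply measures the fraction of test outcomes falling inside \(\mathcal{I}(X)\) with \(\tau=10\).

```python
import numpy as np

def empirical_coverage(y_true, y_pred, tau=10.0):
    # Fraction of outcomes inside I(X) = [mu_hat(X) - tau, mu_hat(X) + tau].
    return float(np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred)) <= tau))
```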
### Method effectiveness
We first run the experiment using a polynomial regression model and a dataset of size 1000. The true coverage over a set of B = 100 trials and the corresponding coverage estimates obtained using the risk-assessment method based on different conformal prediction techniques are shown in Figure 1. The plot demonstrates that the average Split, Jackknife+, and CV+ estimates all lie close to the average true probability estimate, though the estimates are generally lower than the true results. This is important to note because it demonstrates that the method works effectively and gives conservative coverage estimates, in line with the methodology of risk assessment. Further, we observe that the split conformal results have much wider variability than CV+/Jackknife+, which is again in line with what we would expect, as split conformal only utilizes half the dataset for training and calibration, thereby making it more variable than CV+/Jackknife+.
We now proceed to compare how these trends are impacted across models and datasets of different sizes. We consider models of different levels of complexity, specifically linear regression [27], support vector regression (SVR) [28], K-nearest neighbours (KNN) [29] with \(k=5\), and polynomial regression [30], and data sizes increasing from 100 to 1000. These models and data sizes simulate varying predictive power of \(\hat{Y}\); the predictive performance of these models is reported in the Supplement (Table S.1). The ground-truth coverage probability of \(\mathcal{I}(X)\) therefore varies accordingly, and we test the accuracy of our risk-assessment methods under these scenarios. Figure 2 shows the results.
Across all models and dataset sizes, it is evident that all our coverage estimators are lower than the ground-truth probability on average, which confirms the conservativeness of our risk-assessment methods. In terms of accuracy, Jackknife+ provides estimates closer to the true probability on average in all simulated cases. In terms of estimation variability, Jackknife+ and CV+ have a lower range over trials than split conformal. We can see that as the size of the dataset or the predictiveness of the model increases, the spread of the coverage estimates decreases. This observation makes intuitive sense, as more data and a more predictive model should reduce variability in the estimates. Further, we observe that this improvement is more marked for split conformal compared to Jackknife+ and CV+. The latter constitutes an important result as it implies
Figure 1: Coverage estimates. _Green dashed line denotes averaged empirical coverage of \(\mathcal{I}(X)\) over 100 trials (approximated ground-truth probability). The histograms denote the distributions of coverage estimates by the InvCP algorithm based on Jackknife+, CV+, and Split over all trials._
that depending on the size of the dataset, split conformal results can be very close to those from jackknife+ or CV+.
We compare the performance of the proposed estimators (especially those based on weighted and unweighted CP intervals) under a similar experimental setup but for a dataset with covariate shift. The results show the effectiveness of the InvCP algorithm under covariate shift; see the supplementary material S.2.
## 5 Conclusions
We have shown how conformal prediction-based prediction interval techniques can be used to estimate the failure probability of an ML model under both exchangeability and covariate shift. We theoretically proved that our approach is conservative and experimentally validated its accuracy. Our experiments demonstrate the performance of the risk-assessment approach, comparing performance for different model complexities and different dataset sizes under the exchangeability and covariate shift settings (Supplement S.2.2).
The results obtained reflect the performance of the underlying model, i.e. as model quality improves, the coverage improves, and the estimates obtained are in line with this behaviour.
## Appendix A Method
### Algorithm details
We provide a walk-through of the proposed InvCP (Inverse Conformal Prediction) algorithm (Algorithm 1). For each element in the \(\alpha\)-hold-out set \(\mathcal{Z}_{0}^{\alpha}\), we compute the locations and the weights of the point masses appearing in the definition of the prediction interval given in Equation 4, i.e., we compute the weights \(p_{i}^{w}\) and the locations of the lower and the upper point masses defined in Equation 5. We then count the number of upper and lower point masses that are below and above, respectively, the endpoints of the interval \(\mathcal{I}(X)\). To account for possible co-variate shift, we weigh these point masses,
Figure 2: Comparison by the size of dataset. _Green dashed lines denote the empirical coverages of \(\mathcal{I}(X)=[\hat{\mu}(X)-10,\hat{\mu}(X)+10]\) where the predictions \(\hat{\mu}(X)\) are fitted by different models with different data sizes, respectively. The boxplots show coverage estimates by the InvCP algorithm based on Jackknife+, CV+, and Split over 100 trials. For any size of data, all methods provide smaller coverage estimates than the ground-truth probability on average._
which finally provides us with samples of \(\alpha(X,\mathcal{Z})\). Taking an average over these samples leads to the desired result.
_Remark A.1_ (Length-invariant symmetric intervals).: We consider an interval \(\mathcal{I}(X)\) of the form \(\mathcal{I}(X)=[\mu(X)-\tau,\mu(X)+\tau]\), where \(\tau>0\). This interval is symmetric around \(\mu(X)\), and its length does not change with \(X\). Furthermore, we assume that the data is exchangeable, i.e., there is no co-variate shift and, thus, the weight function \(w=1\) in Subsection 2.1. For such a case, as shown below, the miscoverage \(\alpha(X,\mathcal{Z})\) defined in Equation 7, based on a symmetric prediction interval around \(\mu(X)\) (e.g., Split-CP), is independent of \(X\) for any given \(\mathcal{Z}=\mathcal{Z}_{0}\). Consequently, no \(\alpha\)-hold-out set is required to collect samples of \(\alpha(X,\mathcal{Z})\), i.e., access to a training and calibration set is sufficient.
We further elaborate on the above claim. Under exchangeable data, the weights read \(p_{i}^{w}(X)=\frac{1}{n+1}\) for all \(i\in\{1,\ldots,n\}\). Applying these weights in Equation 9, we find
\[\alpha(X,\mathcal{Z}_{0})=\frac{1}{n+1}\sum_{i=1}^{n}\mathbb{1}\left\{S_{i} \geq\tau\right\}=\alpha_{0}(\mathcal{Z}_{0}) \tag{16}\]
where \(S\) is the score function \(S(x,y)=|y-\mu(x)|\). The sum \(\frac{1}{n+1}\sum_{i=1}^{n}\mathbb{1}\left\{S_{i}\geq\tau\right\}\) is independent of \(X\) and hence, for a given calibration set, so is \(\alpha(X,\mathcal{Z}_{0})\).
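A sketch of this special case (Equation 16) is given below; it only needs the calibration outcomes and the corresponding predictions.

```python
import numpy as np

def alpha_exchangeable(y_calib, mu_calib, tau):
    # Equation 16: scores S_i = |y_i - mu(x_i)| on the calibration set; the
    # resulting miscoverage does not depend on the test input X.
    scores = np.abs(np.asarray(y_calib) - np.asarray(mu_calib))
    return float(np.sum(scores >= tau)) / (scores.size + 1)
```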
_Remark A.2_ (Connection to earlier work, continued).: For the particular case discussed above, our estimator \(\alpha_{\mathcal{I}}^{m}\), for all \(m\), would reduce to \(\alpha_{0}(\mathcal{Z}_{0})\), which equals \(\alpha(X,\mathcal{Z}_{0})\). Thus, in this case, our estimator would be the same as that proposed in [19]. We emphasize, however, that the bound \(\mathbb{P}(Y\in\mathcal{I}(X))\geq 1-c\alpha_{0}(\mathcal{Z}_{0})\) resulting from [19] would still be erroneous since it disregards the randomness in the calibration set \(\mathcal{Z}\).
## Appendix B Experiments
### Details of experiments for exchangeable data
This section provides additional details of the experiments in our paper (Section 4). Figure 3 shows the nonlinear relationship of target variable \(Y\) and input variables \(X_{1}\) and \(X_{2}\) in the absence of noise term \(\epsilon\). Figure 4 illustrates the given interval \(\mathcal{I}(X)\) and computation of its empirical coverage, which we used as the _true probability_ in our experiments for evaluating the performance of proposed estimators. Table 1 provides the performance of the predictive models used in the experiments.
Figure 3: Relationship between \(X_{1}\), \(X_{2}\) and \(Y\) in the simulated data
### Experiments under covariate shift
We further evaluate the proposed risk assessment under the covariate shift setting in Subsection 2.1. We simulate an _exponential tilting_ between \(X_{\text{train}}\) and \(X_{\text{test}}\); similar covariate shifts are also considered in previous work [19, 17]. Let \(\tilde{w}(x)=\exp(-\beta/100\log(X_{1})+\beta/100\log(X_{2}))\), where \(\beta\) is a parameter to control the scale of covariate shift (e.g., \(\beta=0\) indicates no shift). Then, we re-sample the original testing points with the probabilities \(\tilde{w}(x)/||\tilde{w}(x)||_{1}\) to get the shifted data \(\tilde{X}_{\text{test}}\). Hence, the likelihood ratio of the covariate distributions is \(w(x)=d\tilde{P}_{X}(x)/dP_{X}(x)=\tilde{w}(x)/||\tilde{w}(x)||_{1}\), where \(\tilde{P}_{X}\) denotes the shifted test distribution. We increase the shift parameter \(\beta\) from 0 to 200 to simulate shifts from mild to strong. We use a dataset of \(n_{\text{total}}=1000\) and a polynomial predictive model. We assume the likelihood ratio of covariate distributions \(w\) is known for simplicity. In practice, a model can be fitted to estimate this likelihood ratio, and we refer to [17] for details. Apart from the methods in previous sections, we also applied the weighted CP intervals (_Weighted Jackknife+[19]_, _Weighted CV+[19]_, _Weighted Split[17]_) in the InvCP algorithm, and evaluate the performance of the generated coverage estimates. The results are shown in Figure 5.
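The shift itself is easy to reproduce. The sketch below transcribes the tilting weights and the resampling step, assuming (as with the parameters used in the experiments) that the covariates are essentially positive so that the logarithms are well defined; the seed and function name are arbitrary.

```python
import numpy as np

def tilt_and_resample(x_test, beta, seed=0):
    # Exponential tilting weights w_tilde(x) = exp(-beta/100 * log(X1) + beta/100 * log(X2)).
    rng = np.random.default_rng(seed)
    w = np.exp(-beta / 100.0 * np.log(x_test[:, 0]) + beta / 100.0 * np.log(x_test[:, 1]))
    p = w / w.sum()
    idx = rng.choice(len(x_test), size=len(x_test), replace=True, p=p)
    # Return the shifted test covariates and the normalised resampling probabilities.
    return x_test[idx], p
```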
All methods can give conservative and accurate estimates when the shift degree is small, although the split conformal-based methods have more variability than the others. As the covariate shift degree increases, only the weighted methods (weighted Jackknife+/CV+/Split) can give conservative estimates. In contrast, all unweighted methods over-estimate the coverages on average (i.e., under-estimate the risks of true outcomes falling outside of the given intervals). This shows that the weighted methods can adjust to the underlying covariate shift better than their unweighted competitors. One may also notice that the weighted methods have large variances when there is a significant covariate shift. This can be explained by the large variance of the shifted testing sets created by data-dependent weights: for each train/test split, the weights are calculated based on the test set, and a shifted test set is obtained by resampling with these simulated weights. As the weighted methods adjust to the empirical testing
\begin{table}
\begin{tabular}{|l|c c c|} \hline Model & RMSE & MAE & R2 \\ \hline Linear Regression & 10.53 & 7.46 & 0.46 \\ SVR & 9.23 & 6.49 & 0.62 \\ KNN & 6.46 & 5.05 & 0.82 \\ Polynomial Regression & 5.37 & 4.42 & 0.89 \\ \hline \end{tabular}
\end{table}
Table 1: Simulated data - Model performance metrics. _As model complexity increases from Linear Regression to Polynomial Regression, the model performance improves, as observed from the metrics in the table._
Figure 4: Prediction intervals with fixed threshold \(\tau\). _The blue intervals denote the pre-defined interval \(\mathcal{I}(X_{i})=[\hat{\mu}(X_{i})-10,\hat{\mu}(X_{i})+10],i=1,\cdots,100\). The green cross denote the outcomes of target variable \(Y\) that fall inside of \(\mathcal{I}(X_{i})\), and the red triangles denote the outcomes that fall outside of \(\mathcal{I}(X_{i})\). This plot shows the empirical marginal coverage of \(\mathcal{I}(X)\) is 0.96._
covariate distribution closely, the variances of coverage estimates increase proportionally to the variance of the empirical coverage probabilities as the shift parameter \(\beta\) increases, which is another validation of the accuracy of the weighted methods under covariate shift.
### Space-variant \(\tau\)
We have considered the probability \(Y\in[\hat{\mu}(X)-\tau,\hat{\mu}(X)+\tau]\). Here, \(\tau\) is fixed and does not change with \(Y\). However, in many real-life applications, we may wish to consider a problem setup where the user-defined threshold is not space-invariant but may be a function of \(Y\). For example, in many industry and engineering applications, error thresholds are expressed as a percentage of \(Y\), i.e., the model prediction \(\hat{\mu}\) is considered correct if the relative difference between the prediction and the true value does not exceed the defined threshold \(\tau\), i.e., \(|\frac{Y-\hat{\mu}}{\hat{\mu}}|\leq\tau\), which implies \(Y\in[\hat{\mu}-|\hat{\mu}|\tau,\hat{\mu}+|\hat{\mu}|\tau]\). We would like to note here that the above method can easily be extended to this problem with the modification that \(\tau\) will now be different for each test data point and correspond to \(\tau^{{}^{\prime}}=|\hat{\mu}|\tau\). Figure 6 shows the adaptive prediction bands for \(Y_{test}\) for the simulated dataset with \(\tau=0.175\). As we can see, the size of the prediction bands varies with the predicted value. The performance is similar to the fixed \(\tau\) case, and we provide the results for reference in Figures 7 and 8.
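A sketch of the corresponding per-point intervals is shown below; the value \(\tau=0.175\) matches the experiment above, and the function name is an illustrative choice.

```python
import numpy as np

def relative_interval(mu_hat, tau=0.175):
    # Per-point half-width tau' = |mu_hat| * tau, giving I(X) = [mu_hat - tau', mu_hat + tau'].
    mu_hat = np.asarray(mu_hat, dtype=float)
    half_width = np.abs(mu_hat) * tau
    return mu_hat - half_width, mu_hat + half_width
```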
Figure 5: Coverage estimates for varying strength of the bias parameter. _Green dashed lines denote the empirical average coverage of \(\mathcal{I}(X)=[\hat{\mu}(X)-10,\hat{\mu}(X)+10]\), where the predictions \(\hat{\mu}(X)\) are fitted by Polynomial Regression. The boxplots show coverage estimates by the InvCP algorithm based on different CP methods over 100 trials. As the bias parameter in the simulated covariate shift increases, only weighted CP methods provide conservative (smaller) coverage estimates compared with the ground-truth probability on average._
Figure 6: Prediction intervals - varying threshold \(\tau\)
## Appendix C Discussions and future work
The current paper proposes an interval-generation based method for risk assessment of regression models. We restrict this work to JAW and weighted split conformal prediction intervals as they provide theoretical coverage under covariate shift. Other conformal prediction intervals, such as Conformalized Quantile Regression [8] or Conformalizing Bayes [6], can also be applied to our framework following the proposed InvCP algorithm. However, their theoretical properties under covariate shift are unclear and need further exploration. Furthermore, the recent development of adaptive conformal inference under arbitrary distribution shifts [31] provides the potential for conducting risk assessment in an online fashion. Deriving conservative risk-assessment methods under arbitrary shifts is important as many ML models are used in fast-changing areas such as finance and economics, where market and customer behaviour can shift abruptly. In the future, we aim to extend the InvCP algorithm to improve the adaptivity of our method in these more challenging data scenarios.
Figure 8: Comparison by size of data for variant \(\tau\)
Figure 7: Comparison by model for variant \(\tau\) |
2308.08882 | Tetrahedral shape of $^{110}$Zr from covariant density functional theory
in 3D lattice space | Covariant density functional theory is solved in 3D lattice space by
implementing the preconditioned conjugate gradient method with a filtering
function (PCG-F). It considerably improves the computational efficiency
compared to the previous inverse Hamiltonian method (IHM). This new method is
then applied to explore the tetrahedral shape of $^{110}$Zr in the full
deformation space. The ground state of $^{110}$Zr is found to have a
tetrahedral shape, but the deformations $\beta_{31}$ and $\beta_{33}$ greatly
soften the potential energy surface. This effect is analysed with the
microscopic evolution of the single-particle levels near the Fermi surface
driven by the deformation. | Fangfang Xu, Bo Li, Zhengxue Ren, Pengwei Zhao | 2023-08-17T09:35:57Z | http://arxiv.org/abs/2308.08882v1 | # Tetrahedral shape of \({}^{110}\)Zr from covariant density functional theory in 3D lattice space
###### Abstract
Covariant density functional theory is solved in 3D lattice space by implementing the preconditioned conjugate gradient method with a filtering function (PCG-F). It considerably improves the computational efficiency compared to the previous inverse Hamiltonian method (IHM). This new method is then applied to explore the tetrahedral shape of \({}^{110}\)Zr in the full deformation space. The ground state of \({}^{110}\)Zr is found to have a tetrahedral shape, but the deformations \(\beta_{31}\) and \(\beta_{33}\) greatly soften the potential energy surface. This effect is analysed with the microscopic evolution of the single-particle levels near the Fermi surface driven by the deformation.
Introduction
The occurrence of spontaneous symmetry breaking leads to shapes with a variety of symmetries for nuclei. Nuclear shape can be described by the parametrization of the nuclear surface \(R(\theta,\phi)\) with a multipole expansion [1],
\[R(\theta,\phi)=R_{0}\left[1+\beta_{00}+\sum_{\lambda=1}^{\infty}\sum_{\mu=- \lambda}^{\lambda}\beta_{\lambda\mu}^{*}Y_{\lambda\mu}(\theta,\phi)\right], \tag{1}\]
where the \(\beta_{\lambda\mu}\)'s are the deformation parameters. The axially symmetric quadrupole shape, characterized by \(\beta_{20}\), has been known for a long time and gives rise to rotational excitations in nuclei [1]. In recent decades, many efforts have been devoted to studying the triaxiality [2; 3; 4; 5] and reflection asymmetry [6; 7; 8; 9] in nuclei, characterized by \(\beta_{22}\) and \(\beta_{30}\) respectively. Novel excitation modes have been predicted theoretically to identify these shapes in nuclei [1; 2; 10; 11], and many of them have been confirmed experimentally [3; 4]. Indeed, exotic shapes that violate both reflection and axial symmetries, such as tetrahedral shapes, may also exist in nuclei.
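As an illustration of Eq. (1), the sketch below evaluates the nuclear surface for a given set of deformation parameters. It is a simplified rendering (volume conservation through \(\beta_{00}\) is ignored and only the real part of each term is kept, so the conjugate \(-\mu\) partners are not written out explicitly); it is not part of the solver described later, and the default values are illustrative.

```python
import numpy as np
from scipy.special import sph_harm

def nuclear_surface(theta, phi, betas, A=110, r0=1.2):
    # theta: polar angle, phi: azimuthal angle; betas maps (lambda, mu) -> beta_{lambda mu}.
    R0 = r0 * A ** (1.0 / 3.0)
    total = 1.0
    for (lam, mu), b in betas.items():
        # SciPy's sph_harm argument order is (m, l, azimuthal, polar)
        total = total + b * np.real(sph_harm(mu, lam, phi, theta))
    return R0 * total

# A purely tetrahedral shape keeps only beta_32, e.g.
# r = nuclear_surface(np.pi / 3, 0.2, {(3, 2): 0.15})
```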
A tetrahedral shape corresponds to a finite value of \(\beta_{32}\), but vanishing values of all other \(\beta_{\lambda\mu}\)'s. The tetrahedral symmetry of nuclei is a direct consequence of the point group \(T_{d}^{D}\), which has two two-dimensional and one four-dimensional irreducible representations [12]. Due to the tetrahedral symmetry, the single-particle levels split into multiplets with degeneracies equal to the irreducible representations of the \(T_{d}^{D}\) group. A fourfold degeneracy results in large energy gaps in the single-particle spectrum, and these gaps are comparable to or even larger than the well-known spherical shell gaps. Empirically, these large gaps occur predominantly in nuclei with \(Z(N)=16,20,32,40,56,70\) and \(90\), and \(N=112,136\), and \(142\)[13; 14; 15; 16; 17; 18]. Thus, a nucleus with proton and/or neutron numbers equal to these values may have a static tetrahedral deformation, characterized by the occurrence of negative-parity bands with missing in-band \(E2\) transitions [19; 20].
Several experiments have been devoted to identifying the tetrahedral shape of nuclei. The negative-parity bands in \({}^{160}\)Yb and \({}^{154,156}\)Gd have been suggested as candidates for the rotational bands of tetrahedral nuclei [21], but the measured nonzero quadrupole moments contradict the existence of tetrahedral shapes in these nuclei [19; 20; 22]. For the further candidates \({}^{230,232}\)U [23], the possibility of tetrahedral shapes for the negative-parity bands appears difficult to reconcile with the systematics of measured quadrupole
moments for the neighboring isotone \({}^{226}\)Ra [24]. The isomeric state of \({}^{108}\)Zr has been proposed as a candidate for a tetrahedral shape isomer [25], but a measurement of the corresponding band structure is required to confirm the tetrahedral shape. \({}^{156}\)Dy has been suggested as a tetrahedral candidate nucleus [26], but this is not supported by the experimental \(B(E2)/B(E1)\) ratios of transition probabilities for the negative-parity bands [27]. In conclusion, there is still no firm experimental evidence to support the existence of tetrahedral shapes in nuclei.
The possible tetrahedral shapes in the ground or isomeric states of nuclei have been investigated with many theoretical approaches. For example, the macroscopic-microscopic (MM) model [28; 29; 30; 14; 15; 21], the algebraic cluster model [32], the reflection asymmetric shell model [33; 34], the nonrelativistic density functional theories (DFTs) [35; 36; 37; 38; 39; 40; 41; 42] and the covariant density functional theories (CDFTs) [43; 44]. The CDFT [45] is of particular interest, since it brings many advantages to describe the nuclear systems [46; 47; 48], such as the natural inclusion of the self-consistent treatment of the time-odd fields [49] and spin-orbit interactions, which can be clearly seen in the nonrelativistic reduction of the CDFT via a similarity renormalization method [50]. However, up to now, the \(V_{4}\) symmetry has always been assumed in the application of CDFT to nuclear tetrahedral shapes [43; 44].
The aim of the present work is to explore the tetrahedral shapes of nuclei in the full deformation space by solving the CDFT in three-dimensional (3D) lattice space. The CDFT in 3D lattice space has been a long-standing challenge due to the variational collapse [51] and the fermion doubling [52] problems. It has recently become available [53] with the help of the inverse Hamiltonian method (IHM) [54] and the Fourier spectral method [55]. In Ref. [56], a more efficient method, the preconditioned conjugate gradient method with a filtering function (PCG-F), is proposed to solve the nuclear Dirac equation with a given potential in 3D lattice space. In this work, the CDFT will be solved in 3D lattice space by implementing the PCG-F method [56], and this new method is then applied to explore the tetrahedral shape of \({}^{110}\)Zr in the full deformation space. Note that the ground state of \({}^{110}\)Zr was previously predicted to be tetrahedral by the MM model [28], the Skyrme DFTs [37; 28], and the multidimensionally constrained CDFT (MDC-CDFT) [43].
The paper is organized as follows: the formulae for the CDFT and the PCG-F method are briefly introduced in Sec. II. The numerical details are presented in Sec. III. Section IV is devoted to the results for tetrahedral shapes in \({}^{110}\)Zr. A summary is given in Sec. V.
Theoretical Framework
### Formalism of the CDFT
The starting point of the CDFT is a standard Lagrangian density in the point-coupling form, which can be written as [57]
\[\mathcal{L}= \bar{\psi}(i\gamma^{\mu}\partial_{\mu}-m)\psi \tag{2}\] \[-\frac{1}{2}\alpha_{S}(\bar{\psi}\psi)(\bar{\psi}\psi)-\frac{1}{2 }\alpha_{V}(\bar{\psi}\gamma^{\mu}\psi)(\bar{\psi}\gamma_{\mu}\psi)-\frac{1}{2 }\alpha_{TV}(\bar{\psi}\vec{\tau}\gamma^{\mu}\psi)\cdot(\bar{\psi}\vec{\tau} \gamma_{\mu}\psi)\] \[-\frac{1}{3}\beta_{S}(\bar{\psi}\psi)^{3}-\frac{1}{4}\gamma_{S}( \bar{\psi}\psi)^{4}-\frac{1}{4}\gamma_{V}[(\bar{\psi}\gamma^{\mu}\psi)(\bar{ \psi}\gamma_{\mu}\psi)]^{2}\] \[-\frac{1}{2}\delta_{S}\partial^{\nu}(\bar{\psi}\psi)\partial_{ \nu}(\bar{\psi}\psi)-\frac{1}{2}\delta_{V}\partial^{\nu}(\bar{\psi}\gamma^{ \mu}\psi)\partial_{\nu}(\bar{\psi}\gamma_{\mu}\psi)-\frac{1}{2}\delta_{TV} \partial^{\nu}(\bar{\psi}\vec{\tau}\gamma^{\mu}\psi)\cdot\partial_{\nu}(\bar{ \psi}\vec{\tau}\gamma_{\mu}\psi)\] \[-\frac{1}{4}F^{\mu\nu}F_{\mu\nu}-e\frac{1-\tau_{3}}{2}\left(\bar {\psi}\gamma^{\mu}\psi\right)A_{\mu},\]
where \(m\) is the nucleon mass. According to the conventional variational principle, one obtains the Dirac equation for nucleons,
\[\hat{h}(\mathbf{r})\psi_{k}(\mathbf{r})=\left[\mathbf{\alpha}\cdot\left(-i\mathbf{\nabla}-\mathbf{ V}(\mathbf{r})\right)+\beta\left(m+S(\mathbf{r})\right)+V^{0}(\mathbf{r})\right]\psi_{k}( \mathbf{r})=\varepsilon_{k}\psi_{k}(\mathbf{r}), \tag{3}\]
where \(\varepsilon_{k}\) is the single-particle energy. The single-particle Dirac Hamiltonian \(\hat{h}(\mathbf{r})\) contains the scalar \(S(\mathbf{r})\) and four-vector \(V^{\mu}(\mathbf{r})\) potentials,
\[S(\mathbf{r})= \alpha_{S}\rho_{S}+\beta_{S}\rho_{S}^{2}+\gamma_{S}\rho_{S}^{3}+ \delta_{S}\Delta\rho_{S}, \tag{4a}\] \[V^{\mu}(\mathbf{r})= \alpha_{V}j^{\mu}+\gamma_{V}(j^{\mu}j_{\mu})j^{\mu}+\delta_{V} \Delta j^{\mu}+\tau_{3}\alpha_{TV}j^{\mu}_{TV}+\tau_{3}\delta_{TV}\Delta j^{ \mu}_{TV}+e\frac{1-\tau_{3}}{2}A^{\mu}, \tag{4b}\]
where the electromagnetic field \(A^{\mu}\) is determined by Poisson's equation, and the densities and currents are defined as
\[\rho_{S}(\mathbf{r})= \sum_{k}v_{k}^{2}\bar{\psi}_{k}(\mathbf{r})\psi_{k}(\mathbf{r}), \tag{5a}\] \[j^{\mu}(\mathbf{r})= \sum_{k}v_{k}^{2}\bar{\psi}_{k}(\mathbf{r})\gamma^{\mu}\psi_{k}(\mathbf{ r}),\] (5b) \[\vec{j}^{\mu}_{TV}(\mathbf{r})= \sum_{k}v_{k}^{2}\bar{\psi}_{k}(\mathbf{r})\gamma^{\mu}\tau_{3}\psi_{ k}(\mathbf{r}),\] (5c) \[j^{\mu}_{c}(\mathbf{r})= \sum_{k}v_{k}^{2}\bar{\psi}_{k}(\mathbf{r})\gamma^{\mu}\frac{1-\tau_ {3}}{2}\psi_{k}(\mathbf{r}). \tag{5d}\]
Here, \(\tau_{3}\) is the isospin Pauli matrix with the eigenvalues \(+1\) for neutrons and \(-1\) for protons. The time component of the vector current \(j^{\mu}\) is usually denoted as the vector density \(\rho_{v}\).
For open shell nuclei, pairing correlations play an important role, and they are taken into account with the BCS method. The pairing energy functional is given by
\[E_{\rm pair}=-\sum_{\tau=n,p}\frac{G_{\tau}}{4}\int d^{3}r\kappa_{\tau}^{*}( \mathbf{r})\kappa_{\tau}(\mathbf{r}), \tag{6}\]
where \(G_{\tau}\) is the constant pairing strength and \(\kappa(\mathbf{r})\) is the pairing tensor,
\[\kappa(\mathbf{r})=2\sum_{k>0}f_{k}u_{k}v_{k}|\psi_{k}(\mathbf{r})|^{2}, \tag{7}\]
with the smooth-cutoff weight factor
\[f_{k}=\frac{\Theta(-\varepsilon_{k})}{1+\exp[(\varepsilon_{k}-\lambda_{\rm F} -\Delta E_{\tau})/\mu_{\tau}]}. \tag{8}\]
Here, the Fermi energy \(\lambda_{\rm F}\) is determined by the particle number, \(2\sum_{k>0}v_{k}^{2}=N_{\tau}\), with \(N_{\tau}\) the particle number of neutrons or protons. The cutoff parameters \(\Delta E_{\tau}=5\) MeV and \(\mu_{\tau}=\Delta E_{\tau}/10=0.5\) MeV are chosen as in Ref. [58]. \(\Theta(-\varepsilon_{k})\) equals one for bound levels and zero elsewhere, and it is introduced to exclude the continuum in the pairing window.
### Implementation of the PCG-F method
In the PCG-F method, the lowest \(\tilde{A}\) eigenstates in the Fermi sea of the Dirac equation (3) are solved iteratively starting from a set of orthonormalized guess solutions \(\psi_{k}^{(0)}\) (\(k=1,2,...,\tilde{A}\)). Here, the value of \(\tilde{A}\) is chosen to include all bound states. The trial wave function \(\psi_{k}\) is then updated iteratively,
\[\psi_{k}^{(i+1)}=\sum_{l=0}^{\tilde{A}}\left[G_{kl}^{a}X_{l}^{(i)}+G_{kl}^{b}W_ {l}^{(i)}+G_{kl}^{c}P_{l}^{(i)}\right]\qquad(i=0,1,2,...), \tag{9}\]
where \(X_{l}^{(i)}\), \(W_{l}^{(i)}\), and \(P_{l}^{(i)}\) are defined as,
\[X_{l}^{(i)}= F(\hat{h}^{(i)})\psi_{l}^{(i)}, \tag{10a}\] \[W_{l}^{(i)}= F^{4}(\hat{h}^{(i)})T_{l}^{(i)}\left[\hat{h}^{(i)}-\langle \psi_{l}^{(i)}|\hat{h}^{(i)}|\psi_{l}^{(i)}\rangle\right]\psi_{l}^{(i)},\] (10b) \[P_{l}^{(i)}= F(\hat{h}^{(i)})\left[\psi_{l}^{(i)}-\sum_{l^{\prime}=1}^{ \tilde{A}}\langle\psi_{l^{\prime}}^{(i-1)}|\psi_{l}^{(i)}\rangle\psi_{l^{ \prime}}^{(i-1)}\right]. \tag{10c}\]
The initial \(P_{l}^{(0)}\) is set to zero. The filtering operator \(F(\hat{h})\) and the preconditioner \(T_{l}\) are introduced for the sake of iteration convergence. The single-particle Dirac Hamiltonian \(\hat{h}^{(i)}\)
is constructed from the densities and currents determined by the wave functions \(\{\psi_{k}^{(i)}\}\). The coefficient matrices \(G^{a}\), \(G^{b}\), and \(G^{c}\) in Eq. (9) are chosen to minimize \(\sum_{k=1}^{\tilde{A}}\langle\psi_{k}^{(i+1)}|\hat{h}^{(i)}|\psi_{k}^{(i+1)}\rangle\) under the orthonormalization condition \(\langle\psi_{k}^{(i+1)}|\psi_{l}^{(i+1)}\rangle=\delta_{kl}\).
Similar to Ref. [56], the filtering operator \(F(\hat{h})\) and the preconditioner \(T_{l}\) read,
\[F(\hat{h}^{(i)})=\left(\hat{h}^{(i)}+2m\right)^{2}, \tag{11}\] \[T_{l}^{(i)}=\left[\hat{p}^{2}+\left(g_{l}^{(i)}m\right)^{2} \right]^{-1}, \tag{12}\]
with
\[g_{l}^{(i)}=0.15\frac{\langle\psi_{l}^{(i)}|\hat{h}^{(i)}|\psi_{l}^{(i)} \rangle}{(V^{0}+S)_{\rm min}}+0.10. \tag{13}\]
There are two criteria for the convergence of the iteration. One is that the energy dispersions \(\langle\psi_{l}^{(i)}|[\hat{h}^{(i)}]^{2}|\psi_{l}^{(i)}\rangle-\langle\psi_{ l}^{(i)}|\hat{h}^{(i)}|\psi_{l}^{(i)}\rangle^{2}\) for all occupied levels should be smaller than a certain value, e.g., \(10^{-8}\) MeV\({}^{2}\). The other one is that the differences between the mean potentials [Eqs. (4a) and (4b)] at two adjacent iterations should be smaller than a certain value. The convergence is achieved only if both criteria are satisfied.
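The structure of this block minimisation can be illustrated with ordinary linear algebra. The sketch below replaces the lattice Dirac Hamiltonian by a plain real symmetric matrix and drops the filtering operator and preconditioner (both set to the identity), so it is a generic, LOBPCG-like caricature of Eqs. (9)-(10) rather than the actual PCG-F solver; the Rayleigh-Ritz step realises the energy minimisation under orthonormality, and the first convergence criterion above is used as the stopping test.

```python
import numpy as np

def block_minimisation_sketch(h, n_states, n_iter=500, tol=1e-8, seed=0):
    """Schematic block iteration: trial space spanned by the states, residual-like
    vectors W and history vectors P, diagonalised at every step (Rayleigh-Ritz)."""
    rng = np.random.default_rng(seed)
    n = h.shape[0]
    psi = np.linalg.qr(rng.standard_normal((n, n_states)))[0]   # orthonormal guesses
    p = np.zeros_like(psi)                                       # P^(0) = 0
    for _ in range(n_iter):
        hpsi = h @ psi
        eps = np.einsum('ij,ij->j', psi, hpsi)                   # <psi_k|h|psi_k>
        disp = np.einsum('ij,ij->j', hpsi, hpsi) - eps**2        # dispersion <h^2> - <h>^2
        if disp.max() < tol:                                      # first convergence criterion
            break
        w = hpsi - psi * eps                                      # W_l ~ (h - <h>) psi_l
        trial = np.linalg.qr(np.hstack([psi, w, p]))[0]           # orthonormal trial space
        evals, evecs = np.linalg.eigh(trial.T @ h @ trial)        # small eigenproblem
        new_psi = trial @ evecs[:, :n_states]                     # lowest n_states combinations
        p = new_psi - psi @ (psi.T @ new_psi)                     # history direction P_l
        psi = new_psi
    eps = np.einsum('ij,ij->j', psi, h @ psi)
    return eps, psi

# Example: lowest 3 eigenpairs of a random symmetric matrix
# a = np.random.default_rng(1).standard_normal((200, 200)); h = (a + a.T) / 2
# energies, states = block_minimisation_sketch(h, n_states=3)
```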
### Nuclear bulk properties
From the converged wave functions, the nuclear total energy and the deformation parameters can be calculated. The total energy consists of the mean-field energy \(E_{\rm MF}\), the pairing energy \(E_{\rm pair}\) and the center-of-mass (c.m.) correction energy \(E_{\rm cm}\)
\[E_{\rm tot}=E_{\rm MF}+E_{\rm pair}+E_{\rm cm}, \tag{14}\]
where the mean-field energy \(E_{\rm MF}\) is written as
\[E_{\rm MF}= \int d^{3}r\bigg{\{}\sum_{k}v_{k}^{2}\psi_{k}^{\dagger}(\mathbf{ \alpha}\cdot\mathbf{p}+\beta m)\psi_{k} \tag{15}\] \[+ \frac{\alpha_{S}}{2}\rho_{S}^{2}+\frac{\alpha_{V}}{2}j^{\mu}j_{ \mu}+\frac{\alpha_{TV}}{2}\vec{j}_{TV}^{\mu}\cdot(\vec{j}_{TV})_{\mu}+\frac{ \beta_{S}}{3}\rho_{S}^{3}+\frac{\gamma_{S}}{4}\rho_{S}^{4}+\frac{\gamma_{V}}{4 }\left(j^{\mu}j_{\mu}\right)^{2}\] \[+ \frac{\delta_{S}}{2}\rho_{S}\Delta\rho_{S}+\frac{\delta_{V}}{2}j^ {\mu}\Delta j_{\mu}+\frac{\delta_{TV}}{2}\vec{j}_{TV}^{\mu}\cdot\Delta(\vec{j }_{TV})_{\mu}+\frac{1}{2}A_{\mu}\Delta A^{\mu}+ej_{c}^{\mu}A_{\mu}\bigg{\}}.\]
The pairing energy \(E_{\rm pair}\) is calculated following Eq. (6), and the c.m. correction energy \(E_{\rm cm}\) is considered with the microscopic c.m. correction
\[E_{\rm cm}=-\frac{1}{2mA}\langle\mathbf{P}_{\rm cm}^{2}\rangle, \tag{16}\]
with \(A\) the mass number and \(\mathbf{P}_{\rm cm}=\sum_{k}\mathbf{p}_{k}\) the total momentum in the c.m. frame.
The deformation parameters \(\beta_{\lambda\mu}\)'s are calculated with
\[\beta_{\lambda\mu}=\frac{4\pi}{3AR^{\lambda}}\int d^{3}r\rho_{v}(\mathbf{r})r^{ \lambda}Y_{\lambda\mu}, \tag{17}\]
where \(Y_{\lambda\mu}\) are the spherical harmonics and \(R=1.2\times A^{1/3}\) fm. Note that we additionally constrain the center of mass of the whole nucleus to the origin of the 3D box, and align the principal axes with the coordinate axes to remove the redundant degrees of freedom.
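On the lattice, Eq. (17) reduces to a discrete sum over grid points. A minimal sketch is given below; it assumes a uniform grid spacing and a vector density that integrates to the mass number \(A\), and uses SciPy's spherical harmonics. For a nucleus with properly aligned principal axes the imaginary part of the result should essentially vanish.

```python
import numpy as np
from scipy.special import sph_harm

def beta_lm(rho, xs, ys, zs, lam, mu, A, r0=1.2):
    # rho[i, j, k]: vector density on the 3D lattice with coordinates xs, ys, zs (in fm).
    dx = xs[1] - xs[0]
    x, y, z = np.meshgrid(xs, ys, zs, indexing='ij')
    r = np.sqrt(x**2 + y**2 + z**2)
    # Polar angle (value at the origin is irrelevant since r**lam vanishes there)
    theta = np.arccos(np.divide(z, r, out=np.zeros_like(r), where=r > 0))
    phi = np.arctan2(y, x)                                   # azimuthal angle
    ylm = sph_harm(mu, lam, phi, theta)                      # SciPy order: (m, l, azim, polar)
    R = r0 * A ** (1.0 / 3.0)
    integral = np.sum(rho * r**lam * ylm) * dx**3
    return 4.0 * np.pi / (3.0 * A * R**lam) * integral
```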
## III Numerical details
In this work, the point-coupling density functional PC-PK1 [57] is used. For the 3D lattice space, the step sizes and the grid numbers along the \(x\), \(y\) and \(z\) axes are chosen as 1 fm and 30, respectively. Similar to the Ref. [43], the neutron and proton pairing strengths \(G_{n}=-330~{}{\rm MeV}\cdot{\rm fm}^{3}\) and \(G_{p}=-430~{}{\rm MeV}\cdot{\rm fm}^{3}\), are determined by reproducing the empirical pairing gaps of \({}^{102,104}\)Zr, which are obtained with the three-point odd-even mass differences formula [59] (see Table 1).
## IV Results and discussion
We first discuss the efficiency of the PCG-F method in the self-consistent CDFT calculations for the ground state of \({}^{110}\)Zr. In Fig. 1, the maximum energy dispersion \(\left[\langle h^{2}\rangle-\langle h\rangle^{2}\right]_{\rm max}\) for the occupied single-particle states and the maximum absolute difference \(\Delta U\) between the mean potentials at two adjacent iterations are shown, in comparison with the results given
\begin{table}
\begin{tabular}{c c c c c} & \multicolumn{2}{c}{\({}^{102}\)Zr} & \multicolumn{2}{c}{\({}^{104}\)Zr} \\ & \(\Delta_{n}\) & \(\Delta_{p}\) & \(\Delta_{n}\) & \(\Delta_{p}\) \\ \hline Empirical & 1.10 & 1.54 & 1.08 & 1.53 \\ CDFT & 1.12 & 1.55 & 1.00 & 1.49 \\ \end{tabular}
\end{table}
Table 1: Pairing gaps (in MeV) calculated by the CDFT in 3D lattice space for \({}^{102,104}\)Zr, in comparison with the empirical values extracted from the three-point odd-even mass differences. The experimental masses are taken from AME2020 [60].
by the IHM. For the PCG-F method, it takes only 84 iterations to achieve the convergence, i.e., \(\left[\langle h^{2}\rangle-\langle h\rangle^{2}\right]_{\rm max}\leq 10^{-8}\ {\rm MeV}^{2}\) and \(\Delta U\leq 10^{-2}\) MeV, while it requires more than 170 iterations for the IHM to reach the same accuracy. The difference between the total energies obtained in these two methods is smaller than \(10^{-5}\) MeV. The total computational time for the PCG-F method is 386 minutes with the Intel(R) Xeon(R) CPU E5-2680, and it saves 52% of the computational time as compared with the IHM calculations. As seen in Ref. [56], compared with the IHM, the PCG-F method gives a much faster convergence in solving the Dirac equation with a given potential. The present results prove that the PCG-F method is more efficient than the IHM in the self-consistent CDFT calculations as well. This is not trivial because, during the self-consistent solution of the CDFT in 3D lattice space, the Dirac equation is not exactly solved until the self-consistency is achieved. In the following, we apply the framework of the PCG-F method to study the tetrahedral shape of \({}^{110}\)Zr.
Figure 1: (Color online.) The maximum energy dispersion for the occupied single-particle states (a), and the maximum absolute difference between the mean potentials (including the scalar and vector potentials) at two adjacent iterations (b), as functions of the iteration number for the ground state of \({}^{110}\)Zr. The solid and dotted curves respectively represent the results calculated by the PCG-F and IHM, and the corresponding computation times are also given.
Figure 2 depicts the one-dimensional potential energy curves of \({}^{110}\)Zr calculated with the CDFT in 3D lattice space by imposing different symmetry restrictions: (i) axial and reflection symmetry (AS & RS), (ii) axial symmetry and reflection asymmetry (AS & RA), (iii) \(V_{4}\) symmetry (including deformations \(\beta_{\lambda\mu}\) with even \(\mu\)), and (iv) full deformation space including all \(\beta_{\lambda\mu}\)'s (Full). There are two energy minima in all cases, i.e., the ground state at \(\beta_{20}\approx 0.00\) and a prolate minimum at \(\beta_{20}\approx 0.50\). The ground-state energy varies visibly in the calculations with different symmetry restrictions. A spherical ground state is obtained if one assumes axial and reflection symmetry. The ground-state energy is lowered by about 0.4 MeV if one releases the restriction of reflection symmetry. It is lowered further by about 0.5 MeV if the nonaxial deformation is allowed except the deformation of \(\beta_{\lambda\mu}\) with odd \(\mu\). This ground state has a tetrahedral shape (\(\beta_{20}\approx 0.00,\beta_{30}\approx 0.00,\beta_{32}\neq 0.00\)), and it is consistent with the results obtained in the previous MDC-CDFT calculations [43].
Figure 2: (Color online.) The potential energy curves of \({}^{110}\)Zr calculated with the CDFT in 3D lattice space by imposing different symmetries. The results restricted to axial symmetry and reflection symmetry (AS & RS), axial symmetry and reflection asymmetry (AS & RA), and \(V_{4}\) symmetry (including deformations \(\beta_{\lambda\mu}\) with even \(\mu\)) are represented by dotted, dashed-dotted, and dashed lines, respectively. The open circles represent the results without any symmetry restriction (Full), and the obtained ground-state shape is illustrated by the 3D image. The inset zooms in to the detailed structure of the potential energy curves near \(\beta_{20}=0\).
Thanks to the solutions in the 3D lattice space, one could remove all symmetry restrictions, and the results are shown by open circles in Fig. 2. The ground-state energy barely changes, and this implies quite small deformations beyond \(V_{4}\) symmetry, like \(\beta_{31}\) and \(\beta_{33}\). The obtained ground-state deformation parameters as well as the total energy for \({}^{110}\)Zr are listed in Table 2. The same results are found in the calculations with the same box size but a smaller step size of 0.8 fm. From the listed deformations in Table 2, one may conclude that the tetrahedral shape still exists in the full deformation space.
We further investigate the softness of the tetrahedral shape in the ground state of \({}^{110}\)Zr, against the \(\beta_{31}\) and \(\beta_{33}\) deformations. In Fig. 3, we show the potential energy surfaces in the \((\beta_{30},\beta_{32})\) and \((\beta_{31},\beta_{33})\) planes. The quadrupole deformations \(\beta_{20}\) and \(\beta_{22}\) are determined self-consistently, and their values are found to be very close to zero. In Fig. 3(a), the octupole deformations \(\beta_{31}\) and \(\beta_{33}\) are constrained to zero. It shows a well-developed tetrahedral ground state with \((\beta_{30},\beta_{32})\approx(0.00,0.15)\) and a pear-like isomeric state at \((\beta_{30},\beta_{32})\approx(0.15,0.00)\). The barrier between the two minima is about 0.5 MeV. After releasing all symmetry restrictions, as seen in Fig. 3(b), the tetrahedral ground state remains but the barrier vanishes. In fact, there is a rather flat path connecting the tetrahedral shape \((\beta_{30},\beta_{32})\approx(0.00,0.15)\) and the pear shape \((\beta_{30},\beta_{32})\approx(0.15,0.00)\) in the potential energy surface. This path can be seen more clearly in Fig. 3(c), and the nonzero \((\beta_{31},\beta_{33})\) values in the parentheses imply the importance of the \(\beta_{31}\) and \(\beta_{33}\) deformations. As seen in Fig. 3(d), the tetrahedral ground state is also very soft in the \((\beta_{31},\beta_{33})\) plane. The corresponding lowest-energy path connecting the tetrahedral shape and the pear shape, as seen in Fig. 3(e), is also very flat. All these results prove that the tetrahedral ground state in \({}^{110}\)Zr is greatly softened in the full deformation space.
For a microscopic understanding of the impacts of the \(\beta_{31}\) and \(\beta_{33}\) deformations on the tetrahedral ground state, the single-particle levels of \({}^{110}\)Zr near the Fermi surface are shown
in Figs. 4 and 5 as functions of \(\beta_{32}\) and \(\beta_{30}\), respectively. In Figs. 4(a) and 4(b), the deformations \(\beta_{31}\) and \(\beta_{33}\) are constrained to zero, and this leads to a vanishing \(\beta_{20}\). Therefore, at \(\beta_{32}=0\), the single-particle levels are degenerate according to their \(j\) values. With the increasing \(\beta_{32}\), the single-particle levels split into multiplets with degeneracies equal to the irreducible representations of the \(T_{d}^{D}\) group due to the tetrahedral symmetry. For example, the spherical levels with \(j=5/2\) are sixfold degenerate and they can be reduced to the two-dimensional irreducible representation and four-dimensional irreducible representation of the \(T_{d}^{D}\) group, and these levels split into two levels with degeneracies 2 and 4 as \(\beta_{32}\)
Figure 4: (Color online.) Single-neutron and -proton levels of \({}^{110}\)Zr as functions of \(\beta_{32}\). In panels (a) and (b), the \(\beta_{31}\) and \(\beta_{33}\) are fixed to zero, while for panels (c) and (d), all deformations are allowed. The levels are labeled by the corresponding spherical quantum number of their main component. The dashed lines represent the Fermi levels.
Figure 5: (Color online.) Same as Fig. 4, but for the single-particle levels as functions of \(\beta_{30}\) and the \(\beta_{32}\) is fixed at 0.15.
increases. This is consistent with the previous study with the MDC-CDFT in Ref. [43]. The energy gaps at \(N=70\) and \(Z=40\) grow gradually with the increasing \(\beta_{32}\), and a remarkable \(\beta_{32}\) deformation is expected for \({}^{110}\)Zr. In Figs. 4(c) and 4(d), due to the nonzero \(\beta_{31}\) and \(\beta_{33}\) values, the spherical symmetry is broken, and the single-particle levels are slightly split even at \(\beta_{32}=0\). The energy gaps at \(N=70\) and \(Z=40\) are roughly constant with the increasing \(\beta_{32}\), and this is associated with the soft tetrahedral ground state in the \(\beta_{32}\) direction.
By fixing \(\beta_{32}=0.15\), the single-particle levels of \({}^{110}\)Zr are shown in Fig. 5 as functions of \(\beta_{30}\). In Figs. 5(a) and 5(b), a sharp decline of the energy gaps at \(N=70\) and \(Z=40\) is found with the increasing \(\beta_{30}\), and this leads to a tetrahedral ground state with vanishing \(\beta_{30}\) for \({}^{110}\)Zr. In contrast, the energy gaps at \(N=70\) and \(Z=40\), as seen in Figs. 5(c) and 5(d), vary only gently with the increasing \(\beta_{30}\), and this is reflected by the soft nature of the tetrahedral ground state in the \(\beta_{30}\) direction.
## V Summary
In summary, the CDFT has been solved in 3D lattice space by implementing the PCG-F method. It considerably improves the computational efficiency compared to the previous inverse Hamiltonian method. Based on this framework, the ground state of \({}^{110}\)Zr has been studied and found to have a tetrahedral shape with \(\beta_{20}\approx 0.00\), \(\beta_{30}\approx 0.00\) and \(\beta_{32}\neq 0.00\). While this is consistent with the results obtained in the previous MDC-CDFT calculations [43], the present work shows that the tetrahedral ground state of \({}^{110}\)Zr can still exist in the full deformation space, but it is greatly softened. Specifically, with the inclusion of the \(\beta_{31}\) and \(\beta_{33}\) deformations, the potential energy surface around the tetrahedral minimum becomes much softer in both the \(\beta_{32}\) and \(\beta_{30}\) directions. The softness of the tetrahedral ground state should be associated with the roughly constant single-particle energy gaps at \(N=70\) and \(Z=40\).
The softness of the tetrahedral shape due to the \(\beta_{31}\) and \(\beta_{33}\) deformations might exist in other nuclei as well. To search for well-developed tetrahedral states in nuclei, the present work has demonstrated the importance of calculations performed in the full deformation space. Based on the CDFT in 3D lattice space, works along this line are in progress.
## Acknowledgments
We thank Y. K. Wang for helpful discussions. This work was partly supported by the National Natural Science Foundation of China (Grants No. 12070131001, No. 11935003, No. 11975031, and No. 12141501), and the High-performance Computing Platform of Peking University. Z. X. Ren is supported in part by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (Grant agreement No. 101018170).
|
2308.01118 | A Survey on Popularity Bias in Recommender Systems | Recommender systems help people find relevant content in a personalized way.
One main promise of such systems is that they are able to increase the
visibility of items in the long tail, i.e., the lesser-known items in a
catalogue. Existing research, however, suggests that in many situations todays
recommendation algorithms instead exhibit a popularity bias, meaning that they
often focus on rather popular items in their recommendations. Such a bias may
not only lead to the limited value of the recommendations for consumers and
providers in the short run, but it may also cause undesired reinforcement
effects over time. In this paper, we discuss the potential reasons for
popularity bias and review existing approaches to detect, quantify and mitigate
popularity bias in recommender systems. Our survey, therefore, includes both an
overview of the computational metrics used in the literature as well as a
review of the main technical approaches to reduce the bias. Furthermore, we
critically discuss todays literature, where we observe that the research is
almost entirely based on computational experiments and on certain assumptions
regarding the practical effects of including long-tail items in the
recommendations. | Anastasiia Klimashevskaia, Dietmar Jannach, Mehdi Elahi, Christoph Trattner | 2023-08-02T12:58:11Z | http://arxiv.org/abs/2308.01118v3 | # A Survey on Popularity Bias in Recommender Systems
###### Abstract
Recommender systems help people find relevant content in a personalized way. One main promise of such systems is that they are able to increase the visibility of items in the _long tail_, i.e., the lesser-known items in a catalogue. Existing research, however, suggests that in many situations today's recommendation algorithms instead exhibit a _popularity bias_, meaning that they often focus on rather popular items in their recommendations. Such a bias may not only lead to limited value of the recommendations for consumers and providers in the short run, but it may also cause undesired reinforcement effects over time. In this paper, we discuss the potential reasons for popularity bias and we review existing approaches to detect, quantify and mitigate popularity bias in recommender systems. Our survey therefore includes both an overview of the computational metrics used in the literature as well as a review of the main technical approaches to reduce the bias. We furthermore critically discuss today's literature, where we observe that the research is almost entirely based on computational experiments and on certain assumptions regarding the practical effects of including long-tail items in the recommendations.
Keywords:Recommender Systems Popularity Bias Long Tail Fairness Diversity
## 1 Introduction
Recommender systems are nowadays used by many online platforms--including most major e-commerce and media streaming sites--where they can create substantial value for both consumers and providers [74]. From the consumers' side, these systems, for example, may support them in finding relevant content in
situations of information overload or help them _discover_ content that was previously unknown to them. On the provider's side, on the other hand, recommendations can effectively improve engagement, stimulate cross-sales or help promote items from the _long tail_[15] of less popular and probably hard-to-find items.1 Among these possible benefits, recommender systems seem to be particularly suited to support a long tail business strategy. By surfacing more of the long tail items in a personalized way, they support both the goals of improved _discovery_ of new content for consumers as well as increased benefit for the provider, e.g., in terms of additional sales or changed demand curves, see [32, 58, 104].
Footnote 1: In a long tail situation, a very large amount of the revenue (e.g., 80 %) comes from a small set of (e.g., 10 %) of top-selling items, the _short head_.
While there is no doubt that recommender systems can effectively impact consumer behavior and shift sales distributions [89, 142], it turns out that in practical settings such systems can have unexpected effects. For instance, the results of a large-scale field test at a North-American retailer site revealed that a recommender system indeed had a positive effect on the sales of niche items. However, the increase that was observed for the popular items was even more pronounced. Moreover, aggregate sales diversity actually _decreased_ in the presence of the recommender [90, 91]. Such observations can be attributed to a certain _popularity bias_ in the underlying algorithms, which means that the algorithms may have a tendency to focus on already popular items in their recommendations. As a result, the already popular ("Blockbuster") items [54] receive even more exposure through the recommendations, which can ultimately lead to a feedback loop where the "rich get richer".
Overall, a too strong focus on popular items can be disadvantageous both for consumers and providers. Consumers might find the recommendations obvious, not novel enough, and thereby not supporting the need for discovery. Providers, on the other hand, not only fail to supply adequate discovery support, but also miss the opportunity to sell from the long tail by mainly promoting items which customers might have bought anyway [22]. Given the high practical importance of the problem, an increasing number of research works have addressed the problem of popularity bias in recommender systems over the last decade. In particular, in most recent years the topic has become prevalent in the light of fairness and biases in recommender systems [36, 44], as well as in the context of potential harmful effects of recommendations such as Filter Bubbles, Echo Chambers, persuasion and manipulation [16, 48].
Within this paper, we provide a survey on the growing literature on popularity bias in recommender systems. The contributions and the content of the paper are as follows. We first elaborate on existing definitions of the concept and possible sources of popularity bias in Section 2. After describing our research methodology to identify relevant papers in Section 3, we provide statistics regarding the different types of contributions we observe in the literature in Section 4. We discuss technical proposals to deal with popularity bias in Section 5 and we review evaluation approaches in Section 6. The paper ends with
a discussion of our insights and an outlook on research gaps, future work and possible directions in Section 7.
## 2 Background
In this section, we define the term popularity bias, discuss the possible sources of bias in more depth, and outline practical negative effects resulting from popularity bias.
### Popularity Bias as an Exposure-Related Phenomenon
While we observe a largely shared understanding in the research community regarding the potential harms of popularity bias in recommender systems, no unique definition seems to exist so far. Most commonly, popularity bias is considered a _characteristic of the recommendations_ that are shown (exposed) to users.
In [6], for example, popularity bias is described as a phenomenon where "_popular items are recommended even more frequently than their popularity would warrant."_ In such an interpretation, the bias exists when the system recommends popular items to an exaggerated extent. Similar considerations regarding disparities in the recommendations were discussed in other works as well, e.g., in [94]. In other definitions, however, such proportions are not in the focus, and an emphasis on popular items per se is considered a bias. According to [3], _"collaborative filtering recommenders typically emphasize popular items (those with more ratings) much more than other 'long-tail' items._". Similarly, Boratto et al. [23] state that popularity bias can be described as the effect that recommender systems may "_tend to suggest popular items more than niche items, even when the latter would be of interest._" Such a concept is also adopted in [151] and other works.
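One simple way to probe this exposure-related notion computationally (our own illustration, not a metric prescribed by the works cited above) is to compare the average popularity of the items that appear in recommendation lists with the average popularity over the catalogue; values well above 1 indicate that popular items are recommended more often than their popularity alone would warrant.

```python
import numpy as np

def popularity_lift(recommended_items, interaction_items):
    # Popularity of each item is its interaction count in the log.
    items, counts = np.unique(np.asarray(interaction_items), return_counts=True)
    pop = dict(zip(items.tolist(), counts.tolist()))
    avg_catalog_pop = counts.mean()                       # average popularity over the catalogue
    avg_rec_pop = np.mean([pop.get(i, 0) for i in recommended_items])
    return avg_rec_pop / avg_catalog_pop
```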
We note that Boratto et al. in their discussion connect the bias that is observed in the recommendations with an underlying reason, i.e., the bias occurs when algorithms are trained on datasets where the observed interactions are not uniformly distributed across items. In some works, such skewed distributions themselves are referred to as popularity bias, thus framing popularity bias as a _characteristic of the training data_ that a recommender system picks up on. Zhao et al. [148], for example, found that _"the observation data usually exhibits severe popularity bias, i.e., the distribution over items is quite imbalanced and even long-tailed._"
Finally, some works discuss popularity bias in recommender systems in the context of offline evaluation metrics. A particular challenge in this context can be that certain metrics, and in particular _precision_, can favor algorithms that have a tendency to recommend popular items. By averaging across users, optimizing for high precision means trying to satisfy the majority of the (popularity-oriented) users, _"regardless of the satisfaction of minorities"_ [20]. This may then lead to
a competitive performance of non-personalized and popularity-oriented methods [40], and alternative evaluation protocols are proposed to deal with such problems, see also [19, 20, 46, 99, 140].
In this work, we adopt the previously discussed viewpoint and terminology where popularity bias is a phenomenon that is related to the popularity of the items that are recommended to users. Thus, we separate the observed phenomenon from the potential underlying sources of popularity bias.
### Sources of Biases and Bias Amplification
In most research works on recommender systems, the popularity of an item is assessed by the number of user interactions (e.g., ratings, clicks, purchases) that is observed in a dataset. We note that in most applications of recommender systems we actually would not expect a balanced distribution. In many domains, there may be items that are more popular than others. Some products in an e-commerce store might, for example, be of better quality or cheaper in price than others or strongly promoted through advertisements, leading to more observed purchases. In the entertainment domain, on the other hand, some movies or musical tracks may just appeal to a broader audience and we may therefore record more streaming events. We refer to such pre-existing, commonly skewed distributions regarding the popularity of items as the _natural bias_ in the data.
However, while such imbalanced distributions appear natural, a potential problem of recommender systems is that they might _reinforce_ these pre-existing distributions. Ultimately, this reinforcement may lead to detrimental effects in the long run, where the system increasingly puts more emphasis on already popular items, thereby reducing the chances of lesser known items to be exposed to users. Chen et al. [36] identify various factors that may ultimately lead to a feedback loop in recommender systems, as shown in Figure 1.
Internally, many recommender systems these days are based on some type of machine learning model. A central ability of any machine learning algorithm is to generalize from past experience (training instances) to deal with new situations (unseen instances) [101]. The general model of the domain that an algorithm learns therefore always reflects to a certain extent4 what is observed in the training data, including in particular any (pre-existing) bias in the data.
Footnote 4: Each algorithm may have its own inductive biases, i.e., a set of assumptions when performing the inductive leap from the training data to the general model [70].
Let us consider the very basic scenario of recommending shopping items that are _frequently bought together_, as implemented in today's major e-commerce platforms. In such an approach, it is intuitive to assume that items that are more popular in general will be bought together more often, and thus they will be recommended more frequently to customers. More generally, the suggestions that are subsequently made to users based on a machine learning model reflect to a certain extent what the recommender system has learned from the data and how it was optimized. In particular, depending on the
training, the algorithm may have learned--although not necessarily explicitly--that recommending popular items will give high "reward" in terms of the metric.
Ultimately, the recommendations presented to users are generally assumed to be able to influence their choices to a certain extent. Higher-ranked items in recommendation lists commonly receive more exposure and user attention and are more likely to be consumed [76], e.g., due to the position bias. As a result, they may be consumed or purchased more often than other options. Thus, if the recommendations are influenced by popularity bias, the already popular items profit more from this increased exposure than lesser known ones. Importantly, when users adopt (i.e., consume or purchase) a recommended popular item, this fact will commonly be reflected in some ways in the data that is used to retrain the underlying model in a subsequent step. A successful recommendation of a popular item will, for example, further increase the item's purchase statistics. Moreover, as popular items are often good recommendations in terms of their general quality and appeal, the chances that they receive positive feedback, e.g., in the form of a rating, may also be high if we assume that people tend to provide feedback on things that they like.5
Footnote 5: This corresponds to the known problem that certain data points are “missing-not-at-random”, see [98] for an early study on the topic.
Overall, we observe that there are various stages where popularity biases can enter or be reinforced in a recommender system. Correspondingly, different approaches and starting points exist when the goal is to mitigate the potentially undesired effects of popularity bias in the recommendations.
Figure 1: Biases and the Feedback Loop of Recommendation, inspired by [36].
### Potential Negative Effects of Popularity Bias
Research on popularity bias is commonly motivated with examples of possible negative effects when an algorithm focuses (too much) on already popular items. Sometimes, recommending popular items is considered problematic, as this may _unfairly_ reduce and prevent the exposure of other items. In other cases, reference is made to potential reinforcement effects over time, often described as a situation where the "rich get richer".6
Footnote 6: Sometimes this phenomenon is referred to as “Matthew Effect” [129] or “Prefix Bias” [112].
At first sight, one may argue that there is nothing wrong with recommending popular items. In fact, recommending top-selling items is quite common also in the offline world, e.g., in the form of the _New York Times Best Seller_ book recommendations. Moreover, in a meritocratic society, it may not be considered problematic or unfair if these best sellers receive even more attention through recommendations, assuming that they are of higher quality than others or generally appealing to more people. As such, the above-mentioned claims about potential harms of popularity bias sometimes seem too general.
However, when looking closer at the problem and the intended purpose and value of a recommender system [74], one can easily derive a number of ways in which popularity bias _(a)_ _limits the potential value_ of the recommendations for individual stakeholders or _(b)_ _may actually be harmful_. In terms of limited value, consumers may find that popularity-biased recommendations do not help them to _discover_ new content (because of limited novelty) or content that matches their personal preferences (because of a limited level of personalization). Both aspects may in turn limit the engagement of consumers with the service that provides the recommendations or turn them away completely once they lose their trust in the system. On the provider's side, recommending mostly popular items may furthermore lead to _missed sales opportunities_ (because the popular items would have been purchased anyway). Moreover, it may lead to decreased sales diversity over time (because a small set of popular items receives all the exposure). Corresponding reports from field and simulation studies can be found in [53, 54, 72].
Situations where a popularity-biased system may actually create harm (and not only provide limited value) can also arise in certain application domains. In recent years, various research works on fairness in recommender systems--see [42, 44, 132] for recent surveys--argued that popularity bias can lead to unfairness. For example, certain jobs may be mainly recommended to particular ethnic groups when the recommender system perpetuates historical discrimination. Alternatively, a music recommender system may unfairly mostly promote music from certain groups of already popular artists, limiting the chances of exposure for artists which, e.g., may belong to underrepresented gender or genre groups.
Another, yet quite different, harmful case may occur when a popularity-biased system promotes content that is harmful. We recall that in many applications popularity is measured in terms of the observed interactions with an
item. In particular, in social media it is not uncommon that controversial content (including fake news, misinformation and disinformation) receives a lot of attention as users are highly engaged with such content. A social media recommender system that optimizes for user engagement may therefore further promote such content by suggesting it to an increasingly larger audience. This accelerates the spreading of misinformation that can potentially cause more harm. Furthermore, such a popularity-biased system may also be vulnerable to recommending content which received many interactions through fake users, false reviews/ratings and automated bots, see [88] for an early work on attacks on recommender systems. Such circumstances, once they become known to users, cause distrust and may drive users away from the system.
Overall, regardless of whether the utility is reduced or actual harm is caused, it is important to consider the specifics and idiosyncrasies of a particular application use case when investigating questions of popularity bias. On the one hand, recommending popular items can in fact be the most beneficial option for a provider, e.g., when the top-selling items are also the ones that lead to the highest revenue, profit margin or other business Key Performance Indicator (KPI). On the other hand, recommending already popular items should not be considered unfair per se, but one has to scrutinize which underlying normative claims regarding fairness are affected by popularity-biased recommendations. Furthermore, we have to keep in mind that certain effects may only become visible in the long term. Promoting the most popular and recent celebrity gossip on a news website might lead to positive effects in the short run in terms of the click-through rates (CTR); it may however lead to limited engagement with the service in a longitudinal perspective.
Finally, we note that focusing on popular items can be a beneficial and helpful approach as well in certain situations. Recommending popular items is a very common strategy in _cold-start_ situations where little is known about the preferences of the user. For example, when a new user registers with a recommender system, the system has no or limited knowledge about the user's preferences and hence may fail to generate relevant recommendations for her. In such a case, a popularity-based _active learning_ strategy can be employed to select the top popular items to be proposed to the new user and acquire explicit ratings for them [112]. The advantage is that the user is very likely to be familiar with the popular items and hence is actually able to rate them. On the downside, however, popular items are typically liked by most users, and their ratings hence often bring little information to the system [50].
Furthermore, there can be situations where a specific algorithm focuses too much on niche content. In such cases, the recommendations might appear too obscure for users, not raise their interest, and limit their satisfaction with the service [45]. Including a number of popular recommendations may help establish a certain level of familiarity on the user's side and build their trust that some recommendations are suitable for them. Adding a "_healthy dose of (unpersonalized) popularity_" is also not uncommon in industrial settings, e.g., for the personalized video ranking system at Netflix [58].
### An Impact-Oriented Definition of Popularity Bias and its Relationship to Novelty, Diversity, and Fairness
As discussed in the beginning of this section, there is no unique definition of the term popularity bias in the literature. Some definitions may also not be easy to interpret or apply. If we, for example, develop a recommender system that simply recommends the most popular items to everyone, it may be difficult to tell if this would represent a case where items are recommended "_more frequently than their popularity would warrant_", as described in [6] or [36]. Moreover, our discussions also show that recommending popular items is not necessarily harmful per se, and that whether it is harmful may instead depend on the particularities of a given use case.
Following our discussions and under the assumption that the term _bias_ generally indicates an undesirable or problematic aspect, we propose to use an _impact-oriented_ interpretation of the term in the future. Accordingly, we propose to define popularity bias in recommender systems as follows.
_A recommender system faces issues of popularity bias when the recommendations provided by the system focus on popular items to the extent that they limit the value of the system or create harm for some of the involved stakeholders._
We emphasize that our definition is meant to be generic and encompassing in the sense that it _(a)_ does not prescribe a specific way in which popularity is quantified, _(b)_ does not make assumptions about the sources of the bias, and _(c)_ covers both short-term and long-term effects of popularity bias.
The popularity of the recommended items is related to a number of "beyond-accuracy" quality aspects of recommender systems, in particular novelty, diversity, and serendipity [31, 77, 153].
Relationship to Novelty. A recommendation provided to a user is usually considered to be novel if the user has not previously known about it. Novelty is thus a central desirable feature, as novel recommendations by definition help users discover new (and hopefully relevant) things. The novelty of a set of recommendations can be empirically assessed with the help of user studies. In offline evaluations, we in contrast often cannot know with certainty if a user already knows an item. A common approach in the literature therefore is to assume that less popular items on average have a higher probability of being novel for the users. Technical realizations of novelty metrics are therefore frequently formulated as being inversely related to popularity metrics. A common goal in novelty-focused research is to increase the novelty level (or: reduce the popularity level) of the recommendations without sacrificing accuracy. In such settings, novelty-enhancing approaches can also be seen as methods to decrease popularity bias.
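To make this inverse relationship concrete, one widely used realization computes novelty as the mean self-information of the recommended items with respect to their popularity in the data. The following minimal sketch illustrates this idea; the function and variable names are our own, and the smoothing of unseen items is an illustrative assumption rather than a convention from the surveyed works.

```python
import numpy as np

def novelty(recommended_items, interaction_counts, n_users):
    """Mean self-information of the recommended items: -log2 of the share of
    users who interacted with each item. Less popular items thus yield higher
    (more novel) scores. Items without observed interactions are smoothed to a
    count of 1 (an illustrative choice to avoid log(0))."""
    scores = []
    for item in recommended_items:
        count = max(interaction_counts.get(item, 0), 1)
        scores.append(-np.log2(count / n_users))
    return float(np.mean(scores))

# Toy example: item "a" was consumed by 900 of 1000 users, item "z" by only 5.
counts = {"a": 900, "z": 5}
print(novelty(["a", "z"], counts, n_users=1000))  # the niche item dominates the score
```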
Serendipity is another concept that is related to novelty. Often, serendipity is viewed as a combination of unexpectedness and relevance [153], but other notions exist as well in the literature [153]. Clearly, a serendipitous item must also be novel. However, an item is often only considered unexpected if it is in some ways different from a user's usual taste profile.
Relationship to Fairness. Quite a number of recent research works equate the reduction of popularity bias with an increase of algorithm fairness, see [42]. Certainly, there may be use cases where this is true. For example, there might be a group of artists on an online music platform which for societal or historical reasons do not have the same opportunity to reach a broad audience as others, e.g., because they belong to a generally underrepresented group. A recommender system that gives more exposure to the less popular content by these artists may then be considered to support a normative claim regarding the fairness towards the underrepresented group. This latter aspect of addressing an underlying normative claim is however essential. Simply increasing the exposure of arbitrary artists on a music platform or the exposure of items of certain providers on an e-commerce platform does not necessarily serve a fairness goal. The lack of popularity might in contrast be related to the general appeal of the content to a broader audience or the quality of the individual items.
Relationship to Diversity. Diversity usually refers to the property that the elements of a set of recommendations differ from each other in certain aspects. Depending on the selected criterion and use case, popularity bias can be related to diversity. In certain domains, e.g., in movie recommendation, the recommendation of widely known popular movies will probably result in a set of movies that is not too diverse in terms of the country of the production, the production budget, or the original language. All items of course share the property that they are blockbusters. In terms of the genre, one might however observe some diversity when recommending blockbuster movies. Thus, a reduction of popularity bias may in some cases lead to an increase with respect to certain forms of diversity, but the connection is not as direct as it is with novelty discussed above.
## 3 Methodology
Paper Retrieval Method. We adopted a semi-systematic approach to identify relevant research works. In our approach, we applied principles of systematic reviews as discussed in [82], but we also relied on further means to discover additional papers in this constantly developing area. The overall process is illustrated in Figure 2.
In the first step, we queried digital libraries to find an initial set of works on recommender systems that have the term "popularity bias" in the title, abstract or keywords. We used the following query term: "_popularity bias_" AND ("_recommender_" OR "_recommendation_").7 We limited the search to papers that appeared since 2000. The search process returned 69 papers.
Footnote 7: The specific syntax is different for the used libraries. As digital libraries, we considered ACM DL ([https://dl.acm.org](https://dl.acm.org)), SpringerLink ([https://link.springer.com](https://link.springer.com)), ScienceDirect ([https://www.sciencedirect.com](https://www.sciencedirect.com)) and IEEE Xplore ([https://ieeexplore.ieee.org](https://ieeexplore.ieee.org)).
Next, we applied a snowballing procedure to identify more relevant works by following the references cited in the initial set of works. Furthermore, we used Connected Papers8 as a tool to find additional related works, also using the keyword "long tail". After removing duplicates and filtering out works which were irrelevant to our survey in a manual process, we ended up with 88 papers, which we considered for the subsequent analyses in our study. We share the detailed list of the considered papers online for reproducibility.9
Footnote 8: [https://www.connectedpapers.com](https://www.connectedpapers.com)
Footnote 9: [https://docs.google.com/spreadsheets/d/1lvLtrlItfHyrwfc4GzUX-6aVR6rChq3WsbM09dBVyK4/edit?usp=sharing](https://docs.google.com/spreadsheets/d/1lvLtrlItfHyrwfc4GzUX-6aVR6rChq3WsbM09dBVyK4/edit?usp=sharing)
Generally, our search query turned out to be quite precise and the large majority of papers that were retrieved through the query were relevant. The few papers that were excluded did not fulfill our inclusion criterion that the paper has to be centrally focused on questions of popularity bias. Thus, we excluded papers which only mention the term popularity bias somewhere in the text but provide another technical contribution. Furthermore, we excluded existing survey works from our analysis.
Relation To Other Surveys. The topic of popularity bias has been considered previously in surveys on related topics, such as biases in recommender systems in general [36], undesired effects of recommender systems [48], or fairness issues in recommender systems [1]. While our work overlaps with these works to a certain extent, our study is exclusively focused on the problem of popularity bias.
Figure 2: Literature collection methodology.
To our knowledge, the recent conference paper by Ahanger et al. [14] is the only work that exclusively focuses on popularity biases in recommender systems. In their paper, the authors report the technical details of a selected set of recent algorithmic approaches to mitigate popularity biases. While our work is also concerned with technical approaches to bias mitigation, the scope of our present work is broader and we also aim to reflect on the developments in the area. Moreover, differently from this previous survey, our work is based on a larger collection of research works which we retrieved through a structured process as described above.
## 4 Survey Results: A Landscape of Research
In this section, we will first provide more statistics about publication outlets and the interest in the topic over time. Next, we will paint a landscape of existing research in terms of how scholars characterize the problem and what kind of contributions we can find in the literature.
### Publication Statistics
The earliest paper considered in our study was published in 2008. We note that this paper was not explicitly using the term "popularity bias", but it focused on how to deal with less popular items from the long tail in recommender systems [107]. During the next few years, only a few relevant papers were found. Since around 2018, however, we observe a strong increase in the research interest in the topic, now typically also using the term "bias". We assume that much of the recent research in this area is also fueled by the growing awareness and interest in the topic of fair recommendations, see [132]. As a result, a large majority (around 70%) of the considered works were published in the last five years.
Figure 3 shows where the identified research works on popularity bias were published. The largest fraction of papers was published in the ACM Conference on Recommender Systems (RecSys). However, we can observe that research is quite scattered, and relevant works are published in a variety of venues. With our survey, our goal is to provide an overview on the landscape of this existing research.
### Problem Characterizations & Research Motivations
Following our discussions above, recommending popular items may not be problematic _per se_, and in practice one has to take into account the specifics of the given use case, for example, to determine the extent to which a given bias should be mitigated.
In the first step of our analysis, we investigated how researchers motivate their work. To that purpose, we scanned all papers for statements in the abstract and introduction that characterize the phenomenon of popularity bias as well as
the potential harms of recommending popular items. We then applied a coding procedure to identify different categories of such statements. The coding was done by two researchers.
Figure 4 shows along which themes researchers _characterize_ the phenomenon of popularity bias.10 In the majority of cases, researchers state in some form that, in the presence of popularity bias, the recommendations focus mainly or too strongly on popular items. This generally matches a central part of our definition from the previous section, i.e., that popularity bias is a phenomenon related to the recommendations that are presented to users.11 However, in a large fraction of the papers which rely on such a characterization, focusing on popular items is considered problematic in itself, which may represent an oversimplification of the problem.
Footnote 10: We note that individual papers can fall into more than one category.
Footnote 11: Only a comparably small number of papers characterize popularity bias as a phenomenon of the underlying data.
The second most frequent characterization is that in the presence of popularity bias, long-tail items receive too limited exposure. While in some sense this might be seen as a direct consequence of the previous aspect, i.e., that a system may focus too much on popular items, this characterization also points to a potential harm, which is a crucial aspect according to our definition. However, only a few works mention that popularity bias may hinder the recommendation of _relevant_ long-tail items, which is arguably the truly critical aspect. A few other works consider questions of recommendation quality in their characterization. In a few cases, popularity bias is assumed to lead to better predictions for popular
Figure 3: Number of papers per outlet. Venues grouped under the label “other” each have one published work included in this survey.
items. Other works in some ways fear quite the opposite, i.e., that the bias leads to the recommendation of irrelevant popular items.
Potential reinforcement effects are mentioned a number of times as a main aspect of popularity bias. However, when considering the technical contributions and experimental evaluations provided in many of these papers, the reinforcement effect is not actually investigated, e.g., by assessing the effect from a longitudinal perspective.
Finally, in a certain fraction of papers, we could not identify a clear motivational characterization of the investigated problem of popularity bias. Such papers for example analyze relationships between different quality metrics for recommender systems (including the popularity of the recommendations), without elaborating in depth about the underlying concept, e.g., [35]. Others like [134] consider skewed data distributions in their algorithmic design as one of several aspects. Finally, some works like [41] provide a formal definition for a particular notion of popularity bias, but consider popularity bias as one of several variables in a quantitative analysis of recommendation performance.
Next, we scanned the abstract and introductions for statements that describe the potential _negative effects_ of the bias. Such a description of the negative effects should generally guide the research presented in the paper, e.g., in terms of the evaluation metrics. The results of the coding process are shown in Figure 5.
The most frequently mentioned harms refer to the _recommendation quality_ as experienced by the users. Popularity bias may manifest itself in limited personalization quality, limited diversity or novelty, or in terms of limited opportunities for discovery. However, there is also a significant number of works which mention potential harms for the recommendation platform or the item providers,
Figure 4: Problem characterizations in the literature.
including limited exposure of certain items, missed business opportunities, or reduced consumer trust over time. Some works also raise the issue of potential vulnerabilities in terms of attacks on recommender systems which consider item popularity as a main factor to rank items highly. These observations clearly indicate that there is awareness in the community that popularity bias is a problem that may affect multiple stakeholders. We will discuss later in Section 6 how researchers quantify to what extent algorithmic approaches may help to reduce or prevent potential harms of popularity bias.
Given the recent interest in the community on questions of fairness of recommender systems, we finally scanned the descriptions of potential harms that we found in the paper for the term 'fair'. Only a small fraction, around 10%, explicitly mention fairness or unfairness in this context. However, considering the broader research setting addressed in the papers, we found that 59 of the 88 papers (about two thirds) do address questions of fairness in recommender systems. This confirms our intuition mentioned above that research on popularity bias in recommender systems is largely fueled by recent fairness research. Again, given that recommendation is a multistakeholder problem [1], different forms of fairness are considered in the examined works, including user fairness, item fairness, and provider fairness, see [28]. A slightly larger fraction (60%) of these works focus on user fairness, while the remaining works consider the perspective of items and their providers.
### Types of Contributions
Next, to better understand the landscape of existing research, we characterized the identified papers in terms of their contribution. We identified three main classes of such contributions based on the analysis of the main novel aspects of the papers:
Figure 5: Researcher motivation: Potential negative effects.
* Papers that _analyze_ or _quantify_ potentially existing biases;
* Papers that make technical proposals to _mitigate_ existing biases;
* Papers that try to _utilize_ popularity information to improve recommendations.
Figure 6 provides the statistics of the studied papers in terms of this categorization. The detailed categorization of the analyzed works in terms of the contribution can be found in Table 1. We note that one paper can fall into more than one category. Not surprisingly, since we focus on papers in the area of computer science, the majority of papers propose a technical approach to _mitigate_ some potential harms of popularity bias. A smaller number of works aim to mainly _quantify and analyze_ existing biases in datasets and/or propose computational metrics to assess the extent of the bias. Finally, a limited number of works try to _utilize_ information about the general popularity of an item for improved recommendations. We will review selected works in each category next.
| Type of contribution | Papers |
| --- | --- |
| Quantification | [7, 8, 9, 17, 25, 26, 33, 35, 38, 41, 46, 49, 59, 66, 85, 86, 87, 94, 97, 100, 102, 103, 111, 115, 127, 135, 140, 146, 151] |
| Mitigation | [2, 3, 5, 9, 11, 18, 23, 26, 29, 37, 39, 43, 51, 55, 57, 62, 63, 65, 68, 69, 71, 79, 80, 83, 92, 93, 95, 96, 105, 106, 110, 114, 116, 118, 119, 120, 121, 122, 125, 128, 130, 131, 133, 134, 136, 137, 138, 139, 141, 143, 146, 147, 149, 150, 151, 152] |
| Utilization | [107, 108, 145, 146, 148] |

Table 1: Categorization of papers in terms of _types of contribution_.
Figure 6: Types of research contributions.
## 5 Technical Approaches to Deal With Popularity Bias
In this section, we discuss a number of selected approaches to bias quantification, mitigation, and utilization in more depth.
### Bias Quantification Approaches
Papers in this category mainly aim to understand the extent and severity of a possible existing popularity bias and how such bias may impact users.
Before we review existing works that quantify popularity bias for different purposes, we note that any quantification approach--as well as mitigation techniques which we discuss later--requires the definition of appropriate metrics. We will review a multitude of metrics later in Section 6. According to our notion of popularity bias from above, these metrics primarily quantify _popularity properties of the recommendations_ and not, for example, of the underlying data. However, properties of the underlying data are central in many works, for example when it comes to deciding if an item is considered popular or not. A common strategy in the literature is to categorize items as being popular (short head) or unpopular (long tail), occasionally with an additional separation of the long tail into a middle part and distant tail, see [7, 25]. Commonly, this separation is based on the number of observed interactions for each item in the dataset. Yalcin et al. [135], in contrast, use a definition where _blockbuster_ items not only have a high number of interactions but also a high average rating. In any case, a central question in such approaches is how to define suitable thresholds. In the existing literature, mostly rules of thumb are applied for which no clear reasoning is provided.
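To illustrate how such a separation is typically operationalized, the sketch below splits a catalogue into short head and long tail so that the head accounts for a fixed share of all interactions (a Pareto-style rule of thumb). The 80% threshold and all names are illustrative assumptions rather than a convention prescribed by the surveyed papers.

```python
def split_head_tail(interaction_counts, head_share=0.8):
    """Split items into a 'short head' and a 'long tail' such that the head
    covers roughly `head_share` of all observed interactions. The threshold is
    an arbitrary rule of thumb, mirroring common practice in the literature."""
    ranked = sorted(interaction_counts, key=interaction_counts.get, reverse=True)
    total = sum(interaction_counts.values())
    head, covered = [], 0
    for item in ranked:
        if covered >= head_share * total:
            break
        head.append(item)
        covered += interaction_counts[item]
    tail = [item for item in ranked if item not in head]
    return head, tail

counts = {"i1": 500, "i2": 300, "i3": 50, "i4": 30, "i5": 20}
print(split_head_tail(counts))  # (['i1', 'i2'], ['i3', 'i4', 'i5'])
```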
An additional approach to quantify popularity-based phenomena is proposed by Celma et al. [33] and in [34]. In their work in the music domain, the authors not only use playcounts as popularity indicators but also rely on metrics from _complex network analysis_ to model the _connectedness_ of items based on their similarity. This, for example, allows them to analyze if the most popular items are mainly connected to other popular items as well, and to assess the chances of an item being exposed and discovered through recommendations.
Quantifying Effects on Users. One common goal in the literature in this area is to mainly quantify the extent of the popularity bias, and in many cases, these observations are then contrasted with other metrics such as accuracy. In such works, often a variety of algorithms from different families, e.g., collaborative and content-based, are compared on different datasets, see, e.g., [35, 38, 127]. The analysis in [72] furthermore shows that even algorithms from the same family, in that case collaborative filtering, can exhibit quite different tendencies to recommend popular items.
While these works usually measure popularity bias across the entire user base, there are a number of works that consider certain subgroups individually. Some works identify such subgroups based on demographics, e.g., based on age and gender [46, 94, 103] or language [49]. In these works, the goal often is to assess to
what extent popularity bias affects the utility of the provided recommendations for different subgroups. The findings in [46], for example, suggest that there is a non-trivial, and possibly detrimental, interaction of demographics with popularity bias. Elahi et al. [49], on the other hand, performed a comprehensive study on popularity bias and investigated, among other aspects, if the strength of bias effects is related to the user's language. Their analyses based on Twitter data indeed indicate that language may play a role and that some effects are more pronounced for English than for other languages. Finally, Sanchez et al. in [115] assessed the effect of popularity bias in Point-of-Interest recommendation on two different user segments: tourists and locals. Their analyses indicate that the utility of the recommendations declines for the latter group of users.
An alternative to segmenting users based on their properties or demographics is to group them based on their preferences or behavior. Specifically, one can categorize users according to their _popularity tendency_ or _mainstreamness_.12 One important question in such research works is if certain groups of users--in particular niche item lovers--receive less utility from the recommendations than others. In a number of works such phenomena are seen as a form of potential discrimination, leading to questions of fairness in recommender systems and its relationship to popularity bias [7, 25, 85, 86, 102, 111].
Footnote 12: Taking such user-individual preferences into account is central to _calibration_ approaches, see [105] for an early work that considers user popularity tendencies when generating recommendations.
Understanding Longitudinal Effects. Most of the works discussed so far adopt a static perspective, e.g., by assessing the popularity bias of a given algorithm at a certain point in time. One main problem of popularity bias however lies in the feedback loop that it can create, which cannot be directly assessed with such forms of "one-shot" evaluations. A number of research works therefore try to study longitudinal effects of biased recommendations. A common way to address such issues in the literature is to rely on a simulation approach. In [72], for example, it is assumed that users of a recommender system accept some item suggestions with a certain probability, and that they then provide feedback to the system in terms of ratings, which is fed back into the training data and recommendation model13. The results of the simulation indicate that different algorithms can either reinforce or reduce popularity bias over time. Later simulation approaches following similar ideas are presented in [38] and in [97].
Footnote 13: See [10] for a brief discussion of simulation approaches in recommender systems.
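The following toy sketch illustrates the general structure of such simulations: a popularity-based recommender repeatedly suggests its top items, accepted suggestions are written back into the interaction counts, and the gap between popular and niche items widens over the rounds. All parameters (acceptance probability, number of rounds) are illustrative assumptions and do not reproduce the specific protocols of [38, 72, 97].

```python
import random

def simulate_feedback_loop(interaction_counts, rounds=50, accept_prob=0.3, top_k=1):
    """Toy feedback-loop simulation: in each round the top-k most popular items
    are recommended; each accepted recommendation is fed back into the counts
    on which the next round's 'model' (here: plain popularity ranking) is built."""
    counts = dict(interaction_counts)
    for _ in range(rounds):
        recommended = sorted(counts, key=counts.get, reverse=True)[:top_k]
        for item in recommended:
            if random.random() < accept_prob:
                counts[item] += 1  # the accepted recommendation becomes new training data
    return counts

random.seed(42)
print(simulate_feedback_loop({"popular": 100, "niche": 10}))
```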
A quite different approach to study popularity bias over time was followed in [66]. In their work, the authors use an auditing approach to assess bias amplification effects on YouTube. Technically, they simulate the user experience with bots that perform random walks over recommended videos on a certain topic. One part of their findings suggests that _"YouTube is recommending increasingly popular but topically unrelated videos"_. Overall, the work is one of the few works in which popularity bias is studied "in-the-wild".
Popularity Aspects as Performance Predictors. One final goal of quantifying existing bias in the data is to use that information to predict the performance of different recommendation algorithms. The popularity distribution of the items was for example examined in [41] as one of several data characteristics that can impact the accuracy of the model. The experimental analysis indeed indicated that the various metrics that capture the characteristics of the popularity distribution can contribute to accurate performance predictions. This seems, in particular, true for algorithms that are known to have a certain tendency towards popular items such as Bayesian Personalized Ranking [113]. A related analysis on the impact of dataset characteristics on algorithm performance can be found in [13], where the distribution of the ratings was used as a predictor in the form of the Gini-index.
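As an illustration of such a data characteristic, the sketch below computes the Gini index of an item popularity distribution; values near 0 indicate balanced interactions, while larger values indicate that a few items attract almost all interactions. This is a generic textbook formula, not the specific feature set used in [13] or [41].

```python
import numpy as np

def gini(interaction_counts):
    """Gini index of an item popularity distribution (0 = perfectly balanced,
    values close to 1 = interactions concentrated on very few items)."""
    x = np.sort(np.asarray(interaction_counts, dtype=float))
    n = len(x)
    cumulative = np.cumsum(x)
    return (n + 1 - 2 * np.sum(cumulative) / cumulative[-1]) / n

print(gini([100, 100, 100]))  # 0.0 -> balanced
print(gini([1, 1, 1, 997]))   # ~0.75 -> strongly skewed toward one item
```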
### Bias Mitigation Approaches
Here, we will first categorize existing works based on the processing stage in which the bias is mitigated. Next, we will review a number of technical approaches in more depth.
#### 5.2.1 Categorization per Processing Stage.
As indicated in Figure 6, the majority of published papers are devoted to the problem of _mitigating_ existing biases. In this section, we will discuss these technical approaches in more depth. Inspired by the work by Adomavicius and Tuzhilin [12] on context-aware recommender systems, we categorize existing approaches according to the _processing stage_ in which a mitigation strategy is implemented within a recommendation algorithm.
We differentiate between _pre-processing_, _in-processing_, and _post-processing_ approaches. Roughly speaking, pre-processing means that the underlying dataset is adapted or filtered in a way before the learning phase. In a simplistic approach, one could, for example, disallow certain very popular items to be recommended in advance. In in-processing approaches, in contrast, the mitigation technique is part of the learning process, e.g., by considering item popularity in the loss function. In post-processing approaches, finally, often an accuracy-optimized list is adapted to account for biases, e.g., by re-ranking the items in a way that less popular items are brought to the front of the list.
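As a minimal illustration of the post-processing idea, the following sketch re-ranks an accuracy-ordered candidate list by blending the predicted score with an inverse-popularity bonus. The linear blending and the trade-off parameter are illustrative assumptions; the concrete re-ranking schemes in the surveyed papers differ in their details.

```python
def rerank(candidates, scores, popularity, trade_off=0.5, k=10):
    """Post-processing re-ranking: combine the accuracy-oriented score with an
    inverse-popularity bonus so that long-tail items move up the list.
    trade_off = 0 keeps the original ranking; larger values favor niche items."""
    max_pop = max(popularity.values())

    def combined(item):
        novelty_bonus = 1.0 - popularity.get(item, 0) / max_pop
        return (1 - trade_off) * scores[item] + trade_off * novelty_bonus

    return sorted(candidates, key=combined, reverse=True)[:k]

scores = {"blockbuster": 0.9, "niche": 0.8}
popularity = {"blockbuster": 10_000, "niche": 50}
print(rerank(["blockbuster", "niche"], scores, popularity, trade_off=0.5, k=2))
# -> ['niche', 'blockbuster']
```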
Figure 7 shows the distribution of papers that propose mitigation strategies according to the processing stage. The detailed categorization per paper can be found in Table 2, where one paper can also be assigned to more than one category. We note that the assignment of individual papers to these categories is in some cases subject to interpretation, in particular when it comes to distinguishing between in-processing and post-processing approaches.14 In the following, we review selected approaches from the different categories.
#### 5.2.2 Pre-processing Approaches.
Pre-processing approaches to bias mitigation are the least common techniques in our survey. Moreover, in many cases, such pre-processing techniques are complemented with additional in-processing mitigation steps. Therefore, distinguishing between pre-processing and in-processing techniques often leaves some room for interpretation.
However, at least some approaches--in particular those that apply certain forms of dataset manipulation before model training--can be clearly considered to be pre-processing. Typical pre-processing steps include data sampling, item exclusion, or specific forms of creating positive-negative sample pairs for learning. In [39], for example, the authors describe an experiment in which the "short head" of highly popular items is removed from the catalogue. The goal of their work was to investigate through a user study how the user experience and the perceived utility of a recommender system changes when those highly-popular items are not recommended.
A lighter form of data sampling was applied in [119]. Here, the goal of the pre-processing step is to create a balanced dataset in order to mitigate different fairness issues, with popularity bias being one of them. Ultimately, through the balancing process, the authors aim to create fairer models. However, it has to be noted that such data sampling and balancing must be done with care, in particular to ensure that the remaining data are still representative.
Instead of sampling (i.e., reducing) the data, some authors propose to _augment_ the existing data through a pre-processing step.
| Processing stage | Papers |
| --- | --- |
| Pre-processing | [23, 39, 65, 71, 119] |
| In-processing | [3, 11, 18, 23, 26, 29, 37, 55, 57, 62, 63, 68, 69, 79, 80, 92, 93, 95, 105, 106, 110, 114, 116, 118, 120, 121, 122, 125, 128, 131, 133, 134, 137, 139, 141, 146, 147, 149, 150, 151, 152] |
| Post-processing | [2, 5, 9, 39, 43, 51, 83, 96, 130, 131, 136, 138, 143, 152] |

Table 2: Categorization of Papers per Processing Stage
Figure 7: Categorization of Approaches by Processing Stage.
Such an augmentation could consist of incorporating certain types of item metadata or additional information about the users from external sources [29, 95]15, or of combining implicit and explicit feedback as done, e.g., in [71]. In this latter work, considering rating data is assumed to be useful to (a) more often recommend high-quality items regardless of their (current) popularity and to (b) better leverage existing user feedback during model training.
Footnote 15: Such additional data needs to be included as a separate model or integrated into an existing one, e.g., as part of a regularization term. In this case, such an approach could also be categorized as being _in-processing_.
An example of a bias mitigation approach that--also according to the authors--has both a pre-processing and an in-processing element is described in [23]. In what is considered the pre-processing operation, the authors propose specific sampling strategies both for point-wise and pair-wise optimization settings. In the case of pair-wise sampling, for example, the creation of item pairs for learning is not done at random but depends on item popularity. A similar approach was proposed earlier in [72] for the Bayesian Personalized Ranking method.16
Footnote 16: We note that the work in [23] is an example where pre-processing and in-processing cannot be easily separated. One might view the sampling strategy being mainly a part of the main learning process.
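One possible realization of such popularity-dependent sampling is sketched below for a pair-wise (BPR-style) setting: the negative item of a training pair is drawn with probability proportional to its popularity, so the model learns to rank observed items above popular-but-unseen ones rather than above arbitrary obscure items. The concrete sampling distribution is an illustrative assumption and not the exact scheme of [23] or [72].

```python
import random

def sample_training_pair(user_items, all_items, popularity):
    """Popularity-aware pair sampling: the positive item is drawn from the user's
    history, the negative item from the remaining catalogue with probability
    proportional to its popularity (interaction count)."""
    positive = random.choice(sorted(user_items))
    candidates = [item for item in all_items if item not in user_items]
    weights = [popularity[item] for item in candidates]
    negative = random.choices(candidates, weights=weights, k=1)[0]
    return positive, negative

random.seed(0)
popularity = {"a": 900, "b": 300, "c": 20, "d": 5}
print(sample_training_pair(user_items={"c"}, all_items=popularity, popularity=popularity))
```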
#### 5.2.3 In-Processing/Modeling Approaches.
In-processing approaches are the most common techniques for popularity bias mitigation in the literature. While a variety of in-processing techniques were proposed for different application domains and scenarios, they share a common principle, i.e., intervening in the recommendation model to minimize the influence of popular items so that, ideally, less bias is propagated to the recommendations. In the following, we discuss the most common families of in-processing bias mitigation approaches.
Regularization-based Approaches are a prominent group of methods for controlling the influence of popularity [3, 23, 79, 81, 120, 152]. Regularization typically entails adding a term to the optimization objective that lowers the effect of item popularity on the predicted item score. During the learning process, the regularization term thus penalizes the recommendation of popular items and/or helps to promote the less popular items. A specific weight factor (or: coefficient) is often added to the term to adjust the strength of the regularization and thereby balance the competing goals of accuracy and popularity bias.
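The following sketch conveys the general idea of such a regularizer: an accuracy term is combined with a penalty that grows when high scores are assigned to popular items, with a coefficient that balances the two objectives. It is a deliberately simplified illustration and does not reproduce the specific formulations of [3], [23], or [152].

```python
import numpy as np

def popularity_regularized_loss(pred, target, item_popularity, lam=0.1):
    """Illustrative loss: squared prediction error plus a penalty that is large
    when high scores coincide with popular items. `lam` controls how strongly
    popularity is penalized relative to accuracy."""
    pred = np.asarray(pred, dtype=float)
    target = np.asarray(target, dtype=float)
    pop = np.asarray(item_popularity, dtype=float)
    pop = pop / pop.max()                      # normalize popularity to [0, 1]
    accuracy_term = np.mean((pred - target) ** 2)
    popularity_penalty = np.mean(pred * pop)   # high scores on popular items cost more
    return accuracy_term + lam * popularity_penalty

# Two items with identical prediction errors; the first is far more popular.
print(popularity_regularized_loss(pred=[0.9, 0.9], target=[1.0, 1.0],
                                  item_popularity=[5000, 10], lam=0.5))
```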
In an early work in that area, Kamishima et al. [78], for example, proposed to use a specific regularization term in the optimization objective to build "information-neutral" recommendation systems. Information neutrality means that certain predefined features, as specified by the users, do not influence the recommendation outputs to a significant extent. This idea, which was initially developed in the context of the filter bubble phenomenon, was subsequently applied to the problem of popularity bias in [79], where the goal correspondingly is to end up with a popularity-neutral recommender system.
Later on, inspired by earlier work on dealing with accuracy-_diversity_ tradeoffs, Abdollahpouri et al. [3] proposed to balance popularity and accuracy through a regularization term that penalizes the recommendation of popular items in learning-to-rank approaches. We note that popularity bias is considered a fairness issue in their work, and that considering less popular items in the recommendations is mostly equated with increased fairness.
Boratto et al. [23] and Zhu et al. [152] recently proposed "correlation-based" regularization approaches for combining the predicted scores and item popularity values. In these approaches, the influence of popularity is reduced by applying a penalty when the relevance score for an item is predicted to be high primarily _due to its popularity_. Technically, these approaches build on an idea that was proposed earlier in [21] for increasing the fairness in recommender systems.
Constraint-based Approaches in general take into account a set of rules (constraints) in order to limit the space of solutions and guide the learning process of a model toward a more efficient and accurate result. As an example, the concept of (α, β)-fairness17 was introduced by Wang et al. [131] as a constraint to equalize the exposure level among (similar) items. By embedding this constraint into a stochastic policy of a deep learning recommendation model, the popularity bias can be reduced.
Footnote 17: This fairness concept posits that "_similar items should receive similar coverage in the recommendations_". The parameters α and β determine item similarity and coverage similarity.
Another notable constraint-based approach was proposed in [120], where a technique to combine constraints and optimization tasks was adopted in a recommendation framework. In particular, the proposed technique extends the optimization objective for recommendation with a set of decision variables that define various constraints, e.g., upper and lower bounds, auxiliary variables, and weighted sums to adjust and control various features of the recommendations. For example, the general popularity of the recommendation can be controlled with an upper bound enforcing the recommendations to contain less popular items. The framework is versatile in the sense that various types of constraints can be easily incorporated. In their paper, the authors used the framework to address different problems and tasks, including provider fairness, popularity bias, and diversification.
Re-Weighting Approaches control the effect of popularity by adjusting the weights in the recommendation model in certain ways [18, 55, 57, 124, 147]. One early re-weighting approach was proposed by Steck in [124]. In this work, the trade-off between recommending long-tail items and accuracy is examined. To address this issue, the author suggests a new metric called "popularity-stratified recall", which combines the two objectives in a single performance measure in a way that recommendations from the long tail are considered to be more valuable. During training, one can then either decrease the weights of the (many) observed ratings
for the popular items or increase the weights of the _imputed_ (missing) ratings in the ranking process, see also [123].18
Footnote 18: A notable aspect of the work in [124] is that it reports the outcomes of an initial study with users. The study indicated that at least for this particular study setup, the users appreciated only a light bias towards less popular items.
Down-weighting the popular items was also proposed by Zhao et al. in [147], where the authors propose a weight adjustment mechanism that can leverage a number of factors reflective of the collected user data, e.g., the opinions of the users, co-rating information, and the values of the ratings provided by users. An example of a work that uses the opposite approach of up-weighting long tail items can be found in [55], where a boosting algorithm inspired by [117] is used to adjust the weights to boost the exposure of the less popular items.
While many of the re-weighting works discussed above adopt a _static_ approach to assess the effects of popularity bias, Zhu et al. in [151] adopt a longitudinal perspective on the development of popularity bias over time.19 The rationale behind the work is that in real recommender systems users repeatedly receive recommendations that are not necessarily interesting to them and that they therefore never consume. Such recommendations represent false positive errors and can hence be used as a source of negative feedback data. As a result, the probability of the user liking such a recommendation decreases with every new recommendation presented to the user. Following this idea and corresponding simulation results, the authors propose to gradually increase the debiasing strength of an underlying re-weighting (or: re-scaling) scheme over time through a dynamically changing hyperparameter.
Footnote 19: See also [53; 72].
What can be considered a special case of re-weighting are methods based on Inverse Propensity Scoring (IPS) [69, 92, 118]. The concept of inverse propensity has been adopted from statistics and utilized in several prior works to reduce the influence of popularity. In the context of recommender systems, the propensity score can be defined as the probability that a user will find a particular item interesting (and hence like it), based on the observed characteristics and behavior of the user. Higher propensities often indicate higher popularity, so applying inverse propensity scoring can penalize the recommendation of highly popular items and thereby further promote niche items. Schnabel et al. [118] are among the first who considered propensity scores to increase exposure for certain groups of items and hence mitigate selection bias, a phenomenon that is commonly believed to be tightly connected with popularity bias. Additionally, this work utilizes causal inference and counterfactual reasoning for unbiased recommendation quality estimation. Yang et al. [69] later described a related approach which additionally considers the dynamic aspect of propensity scoring. The authors argue that recommendation algorithms should account for user preference changes over time. Both of the previously discussed works are based on explicit user ratings. Lee et al. [92], in contrast, base their propensity scoring approach on implicit feedback (click data). This approach extends the commonly adopted positive propensities by considering negative propensities from missing
data. The authors suggest that the meaning of the missing feedback is initially ambiguous--it is unclear whether it is negative feedback or just a yet unseen item. Thus, learning to estimate true positive and true negative preferences from both clicked and missing data in an unbiased way has the potential to improve the accuracy of recommendations significantly.
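The core mechanism of IPS-based re-weighting can be sketched as follows: each observed interaction is weighted by the inverse of its (estimated) propensity, so that interactions with popular, frequently exposed items contribute less to the training objective. Approximating the propensity by the item's share of all interactions, the squared-error objective, and the clipping value are all illustrative assumptions rather than the formulations of [69], [92], or [118].

```python
import numpy as np

def ips_weighted_loss(pred, target, propensity, clip=0.05):
    """Inverse-propensity-scored squared error: observations are re-weighted by
    1/propensity, down-weighting interactions with popular (high-propensity)
    items. Clipping prevents exploding weights for extremely rare items."""
    pred = np.asarray(pred, dtype=float)
    target = np.asarray(target, dtype=float)
    p = np.clip(np.asarray(propensity, dtype=float), clip, 1.0)
    return float(np.mean(((pred - target) ** 2) / p))

# Propensities crudely approximated by each item's share of all interactions.
counts = np.array([900.0, 30.0, 5.0])
propensity = counts / counts.sum()
print(ips_weighted_loss(pred=[0.8, 0.4, 0.1], target=[1, 1, 0], propensity=propensity))
```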
Unbiased, and thus more accurate, recommendations are also the focus in [128]. In this work, Wan et al. propose a modified loss function, named "cross pairwise" loss. The authors argue that cross-pairwise loss is less prone to bias than pairwise or pointwise loss approaches since it can better optimize the predicted scores towards true relevancy scores. Furthermore, it is assumed that the proposed technique can overcome some of the limitations of IPS-based methods, namely, eliminating the need to define propensities in order to describe the exposure mechanism for the recommendation model. Generally, a limitation of propensity-based techniques is that the actual values of the propensities are initially unknown and hence need to be approximated. This makes these techniques sensitive to the choice of the propensity estimator; they can hence suffer from estimation bias, estimation errors, and propensity misspecification [140]. Saito in [114] therefore suggested a propensity-independent loss function to address these potential limitations of IPS-based methods.
Graph-based Similarity Adjustment is used to control the influence of popularity bias in graph-based recommender systems, e.g., in [37, 68], by "correcting" the way item or user similarity is defined. Chen et al. in [37] suggest an alternative to cosine similarity, which is typically used for graph-based collaborative filtering algorithms. The new similarity measure accounts for two important factors: user taste, which is represented by the user node degree, and item popularity, which is measured by item node degree. Including these two terms in a new similarity measure and controlling them with adjustable coefficients allows defining how strongly these factors influence the predicted score. This, in turn, helps mitigate popularity bias and reduce the influence of popularity. Another work using item node degree is [68], which also proposes using a novel similarity measure. The authors name it "balanced similarity index" and state that their approach is able to put more focus on items which are neither extremely popular nor unpopular. Both mentioned approaches use a coefficient to control the debiasing strength, which has to be fine-tuned to find the best trade-off between recommendation accuracy and popularity bias mitigation.
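The sketch below conveys the underlying intuition of degree-based similarity adjustment: the similarity of an item pair is damped by the node degrees (popularity) of the items involved, with a coefficient that controls the debiasing strength. The specific formula is our own simplification and differs from the measures proposed in [37] and [68].

```python
import numpy as np

def degree_adjusted_similarity(vec_i, vec_j, degree_i, degree_j, alpha=0.5):
    """Cosine similarity damped by the item node degrees: dividing by
    (degree_i * degree_j) ** alpha lowers the similarity of pairs that involve
    very popular (high-degree) items; alpha tunes the debiasing strength."""
    vec_i, vec_j = np.asarray(vec_i, dtype=float), np.asarray(vec_j, dtype=float)
    cosine = vec_i @ vec_j / (np.linalg.norm(vec_i) * np.linalg.norm(vec_j))
    return cosine / ((degree_i * degree_j) ** alpha)

# Identical co-interaction patterns, but very different item popularity (degree).
print(degree_adjusted_similarity([1, 1, 0], [1, 1, 1], degree_i=200, degree_j=300))
print(degree_adjusted_similarity([1, 1, 0], [1, 1, 1], degree_i=5, degree_j=8))
```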
Integration of Side Information is another approach to popularity bias mitigation based on additional representative (item) features. The rationale behind the approach is that popularity bias can be viewed as the lack of sufficient interaction data for unpopular items and thus the inability of the recommendation model to confidently predict the relevance score for these items. Hence, the incorporation of additional item features may help address this problem and compensate for the missing data. Moreover, collaborative filtering techniques are often considered to be more prone to reinforce popularity bias due to their sole reliance
on user interaction data [72]. Thus, incorporating content features may help to mitigate popularity bias.
In an earlier work in that direction, Sandholm and Ung [116] propose a model to generate real-time location-aware recommendations by incorporating item popularity. The approach forces the recommender to put more emphasis on the location-based relevance of an item, instead of promoting something highly popular but essentially irrelevant due to the user's current location. Similarly, Rahmani et al. in [110] utilize a set of contextual features for Point-of-Interest (POI) recommendation. Their approach incorporates not only geographical, but also social and temporal context information, combining them with context fusion. The authors then demonstrate how contextualized POI recommendations are less vulnerable to popularity bias than classic collaborative filtering approaches, even when no explicit mitigation approach is applied. Another approach proposed by Sun et al. [125] is a topic-based model enriched by incorporating social relations of users. A main assumption of this work is that modeling social relations can assist the recommender system in dealing with the lack of user interaction data for unpopular items, which in turn helps alleviate popularity bias.
Some of the works that rely on side information are not primarily focusing on lowering popularity bias. Instead, they focus on improving novelty or diversity. However, these aspects are often measured in terms of metrics that are based on item popularity statistics. Examples of such research works are [29; 63; 139], which propose multi-modal frameworks to enrich the recommendation model with various types of information for improving the recommendation quality. In [63], a model is designed to summarize sessions and create a dynamic user representation based on session interaction sequences. Building on that, the authors propose to combine multiple objectives based on diversity and relevance, using different user and item related features in the music domain. In [29] a TV-domain recommendation model is put forward based on different sources of data, including textual, audio-visual, and neural features, together with genre information. Examples of such audio-visual features are chromatic and luminance descriptors of video frames. In [139], finally, the authors utilize various types of side information to generate playlist recommendations. The paper incorporates this information in a multi-modal collaborative filtering technique to recommend relevant songs based on a playlist title and content, while keeping the recommendation diverse and novel.
Natural Language Processing-based Approaches leverage various kinds of textual information about users or items. One of the most common methods is analyzing textual information contained in user-provided reviews for the items. The approach described in [150], for example, relies both on implicit feedback data and review texts. User preference information is first extracted from the user reviews and then fused together with implicit feedback data before the user representation is learned to increase accuracy. Technically, the authors aim to mitigate popularity bias with the help of a two-headed decoder architecture and Noise-Contrastive Estimation (NCE). NCE allows training the model without the explicit assumption that missing interactions indicate a negative preference
as done in other models [134]. This way, missing interactions with unpopular items are not automatically treated as negative signals, which increases the accuracy of the recommendations for long-tail items.
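As an illustration of the general principle, the sketch below implements a generic noise-contrastive objective for a single user, contrasting observed items against items sampled from the empirical popularity distribution; it is not the specific two-headed decoder architecture of [150], and all function and variable names are assumptions.

```python
import numpy as np

def nce_loss(user_vec, item_vecs, observed, item_popularity, n_noise=5, rng=None):
    """Generic noise-contrastive estimation loss for one user (illustrative).

    Observed items are contrasted against items drawn from a noise
    distribution (here, the popularity distribution), so a missing
    interaction is not treated as an explicit negative label.
    """
    rng = rng or np.random.default_rng(0)
    p_noise = item_popularity / item_popularity.sum()

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    loss = 0.0
    for i in observed:
        # Positive term: the observed item should score above its noise baseline.
        s_pos = item_vecs[i] @ user_vec - np.log(n_noise * p_noise[i])
        loss -= np.log(sigmoid(s_pos))
        # Negative terms: sampled noise items should score below their baseline.
        for j in rng.choice(len(p_noise), size=n_noise, p=p_noise):
            s_neg = item_vecs[j] @ user_vec - np.log(n_noise * p_noise[j])
            loss -= np.log(1.0 - sigmoid(s_neg))
    return loss
```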
Li et al. in [95] employ an autoencoder architecture using text reviews to reconstruct better representations for both users and items. The goal of this work is to optimize the performance of the recommender system for all user groups simultaneously regardless of their "mainstreamness". Shrivastava et al. [122] propose a similar approach in which opinions and preferences are extracted from user reviews and subsequently combined with rating data. In addition to that, the paper introduces a mechanism to enable the recommendations to optimize multiple objective functions, with the goal of maximizing novelty and serendipity while preserving item relevance. An alternative way of using textual information is proposed in [141], where topic modeling is applied to classify the items. Technically, Latent Dirichlet Allocation is used to tag items with fine-grained genre-like "topics" to better capture user preferences. This additional meta-data is then used to enrich the recommendation model.
Generally, the described methods aid popularity bias mitigation by providing the recommendation algorithms with additional information extracted from textual data. This way, they help fill gaps in terms of sparse or missing data for tail items and ultimately enable more accurate representations of user preferences and item characteristics.
Causal Inference-based Approaches typically attempt to more deeply investigate the nature of popularity bias itself and what causes it [65; 133; 146; 149]. For example, Wei et al. in [133] model ranking prediction as a cause-and-effect relationship and determine the role of item popularity and user conformity in this relationship. The authors propose to adopt counterfactual inference to mitigate undesired popularity effects. A similar approach based on counterfactual inference is discussed in [65], with a different causality model. Both works introduce a counterfactual world to reduce the influence of popularity on the resulting recommendations.
In a related work, Zheng et al. [149] adopt causal models to describe how user interactions happen and hence try to attribute them to either user conformity or the true preferences of users. Zhang et al. in [146] also seek to remove the influence of popularity in a causal relationship, while taking into consideration the temporal aspect of recommendation and the fact that item popularity is not a constant. The authors introduce a measure called "popularity drift" to describe the shifting item popularities and predict popularity trends in the future. The authors claim that knowing these trends, a certain part of popularity bias can be actually retained to promote items that have the potential to become popular, but are not yet there and require an exposure boost.
#### 4.4.2 Post-Processing approaches
Post-processing techniques are quite popular for bias mitigation. The major benefits of post-processing approaches include their typically low cost of implementation, their versatility and their low _intrusiveness_,
i.e., post-processing techniques are commonly applied _on top_ of an underlying recommendation model. Moreover, some of the existing methods are very general and can be applied in various application domains.
Technically, the main forms of post-processing in the literature are
* re-scaling (score adjustment),
* re-ranking (reordering),
* rank aggregation
All of these methods commonly start from one given recommendation list ranked by accuracy scores, and they then incorporate additional information in the post-processing phase. _Score adjustment_ works by updating the relevance scores of a given recommendation list to compensate for popularity bias, promoting certain items or penalizing others. The updated scores are then used to re-order the list of recommended items. In the case of _re-ranking_, the item order is changed as well; however, the original relevance scores are given less weight and are discarded in some cases. Instead, these approaches often operate solely on the item rank, swapping or exchanging items to fulfill certain criteria. _Rank aggregation_ post-processing involves multiple recommendation lists produced for the same user by different models and is based on fusing these lists with rank aggregation methods. Last but not least, besides the described three methods, a post-filtering technique may simply remove certain (popular) items from a recommendation list.
In _re-scaling_, the goal is to boost or penalize certain items in the recommendation list. In the context of popularity, this could be seen as a _bias correction_ approach. The typical goals therefore are to _(a)_ include more or less popular items that could be potentially interesting to the user, _(b)_ exclude the popular items that the user is not interested in or already knows about anyway, and _(c)_ not include items that are both unpopular and uninteresting to the user. An example of a recent post-processing work can be found in [152], where the authors propose to add a _compensation score_ to the predicted preference score in a way that considers the above goals appropriately.20 In another post-processing approach, Zhu et al. [151] apply bias correction as well, however with a dynamic perspective, where bias mitigation is applied iteratively and repeatedly over time.
Footnote 20: An in-processing approach based on regularization is proposed in this work as well.
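A minimal sketch of such a score-adjustment (re-scaling) step is shown below; the form of the compensation term and its weight are illustrative assumptions rather than the exact formula proposed in [152].

```python
def rescale_scores(relevance, item_popularity, user_tail_affinity, weight=0.1):
    """Illustrative popularity compensation applied to predicted scores.

    relevance: NumPy array of model scores for one user's candidate items.
    item_popularity: normalized popularity (0..1) of the same candidates.
    user_tail_affinity: 0..1, how much this user interacts with tail items.
    """
    # Boost unpopular items more strongly for users with a taste for the tail.
    compensation = weight * user_tail_affinity * (1.0 - item_popularity)
    return relevance + compensation
```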
_Re-ranking_ appears to be the most common technique among the reviewed works. Generally, these methods attempt to re-order the items in the recommendation list in such a way that it optimizes for a certain objective metric. For example, the approach described by Abdollahpouri et al. in [9] is targeted towards balancing the relevancy and popularity of items in the list, with a flexible parameter that gives more significance to either of the features. The same objective function has been earlier introduced by Steck in [124] for an in-processing mitigation approach. Klimashevskaia et al. in [83] later on reproduced this approach, demonstrating that even though the method is able to adjust the recommendations to the user popularity preferences, this does not necessarily mitigate
platform-wide popularity bias in a significant way. In an earlier work, Abdollahpouri et al. [5] proposed an adaptation of the xQuAD query diversification algorithm for popularity bias mitigation. In a related work, the authors also investigated the performance of this method from a longitudinal perspective in [2].
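The following sketch illustrates the general re-ranking idea as a greedy selection that trades off relevance against long-tail exposure; it is a simplification of the objectives discussed above (e.g., in [9; 124]) rather than an exact reimplementation.

```python
def rerank(candidates, relevance, is_long_tail, k=10, lam=0.3):
    """Greedy re-ranking that trades off relevance against long-tail exposure.

    lam = 0 reproduces the pure accuracy ranking; larger lam promotes tail items.
    """
    selected, pool = [], list(candidates)
    while pool and len(selected) < k:
        best = max(pool, key=lambda i: (1.0 - lam) * relevance[i]
                                       + lam * (1.0 if is_long_tail[i] else 0.0))
        selected.append(best)
        pool.remove(best)
    return selected
```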
A number of re-ranking based works connect popularity bias closely to the concept of _novelty_. Both Oh et al. [105] and Bedi et al. [18] suggest ways of including more novel and underexposed items in recommendation lists to improve the utility of the recommendations. Other works aim to penalize only specific types of popular items, e.g., "blockbuster" items in [138], or implement certain application-specific features or metrics as in [130] in the context of crowdworker recommendation.
Finally, some works rely on techniques from graph and network science to rearrange the recommendation lists to achieve certain distribution goals. The visibility of items through bipartite graphs is considered in [96], and a stable matching algorithm is used in [51]. Both methods represent items and/or users as nodes of a graph and use this model to investigate and increase the exposure of items in the resulting rearranged recommendation list. Zanon et al. [143], in contrast, describe a graph-based approach of incorporating additional similarity information for re-ranking.
In _rank aggregation_, the popularity bias introduced in the item ranking during model training is counteracted by combining it with an alternative ranking. For instance, Dong et al. [43] suggest combining a given ranking with a reverse recommendation ranking via Two-Way Rank aggregation. Alternatively, item ranking can be also combined with an inverse popularity ranking for a user or a group of users, as proposed in [137]. A very particular way of relying on multiple ranked lists is proposed in [136]. Here, the idea is not to produce multiple lists and to combine them, but to _select_ one of the several pre-generated lists based on pre-defined criteria such as preference match, diversity, or popularity distribution.
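As a simple illustration of rank aggregation, the sketch below fuses an accuracy-based ranking with an inverse-popularity ranking using Borda-style scores; this is a generic example, not the specific Two-Way Rank aggregation of [43] or the group-based scheme of [137].

```python
def aggregate_rankings(relevance_ranking, inverse_popularity_ranking, w=0.5):
    """Borda-style fusion of two ranked item lists (illustrative).

    Each list is ordered from best to worst; w balances the two rankings.
    """
    items = set(relevance_ranking) | set(inverse_popularity_ranking)
    n = len(items)

    def borda(ranking):
        # Higher score for items appearing earlier; items absent from a list get 0.
        return {item: n - pos for pos, item in enumerate(ranking)}

    s_rel = borda(relevance_ranking)
    s_pop = borda(inverse_popularity_ranking)
    fused = {i: w * s_rel.get(i, 0) + (1 - w) * s_pop.get(i, 0) for i in items}
    return sorted(items, key=lambda i: fused[i], reverse=True)
```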
### Bias Utilization Methods
There are a few works which try to make use of the fact that popular items are by definition liked by many--and are thus also "safe" recommendations, see our discussions above about Netflix adding a popularity signal to its video ranker.
Zhao et al. [148] for example claim that not all item popularity is the same and it may often result from the genuine quality of an item and can thus lead to high-quality recommendations. The authors suggest to leverage this "true quality popularity" and mitigate other effects of popularity bias at the same time, disentangling them from each other. An area where the (recent) popularity of the items can be a highly-important signal is the news recommendation domain. The work in [108], for example, suggests that using article popularity can actually lead to sufficient topical diversity and coverage. A number of earlier works also demonstrate that considering the recent popularity of an article can be crucial for high recommendation accuracy as well [56; 67; 126]. Similar observations
regarding the importance of short-term popularity trends were reported for the e-commerce domain in [73].
A very different and malicious way of using the existing popularity bias of certain algorithms is discussed in [145]. Here the authors describe how popularity bias can be abused in an attack to artificially boost a target item, using the predictable behavior of a biased recommender. This vulnerability can falsely skew the popularity distribution even more, potentially leading to the loss of trustworthiness and hurting provider fairness on the platform as well. Overall, this latter work is a key example that demonstrates the importance of studying, understanding, and being able to control the popularity bias of a recommender system.
## 6 Evaluation Approaches
In this section, we review the methodology that is used in the research on popularity bias. We will first analyze which application domains are in focus and what data the researchers are using for experiments and evaluation. We will then look closer at which types of studies are performed to evaluate the quality of the recommendations and the effectiveness of popularity bias mitigation approaches.
### Domains and Datasets
Figure 8 provides an overview of the application domains that are considered in the examined works. The application domains were mainly identified based on the datasets that are used in the offline experiments. Similar to other survey works, e.g., [109], we grouped the datasets into higher-level categories as shown in Figure 8.
We can observe that the large majority of works focus on the _media_ domain, including movies, music, books, and news. Among these the movie domain is dominating, and a large number of papers rely on one of the MovieLens datasets [64]. A smaller set of works tackles the issue of popularity bias in the context of e-commerce, and a few works concentrate on the tourism-related problem of POI recommendation. For a number of other application domains, only one or a few research works were identified. We categorized them as "other" application domains, which for example include fashion, scientific articles, jokes, or games.
During the investigation of the papers considered in this survey, we noticed that a number of papers provide no specific argumentation as to why popularity bias can be harmful in a given application domain or why a specific dataset is used for the evaluation. In other cases, authors argue that popularity bias might be especially harmful in certain domains, while presenting their work based on data from domains for which it may not be immediately clear what significant harms may emerge from popularity bias, e.g., movie recommendations. According to our discussion above in Section 4.1, we often found that the research motivation is given mostly in broad terms (e.g., that the recommendations contain too many
popular items or that the "rich get richer"). This phenomenon manifests itself also in the context of the evaluation of newly proposed mitigation approaches. This may point to a certain level of overgeneralization or oversimplification of the problem, where the choice of the evaluation dataset may almost appear arbitrary and where potential idiosyncrasies of a given application are not taken into account.
The datasets used for recommender system training and evaluation in the reviewed works all demonstrate skewed popularity distributions to some extent, showing the "long tail curve" (see some examples in Fig. 9). However, they often differ significantly in terms of size, density, and popularity distributions, making it difficult to compare effects and results between datasets. Moreover, researchers sometimes apply additional data pre-processing procedures, which may not always be documented in the papers in detail. Some authors, for example, exclude cold-start items or less active users from the dataset for better training, however, based on different thresholds. These factors may further aggravate the problem of non-comparable evaluation results.
Independent of the different characteristics of the used datasets, an important aspect to question is to what extent these frequently used datasets are truly representative of real-world problems of popularity bias. Datasets like the widely used ones from MovieLens are already pre-filtered and only contain users and items for which a certain number of interactions was recorded. However, in real-world applications, e.g., in e-commerce, only one or a few interactions may be recorded for a large fraction of the users and the items, and some items may have never been purchased during the data collection period [73]. Thus, in reality, the popularity distributions might be even more skewed than what we observe in the datasets used in academia.
Finally, there are certain application domains, which are usually described in the literature as the ones that could potentially experience significant fairness issues due to popularity bias, e.g., job recommendation, healthcare, or banking
Figure 8: Application domains
Figure 9: Examples of commonly used datasets. The plots show the interaction counts for each item within the dataset on the x-axis, sorted in descending order. The Gini-index expresses the inequality of the distribution, with values closer to 1 indicating a high inequality (range: 0-1).
applications. Unfortunately, public datasets in such domains are very scarce, and it stands to question if the analyses and mitigation techniques that were done in domains like movie recommendation generalize to such critical application areas. We acknowledge how challenging it can be for researchers to obtain or even publish such data. The future availability of data in such domains is however crucial for the development of truly impactful research on fairness-related questions of popularity bias.
### Evaluation Approaches
Next, we analyze the methodologies researchers rely on when investigating popularity bias mitigation techniques for recommender systems. As done commonly in the literature, we differentiate between offline (data-based) evaluations, user studies (either in the lab or online), and field tests (A/B tests) [61]. Figure 10 shows that the landscape is very strongly dominated by offline experiments. This trend is even more pronounced in recommender systems research in general (see [75] for an earlier survey21). Only four works report the outcomes of a user study [39; 93; 124; 141], and a single work was found which examined popularity bias effects in a field test [87]. Interestingly, all works that include some form of user study are comparatively old and were published in 2015 or earlier. No work considered in our survey relied on alternative qualitative approaches like interviews or observational studies.
Footnote 21: The distribution is comparable to the field of fairness in recommender systems [42], which is also almost exclusively investigated with computational experiments
#### 6.2.1 Offline Evaluation
As noted before, the majority of the studies in this research field have primarily focused on evaluation based on offline experiments.
Figure 10: Distribution of the evaluation approaches used in the surveyed papers.
Many studies simply follow a traditional approach adopted from general machine learning research when conducting offline experiments: a pre-collected dataset is split into disjoint subsets for training, validation, and testing. This is frequently done by following common cross-validation methodologies, including k-fold cross-validation, hold-out, and leave-one-out. The split can be performed either randomly [9, 26, 38, 49, 55, 85, 97, 102, 103, 141, 145, 147, 152] or chronologically based on the timestamps of the user interactions [108, 115, 124, 146, 148]. This evaluation methodology is also applied using semi-synthetic datasets [66, 151]. The quality of recommendation, measured in terms of various evaluation metrics, is then compared before and after bias mitigation strategies are applied to the input or output of the recommender system (i.e., in the _pre-processing_ or _post-processing_ stage), or directly to the core recommender model (i.e., in the _in-processing_ stage).
The impact of popularity bias on different recommender systems and the performance of mitigation strategies can be viewed from _static (one-shot)_ and _dynamic (longitudinal)_ offline evaluation paradigms. Traditionally, the research community has focused more on the static paradigm. In this case the dataset is split for evaluation randomly and only once, often ignoring the timestamp of the feedback/interactions. Hence, this paradigm reflects the evaluation of a recommender system on an individual _"snapshot"_ of the system. Accordingly, the data used for training simulates the knowledge of the recommender about the users given at a certain point in time. The test data respectively simulate the information about users (and their preferences) that is "hidden" from the system at that point in time. The static evaluation paradigm, however, does not reflect temporal changes within the data distributions, and thus the outcomes might be less reliable as a result. Notwithstanding this limitation, this evaluation paradigm may still offer benefits for finding the most suitable design solution for an up-and-running recommender system (e.g., the best-performing algorithm) in certain situations [144].
The dynamic (longitudinal) evaluation paradigm, on the other hand, proposes a radically different perspective that can potentially lead to more trustworthy results. This evaluation paradigm primarily aims at a more continuous and long-term evaluation of a recommender system over a period of time. Hence, the performance of the recommender system is monitored considering the dynamics of the system properties and the data. Examples of the studies employing longitudinal evaluation methodologies are [27, 53, 66, 72, 97]. In this case timestamps are playing a central role in data splitting. Typically, the data is split into \(N\) time spans and for every period \(n\) the next period \(n+1\) is used as a test set. Afterward, the \(n+1\) subset is appended to the previous training set, the model is retrained on the new extended data and the process is repeated iteratively this way, simulating the temporal evolution of a recommender system (see Fig. 11). It is also possible to simulate user activity by predicting which items from the recommendation for each user will be consumed at every iteration and adding them to an extended train set instead. However, false or inaccurate predictions can lead to errors that might accumulate over time.
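A schematic sketch of such a rolling, time-based split is shown below; the column name and split frequency are assumptions for illustration.

```python
import pandas as pd

def rolling_time_splits(interactions, freq="W"):
    """Schematic longitudinal split: train on everything up to period n,
    test on period n+1, then fold the test period into the training data.

    interactions: DataFrame with a datetime 'timestamp' column (assumed name).
    """
    periods = interactions["timestamp"].dt.to_period(freq)
    ordered = sorted(periods.unique())
    for n in range(1, len(ordered)):
        train = interactions[periods < ordered[n]]
        test = interactions[periods == ordered[n]]
        yield train, test
```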
It is argued that studying the longitudinal evolution of a recommender system provides a better picture of a real-life user experience scenario. In the context of popularity bias mitigation, it can be particularly important to follow the longitudinal evaluation procedure when investigating the "reinforcement effect" of the bias in recommender systems [53]. This will allow obtaining a better reflection on the effectiveness of the mitigation strategies in real-world scenarios, where the behaviors and preferences of the users are constantly changing over time.
The differences in the evaluation methodologies often make it difficult to draw a conclusive direct comparison of different bias mitigation strategies. In addition to that, the reported results of the conducted experiments may also differ due to the dissimilarity in the characteristics of the used datasets, the chosen recommender algorithms, and even the choice of the hyper-parameters. For instance, the threshold popularity value used to divide the items into head and tail is an important factor and can substantially impact the outcome of the experiments. Many prior works considered 0.2 a suitable choice for the threshold [3, 11, 81]. Hence, they considered the top 20% of items with the largest number of interactions by users as popular items. At the same time, another group of works considered the head part to be represented by the top 10% [127] or even 1% of items [79]--hence they observed experimental outcomes that diverge from the former ones.
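For illustration, a small helper that derives the head (popular) item set for a chosen threshold could look as follows; varying the threshold between 0.2, 0.1, and 0.01 reproduces the divergent definitions mentioned above.

```python
from collections import Counter

def head_items(interactions, threshold=0.2):
    """Return the set of 'head' (popular) items for a given threshold.

    interactions: iterable of (user, item) pairs.
    threshold=0.2 marks the top 20% most-interacted items as popular;
    0.1 or 0.01 correspond to the stricter definitions mentioned above.
    """
    counts = Counter(item for _, item in interactions)
    ranked = [item for item, _ in counts.most_common()]
    n_head = max(1, int(threshold * len(ranked)))
    return set(ranked[:n_head])
```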
Notwithstanding the limitations, offline experiments can offer benefits and be indicative of the general performance of different popularity bias mitigation strategies. Moreover, it is generally agreed that a sound and comprehensive evaluation procedure may include an offline experiment followed up with an online experiment hence applying a three-step methodology [30, 60, 84, 112]: (i) identifying a set of candidate strategies from the literature and formulating a research hypothesis, (ii) comparing the performance of the candidate strategies through offline experiments based on pre-collected datasets and shortlisting the best
Figure 11: An example of week-by-week longitudinal evaluation data split for recommender system training and evaluation; from [47].
performing strategies, and (iii) conducting follow-up online experiments with real users to verify the impact of the selected strategies.
#### 6.2.2 Human-In-The-Loop Online Evaluation
Online evaluation in recommender systems typically involves simulated environments or even real-life settings in which a recommender system is tested by real users. This type of evaluation includes user studies [39, 93, 124, 141] and A/B testing [87]. The former typically requires a prototype, a mock-up recommendation platform, or simply a list of provided recommendations that the users are asked to evaluate and give their opinion on through ratings, feedback, or questionnaires. This makes it possible to observe user behavior close to a real-life scenario, without user modeling, predictions, or assumptions. Furthermore, user studies allow researchers to gather invaluable information such as the users' personal opinions and their perception of the recommendation quality. The downside of online evaluation procedures is often the complexity of the setup. Customarily, a user study platform needs to be deployed and a significant number of test users need to be incentivized to participate in the study, honestly and diligently following the procedures. These difficulties make online studies rarer in recommender system evaluation research--only four works in our literature collection reported results of a user study. Given that there are so few of these works, we can look in more detail at the setups and protocols they report.
The earliest work in our collection that included some form of user study is by Steck [124]. While the main focus of the paper is on providing a new "popularity-aware" metric for offline evaluations, the author also reports the initial outcomes of a user study in which 20 subjects participated. The task of the participants was to rank recommendation lists with different levels of popularity bias mitigation in terms of recommendation usefulness. Interestingly, it turned out that even the small intervention towards the long tail of the recommendations quickly lowered the perceived usefulness and led to a loss of user trust.
In the work by Yin et al. [141] extensive offline evaluations are complemented with a user study. In their study, 50 subjects rated movie recommendations that were generated by different algorithms, including ones optimized for long-tail recommendations, from different perspectives such as preference match, novelty, serendipity and overall assessment (quality). The results showed that their proposed method was effective in terms of increasing the novelty and serendipity level of the movies, while still being a good match for the user preferences and leading to recommendations that participants gave a high overall rating.
The brief work by Cremonesi et al. reported in [39] is entirely based on a user study. In their case, the authors created a platform simulating hotel recommendation and booking experiences. They conducted an online experiment in which 382 subjects participated, being assigned to one of six experimental groups. Three recommendation algorithms (one of them showing the most popular items) were tested in two scenarios each: (a) recommending accommodations during "low tourist" season, when all hotels are available; (b) recommending in "high tourist" season, when the most popular options are typically already
booked and unavailable. The authors attempted to measure different objective and subjective aspects, with _satisfaction_ being the central subjective factor. It turned out that during low season, a non-personalized popular item recommendation strategy was indeed leading to the highest average satisfaction. During high season, however, a hybrid method performed best in this dimension. Overall, it turns out that recommending popular items can be effective in certain cases, and, hence, that popularity bias is not necessarily always bad.
Another online user study was described in [93], where the authors built a website that recommended music artists to the users based on their existing profiles on the _last.fm_ music service. Recommendations were created through a new algorithm designed for novelty and a baseline MF-based recommender. In total, 44 subjects completed the study in which they were asked to provide feedback on the relevance and _"freshness"_ (novelty) of the artists. The obtained results mainly indicated that the new algorithm was effective in increasing novelty at the price of reduced relevance.
Overall, the user studies discussed so far indicate that there indeed may exist a commonly assumed trade-off between recommendation accuracy and popularity bias mitigation. The studies in [124] and [39], however, indicate that focusing more on long tail items can relatively quickly negatively affect the users' perception of the recommendation quality in terms of usefulness or relevance. The drop in mean relevance reported in [93] is also not very small, decreasing from 3.8 to 3.3 on a five-point scale. Looking at the scale of the studies, only one [39] involved a larger sample of participants. In the other cases, mostly a few dozen participants were recruited. Since the user studies in two cases only serve as a complement to offline experiments, few details of the experiments are reported, which can make it difficult to assess to what extent the study might generalize, e.g., to other participant groups.
A/B tests, on the other hand, are typically deployed on real-life industry platforms using recommender systems. The users on the platform are split into two (rarely more) groups of equal size. One group is a control group receiving the ordinary treatment, while the other group receives recommendations from the algorithm to be tested. In contrast to user studies, such an evaluation approach is less invasive and is often performed without the users being aware of it, to avoid priming and bias. The main drawbacks of such an evaluation are the possible costs and risks of deploying new approaches on an industry platform, and the opportunity to do so is quite rare in the research community.
In the only field A/B test in the surveyed papers, Lacic et al. [87] studied: (a) the effects of the end user devices on item exposure and click-through rates, and (b) the effects of different algorithms on users. A two-week study was conducted on an Austrian newspaper website where a personalized content-based recommender was introduced. From the obtained results the authors conclude that content-based recommendations can reduce the popularity bias for the group of anonymous users over time even during one session. Unfortunately, the study was plagued by two major public events happening during the study period. Also,
certain details about the application of the personalized method to anonymous users remained unclear.
### Evaluation Metrics
A range of metrics has been employed by the research community to evaluate the performance of mitigation strategies for popularity bias and to measure the extent of existing bias in the data. These metrics can be grouped in different ways. For instance, from the multi-stakeholder perspective mainly two groups of metrics can be identified, _user-centered_ metrics and _item-centered_ metrics. While the former group of metrics takes into account the differences among users in terms of their preferences towards popular items, the latter group tends to ignore such differences and concentrates on item qualities instead. It is essential, however, to consider both sides in the evaluation process to assess the effects of the bias and its mitigation in a comprehensive manner [4].
In this work, we, however, adopt an alternative categorization and grouping, based on the two main research goals that we found in the papers that we analyzed for our survey:
* Some metrics are purely descriptive and are commonly utilized for bias characterization and item/user profiling. This includes metrics describing popularity distributions within datasets, such as Popularity skewness, or metrics that describe user profiles like Personal Popularity Tendency or Mainstreamness, see Table 3.
* Other metrics are instead predominantly used as objectives for the popularity bias mitigation process. Item-related examples of such metrics include Catalog Coverage, Average Recommendation Popularity or Item Statistical parity, see Table 4. Metrics like Miscalibration or User Popularity Deviation, on the other hand, can serve as user-centered optimization goals for bias mitigation.
We note that the descriptive metrics can be calculated based solely on the given interaction data. The metrics that are used for steering the mitigation process commonly require a recommendation model or a simulation of a recommendation process to be assessed.
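For illustration, the sketch below computes two frequently used metrics, the Average Recommendation Popularity and a Gini index over item exposure; the implementations follow the usual forms reported in the literature, while individual papers may differ in normalization details.

```python
import numpy as np

def average_recommendation_popularity(rec_lists, item_counts):
    """ARP: mean popularity (interaction count) of recommended items,
    averaged over users. Normalization conventions vary across papers."""
    per_user = [np.mean([item_counts[i] for i in recs]) for recs in rec_lists]
    return float(np.mean(per_user))

def gini_index(exposures):
    """Gini index of how often each item appears in the recommendations
    (0 = perfectly equal exposure, values near 1 = exposure concentrated
    on very few items)."""
    x = np.sort(np.asarray(exposures, dtype=float))
    n = len(x)
    cum = np.cumsum(x)
    # Standard formula based on the cumulative distribution of sorted values.
    return float((n + 1 - 2 * np.sum(cum) / cum[-1]) / n)
```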
Table 3 shows a list of _descriptive metrics_ that we found through our literature survey. The entries in the table are organized in two subcategories for the item and user perspective, respectively.
In Table 4 we list the metrics that are used as _optimization targets_ for bias mitigation. The metrics in this table are organized in four subcategories. Metrics in the subcategory "Recommendation Popularity Level" measure how popular the generated recommendations are. Metrics in the category "Catalogue Coverage and Distribution in Recommendations" describe the fraction of categories that actually appear in the recommendations (coverage) and how often they appear (distribution). Metrics in the group "Recommendation Personalization" determine how close the item popularity distribution in the recommendations is
to the user preference. Finally, metrics in the last category, "Tail Item Prediction" assess how much the accuracy of the recommendations is affected by item popularity or unpopularity.
Both in Table 3 and in Table 4 we provide references to works in which the metric was originally defined or used the first time in one of the works reviewed for this survey. The technical descriptions of each metric can be found in the referenced literature.
Overall, we observe that a rich variety of metrics and variations thereof is used in the literature, which makes it often difficult to compare the outcomes of different studies. We note that the variety of metrics is actually even higher than indicated in the tables, as we can find different implementations for some of the metrics as well. For example, the popularity of the items is often measured by the number of interactions recorded for each item in the dataset. In some cases, however, these interaction counts are normalized, whereas in others they are not. Furthermore, special metrics like the _Blockbuster Score_[135] are sometimes used as well to assess the popularity of an individual item. We will further discuss existing issues with common evaluation approaches and metrics next in Section 7.
## 7 Discussion, Research Gaps, and Future Directions
In this section, we summarize and critically discuss the findings of our analyses, and we provide an outlook of promising directions for future research.
### Definition, Applications, and Datasets
Despite the significant uptake of research on the topic in the past ten years, no agreed-upon definition of what represents popularity bias has emerged so far, see our discussions of the various definitions in the literature in Section 2.4 and the statistics in Section 4.2 regarding the underlying researcher motivations to address issues of popularity bias.
Furthermore, we identified a number of research works where there was no detailed motivation provided in the papers on why popularity bias should be
\begin{table}
\begin{tabular}{l|l|l} \hline \hline
**Group** & **Metric Name** & **Source** \\ \hline \multirow{6}{*}{Popularity Bias} & Gini index & [11] \\ & Popularity skewness/kurtosis & [41, 87, 94] \\ & Mean/Median Popularity, Popularity Variance & [94] \\ & Popularity Bias Evaluation & [41] \\ & Long Tail Items Evaluation & [41] \\ & Popularity Drift & [146] \\ \hline \multirow{3}{*}{User Profiling / Categorizing} & Shannon entropy & [49] \\ & Personal Popularity Tendency (PPT) & [105] \\ \cline{1-1} & Mainstreamness & [25] \\ \hline \hline \end{tabular}
\end{table}
Table 3: Descriptive popularity bias metrics.
mitigated at all, i.e., which kinds of harm one seeks to avoid. In addition, no explanation is often provided on how the authors derived when a bias mitigation procedure is successful. In fact, even a reduction of the bias for a given metric by, e.g., 10%, might still lead to recommendations that contain many popular items.
In that context, a common assumption seems to be that recommending popular items is bad _per se_, and almost by definition leads to other effects such as limited diversity or a lack of fairness. As discussed earlier, at least for some
\begin{table}
\begin{tabular}{l|l|l|l} \hline \hline
**Group** & **Subgroup** & **Metric Name** & **Source** \\ \hline \multirow{9}{*}{Recommendation Popularity Level} & \multirow{9}{*}{-} & Average Popularity Count, Average Recommendation Popularity (ARP) & [2] \\ & & Popularity Count (PCount) (Non-normalized ARP) & [26] \\ & & Group Average Popularity (GAP) & [86, 102] \\ & & Average Percentage of Long Tail Items (APLT) & [5] \\ & & Popularity Lift & [8] \\ & & Discounted Cumulative Popularity (DCP) & [26] \\ & & Ideal Discounted Cumulative Popularity (IDCP) & [26] \\ & & Popularity Bias (POBK) & [26] \\ & & Supplier Popularity Deviation (SPD) & [57] \\ & & Mean/Median Popularity, Popularity Variance & [94] \\ & & Item Popularity Deviation (IPD) & [59] \\ \hline \multirow{9}{*}{Catalogue Coverage and Distribution in Recommendations} & \multirow{9}{*}{Item Distribution and Exposure} & Gini index & [11] \\ & & Entropy-Diversity & [11] \\ & & Herfindahl index & [11] \\ & & Coverage Disparity & [131] \\ & & fairRate@K & [130] \\ & & Equity of Attention for Group Fairness (EAGF) & [57] \\ & & Item Statistical Parity (ISP) & [23] \\ & Distribution & Item Equal Opportunity (IEO) & [23] \\ & and Exposure & Generalized Cross-Entropy (GCE) & [111] \\ & & Recommendation Ratio & [146] \\ & & Popularity-Opportunity & [151] \\ & & (\(\alpha,\beta\))-fairness & [131] \\ & & Bias reduction & [131] \\ & & Exposure Bias & [17] \\ \hline \multirow{9}{*}{Popularity-Aware Personalization} & \multirow{9}{*}{-} & Miscalibration (popularity) & [8] \\ & & Kullback-Leibler Divergence of Popularity Distributions & [94] \\ \cline{1-1} & & Kendall’s Tau of Popularity Distributions & [94] \\ \cline{1-1} & & Mean Absolute Deviation of Ranking Performance & [111] \\ \cline{1-1} & & User Popularity Deviation (UPDD), temporal version & [59, 83] \\ \hline \multirow{2}{*}{Tail Item Prediction Quality} & \multirow{2}{*}{-} & Popularity-Rank Correlation (item- or user-based) & [151] \\ \cline{1-1} & & Popularity Biasedness & [55] \\ \hline \hline \end{tabular}
\end{table}
Table 4: Objective-oriented popularity bias metrics.
users, the recommendation of popular items is what they expect and prefer, and some items might just be unpopular because they are of limited quality.
All in all, these observations point to a certain over-simplification of the problem and an overly abstract research operationalization, a phenomenon which can also be observed in today's research on fairness in recommender systems [42]. The fact that a large majority of the published research is based on datasets from the media domain, in particular on MovieLens datasets, may be seen as another factor that supports this hypothesis. In such a setting, the problem of mitigating popularity bias is reduced to designing or adopting algorithms that increase the value of certain computational bias metrics while not compromising recommendation accuracy too much. As such, popularity bias mitigation is seen to be not much different from approaches that seek to improve beyond-accuracy metrics such as diversity, novelty, or serendipity.
In practical applications, however, a more nuanced approach is required. Focusing the recommendations deliberately on popular items to some extent may in fact be a viable and successful strategy, see for example the discussions in the case of Netflix in [58]. In practice, two important questions in this context have to be answered: (a) when we should consider an item to be unpopular, and (b) what is the right amount of popularity bias, i.e., how do we find the right balance between recommending users what they probably like and helping them to explore new things. In many academic works on popularity bias, this balance is assumed to be given, e.g., by simply defining that the 30% least popular items are those that should be recommended more often to solve the problem.
In our work, we therefore propose a novel value and impact oriented definition of popularity bias, see Section 2.4. The main point of our definition is that popularity bias has to be addressed in case it limits the value of the recommendations or has a potentially harmful impact on some of the involved stakeholders. Adopting such a definition requires us to first think about the idiosyncrasies of the given application setting, which then allows us to select or design an appropriate computational metric. This stands in contrast to many of today's works in which the choice of the evaluation metric and of specific thresholds almost appears arbitrary.
In future works, we therefore believe that application-specific considerations have to be discussed more often, ultimately leading to research work that has the potential to be more impactful in practice. One important prerequisite to enable such works however lies in the availability of additional public datasets, in particular in domains where popularity bias and the related phenomena of fairness or diversity play a central role in society.
### Methodological Issues
The indications towards an oversimplification of the problem in today's research are corroborated by our observations reported in Section 6 on common evaluation approaches. Almost all of today's research is based on offline experiments, which divert from the question of how users would actually perceive the value of the recommendations they receive. In this context, research on popularity bias in recommender systems suffers from a general tendency in the field to rely on offline experiments [74]. In future works, therefore, research should be based much more often on experimental designs that include the human in the loop and which consider the impact of biased recommendations on the different stakeholders in a given application setting.
Clearly, offline experimentation will continue to have its place in research, e.g., to investigate if one algorithm has a stronger tendency to recommend popular items than another one, or if popularity bias may lead to reinforcement effects in a longitudinal perspective, see, e.g., [72].22 In the current literature, unfortunately, no clear standards for offline evaluations have emerged yet. As discussed earlier, a variety of evaluation metrics are used and also the evaluation protocols (e.g., in terms of data splitting) can diverge significantly, again making it difficult to assess how much progress is made in the field. This problem is aggravated by the fact that the level of reproducibility in recommender systems research, and in AI in general, is still limited [24, 52].
Footnote 22: Deciding whether a certain level of popularity bias is acceptable or even desirable will, however, continue to require an understanding of the specifics of a given application context.
Putting aside specific questions of offline experiments, we argue that more impactful research on popularity bias may only be reliably achieved if we rely more often on a richer methodological repertoire in the future. This may include both alternative forms of computational experiments, e.g., simulations to study longitudinal effects, experimental designs that involve humans in the evaluation process, as well as field studies in which the effects of popularity bias are analyzed in real-world environments. Ultimately, such an approach will require us to more frequently go beyond the comparably narrow perspective of treating recommender systems research as mostly research on algorithms. Instead, it is important to adopt a more holistic research perspective, which also considers the embedding of the recommender system in a given application and the expected impact and value for the involved stakeholders. Studying phenomena such as popularity bias without considering these surrounding factors may ultimately lead to a certain stagnation in this area, leaving the question open about how impactful such research might be in practice.
## 8 Summary
Recommender systems that have a bias towards recommending mostly popular items may be of limited value both for users and for providers, and such systems may even exert harmful effects in certain application settings. In this work, we have reviewed the existing literature on popularity bias in recommender systems. This research area is currently flourishing, partly due to its relation to such important topics as fairness. Nevertheless, we found that there still exists a multitude of future directions in this area, in particular in terms of a better understanding of the real-world implications of popularity bias.
## 9 Acknowledgements
This research was supported by industry partners and the Research Council of Norway with funding to MediaFutures: Research Centre for Responsible Media Technology and Innovation, through the Centres for Research-based Innovation scheme, project number 309339. |
2303.14101 | Compton-Getting effect due to terrestrial orbital motion observed on
cosmic ray flow from Mexico-city Neutron Monitor | We look for a diurnal anisotropy in the cosmic ray flow, using the
Mexico-City Neutron Monitor (NM) detector, due to the Earth's orbital motion
and predicted by Compton-Getting (C-G) in 1935, as a first-order relativistic
effect. The Mexico-City NM's geographic latitude is not very high
($19.33^{\circ}$N), and it has a high cutoff geomagnetic rigidity (8.2 GV) and
mountain altitude (2274 m asl) favoring the observation of the C-G effect.
Furthermore, during the solar cycle minima, the galactic cosmic ray flux is
maxima, and the solar magnetic field gets weakened, with a dipolar pattern. Its
influence on cosmic rays reaching Earth is the smallest. Analysis of the
combined counting rate during two solar minima, 2008 and 2019, from Mexico-city
NM's data yields the C-G effect with an amplitude variation of (0.043$\pm$
0.019)\%, and phase of (6.15$\pm$ 1.71) LT. The expected amplitude variation is
0.044\%, and the phase of 6.00 LT. | Carlos Navia, Marcel de Oliveira, Andre Nepomuceno | 2023-03-12T14:55:35Z | http://arxiv.org/abs/2303.14101v1 | Compton-Getting effect due to terrestrial orbital motion observed on cosmic ray flow from Mexico-city Neutron Monitor
###### Abstract
We look for a diurnal anisotropy in the cosmic ray flow, using the Mexico-City Neutron Monitor (NM) detector, due to the Earth's orbital motion and predicted by Compton-Getting (C-G) in 1935, as a first-order relativistic effect. The Mexico-City NM's geographic latitude is not very high (19.33\({}^{\circ}\)N), and it has a high cutoff geomagnetic rigidity (8.2 GV) and mountain altitude (2274 m asl) favoring the observation of the C-G effect. Furthermore, during the solar cycle minima, the galactic cosmic ray flux is maxima, and the solar magnetic field gets weakened, with a dipolar pattern. Its influence on cosmic rays reaching Earth is the smallest. Analysis of the combined counting rate during two solar minima, 2008 and 2019, from Mexico-city NM's data yields the C-G effect with an amplitude variation of (0.043\(\pm\) 0.019)%, and phase of (6.15\(\pm\) 1.71) LT. The expected amplitude variation is 0.044%, and the phase of 6.00 LT.
Keywords: sun: activity, high-speed stream, cosmic rays modulation
## 1 Introduction
The observed particle distributions in two frames of reference in relative motion are different. For instance, if the particle distribution is isotropic in a given reference frame, it must show an anisotropy in a reference frame moving relative to the previous one, along the direction of motion. That effect is known as the Compton-Getting (C-G) effect (Compton & Getting, 1935). Thus, considering that galactic cosmic rays have an almost isotropic distribution within the heliosphere (as is expected during the solar cycle minima), an anisotropy is expected in the daily distribution of the cosmic ray intensity at Earth due to the Earth's orbital motion around the Sun. The cosmic ray intensity should be higher coming from the direction in which the Earth is moving, i.e., around 06:00 LT (Cutler & Groom, 1986).
However, as Earth's orbital velocity is relatively small, with an average value of 29.78 km/s, observation of the C-G effect requires high-energy cosmic particles to minimize distortions due to the interplanetary magnetic field, the solar wind, and the Earth's magnetic field.
The Earth's orbital motion has been measured using the underground muon flux, because underground muons come from galactic cosmic rays of high rigidity (above 1000 GV). Indeed, an anisotropy due to the C-G effect has been reported, with an amplitude of about 0.025%; the parent particles were galactic cosmic rays with a rigidity of around 1.5 TeV/c, observed over a period of 5.4 years (Cutler & Groom, 1986).
Other measurements indicate that the amplitude of the anisotropy in the secondary cosmic ray flux due to the C-G effect is no higher than 0.1% (Clay & Dawson, 1997). On the other hand, measurements of the C-G effect in spacecraft, through the hydrogen flux in the keV energy range, indicate that the C-G effect distorts the hydrogen flux. Monte Carlo simulations show that in the ram frame, where the spacecraft moves toward the emission source, the C-G effect forces the hydrogen flux toward the ecliptic plane, while the opposite occurs in the anti-ram frame (Zirnstein et al., 2013). The energy spectrum of galactic cosmic rays follows a power law, decreasing with energy, from around \(\sim\) 8.0 GeV up to ultra-high energies. In detectors located at sites with a geomagnetic rigidity cutoff above 9.0 GV, 80% of the secondary particles at ground level come from galactic cosmic rays (mainly protons) of up to 2 TeV (Dasso et al., 2012). Places with a high geomagnetic rigidity cutoff and located at small geographic latitudes are therefore more effective for observing the C-G effect. However, there is a side effect: the greater the rigidity cutoff, the smaller the cosmic ray flux. Thus, the observation of the C-G effect requires long periods of observation.
However, we show that some NMs, especially those located at low latitudes and with a high geomagnetic rigidity cutoff and altitude, such as the Mexico-City NM, can observe the C-G effect. Neutron monitors detect a large variety of secondary particles, mainly nucleons, produced by the galactic cosmic rays that reach the Earth's atmosphere (Belov et al., 2018; Vaisanen et al., 2021).
The organization of the paper is as follows: In Section 2, we present the data analysis, with a brief description of the Mexico-City NM and the theoretical prediction of the C-G effect, followed by the analysis of the data during two solar minima (2008 and 2019), considered both separately and together. In Section 3, we present a preliminary analysis of the seasonality of the C-G effect. The correlation between the phases of the C-G effect and the E-W asymmetry is analyzed in Section 4. Finally, in Section 5, we present a summary and the conclusions of the article.
## 2 Analysis
### Mexico-City NM data
The Neutron Monitor Database (NMDB) ([http://www.nmdb.eu](http://www.nmdb.eu)) provides cosmic ray data from at least 18 neutron monitors distributed around the world and operated in real-time. Among these, the Mexico-City NM is one of those located at a not very high geographic latitude (\(19.33^{\circ}\) N). In addition, it has a relatively high geomagnetic rigidity cutoff (8.2 GV) and a mountain altitude (2274 m asl) (Stenkin et al., 2001; Vargas Cardenas and Valdes-Galicia, 2012). These characteristics are favorable for observing the C-G effect.
The Mexico-city NM has been in continuous operation since 1990. Fig. 1 shows the pressure- and efficiency-corrected cosmic ray time profiles from these three decades. The left (red) scale is associated with the counting rate, while the scale at right (blue) is associated with the monthly sunspot number. The well-known inverse correlation between the cosmic ray counting rate and the sunspot number is evident.
As already indicated, data from the Mexico City NM are available from NMDB. Although the data were corrected for pressure and efficiency, they still show some fluctuations in the counting rate.
During the minima of the solar cycles, the Sun's magnetic field pattern is like a dipole structure. Its strength weakens and provides less shielding to the galactic cosmic rays, which arrive in the Earth's environment with an almost isotropic distribution. This behavior is responsible for the inverse relationship between the
Figure 1: Left red scale: Pressure & efficiency corrected cosmic ray count rate profiles, from Mexico-City NM, from 1991 to 2022. Right blue scale: monthly sunspot number profiles for the same period.
galactic cosmic ray intensity and the sunspot number. So, during the minima of the solar cycles, the galactic cosmic ray intensity at Earth is higher. These characteristics are essential for observing the small anisotropy due to the terrestrial motion.
In more than 30 years of continuous operation, the Mexico-City NM has recorded three solar minima. The start (minimum) of solar cycles 23, 24, and 25 was in August 1996, December 2008, and December 2019, respectively (Hathaway, 2015; Pishkalo, 2019). The maximum cosmic ray counting rate during the minima of cycles 24 and 25 was around 5% higher than the maximum counting rate of cycle 23. Fig. 1 summarizes the situation.
### Compton-Getting effect prediction
Assuming that an isotropic galactic cosmic ray flow reaches the Earth with a power-law energy spectrum, such as \(E^{-\gamma}\), with \(\gamma=2.7\), a ground-level detector would see an increase in the count rate when its field of view looks along the direction of Earth's orbital motion, predicted as (Gleeson & Axford, 1968)
\[\frac{\Delta f}{f}=(\gamma+2)\frac{v}{c}\cos\lambda, \tag{1}\]
where \(v\) is the mean speed of Earth's orbital motion (29.78 km/s). Here, we assume that the galactic cosmic ray speed is close to c (the speed of light in a vacuum); this approximation is valid for cosmic rays with energies above the GeV scale. The angle \(\lambda\) is the angle between the direction of the detector's sensitivity and Earth's velocity vector. For non-directional (large field of view) detectors such as NMs, \(\lambda\) is close to the geographic latitude angle of the detector's location. The C-G effect amplitude variation predicted for the Mexico-City NM is 0.044%, with a phase at 06:00 LT.
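As a quick numerical check of Eq. (1), a short sketch using the values quoted above (mean orbital speed and the geographic latitude of the Mexico-City NM) gives the predicted amplitude:

```python
import math

# Quick check of Eq. (1) with the values quoted in the text.
gamma = 2.7                  # spectral index of the galactic cosmic ray spectrum
v = 29.78                    # Earth's mean orbital speed, km/s
c = 299792.458               # speed of light, km/s
lam = math.radians(19.33)    # geographic latitude of the Mexico-City NM

amplitude = (gamma + 2) * (v / c) * math.cos(lam)
print(f"{amplitude * 100:.3f} %")   # ~0.044 %, with the expected phase at 06:00 LT
```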
### Cosmic ray anisotropies at Earth
In the region of energies below \(\sim\)50 TeV, the observed anisotropies of galactic cosmic rays mean that their propagation in the inner heliosphere (interplanetary space) is not isotropic. There are gradients parallel and perpendicular to the ecliptic plane. They are responsible for diurnal solar anisotropy and North-South anisotropy, among other smaller ones. Cosmic ray count rate at ground level provides valuable insight into the processes described above (Asipenka et al., 2009).
The main anisotropy observed at ground level is the diurnal solar anisotropy. Galactic cosmic rays (with rigidity below \(\sim\)50 GV) propagating in the plane of the ecliptic corotate with the interplanetary magnetic field (IMF) (Parker, 1964; Axford, 1965). When the flux reaches Earth, it produces the diurnal solar anisotropy, approximately perpendicular to the Sun-Earth line, with a phase of about 15:00 LT (under positive solar cycle polarity \(qA>0\)) and about 18:00 LT (under negative solar cycle polarity \(qA<0\)) (Sabbah, 2013). Like the C-G effect, the diurnal anisotropy depends on geographic latitude \(\lambda\) as \(\cos\lambda\)(Rao, 1972). Thus, the greater the latitude of the site, the smaller the amplitude of the diurnal solar anisotropy, and it disappears in the polar regions.
Here we highlight an anisotropy due to the Earth's orbital motion, the so-called C-G effect, by analyzing the daily counting rates at the Mexico City NM. We select three years, 1996, 2008, and 2019, during the solar minima, when the galactic cosmic ray flux reaching Earth is maximum. In these years, the Sun's magnetic field is weakened and resembles a well-behaved magnetic dipole. Consequently, the Sun's magnetic effects on cosmic rays are smaller.
However, the 1996 data from the Mexico-City NM are not available at NMDB at 1 h or shorter time intervals, which makes that analysis impossible. So, the 1996 NM data were not included, and the analysis is restricted to the years 2008 and 2019.
Let's start with the 2008 data (minimum at the start of cycle 24). Fig. 2 (top panel) shows the monthly counting rate time profiles (averaging 10 min) from the Mexico-City NM data. Fig. 2 (bottom panel) shows the mean daily count rates for the full year 2008. The two curves are Gaussian fits around the first and second peaks. The first peak is identified as due to the C-G effect, originating from the terrestrial motion, and the second one as the diurnal solar variation due to the corotating solar wind structure reaching the Earth.
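A minimal sketch of such a two-Gaussian fit to the diurnal count-rate profile is shown below; the data file name, binning, and initial guesses are assumptions for illustration, not the exact procedure used for Fig. 2.

```python
import numpy as np
from scipy.optimize import curve_fit

def two_gaussians(t, base, a1, mu1, s1, a2, mu2, s2):
    """Constant baseline plus two Gaussian peaks (C-G effect + solar diurnal)."""
    g1 = a1 * np.exp(-0.5 * ((t - mu1) / s1) ** 2)
    g2 = a2 * np.exp(-0.5 * ((t - mu2) / s2) ** 2)
    return base + g1 + g2

# Hypothetical input: mean count rate per half-hour bin of local time (48 bins).
rate = np.loadtxt("mexico_nm_2008_halfhour.txt")
hours = np.arange(len(rate)) * 0.5

# Initial guesses place the peaks near 06:00 LT (C-G) and ~15:00 LT (solar diurnal).
p0 = [rate.mean(), 1e-3 * rate.mean(), 6.0, 2.0, 1e-3 * rate.mean(), 15.0, 2.0]
params, _ = curve_fit(two_gaussians, hours, rate, p0=p0)

amp_cg_percent = 100.0 * params[1] / params[0]   # C-G amplitude variation in %
phase_cg_lt = params[2]                          # C-G phase, local time
```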
Table 1 indicates the amplitude variation in percentage relative to the daily mean counting rate in 2008 due to the C-G effect and diurnal solar variations, respectively. Table 1 also indicates the phase in LT for these two peaks.
Note that in 2008 the amplitude variation due to daily variation is around 18% greater than the amplitude variation due to the C-G effect.
Fig. 3 shows the same type of analysis for the 2019 data (minimum start of cycle 25). Also, Table 1 indicates the amplitude variation in percentage relative to the daily mean counting rate and the phase in LT due to the C-G effect and diurnal solar variations, respectively.
In most cases, the daily solar variation amplitude is higher than the C-G effect amplitude. However, note that in 2019, there is an anomaly. The amplitude variation due to the C-G effect is around 57.5% greater than the amplitude variation due to diurnal solar variation.
Finally, Fig. 4 represents a combined analysis including data from the 2008 and 2019 values. Again, Table 1 shows the values found for the amplitude variations due to the C-G effect and diurnal solar variation, respectively, as well as their phases.
Figure 3: Same as Fig. 2, but for the year 2019.
Figure 2: Pressure & efficiency corrected counting rate, according to Mexico-City NM data. Top panel: monthly (averaging 10 min) counting rate during 2008. Bottom panel: hourly (averaging 10 min) counting rate during 2008. The bottom panel includes the two Gaussian fits for the C-G effect and diurnal solar variation, respectively.
Figure 4: Hourly (averaging 10 min) counting rate from Mexico-City NM data, during combined years 2008-2019 (corrected by pressure & efficiency). The figure includes the two Gaussian fits for the C-G effect and diurnal solar variation, respectively.
\begin{table}
\begin{tabular}{l|c c c c} \hline \hline & C-G Amplitude (\%) & C-G Phase LT & SD Amplitude (\%) & SD Phase LT \\ \hline Observed (2008) & 0.006\(\pm\) 0.006 & 06.17\(\pm\)1.20 & 0.033\(\pm\) 0.066 & 14.57\(\pm\)2.04 \\ Observed (2019) & 0.073\(\pm\)0.066 & 05.70\(\pm\)1.36 & 0.042\(\pm\)0.040 & 12.00 \(\pm\) 1.54 \\ Obs.(2008-2019) & 0.043\(\pm\) 0.019 & 6.15\(\pm\) 1.71 & 0.031\(\pm\) 0.014 & 12.90\(\pm\) 1.97 \\ Predicted & 0.044 & 06:00 & \(<\)0.6 & 15:00-18:00 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Amplitude variation and phase to the Compton-Getting (C-G) effect and for the Solar Diurnal (SD) variation. The observation is from Mexico-City NM data.
We would point out that the amplitude variation and phase for the C-G effect obtained in the combined data analysis (2008-2019) agree with expected values.
## 3 Seasonality of the C-G effect
Despite low statistics, we also examine the cosmic-ray diurnal variation from Mexico-City NM data over 2019 according to the seasons. The dates used for the seasons are: Winter (01 January to 20 March); Spring (20 March to 21 June); Summer (21 June to 23 September); Autumn (23 September to 31 December).
We see a significant variation, as shown in Fig. 5, which displays the hourly (averaging 30 min) counting rate per second during the 12 months of 2019, grouped by season.
From Fig. 5, we can also see that the count rate during winter is the one that contributes most to the C-G effect observation. We can also observe that the counting rate during autumn does not contribute to the C-G effect observation but contributes significantly to the diurnal solar variation.
Fluctuations in the interplanetary magnetic field and the E-W asymmetry of the cosmic ray flow (see next section) are responsible for the phase fluctuations of the Compton-Getting effect and of the diurnal solar variation around values that are, in most cases, earlier than expected. The details of this mechanism are the subject of the next section.
## 4 C-G effect and the West-East asymmetry
From the 1930s onwards, it has been known that the flow of cosmic rays reaching Earth from the West is higher than from the East; this is known as the East-West (E-W) effect (Kamiya, 1963; Dorman et al., 1967). This effect is due to the deflection of the cosmic-ray charged particles by the Earth's magnetic field. The count excess from the West direction means that primary cosmic rays are predominantly positively charged particles.
Observations of muons detected by underground telescopes allow one to obtain the E-W asymmetry of their parent particles, galactic cosmic rays with high magnetic rigidity (above 1000 GV) (Yasue et al., 1991).
So, cosmic ray directional telescopes can obtain the phase of the E-W asymmetry, and from it the phase of the C-G effect, by adding (clockwise) a right angle (\(=+6\) hr) to the E-W asymmetry phase:
\[Phase(C-G)=Phase(E-W)+6\ hr. \tag{2}\]
The E-W asymmetry is responsible for changing the phase of the C-G effect to times earlier than expected. Fig. 6 details this mechanism.
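A minimal helper expressing Eq. (2) and its inverse; applied below to the combined-data C-G phase of 06.15 LT reported in Section 5, it gives the E-W phase of about 0.15 LT quoted there.

```python
def cg_phase_from_ew(phase_ew):
    """Eq. (2): the C-G phase follows the E-W asymmetry phase by a right angle (6 hr)."""
    return (phase_ew + 6) % 24

def ew_phase_from_cg(phase_cg):
    """Inverse relation, used to predict the E-W phase from an observed C-G phase."""
    return (phase_cg - 6) % 24

print(ew_phase_from_cg(6.15))   # combined 2008-2019 C-G phase 06.15 LT -> E-W phase ~0.15 LT
```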
The E-W asymmetry can also explain the shift of the phase of the diurnal solar variation to times earlier than expected. In the case of 2019 (beginning of solar cycle 25), with the solar cycle's polarity conditions (\(qA>0\)), the expected phase for the diurnal solar variation is 15 hr LT, as shown by the right arrow at the top of Fig. 5, where shifts of up to 3 hr relative to the expected value are observed, while the average value in the (averaging 10 min) data is 12.90\(\pm\) 1.97 LT (see Table 1).
## 5 Discussions and conclusions
We present a study on the daily variation of count rate at the Mexico City NM detector located at low geographic latitude (19.33\({}^{\circ}\)N) and a high geomagnetic rigidity cutoff (8.2 GV). We highlight the study of anisotropy due to the terrestrial orbital movement known as the C-G effect.
To optimize the cosmic ray detection at ground level (2274 m asl), we used the 2008 and 2019 data, which coincide with the last two solar minima, when the flow of galactic cosmic rays reaching Earth is highest and the Sun's magnetic field is weakened, with a dipolar pattern, so that its influence on galactic cosmic rays is lowest.
Two modulations in the cosmic ray flux still survive in the Mexico-NM detector, even after corrections for variations in barometric pressure and efficiency. The first is in the early hours of the day (expected 6:00 hr LT), consistent with a modulation due to the Earth's orbital motion, i.e., the C-G effect, predicted by Compton & Getting (1935) as a first-order relativistic effect.
The second one is the known diurnal solar variation, in the early hours of the afternoon (expected between 15:00 and 18:00 LT). The cosmic ray diffusion theory (Parker, 1964; Axford, 1965), together with other factors, such as the scattering effect due to irregularities in the solar magnetic field (Jokipii & Parker, 1969; Levy, 1976) and latitude effects (Rao, 1972), can describe the diurnal solar variation.
Table 1 summarizes the results, including the counting rate variation and the phase of both anisotropies, the C-G effect, and diurnal solar variation, respectively. Both the counting rate variation and phase for the C-G effect, obtained from combined data (2008-2019) from Mexico City NM, agree with those predicted.
We have also looked for the seasonality of the C-G effect across the seasons of 2019. We observe a significant variation. The same is true for the solar diurnal variation. However, due to the low statistics, the results on the seasonality of the two anisotropies are preliminary.
Even so, the data obtained during the winter are the ones that contribute the most to the C-G effect. The
data collected in autumn have no contribution to the C-G effect but contribute significantly to the diurnal solar variation. Fig. 5 summarizes the results.
Finally, we show that, in addition to fluctuations in the interplanetary magnetic field, the E-W effect on the cosmic ray flow reaching Earth is also responsible for the dispersion observed in the phases of both anisotropies. In most cases, the E-W effect is responsible for bringing the phases forward, sometimes several hours ahead of the expected ones. In this study, this effect is more pronounced in the diurnal solar variation. We highlight a correlation between the phases of the E-W and C-G effects (see Eq. 2) (Yasue et al., 1991). In the present case, the C-G effect phase for the combined data (2008-2019) is 06.15 LT. Consequently, the E-W phase will be close to 0.15 LT. This result is a prediction for the E-W phase at the Mexico City location.
## 6 Acknowledgments
We acknowledge the NMDB database (www.nmdb.eu), founded under the European Union's FP7 programme (contract no. 213007) for providing data. Mexico City neutron monitor data were kindly provided by the Cosmic Ray Group, Geophysical Institute, National Autonomous University of Mexico (UNAM), Mexico. This work is supported by Fundacao de amparo a Pesquisa do Estado do Rio de Janeiro (FAPERJ) under Grant E-26/010.101128/2018.
Figure 5: Pressure & efficiency corrected hourly (averaging 30 min) counting rate from Mexico-City NM data during 2019, grouped by season.
Figure 6: Scheme showing how to obtain the East-West asymmetry phase from the C-G effect phase or vice versa. Deflection of cosmic rays by Earth’s magnetic field in the west direction is responsible for the phase of the C-G effect being about 1 hour earlier than expected. |
2309.01231 | Two Games on Arithmetic Functions: SALIQUANT and NONTOTIENT | We investigate the Sprague-Grundy sequences for two normal-play impartial
games based on arithmetic functions, first described by Iannucci and Larsson in
\cite{sum}. In each game, the set of positions is N (natural numbers). In
saliquant, the options are to subtract a non-divisor. Here we obtain several
nice number theoretic lemmas, a fundamental theorem, and two conjectures about
the eventual density of Sprague-Grundy values.
In nontotient, the only option is to subtract the number of relatively prime
residues. Here we are able to calculate certain Sprague-Grundy values, and start
to understand an appropriate class function. | Paul Ellis, Jason Shi, Thotsaporn Aek Thanatipanonda, Andrew Tu | 2023-09-03T17:38:01Z | http://arxiv.org/abs/2309.01231v1 | # Two Games on Arithmetic Functions: Saliquant and Nontotient
###### Abstract.
We investigate the Sprague-Grundy sequences for two normal-play impartial games based on arithmetic functions, first described by Iannucci and Larsson in [IL]. In each game, the set of positions is \(\mathbb{N}\). In saliquant, the options are to subtract a non-divisor. Here we obtain several nice number theoretic lemmas, a fundamental theorem, and two conjectures about the eventual density of Sprague-Grundy values.
In nontotient, the only option is to subtract the number of relatively prime residues. Here we are able to calculate certain Sprague-Grundy values, and start to understand an appropriate class function.
## 1. Introduction
In this paper, we study two of the games introduced by [IL]. Their rules are as follows.
1. Saliquant. Subtract a non-divisor: For \(n\geq 1\), \(\operatorname{opt}(n)=\{n-k:1\leq k\leq n:k\nmid n\}\).
2. Nontotient. Subtract the number of relatively prime residues: For \(n\geq 1\), \(\operatorname{opt}(n)=\{n-\phi(n)\}\), where \(\phi\) is Euler's totient function.
In each case, we examine the normal-play variant only, so the usual Sprague-Grundy theory applies. In particular, the _nim-value_ of a position \(n\) is recursively given by
\[\mathcal{SG}(n)=\operatorname{mex}\{\mathcal{SG}(x)\mid x\in\operatorname{opt}(n)\},\]
where \(\operatorname{mex}(A)\) is the least nonnegative integer not appearing in \(A\). Chapter 7 of [LIP] gives a readable overview for the newcomer. Note that for games of no choice, such as nontotient, \(\mathcal{SG}(n)\) calculates the parity of the number of moves required to reach a terminal position. The sole terminal position for nontotient is \(1\).
## 2. Let's play saliquant!
Iannucci and Larsson give a uniform upper bound for nim-values of saliquant positions and show that odd positions attain this bound:
**Lemma 2.1** ([IL],Theorem 4).: _In saliquant,_
* _If_ \(n\) _is odd, then_ \(\mathcal{SG}(n)=\frac{n-1}{2}\)__
* _For all_ \(n\geq 1\)_,_ \(\mathcal{SG}(n)<\frac{n}{2}\)__
Our task, therefore, will be to investigate the nim-values of even positions. The first few such values are:
\begin{tabular}{c|c c c c c c c c c c c c c c c c c c c c c c c c c c c} \(n\) & 2 & 4 & 6 & 8 & 10 & 12 & 14 & 16 & 18 & 20 & 22 & 24 & 26 & 28 & 30 & 32 & 34 & 36 & 38 & 40 & 42 \\ \hline \(\mathcal{SG}(n)\) & 0 & 1 & 1 & 3 & 2 & 4 & 6 & 7 & 4 & 7 & 5 & 10 & 12 & 10 & 13 & 15 & 8 & 13 & 9 & 17 & 17 \\ \end{tabular}
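These values are straightforward to reproduce by brute force from the definitions; the following sketch (our illustration, not taken from [IL]) computes the saliquant nim-values and regenerates the even entries above.

```python
from functools import lru_cache

def mex(values):
    """Least non-negative integer not in `values`."""
    vs, g = set(values), 0
    while g in vs:
        g += 1
    return g

@lru_cache(maxsize=None)
def sg(n):
    """Sprague-Grundy value of n in SALIQUANT (subtract a non-divisor)."""
    return mex(sg(n - k) for k in range(1, n + 1) if n % k != 0)

# reproduce the table of even values above
print([(n, sg(n)) for n in range(2, 44, 2)])
```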
First we can establish some particular cases where the nim-value will be below the uniform upper bound given in the last part of the Lemma:
**Lemma 2.2**.: __
* _If_ \(3\mid n\)_, then_ \(\mathcal{SG}(2n)\leq n-2\)_._
* _If_ \(5\mid n\)_, then_ \(\mathcal{SG}(4n)\leq 2n-3\)_._
Proof.: If \(3\mid n\), then \(2n-4\) is the largest possible option of \(2n\). So by Lemma 2.1 all options have a nim-value of at most \(n-3\). Hence \(\mathcal{SG}(2n)\leq n-2\).
Similarly, if \(5\mid n\), then \(4n-6\) is the largest possible option of \(4n\), with the exception of \(4n-3\). So all options have a nim-value of at most \(2n-4\), or exactly \(2n-2\). Hence \(\mathcal{SG}(4n)\leq 2n-3\).
Note that the former bound is sharp. For example, setting \(n=15\), we see that \(\mathcal{SG}(30)\) is \(13\). Next we establish a uniform lower bound.
**Lemma 2.3**.: _If \(p\) is the smallest prime divisor of \(n\), then \(\mathcal{SG}(n)\geq\mathcal{SG}(\frac{p-1}{p}n)\); in particular, \(\mathcal{SG}(2n)\geq\mathcal{SG}(n)\)._
Proof.: Let \(n-k<\frac{p-1}{p}n\), where \(p\) is the smallest prime divisor of \(n\). Then \(\frac{n}{p}<k<n\), and so \(k\nmid n\). Hence \(n-k\) is an option of \(n\). In particular, \(n\) has every option which \(\frac{p-1}{p}n\) has. Thus, \(\mathcal{SG}(n)\geq\mathcal{SG}(\frac{p-1}{p}n)\).
**Corollary 2.4**.: \(\mathcal{SG}(n)\geq\frac{n-2}{4}\) _for all \(n\)._
Proof.: Lemma 2.1 establishes this for odd \(n\).
Next if \(n=2k\), where \(k\) is odd, then Lemma 2.3 tells us that \(\mathcal{SG}(n)\geq\mathcal{SG}(k)=\frac{k-1}{2}=\frac{n-2}{4}\).
Now let \(n=2^{m}k\), where \(k\) is odd and \(m\geq 2\). Then \(n-(k+2),n-(k+4),\ldots,1\) are all options of \(n\), with nim-values \(\frac{1}{2}(n-k-3),\frac{1}{2}(n-k-5),\ldots,0\), respectively. Thus
\[\mathcal{SG}(n)\geq\frac{1}{2}\left(n-k-1\right)\geq\frac{1}{2}\left(n-\frac{n}{4}-1\right)>\frac{1}{2}\left(\frac{n}{2}-1\right)=\frac{n-2}{4}\]
Next we prove a key lemma about the nim-values of even positions.
**Lemma 2.5**.: _If \(\mathcal{SG}(2n)=n-k\), then \(2k-1\mid n\)._
Proof.: Suppose \(\mathcal{SG}(2n)=n-k\). Then \(2n\) has no option of nim-value \(n-k\). Since \(\mathcal{SG}(2n-2k+1)=n-k\), it is not an option. In other words, \(2k-1\mid 2n\) and hence \(2k-1\mid n\).
From here, we can establish nim-values of several particular cases of even numbers. To start, the previous result immediately narrows the possibilities of double a prime or semiprime.
**Corollary 2.6**.: _Let \(p,q\) be odd primes, then_
* \(\mathcal{SG}(2p)=p-1\) _or_ \(p-\frac{p+1}{2}=\frac{p-1}{2}\)_; and_
* \(\mathcal{SG}(2pq)=pq-1,pq-\frac{p+1}{2},pq-\frac{q+1}{2}\)_, or_ \(pq-\frac{pq+1}{2}=\frac{pq-1}{2}\)_._
We next refine the first bullet point.
**Lemma 2.7**.: _Let \(p\geq 5\) be prime. Then the only possible option of \(2p\) with nim-value \(\frac{p-1}{2}\) is \(p+1\). Hence_
* \(\mathcal{SG}(p+1)=\frac{p-1}{2}\implies\mathcal{SG}(2p)=p-1\)__
* \(\mathcal{SG}(p+1)\neq\frac{p-1}{2}\implies\mathcal{SG}(2p)=\frac{p-1}{2}\)_._
Note that if \(p=3\), then \(p+1=4\) is not an option of \(2p=6\).
Proof.: Let \(x\in\operatorname{opt}(2p)\) such that \(\mathcal{SG}(x)=\frac{p-1}{2}\).
By Lemma 2.1, if \(x\) were odd, then we would have \(x=p\), but \(p\) is not an option of \(2p\). So let \(x=2n\) for some \(n\). By Lemma 2.5, since \(\mathcal{SG}(2n)=\frac{p-1}{2}=n-(n-\frac{p-1}{2})\), we have \(2(n-\frac{p-1}{2})-1\mid n\), and so \(2n-p\mid n\). On one hand, this implies that \(n<p\).
On the other hand, it means that we can write \(n=d(2n-p)=2dn-dp\) for some \(d\in\mathbb{N}\). Then \(n\mid dp\) and \(d\mid n\). Now since \(n<p\) and \(p\) is prime, we have \(n\mid d\). Finally, since \(d\mid n\), this means \(n=d\), and so \(2n-p=1\) or \(x=2n=p+1\).
In fact, this is enough to generate infinitely many examples for which our uniform lower bound is attained.
**Theorem 2.8**.: _If \(p\) is prime and \(p\equiv 5\bmod 6\), then \(\mathcal{SG}(2p)=\frac{p-1}{2}\)._
Proof.: Let \(p\) be prime where \(p\equiv 5\bmod 6\). We claim that \(\mathcal{SG}(p+1)\neq\frac{p-1}{2}\), and so the previous lemma implies that \(\mathcal{SG}(2p)=\frac{p-1}{2}\).
Indeed, since \(p\equiv 5\bmod 6\), we have \(p+1\equiv 0\bmod 6\). In particular, \(1,2,3\mid p+1\), so the largest possible option of \(p+1\) is \((p+1)-4=p-3\). So by Lemma 2.1, for all \(y\in\operatorname{opt}(p+1)\), \(\mathcal{SG}(y)<\frac{y}{2}\leq\frac{p-3}{2}\), that is \(\mathcal{SG}(y)\leq\frac{p-5}{2}\). Hence \(\mathcal{SG}(p+1)\leq\frac{p-3}{2}\).
**Corollary 2.9**.: _There are infinitely many \(n\in\mathbb{N}\) such that \(\mathcal{SG}(n)=\frac{n-2}{4}\)._
Proof.: It is well known that there are infinitely many primes \(p=5\bmod 6\). For each of these \(p\), letting \(n=2p\), we have \(\mathcal{SG}(n)=\mathcal{SG}(2p)=\frac{p-1}{2}=\frac{n-2}{4}\).
It is possible to keep refining this inquiry about numbers which are twice an odd. For example Corollary 2.6 could be extended for more than 2 odd prime factors, but we don't see how helpful it is. Instead, we investigate the remaining cases by decomposing even numbers as an odd number times a power of 2. As a first step, we can compute exact nim-values in the case that the odd part is 1, 3, 5, or 9.
**Lemma 2.10**.: _Let \(b\geq 1\). Then \(\mathcal{SG}((2a+1)2^{b})=(2a+1)2^{b-1}-a-1\) for \(a=0,1,2,4\)._
Proof.: This can be checked by hand for the cases when \(b=1\) or \(b=2\), so let \(b\geq 3\), and consider the options of \((2a+1)2^{b}\)
All odd numbers greater than \((2a+1)\) are non-divisors of \((2a+1)2^{b}\), so the odd numbers \(1,3,\ldots,(2a+1)2^{b}-(2a+3)\) are all options with nim-values \(0,1,\ldots,(2a+1)2^{b-1}-a-2\), respectively.
We claim that there is no option with nim-value \((2a+1)2^{b-1}-a-1\). Indeed \((2a+1)2^{b}-2a-1\) is not an option, and is the only odd number with nim-value \((2a+1)2^{b-1}-a-1\). Next, note that \(b\geq 3\) and \(a=0,1,2,4\) i.e. \((2a+1)=1,3,5,9\). Hence all even numbers less than \((2a+1)\) divide \((2a+1)2^{b}\) and the only even options are less than or equal to \((2a+1)2^{b}-2a-2\). By Lemma 2.1, their nim-values are less than \(\frac{(2a+1)2^{b}-2a-2}{2}=(2a+1)2^{b-1}-a-1\). Thus, there is no option with nim-value \((2a+1)2^{b-1}-a-1\), and \(\mathcal{SG}((2a+1)2^{b})=(2a+1)2^{b-1}-a-1\).
We now see that there are infinitely many _even_ values for which our uniform upper bound is obtained:
**Corollary 2.11**.: _Let \(b\geq 1\). Then \(\mathcal{SG}(2^{b})=2^{b-1}-1\). In particular, there are infinitely many \(n\) for which \(\mathcal{SG}(n)=\frac{n-2}{2}\)._
Note that the above proof does not work, for example, when \(a=3\) i.e. \(2a+1=7\), since \(6<2a+1\), and \(6\nmid 7(2^{b})\). In fact \(\mathcal{SG}(14)=6\), not \((2a+1)2^{b-1}-a-1=3\). Next we obtain a slightly weaker result when \(a=10\) and \(2a+1=21\).
**Lemma 2.12**.: _Let \(b\geq 1\). Then \(\mathcal{SG}(21(2^{b}))=21(2^{b-1})-11\) or \(21(2^{b-1})-4\)._
Proof.: In the case \(b=1\), we see \(\mathcal{SG}(42)=17\). For \(b\geq 2\), consider the options of \(21(2^{b})\). The odd numbers \(1,3,\ldots,21(2^{b})-23\) and \(21(2^{b})-19,\ldots,21(2^{b})-9\) are all options with nim-values \(0,1,\ldots,21(2^{b-1})-12\) and \(21(2^{b-1})-10,\ldots,21(2^{b-1})-5\), respectively. The numbers \(21(2^{b})-21\) and \(21(2^{b})-7\) with nim-values \(21(2^{b-1})-11\) and \(21(2^{b-1})-4\) are not options, and all larger odd numbers have nim-values greater than \(21(2^{b-1})-4\).
On the other hand, since \(2,4,6\mid 21(2^{b})\), Lemma 2.1 implies that any even options have nim-values less than \(\frac{21(2^{b})-8}{2}=21(2^{b-1})-4\). Hence \(\mathcal{SG}(21(2^{b}))=21(2^{b-1})-11\) or \(21(2^{b-1})-4\).
We end this section by showing that twice a Mersenne number is above the uniform lower bound. Note that if \(m=2n=2(2^{b}-1)\) then \(\frac{m-2}{4}=\frac{n-1}{2}=2^{b-1}-1\).
**Lemma 2.13**.: _Let \(b\geq 3\). Then \(\mathcal{SG}(2(2^{b}-1))>2^{b-1}-1\). In particular, if \(2^{b}-1\) is prime, then \(\mathcal{SG}(2(2^{b}-1))=2^{b}-2\)._
Proof.: By Corollary 2.11, \(\mathcal{SG}(2^{b})=2^{b-1}-1\), so we just need to show that \(2^{b}\in\operatorname{opt}(2(2^{b}-1))\).
Suppose otherwise and that \(2(2^{b}-1)-2^{b}\mid 2(2^{b}-1)\). Then \(2^{b}-2\mid 2(2^{b}-1)\). Thus either \(2^{b}-2\) and \(2^{b}-1\) share a common factor and so \(2^{b}-2=1\), or \(2^{b}-2\mid 2\) and so \(b\leq 2\). Both cases are impossible.
In the case \(2^{b}-1\) is prime, Corollary 2.6 implies \(\mathcal{SG}(2(2^{b}-1))=2^{b}-2\)
## 3. The Fundamental Theorem of Saliquant and density of values
Finally, we obtain our most general statement about nim-values of Saliquant. The two corollaries which follow were actually proved first, inspired by the proof of Corollary 2.4.
**Theorem 3.1**.: _For all \(a\geq 0,b\geq 1\),_
\[\mathcal{SG}\left((2a+1)2^{b}\right) =\frac{m}{2m+1}\left((2a+1)2^{b}-1\right)+\frac{1}{2m+1}\left((2a +1)2^{b-1}-a-1\right)\] \[=(2a+1)2^{b-1}-\frac{1}{2}\left(\frac{2a+1}{2m+1}+1\right)\]
_for some non negative integer \(m\). Thus_
\[\mathcal{SG}\left((2a+1)2^{b}\right)=(2a+1)2^{b-1}-\frac{d+1}{2},\text{ where }d\text{ is a factor of }2a+1.\]
This theorem unifies several edge cases, as well. If we set \(a=0\), then we must have \(d=1\), obtaining Corollary 2.11. Let \(f(a,b,m)\) be the function given by Theorem 3.1. If we set \(b=0\), then \(f(a,b,m)\) is never an integer, but \(\lim_{m\to\infty}f(a,b,m)=\frac{n-1}{2}\), matching Lemma 2.1.
Fixing \(a\) and \(b\), \(f(a,b,m)\) is a linear rational function in \(m\), thus monotonic for \(m\geq 0\), and it is easily checked that it is increasing. Hence its minimum is obtained when \(m=0\), with an upper bound given by \(m\to\infty\). Thus we have the following corollary, which itself is a generalization of Lemma 2.10.
**Corollary 3.2**.: _For all \(a,b\geq 1\),_
\[\frac{(2a+1)2^{b}}{2}-a-1\leq\mathcal{SG}\left((2a+1)2^{b}\right)<\frac{(2a+1) 2^{b}}{2}-\frac{1}{2}.\]
The upper bound is the same as in Lemma 2.1. If we fix \(a\) and let \(b\) grow large, the lower bound is an asymptotic improvement over Corollary 2.4 from \(\mathcal{O}(\frac{n}{4})\) to \(\mathcal{O}(\frac{n}{2})\). Furthermore, we will see experimentally below that all values of \(f(a,b,m)\) are obtained. To illustrate the theorem, set \(b=1\) to obtain all possible nim-values of even numbers which are not multiples of \(4\):
**Corollary 3.3**.: _For all \(a\geq 1\), \(\mathcal{SG}(4a+2)\) must have the form_
\[\frac{(4m+1)a+m}{2m+1}\quad\left(=a,\frac{5a+1}{3},\frac{9a+2}{5},\frac{13a+3 }{7},\frac{17a+4}{9},\frac{21a+5}{11},\ldots\right)\]
_for some \(m\geq 0\)._
Proof of Theorem 3.1.: Suppose \(a,b\geq 1\). Let \(X=\mathcal{SG}\left((2a+1)2^{b}\right)\). Then
\[X=((2a+1)2^{b-1})-((2a+1)2^{b-1}-X),\]
so by Lemma 2.5, we have
\[\left(2\left((2a+1)2^{b-1}-X\right)-1\right)\mid(2a+1)2^{b-1}.\]
Thus there is some \(Q_{1}\) so that
\[Q_{1}(a2^{b+1}+2^{b}-2X-1)=(2a+1)2^{b-1}.\]
Since \((a2^{b+1}+2^{b}-2X-1)\) is odd, \(2^{b-1}\mid Q_{1}\). Pick \(Q_{2}\) so that \(Q_{2}2^{b-1}=Q_{1}\). This gives
\[Q_{2}(a2^{b+1}+2^{b}-2X-1)=2a+1.\]
Next since \(Q_{2}\) is odd, we can set \(Q_{2}=2m+1\) for some \(m\geq 0\), giving
\[(2m+1)(a2^{b+1}+2^{b}-2X-1)=2a+1.\]
Finally, solving for \(X\) gives the desired result.
Now that we know the specific possible values \(\mathcal{SG}(n)\) can take based on the decomposition \(n=(2a+1)2^{b}\), a natural question is how these values are distributed. For a given \(b>0\), \(m\geq 0\), define
\[S_{b,m}=\{a\in\mathbb{N}\mid\mathcal{SG}((2a+1)2^{b})=f(a,b,m)\}.\]
The experimental density of \(S_{b,m}\) for \(b=1,2,3,4\) and \(m=0,1,2,3,4\) are shown in Table 1. For \(b=1\), we measured up to \(a=5000\); for \(b=2,3\), up to \(a=2000\); and for \(b=4\), up to \(a=1000\). The associated Maple program can be found at the third author's website [http://www.thotsaporn.com](http://www.thotsaporn.com).
In Figure 1, we can see some of these values, with the corresponding labels given in Table 1. For example, consider the entry of the table marked **(C)**. It says that the density of numbers of the form \(x=8a+4\) for which \(\mathcal{SG}(x)=3a+1\) is \(0.561\). Then we can see that the line in the figure with slope \(\frac{3}{8}\) (also marked **(C)**) has about half density. Contrast with the entry marked **(D)**, corresponding to the line with slope \(\frac{5}{12}\). It is very sparse, as seen in the figure. Notice that the \(y\)-intercept of each of these lines corresponds to \(a=-\frac{1}{2}\), which in each case gives
\[f\left(-\frac{1}{2},b,m\right)=\frac{1}{2m+1}\left(-m2^{b}-2^{b-1}+\frac{1}{2} +m2^{b}-m+2^{b-1}-1\right)=\frac{-m-\frac{1}{2}}{2m+1}=-\frac{1}{2},\]
which is ok to be negative, since the game is only meaningfully defined on positive numbers. Finally, the line marked **(A)** is \(y=\frac{x-1}{2}\), which includes all odd \(x\) and some even \(x\), per Lemma 2.1 and Corollary 2.11.
We next show a straightforward upper bound for these densities, noting that each of the values in Table 1 are well below this bound.
**Lemma 3.4**.: _Given \(b\geq 1\), \(m\geq 0\), the density of \(S_{b,m}\) is at most \(\frac{1}{2m+1}\)._
Proof.: Fix \(b\geq 1\), \(m\geq 0\), and consider
\[(2m+1)f(a,b,m) =\left(a(m2^{b+1}+2^{b}-1)+m(2^{b}-1)+(2^{b-1}-1)\right)\] \[=a\left((2m+1)2^{b}-1\right)+(2m+1)2^{b-1}-m-1\] \[\equiv-a-m-1\bmod(2m+1)\]
Thus \(f(a,b,m)\) is only an integer when \(a\equiv m\bmod(2m+1)\), and so \(\frac{1}{2m+1}\) is an upper bound for how frequently \(\mathcal{SG}((2a+1)2^{b})\) can attain this value.
Given the values in Table 1, we suspect that most of these values are actually \(0\):
**Conjecture 1**.: _For a given \(b>0\),_
* _If_ \(m=0\)_,_ \(S_{b,m}\) _has positive density, and_
* _If_ \(m>0\)_,_ \(S_{b,m}\) _has density_ \(0\)_, but is nonempty._
We can also look at how fixing \(a\) and \(b\) affects the value of \(m\). Define \(M(a,b)=m\) where \(\mathcal{SG}((2a+1)2^{b})=f(a,b,m)\), and consider Table 2.
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c|c|c} & \(a=3\) & \(a=4\) & \(a=5\) & \(a=6\) & \(a=7\) & \(a=8\) & \(a=9\) & \(a=10\) & \(a=11\) \\ \hline \(b=1\) & 3 & 0 & 0 & 6 & 2 & 0 & 0 & 1 & 0 \\ \(b=2\) & 0 & 0 & 0 & 0 & 0 & 8 & 0 & 1 & 0 \\ \(b=3\) & 0 & 0 & 0 & 6 & 0 & 0 & 9 & 0 & 0 \\ \(b=4\) & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 11 \\ \(b=5\) & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \(b=6\) & 0 & 0 & 0 & 0 & 0 & 0 & 9 & 0 & 0 \\ \(b=7\) & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \(b=8\) & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \(b=9\) & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \(b=10\) & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \end{tabular}
\end{table}
Table 2. Experimental values of \(M(a,b)\).
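The small-\(b\) entries of Table 2 can be recomputed by combining the brute-force \(\mathcal{SG}\) computation above with Theorem 3.1: the nim-value determines the divisor \(d=(2a+1)/(2m+1)\), and hence \(m\). A self-contained sketch:

```python
from functools import lru_cache

def mex(vals):
    vs, g = set(vals), 0
    while g in vs:
        g += 1
    return g

@lru_cache(maxsize=None)
def sg(n):
    # brute-force Sprague-Grundy value for SALIQUANT
    return mex(sg(n - k) for k in range(1, n + 1) if n % k)

def M(a, b):
    # SG((2a+1)2^b) = (2a+1)2^(b-1) - (d+1)/2 with d = (2a+1)/(2m+1)  (Theorem 3.1)
    odd = 2 * a + 1
    d = 2 * (odd * 2 ** (b - 1) - sg(odd * 2 ** b)) - 1
    return (odd // d - 1) // 2

print([M(a, 1) for a in range(3, 12)])  # row b=1 of Table 2: [3, 0, 0, 6, 2, 0, 0, 1, 0]
print([M(a, 2) for a in range(3, 12)])  # row b=2 of Table 2: [0, 0, 0, 0, 0, 8, 0, 1, 0]
```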
If one imagines running along any row of Table 2 and tracking the distribution of \(m\), they would get the densities achieved in Table 1 as \(a\) tends toward infinity. For our second conjecture, we instead consider the behavior of \(m\) rather than of \(a\), noting that it is difficult to generate more data as the values grow exponentially as \(b\) increases.
Given the sparsity of each column, we suspect that in each column all but a finite number of values are zero.
**Conjecture 2**.: _For a given \(a>0\), for sufficiently large \(b\), \(M(a,b)=0\), in which case_
\(\mathcal{SG}((2a+1)2^{b})=(2a+1)2^{b-1}-a-1\)_._
Note that Lemma 2.10 proves a stronger form of the conjecture for \(a=0,1,2,4\), and Lemma 2.12 shows that when \(a=10\) we have either \(m=0\) or \(m=1\).
## 4. Nontotient
Denoting \(\phi(n)=|\{1\leq k\leq n\mid k\text{ is relatively prime to }n\}|\), [IL] also define two games based on \(\phi(n)\):
* Totient: \(\operatorname{opt}(n)=\phi(n)\)
* Nontotient: \(\operatorname{opt}(n)=n-\phi(n)\)
In this section we make some headway in understanding nontotient. First recall that \(\phi(ab)=\phi(a)\phi(b)\) when \(\gcd(a,b)=1\), and for prime \(p\), \(\phi(p^{k})=p^{k-1}(p-1)\). Thus if \(n=p_{1}^{k_{1}}\dots p_{m}^{k_{m}}\), we have \(\phi(n)=\prod p_{i}{}^{k_{i}-1}(p_{i}-1)\). In particular \(\phi(1)=1\). Define \(g(n):=\operatorname{opt}(n)=n-\phi(n)\). We immediately obtain:
**Lemma 4.1**.: _For \(n>2\), \(\phi(n)\) is even, and so \(g(n)\) has the same parity as \(n\)._
For the rest of the section, let \(p\) and \(q\) always represent primes. As noted in [IL], \(g(p^{k})=p^{k-1}\). Hence the game on \(p^{k}\) terminates after \(k\) moves and so \(\mathcal{SG}(p^{k})=0\) if and only if \(k\) is even. They also note that \(g(p^{k}q)=p^{k-1}(q+p-1)\), and so in the case that \(q+p-1\) is a power of \(p\), this becomes easy to compute. Consider for example the prime pairs \((p,q)=(2,7)\) or \((3,7)\). We can extend this as follows. First note that
\[g(p^{k}q^{l})=p^{k}q^{l}-p^{k-1}(p-1)q^{l-1}(q-1)=p^{k-1}q^{l-1}(p+q-1).\]
Then we have
**Theorem 4.2**.:
1. _If_ \(q=p^{b}-p+1\) _where_ \(b\) _is even, then_ \(\mathcal{SG}(p^{k}q^{l})=0\) _if and only if_ \(k\) _is even._
2. _If_ \(q=p^{b}-p+1\) _where_ \(b\) _is odd, then_ \(\mathcal{SG}(p^{k}q^{l})=0\) _if and only if_ \(k+l\) _is even._
Proof.: In this case \(g(p^{k}q^{l})=p^{k-1}q^{l-1}\left(p+(p^{b}-p+1)-1\right)=p^{k+b-1}q^{l-1}\). So after \(l\) moves, the position will be \(p^{k+l(b-1)}\), and thus the game terminates after \(k+l(b-1)+l=k+lb\) moves.
Some prime pairs \((p,q)\) that satisfy part (a) are \((2,3)\), \((3,7)\), \((7,43)\), \((13,157)\), \((3,79)\), \((11,14631)\), \((3,727)\). For part (b) we have \((2,7)\), \((7,337)\), and \((19,2476081)\). Part (b) also applies to each pair \((2,2^{p}-1)\) for each Mersenne prime \(2^{p}-1\). As a next step, one might analyze cases which reduce to one of the above cases in a predictable number of steps. For example
**Corollary 4.3**.: \(\mathcal{SG}(2^{k}5)=0\) _if and only if \(k\) is odd._
Proof.: Here \(g(2^{k}5)=2^{k-1}(6)=2^{k}3\), and so the result follows by Theorem 4.2 (a).
The authors of [IL] were able to use Harold Shapiro's height function, \(H(n)=H(\phi(n))+1\), to give a method for computing the nim-value of any natural number in totient. Motivated by this success, they suggest analyzing a class function \(\operatorname{dist}(n)=i\), which gives the least \(i\) for which \(g^{i}(n)\) is a prime power. We instead analyze the function \(C(n)=i\) if \(g^{i}(n)=1\). The initial values are:
\begin{tabular}{c|c c c c c c c c c c c c c c c c c c} \(n\) & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 & 15 & 16 & 17 & 18 & 19 & 20 \\ \hline \(C(n)\) & 0 & 1 & 1 & 2 & 1 & 3 & 1 & 3 & 2 & 4 & 1 & 4 & 1 & 4 & 2 & 4 & 1 & 5 & 1 & 5 \\ \end{tabular}
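These values are easy to regenerate directly from the definition of \(g\); a minimal sketch (using a naive totient, since only small \(n\) are needed):

```python
from math import gcd

def phi(n):
    # Euler's totient: count of 1 <= k <= n relatively prime to n
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

def C(n):
    # number of moves to reach 1 in NONTOTIENT, iterating g(n) = n - phi(n)
    moves = 0
    while n > 1:
        n -= phi(n)
        moves += 1
    return moves

print([C(n) for n in range(1, 21)])
# [0, 1, 1, 2, 1, 3, 1, 3, 2, 4, 1, 4, 1, 4, 2, 4, 1, 5, 1, 5]
# since NONTOTIENT has no choices, SG(n) is just the parity C(n) mod 2
```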
**Lemma 4.4**.: \(C(4n)=C(2n)+1\)_._
Proof.: Note that for \(k>1\) and \(m\) odd, \(\phi(2^{k}m)=\phi(2^{k})\phi(m)=2^{k-1}\phi(m)=2\phi(2^{k-1}m)\). Hence we have \(g(4n)=4n-\phi(4n)=4n-2\phi(2n)=2(2n-\phi(2n))=2g(2n)\). So if \(g^{i}(2n)=1\), then \(g^{i}(4n)=2\), which means \(g^{i+1}(4n)=1\).
**Corollary 4.5**.: \(\mathcal{SG}(4n)=1-\mathcal{SG}(2n)\)_._
For example, knowing that \(\mathcal{SG}(10)=0\), we again obtain Corollary 4.3. We end this section with some observations about the function \(C(n)\) for even \(n\).
**Lemma 4.6**.: _If \(n\) is even and \(2^{i-1}<n\leq 2^{i}\), then \(C(n)\geq i\)._
The least value at which we do not see equality is \(C(30)=6\).
Proof.: We proceed by induction on \(i\), observing initial cases in the table above. Suppose \(2^{i-1}<n\leq 2^{i}\) and \(n=2k\). Then \(\phi(n)\leq k\), so \(g(n)\geq k>2^{i-2}\). By Lemma 4.1, \(g(n)\) is also even, so we have by induction that \(C(g(n))\geq i-1\). Hence \(C(n)\geq i\).
**Lemma 4.7**.: _If \(p\) is an odd prime, then \(C(2p)=C(2(p+1))\)._
Proof.: We have \(g(2p)=p+1\), so \(C(2p)=C(p+1)+1=C(2(p+1))\), by Lemma 4.4.
The first time the conclusion fails for an odd \(p\) that is not prime is \(p=15\), since \(C(30)=6\) and \(C(32)=5\).
**Theorem 4.8**.: _Let \(i\geq 1\). The set \(S_{i}=\{C(n)\mid 2^{i-1}\leq n\leq 2^{i}\text{ and }n\text{ is even}\}\) is an interval of \(\mathbb{N}\)._
Proof.: We proceed by induction, again noting initial values in the chart above. For each \(i\geq 1\), Lemma 4.6 implies that the minimal possible value of \(S_{i}\) is \(i\), and this is in fact obtained by \(C(2^{i})\).
Now let \(i\geq 2\), and suppose that the maximal value in \(S_{i-1}\) is \(M\). By induction \(S_{i-1}=\{i-1,i,\ldots,M\}\). Lemma 4.4 then implies that \(\{i,i+1,\ldots,M+1\}\subseteq S_{i}\).
Next suppose that \(\{i,i+1,\ldots,M+1\}\neq S_{i}\). Then there is some even \(2^{i-1}\leq y\leq 2^{i}\) for which \(C(y)>M+1\). In this case, since \(g(y)\) is even and \(C(g(y))>M\), we must have \(2^{i-1}\leq g(y)\): otherwise \(2^{i-2}\leq y/2\leq g(y)<2^{i-1}\), so \(C(g(y))\in S_{i-1}\) and hence \(C(g(y))\leq M\), a contradiction. Thus \(C(g(y))=C(y)-1\in S_{i}\). Repeating this argument with \(g(y)\) in place of \(y\), every value between \(M+1\) and \(C(y)\) lies in \(S_{i}\), so \(S_{i}\) is an interval.
|
2310.14039 | Deformation of singular curves on surfaces | In this paper, we consider deformations of singular complex curves on complex
surfaces. Despite the fundamental nature of the problem, little seems to be
known for curves on general surfaces. Let $C\subset S$ be a complete integral
curve on a smooth surface. Let $\tilde C$ be a partial normalization of $C$,
and $\varphi\colon \tilde C\to S$ be the induced map. In this paper, we
consider deformations of $\varphi$. The problem of the existence of
deformations will be reduced to solving a certain explicit system of polynomial
equations. This system is universal in the sense that it is determined solely
by simple local data of the singularity of $C$, and does not depend on the
global geometry of $C$ or $S$. Under a relatively mild assumption on the
properties of these equations, we will show that the map $\varphi$ has
virtually optimal deformation property. | Takeo Nishinou | 2023-10-21T15:31:38Z | http://arxiv.org/abs/2310.14039v1 | # Deformation of singular curves on surfaces
###### Abstract.
In this paper, we consider deformations of singular complex curves on complex surfaces. Despite the fundamental nature of the problem, little seems to be known for curves on general surfaces. Let \(C\subset S\) be a complete integral curve on a smooth surface. Let \(\tilde{C}\) be a partial normalization of \(C\), and \(\varphi\colon\tilde{C}\to S\) be the induced map. In this paper, we consider deformations of \(\varphi\). The problem of the existence of deformations will be reduced to solving a certain explicit system of polynomial equations. This system is universal in the sense that it is determined solely by simple local data of the singularity of \(C\), and does not depend on the global geometry of \(C\) or \(S\). Under a relatively mild assumption on the properties of these equations, we will show that the map \(\varphi\) has virtually optimal deformation property.
email : [email protected]
## 1. Introduction
The study of algebraic surfaces and curves on them has a long history. With the exception of curves, it is one of the subjects which has been studied the longest and most frequently in algebraic geometry. It has been the first and most deeply investigated in various developments in algebraic geometry, including birational geometry, various moduli theories, and more recent advances such as gauge theory and Gromov-Witten type invariants.
However, knowledge of how curves on algebraic surfaces actually behave is surprisingly limited. The most famous result known in this direction would be the so-called Severi's problem asking the irreducibility of the moduli space of nodal plane curves of the given genus and degree, solved affirmatively by Harris [13]. There are studies in this direction for other surfaces of non-positive Kodaira dimension, see [9, Theorem B]. See also [11, 12] for extensive study from multiple points of view. In spite of these studies, very few have been known about the behavior of curves on surfaces of positive Kodaira dimension, especially those on surfaces of general type (see, for example, [7, 8] for results in this direction).
The most naive questions to be asked in the study in this direction would be the following.
**Problem 1**.: ([9, Problem A]) Let \(D\) be an integral curve on a smooth complete algebraic surface \(X\). Is it possible to deform \(D\) in \(X\) into a nodal curve while preserving its geometric genus?
A weaker version of it asks the following.
**Problem 2**.: Given \(D\) and \(X\) as above, let \(C\) be the normalization of \(D\) and \(\varphi\colon C\to D\) be the natural map. Is it possible to deform \(\varphi\) into an immersion?
These problems are almost completely open for surfaces of positive Kodaira dimension. One of the main reasons for this would be the difficulty of applying powerful techniques of moduli theory to these problems due to the large potential obstruction. In this paper, we attempt to fill this gap and obtain information about the behavior of curves in a way independent of the nature of ambient surfaces. Roughly speaking, our result claims that under a certain mild condition, the deformation property of singular curves is almost as optimal as possible on any surface. In particular, assuming that condition, Problem 2 also has an almost optimal answer.
We study the so-called equigeneric (equivalently, equinormalisable) deformations of curves on surfaces from parametric point of view (see [9, 11]). In fact, we will deal with more general situations, but in the introduction, we restrict ourselves to equigeneric deformations. Specifically, given a complete integral curve \(\overline{C}\) on a smooth algebraic surface \(X\), we study the deformation of the map \(\varphi\colon C\to X\), where \(C\) is the normalization of \(\overline{C}\) and \(\varphi\) is the naturally induced map. We assume that the map \(\varphi\) satisfies the semiregularity condition of Definition 25, which is a natural generalization of the classical semiregularity for embedded curves [21, 22, 6, 14] and will be satisfied if the class of \(\overline{C}\) is sufficiently ample (see the paragraph after Definition 25). As in these classical case, this is a natural assumption and there seems to be little hope to control the deformation theory without it.
Let \(p\in C\) be a point where the map \(\varphi\) is not regular. Then, taking a suitable analytic coordinate \(s\) on a neighborhood of \(p\) in \(C\) and coordinates \(z,w\) on a neighborhood of \(\varphi(p)\) in \(X\), the map \(\varphi\) can be represented as
\[(\varphi^{*}z,\varphi^{*}w)=(s^{a},s^{b}+s^{b+1}g_{0}(s)),\]
where \(a,b\) are integers satisfying \(a<b\), and \(g_{0}(s)\) is a convergent series (see [11, Chapter I, Corollary 3.8]). For notational ease, we will write \((\varphi^{*}z,\varphi^{*}w)\) and its analogues simply by \((z,w)\) from now on. Up to a reparameterization of \(C\), a \(k\)-th order deformation of \(\varphi\) on a neighborhood of \(p\) is written in the form
\[(z,w)=(s^{a}+\sum_{i=0}^{a-2}c_{a-i}s^{i},\ s^{b}+s^{b+1}g_{0}(s)+\sum_{j=1}^{ k}t^{j}g_{j}(s)),\]
where \(k\) is a positive integer, \(c_{a-i}\in t\mathbb{C}[t]/t^{k+1}\), \(g_{j}(s)\) is a convergent series, and \(t\) is a generator of \(\mathbb{C}[t]/t^{k+1}\). It turns out that taking \(c_{i}\in t^{i}\mathbb{C}[t]/t^{k+1}\) will be convenient and we assume this. Given a global \(k\)-th order deformation \(\varphi_{k}\) of \(\varphi\), the obstruction to deforming \(\varphi_{k}\) one step further can be calculated by a Cech cocycle obtained as the difference of local \(k+1\)-th order deformations of \(\varphi_{k}\), which takes values in the normal sheaf of \(\varphi\). However, usually calculating this is very difficult and there is little hope to achieve it in general.
To overcome this difficulty, we choose a special type of deformations of \(\varphi\) around each singular point. Namely, let \(S\) be a local parameter on a punctured neighborhood of \(p\) on \(C\) defined over \(\mathbb{C}[[t]]\) which satisfies
\[S^{a}=s^{a}+\sum_{i=0}^{a-2}c_{a-i}s^{i},\]
where we regard \(c_{i}\) as an element of \(t^{i}\mathbb{C}[[t]]\) by taking the coefficients of \(t^{l}\), \(l>k\) to be zero. Explicitly, we can choose \(S\) by solving this equation, so that
\[S=s(1+\sum_{i=1}^{\infty}\prod_{j=0}^{i-1}(\frac{1}{a}-j)\frac{1}{i!}(\sum_{l=2 }^{a}\frac{c_{l}}{s^{l}})^{i}).\]
Then, consider the deformation of \(\varphi\) on the punctured neighborhood \(\hat{U}_{p}\) of \(p\) given by
\[(z,w)=(S^{a},S^{b}+S^{b+1}g_{0}(S)).\]
Note that this reduces to the original parameterization \((z,w)=(s^{a},s^{b}+s^{b+1}g_{0}(s))\) of \(\varphi\) over \(\mathbb{C}[t]/t\). In particular, though a priori the parameter \(S\) is defined only on the punctured neighborhood \(\hat{U}_{p}\), it extends to the whole neighborhood \(U_{p}\) of \(p\) over \(\mathbb{C}[t]/t\). An important point is that this is true even up to some positive order of \(t\). Namely, though \(S\) contains singular terms with respect to \(s\), \(z=S^{a}=s^{a}+t\sum_{i=0}^{a-2}c_{a-i}s^{i}\) never contains such singular terms, and singular terms in \(w=S^{b}+S^{b+1}g_{0}(S)\) originate from terms of the form \(s^{b}(\sum_{l=2}^{a}\frac{c_{l}}{s^{l}})^{i}\) for some \(i\). Since we have \(c_{i}\in t^{i}\mathbb{C}[[t]]\), singular terms do not appear until we consider deformations of order \(b+1\) with respect to \(t\). It follows that, up to this order, the curve defined on the punctured neighborhood \(\hat{U}_{p}\) by the parameterization \((z,w)=(S^{a},S^{b}+S^{b+1}g_{0}(S))\) actually extends to \(p\). Moreover, the extended curve has the same image as the original \(\varphi\) around \(p\). Thus, we can glue these locally defined curves around the singular points of \(\varphi\) and the image of \(\varphi\) away from the singular points into a global curve. However, since \(S\) is a singular parameter around \(p\), the domain curve \(C\) must be nontrivially deformed in general. Therefore, we have a nontrivial deformation of \(\varphi\) while the image remains the same.
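The appearance of the first singular term only at order \(b+1\) can be checked symbolically in small cases. The sketch below is our illustration, not part of the paper: it takes \(a=2\), \(b=3\), sets \(g_{0}=0\), and uses a hypothetical coefficient \(c_{2}=\gamma t^{2}\in t^{2}\mathbb{C}[[t]]\); it expands \(S^{b}\) by the binomial series and lists the terms singular in \(s\) together with their \(t\)-order, the first of which appears at order \(t^{4}=t^{b+1}\).

```python
import sympy as sp

s, t, g = sp.symbols('s t gamma')
a, b = 2, 3                      # illustrative choice of the local singularity type
c2 = g * t**2                    # hypothetical coefficient c_2 in t^2*C[[t]]

# S is defined by S^a = s^a + c_2, so S^b = s^b * (1 + c_2/s^a)^(b/a); expand binomially
x = c2 / s**a
Sb = sp.expand(s**b * sum(sp.binomial(sp.Rational(b, a), i) * x**i for i in range(4)))

for k in range(1, 4):            # inspect the coefficients of the singular powers s^(-k)
    coeff = Sb.coeff(s, -k)
    if coeff != 0:
        print(f"s^(-{k}) term:", coeff, " (t-order:", sp.degree(coeff, t), ")")
# first singular term: 3*gamma**2*t**4/(8*s), i.e. t-order 4 = b + 1
```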
**Remark 3**.: _The same type of deformations appeared in [3, 4] in the case of first order deformations and played crucial role in [9]. In some sense, our argument extends it as far as possible._
As we noted above, as long as there is no singular term in \((S^{a},S^{b}+S^{b+1}g_{0}(S))\), there is no obstruction to glue locally defined deformations, if we deform the domain curve \(C\) suitably. Our study of obstructions begins when a singular term appears in \((S^{a},S^{b}+S^{b+1}g_{0}(S))\). Assume that we have constructed a \(k\)-th order deformation \(\varphi_{k}\) of \(\varphi\) in the above way, and that a singular term appears in \((S^{a},S^{b}+S^{b+1}g_{0}(S))\) at the order \(t^{k+1}\) at some singular point \(p\) of \(\varphi\). In this case, although we cannot extend the curve defined on a punctured neighborhood \(\hat{U}_{p}\) to the whole neighborhood \(U_{p}\) of \(p\), we can construct a \(k+1\)-th order deformation of \(\varphi_{k}\) on \(U_{p}\) simply by removing those singular terms (see Proposition 14). On the other hand, away from singular points of \(\varphi\), the map \(\varphi_{k}\) is locally the same as \(\varphi\), so we can take \(\varphi\) itself as a \(k+1\)-th order local deformation. The advantage of this construction is that we can explicitly compute the obstruction cocycle. For even higher order deformations, this construction allows us to compute the leading terms of the obstruction completely explicitly. However, the actual computation is rather subtle, since when we compare local deformations at higher orders, full non-linearity of various coordinate changes comes into play. This will be done in Section 4.2. The main result here is Proposition 57. It asserts that if a certain system of polynomial equations written in terms of the polynomials \(f_{b+i}^{(b)}\), \(i=1,\ldots,a-1\) (see below, or Lemma 15), has a solution, then, at each order of deformation, the obstruction can be taken sufficiently small so that
we can apply perturbation method explained below to eliminate the obstruction. The system of polynomial equations mentioned above is the equations \(\{\star_{\eta}\}\) in Definition 40, where \(\eta\) parameterizes a basis of the dual space of the obstructions. In particular, each \(\eta\) naturally couples with the obstruction cocycle, and if all of these couplings are zero, the map deforms. The equations \(\{\star_{\eta}\}\) imply that the leading term of such a coupling vanishes.
Now, assume that we have constructed an \(l\)-th order deformation \(\varphi_{l}\) of \(\varphi\) with \(l\geq k\). The obstruction to deforming \(\varphi_{l}\) one step further can be computed as in the last paragraph, and usually it does not vanish. This means that the map \(\varphi_{l}\) does not deform any more. Therefore, we need to find another path. So, we return to some lower order deformation \(\varphi_{l^{\prime}}\), \(l^{\prime}<l\), and attempt to reconstruct the deformations \(\overline{\varphi}_{m}\), \(m>l^{\prime}\), in a different way so that
* we can deform it at least up to the order \(t^{l}\), and
* the new map \(\overline{\varphi}_{l}\) has vanishing obstruction, so that we can continue the deformation up to higher orders.
We will achieve this in a way explained below.
As we noted above, the leading terms of the obstruction are expressed in terms of polynomials of the coefficients \(c_{l}\) introduced above. These polynomials are written as \(f^{(b)}_{b+i}\), \(i=1,\ldots,a-1\), in the main text (see Lemma 15). Thus, the study of the obstruction is reduced to controlling the value of these polynomials. For this, in addition to the conditions \(\{\star_{\eta}\}\) above, we also need to assume a suitable transversality property for these polynomials, referred to as the condition (T) in Definition 28. Specifically, we require a solution \(\{c_{l}\}\) to the system of equations \(\{\star_{\eta}\}\) that also fulfills the condition (T). In fact, this is the assumption of the main theorem (Theorem 41). Fortunately, this condition turns out to be relatively mild, as explained in the argument following Theorem 4.
When we control the values of \(f^{(b)}_{b+i}\) using the above transversality assumption, we need to change the value of \(c_{l}\in t^{l}\mathbb{C}[[t]]\). Here, to change the values of \(f^{(b)}_{b+i}\) at some order of \(t\), we need to change \(c_{l}\) at lower orders of \(t\). This amounts to changing the map \(\varphi_{l^{\prime}}\) for some \(l^{\prime}<l\) using the notation in the last paragraph. The difficulty is that during the reconstruction of deformations \(\overline{\varphi}_{m}\), \(m>l^{\prime}\), the same problem might happen. Namely, some non-vanishing obstruction appears, and to deal with it, we might need to return to even lower deformation \(\varphi_{l^{\prime\prime}}\), \(l^{\prime\prime}<l^{\prime}\). Also, even if we could construct a new map \(\overline{\varphi}_{l}\), it is unclear that the obstruction to deforming it vanishes.
We can overcome this difficulty again using deformations which do not change the image. Namely, we will show that we can modify the coefficients \(c_{l}\) so that the resulting maps \(\overline{\varphi}_{m}\) has the same image as \(\varphi_{m}\). Then, since the maps \(\varphi_{m}\), \(l^{\prime}\leq m\leq l\) already exists, it is easy to see that the obstruction to deforming \(\overline{\varphi}_{m}\) (\(l^{\prime}\leq m<l\)) one step further vanishes. Thus, we obtain a map \(\overline{\varphi}_{l}\), which also has the same image as \(\varphi_{l}\). In this situation, we can compare the obstructions to deforming \(\varphi_{l}\) and \(\overline{\varphi}_{l}\), and since the change of the coefficients \(c_{l}\) was taken so that it cancels the original obstruction to deforming \(\varphi_{l}\), the obstruction to deforming \(\overline{\varphi}_{l}\) vanishes.
Again, the computation is rather subtle. The primary reason is that when we calculate the obstruction cocycle, we compare the local deformations of \(\overline{\varphi}_{m}\), which are defined using local coordinates. However, altering the values of \(c_{l}\) also changes the coordinate
introduced above, making the determination of the correct values of \(c_{l}\) a highly nontrivial task.
In any case, we obtain a new deformation \(\overline{\varphi}_{l+1}\) of \(\varphi\), which coincides with \(\varphi_{l}\) over \(\mathbb{C}[t]/t^{l+1}\). To deform \(\overline{\varphi}_{l+1}\), again we need to return to some \(\overline{\varphi}_{l^{\prime\prime}}\) and repeat the argument. In this way, we can construct a deformation \(\varphi_{N}\) of \(\varphi\) up to any high order of \(t^{N}\). Finally, we will show that this can be done in a way that if we need to return to a map \(\varphi_{a(N)}\), \(a(N)<N\), to deform \(\varphi_{N}\), we have \(a(N)\to\infty\) as \(N\to\infty\). Thus, eventually we obtain a projective system \(\{\varphi_{i}\}\) of deformations of \(\varphi\) in which \(\varphi_{j}\) reduces to \(\varphi_{i}\) over \(\mathbb{C}[t]/t^{i+1}\) for \(j>i\). Summarizing, we have the following (see Theorem 41 for the precise statement, in which the domain curve is not assumed to be smooth).
**Theorem 4**.: _Let \(\varphi\colon C\to X\) be a map from a regular complete curve to a regular surface which is birational to the image. Assume the map \(\varphi\) is semiregular. Let \(\{p_{1},\ldots,p_{r}\}\) be the singular points of the map \(\varphi\). Then, if there is a set of solutions to the system of equations \((\star_{\eta})\) which also satisfies the condition \((\mathrm{T})\), there is a deformation of \(\varphi\) which deforms the singularity of \(\varphi\) at each \(p_{j}\) non-trivially._
The set of equations \(\{\star_{\eta}\}\) depends on the geometry of the curve, and checking the condition of Theorem 41 (the existence of solutions of \(\{\star_{\eta}\}\) satisfying the transversality condition (T)) for each individual curve is a cumbersome task. However, it appears that in most cases, a significantly stricter condition holds true, and we can, to a large extent, disregard the individual properties of curves. Here, we outline this point. See Section 5 for details. First, we note that the dual of the obstruction space is given by \(H^{0}(C,\varphi^{*}\omega_{X}(Z))\), where \(Z\) is the ramification divisor of the map \(\varphi\), as discussed in Sections 2.2 and 2.3. Due to the semiregularity assumption, sections of \(H^{0}(C,\varphi^{*}\omega_{X})\) do not pair with obstruction classes non-trivially. Therefore, to calculate obstructions, we need to assess how they interact when paired with meromorphic sections of the pull back of \(\omega_{X}\) to \(C\). In particular, the set \(\{\eta\}\), which parameterizes the equations \(\{\star_{\eta}\}\), forms a basis for the quotient \(H^{0}(C,\varphi^{*}\omega_{X}(Z))/H^{0}(C,\varphi^{*}\omega_{X})\).
Let \(p\in C\) be a singular point of the map \(\varphi\). In terms of the notation above, \(a\) is the multiplicity of the singularity of the image \(\varphi(p)\). In particular, \(a-1\) is the coefficient of \(p\) in the ramification divisor \(Z\). On the other hand, \(a-1\) is the same as the number of degrees of freedom for deforming the singularity of \(\varphi\) at \(p\), which is parameterized by \(c_{l}\), \(l=1,\ldots,a-1\), introduced above. Thus, if there are \(a-1\) sections of \(H^{0}(C,\varphi^{*}\omega_{X}(Z))\) which are singular only at the point \(p\) as sections of \(H^{0}(C,\varphi^{*}\omega_{X})\), and form a basis of the quotient space \(H^{0}(C,\varphi^{*}\omega_{X}((a-1)p))/H^{0}(C,\varphi^{*}\omega_{X})\), then virtually we cannot expect the singularity of the map \(\varphi\) at \(p\) to deform. To ensure a nontrivial deformation, it is reasonable to assume that the space \(H^{0}(C,\varphi^{*}\omega_{X}(Z))\) does not contain such a set of sections for each singular point of \(\varphi\). This is what we refer to as condition (D) in Definition 71.
On the other hand, we introduce the condition (G) in Definition 70, and prove that under the condition (D), the condition (G) implies that there is a set of solutions of the system \((\star_{\eta})\) which satisfies the condition (T). Combined with Theorem 4, this implies the following (which is the same as Theorem 72).
**Theorem 5**.: _If the conditions (D) and (G) hold at each singular point of \(\varphi\), the map \(\varphi\) deforms._
The advantage of the condition (G) is, contrary to the condition (T) (combined with the equations \((\star_{\eta})\)), it does not depend on the properties of the map \(\varphi\). In fact, the condition (G) depends only on the pair of positive integers \((a,b)\) and is independent of all the other geometry. Thus, the problem of the existence of deformations of \(\varphi\) is reduced to the cohomological calculation of checking the condition (D) (which is considerably easier than the deformation problem), and the condition (G), which is completely independent of the deformation problem. As we noted above, the condition (D) is virtually the minimal requirement for the existence of deformations of \(\varphi\). Theorem 5 claims that, if the condition (G) holds, then any semiregular map satisfying the condition (D) deforms. In other words, assuming the condition (G), these maps have almost optimal possible deformation property.
Usually, the condition (G) is much stronger than the condition required to apply Theorem 4. Fortunately, however, the condition (G) seems to hold in almost all cases, though at present we do not know how to prove it in general. We have checked it by computer calculation for the values of \((a,b)\) roughly up to \(a+b<30\) (see Table 1), and found that the condition (G) holds except only one case of \(a=4,b=6\). Even in this exceptional case, 2 out of 3 cases of the condition (G) hold (see the argument at the end of Section 5.1). The condition (G) holds if the system of polynomials \(f_{b+i}^{(b)}\), \(i=1,\ldots,a-1\) has a transversality property similar to that of a generic system of polynomials. The exceptional case of \(a=4,b=6\) seems to be caused by an accidental factorization of some of the polynomials \(f_{b+i}^{(b)}\) due to the smallness of the degree.
Another advantage of this result is that we use only a very small part of the data of singularities. Namely, as we noted above, the condition (G) only depends on the pair of numbers \(a\) and \(b\). In particular, for \(a>2\), once one shows the condition (G) for some \(a\) and \(b\), it applies to infinitely many types of singularities.
On the other hand, if the singularities are of multiplicity two (in other words, double points), we can even deduce the necessary and sufficient condition for the existence of deformations. In this case, we do not need the conditions (D) and (G), and we can deduce results considerably stronger than Theorems 4 and 5. The reason for this is that for \(a=2\), the function \(f_{b+i}^{(b)}\) is simply a power of \(c_{2}\), and this fact allows us to manipulate the obstruction much more efficiently than other cases, even without transversality condition.
So, assume that the map \(\varphi\) has singular points \(\{p_{1},\ldots,p_{l}\}\) each of which is a double point. Then, we have the following, which is essentially the same as Theorem 74 but expressed in a slightly different manner.
**Theorem 6**.: _The semiregular map \(\varphi\) deforms if and only if either one of the following conditions holds._
1. _There is at least one_ \(p_{i}\) _such that there is no section of_ \(H^{0}(C,\varphi^{*}\omega_{X}(p_{i}))\) _which is not contained in_ \(H^{0}(C,\varphi^{*}\omega_{X})\)_._
2. _The set_ \(H^{0}(C,\bar{\mathcal{N}}_{\varphi})\) _is not zero._
Here, \(\bar{\mathcal{N}}_{\varphi}\) is the non-torsion part of the normal sheaf of \(\varphi\), see the paragraph following Definition 8.
Now, let us return to Problem 2 mentioned at the beginning of the introduction. The above results imply that, if the singularities of \(\varphi\) are double points or the ones for which the condition (G) is checked, we can deform \(\varphi\) until the condition (D) is violated, or a singularity which does not satisfy the condition (G) (presumably it happens only when
\(a=4,b=6\)) appears. Since the condition (D) is virtually the minimal requirement for the existence of deformations as we mentioned above, it follows that on any surface, Problem 2 has an almost optimal answer under condition (G).
### Notation
We will work in the complex analytic category. In the body of the paper, we will study non-constant maps \(\varphi\colon C\to X\) from a curve \(C\) to a smooth complex surface \(X\) and their deformations. Here, the domain \(C\) will deform but the target \(X\) is fixed. Usually, we assume the curve \(C\) is integral. A deformation of \(\varphi\) over \(\operatorname{Spec}\mathbb{C}[t]/t^{k+1}\) will be written as \(\varphi_{k}\colon C_{k}\to X\times\operatorname{Spec}\mathbb{C}[t]/t^{k+1}\), or often as \(\varphi_{k}\colon C_{k}\to X\) for notational simplicity. Let \(p\in C\) be a regular point and \(s\) be a local parameter around \(p\). Let \(\{z,w\}\) be a local coordinate system on a neighborhood of \(\varphi(p)\) on \(X\). Then, the pull back of the coordinates \(z\) and \(w\) by \(\varphi\) are functions \(z(s),w(s)\) of \(s\), and we call \((z(s),w(s))\) a local parameterization of the map \(\varphi\). By the image of a map \(\varphi\) or \(\varphi_{k}\), we mean the analytic locally ringed space with the annihilator structure, see [11, Chapter I, Definition 1.45]. That is, if \(U\) is an open subset of \(C_{k}\) with the induced structure of an analytic locally ringed space, and \(V\) is an open subset of \(X\) such that \(\varphi_{k}(U)\) is closed in \(V\), we associate the structure sheaf
\[\mathcal{O}_{V}/\mathcal{A}nn_{\mathcal{O}_{V}}((\varphi_{k})_{*}\mathcal{O}_ {U})\]
to the image \(\varphi_{k}(U)\).
## 2. Localization of obstructions
### Meromorphic differential forms and cohomological pairings
We begin with giving a presentation of a Cech cocycle in a way suited to our purpose. Let \(C\) be a complete integral curve and \(\mathcal{L}\) be an invertible sheaf on it. Let \(\{q_{1},\dots,q_{s}\}\) be the set of singular points on \(C\). Let \(\{p_{1},\dots,p_{e}\}\) be a non-empty set of non-singular points on \(C\). Take an open covering \(\{U_{1},\dots,U_{m}\}\) of \(C\). We assume each of \(p_{i}\) and \(q_{i}\) is contained in a unique element of this covering, and denote them as \(U_{p_{i}}\) and \(U_{q_{i}}\), respectively. We also assume that the normalization of each \(U_{j}\) is a disc when it does not contain any \(q_{i}\), whereas the normalization of \(U_{q_{i}}\) is a disjoint union of \(b_{q_{i}}\) discs, where \(b_{q_{i}}\) is the number of branches of \(C\) at \(q_{i}\). Here, a disc means an analytic subset which is analytically isomorphic to \(D=\{z\in\mathbb{C}\mid|z|<1\}\). We also assume \(U_{i}\cap U_{j}\) is a disc or empty when \(i\neq j\).
We associate a meromorphic section \(\xi_{j}\) of \(\mathcal{L}|_{U_{j}}\) with each \(U_{j}\) which can have a pole only at \(\{p_{1},\dots,p_{e}\}\). In particular, if \(U_{j}\) does not contain any of \(\{p_{1},\dots,p_{e}\}\), \(\xi_{j}\) is a regular section of \(\mathcal{L}|_{U_{j}}\). It follows that \(\xi_{ij}=\xi_{i}-\xi_{j}\) is a section of \(\mathcal{L}|_{U_{i}\cap U_{j}}\), since \(U_{i}\cap U_{j}\) does not contain any \(p_{k}\). Thus, the set of sections \(\{\xi_{i}\}\) determines a Cech 1-cocycle \(\{\xi_{ij}\}\) with values in \(\mathcal{L}\) for the covering \(\{U_{i}\}\) mentioned above. Any class in \(H^{1}(C,\mathcal{L})\) can be represented in this way (see Proposition 7 below).
Let \(\omega_{C}\) be the dualizing sheaf of \(C\). This sheaf is defined using appropriate meromorphic differential forms which have poles only at the (normalization of) singular points of \(C\). Namely, \(\omega_{C}\) is given by the sheaf of Rosenlicht differentials [19]. We do not need details of it in this paper, and we omit its definition. See [19] or other expositions, for example, [2, Chapter VIII, Section 1] or [5, Section II.6]. Let \(\psi\) be a global section of \(\mathcal{L}^{\vee}\otimes\omega_{C}\). Then, \(\{\xi_{ij}\}\) and \(\psi\) make a natural pairing. The value of this pairing is given as follows. Namely, on \(U_{j}\), the fiberwise pairing between \(\xi_{j}\) and \(\psi\) gives a meromorphic section \((\psi,\xi_{j})\) of \(\omega_{C}|_{U_{j}}\). If \(p_{i}\in U_{p_{i}}\), \((\psi,\xi_{p_{i}})\) may have a pole at \(p_{i}\), and let \(r_{p_{i}}\) be its residue. The section \((\psi,\xi_{q_{i}})\) may also have a pole at a singular point \(q_{i}\) of \(C\) due to the pole of \(\omega_{C}\). However,
the contribution to the pairing from such a pole vanishes due to the defining property of Rosenlicht differentials.
Then, the following is a special case of [16, Proposition 10].
**Proposition 7**.:
1. _Any cohomology class in_ \(H^{1}(C,\mathcal{L})\) _can be represented by some set_ \(\{\xi_{i}\}\) _of local meromorphic sections on open subsets_ \(\{U_{i}\}\) _as above._
2. _The pairing between_ \(\{\xi_{ij}\}\) _and_ \(\psi\) _is given by_ \[\langle\psi,\{\xi_{ij}\}\rangle=\sum_{i=1}^{e}r_{p_{i}}.\] _This gives the natural nondegenerate pairing between_ \(H^{1}(C,\mathcal{L})\) _and its dual space_ \(H^{0}(C,\mathcal{L}^{\vee}\otimes\omega_{C})\)_._
### Normal sheaf of a map
Let \(\varphi\colon C\to X\) be a map from a complete integral curve to a smooth surface. Let \(\{p_{1},\dots,p_{e}\}\) be the set of points where \(\varphi\) is not a local embedding. We assume each \(p_{i}\) is a regular point of \(C\). We take a covering \(\mathcal{U}=\{U_{1},\dots,U_{m}\}\) of \(C\) as in Section 2.1. In particular, for each point \(p\in\{p_{1},\dots,p_{e}\}\), there is a unique open subset in \(\mathcal{U}\) containing \(p\). We write it by \(U_{p}\) as before. Let \(q\in C\setminus\{p_{1},\dots,p_{e}\}\) be any point. There is a neighborhood \(U_{q}\) of \(q\) in \(C\) on which \(\varphi\) is an isomorphism onto its image. Let \(V\) be a suitable open subset of \(X\) such that \(U_{q}\) is one of the connected components of \(\varphi^{-1}(V)\). There is a usual normal sheaf of the image of \(U_{q}\) defined by \(\mathcal{N}_{\varphi(U_{q})}=\mathcal{O}_{V}(\varphi(U_{q}))|_{\varphi(U_{q})}\). We will regard this also as a sheaf on \(U_{q}\) in an obvious way.
On the other hand, on a neighborhood \(U_{p_{i}}\) of \(p_{i}\), consider the sheaf \(\mathcal{N}_{\varphi|_{U_{p_{i}}}}\) defined by the exact sequence
\[0\to\mathcal{T}_{U_{p_{i}}}\to\varphi_{U_{p_{i}}}^{*}\mathcal{T}_{X}\to \mathcal{N}_{\varphi|_{U_{p_{i}}}}\to 0.\]
Here, \(\mathcal{T}_{U_{p_{i}}}\) and \(\mathcal{T}_{X}\) are the tangent sheaves.
The sheaves \(\mathcal{N}_{\varphi(U_{q})}\) and \(\mathcal{N}_{\varphi|_{U_{p_{i}}}}\) are naturally isomorphic on their intersection. Namely, they are naturally identified with the pull back of the normal sheaf of the image \(\varphi(U_{q}\cap U_{p_{i}})\). Thus, we obtain a global sheaf on \(C\).
**Definition 8**.: We denote the sheaf on \(C\) obtained in this way by \(\mathcal{N}_{\varphi}\).
The sheaf \(\mathcal{N}_{\varphi}\) has torsion at singular points \(\{p_{1},\dots,p_{e}\}\) of \(\varphi\). In particular, there is an exact sequence of sheaves on \(C\),
\[0\to\mathcal{H}_{\varphi}\to\mathcal{N}_{\varphi}\to\bar{\mathcal{N}}_{\varphi }\to 0,\]
where \(\mathcal{H}_{\varphi}\) is a torsion sheaf and \(\bar{\mathcal{N}}_{\varphi}\) is locally free. The sheaf \(\bar{\mathcal{N}}_{\varphi}\) is also described on a neighborhood \(U_{p_{i}}\) of \(p_{i}\) as follows.
**Lemma 9**.: _[_20_, Section 3.4.3]_ _There is an exact sequence_
\[0\to\mathcal{T}_{U_{p_{i}}}(Z_{p_{i}})\to\varphi|_{U_{p_{i}}}^{*}\mathcal{T}_ {X}\to\bar{\mathcal{N}}_{\varphi}|_{U_{p_{i}}}\to 0.\]
_Here, \(Z=(d\varphi)\) is the ramification divisor and \(Z_{p_{i}}\) is its restriction to \(p_{i}\). _
The set \(\{p_{1},\dots,p_{e}\}\) is the support of \(Z\). Given a \(k\)-th order deformation \(\varphi_{k}\) of \(\varphi\) for a non-negative integer \(k\), the obstruction to deforming it one step further is represented by a cocycle defined by taking the difference of local deformations on \(U_{i}\). The obstruction class belongs to the cohomology group \(H^{1}(C,\bar{\mathcal{N}}_{\varphi})\).
### Obstructions to deformations of singular curves on surfaces
By Lemma 9, we have the following.
**Lemma 10**.: _We have an isomorphism_
\[\bar{\mathcal{N}}_{\varphi}\cong\varphi^{*}\omega_{X}^{-1}\otimes\omega_{C}(-Z),\]
_of sheaves on \(C\), where \(\omega_{X}\) is the canonical sheaf of \(X\) and \(\omega_{C}\) is the dualizing sheaf of the possibly singular reduced curve \(C\)._
Proof.: Outside the set \(\{p_{1},\ldots,p_{e}\}\), this follows by the adjunction. At \(\{p_{1},\ldots,p_{e}\}\), this follows from Lemma 9. These isomorphisms glue, since on the intersection of open subsets, \(\bar{\mathcal{N}}_{\varphi}\) is naturally isomorphic to the normal sheaf of the image of \(\varphi\).
By the Serre duality, we have
\[H^{1}(C,\bar{\mathcal{N}}_{\varphi})\cong H^{0}(C,\varphi^{*}\omega_{X}(Z))^{ \vee}.\]
Let \(p\) be a singular point of \(\varphi\). Taking a suitable coordinate system \(\{z,w\}\) around \(\varphi(p)\) on \(X\), and a suitable local parameter \(s\) on the open subset \(U_{p}\) of \(C\), the map \(\varphi\) can be parameterized as
\[(z,w)=(s^{a},\;s^{b}+s^{b+1}g_{0}(s)),\]
where \(g_{0}(s)\) is an analytic function, \(a-1\) is the vanishing order of \(d\varphi\) at \(p\) (in other words, the coefficient of \(p\) in the ramification divisor \(Z\)), and \(b\) is a positive integer larger than \(a\). See, for example, [11, Chapter I, Corollary 3.8]. By a coordinate change on \(X\), we can assume \(b\) is not a multiple of \(a\). We assume this hereafter.
Its first order deformation is, up to a change of the parameter \(s\), of the following form:
\[(z,w)=(s^{a}+t\sum_{i=0}^{a-2}c_{a-i}s^{i},\;s^{b}+s^{b+1}g_{0}(s)+tg_{1}(s)),\]
where \(c_{i}\) is a complex number, and \(g_{1}\) is an analytic function. Note that since \(U_{p}\) is non-singular, its infinitesimal deformation is trivial. Thus, if we have a \(k\)-th order deformation \(\varphi_{k}\colon C_{k}\to X\times\operatorname{Spec}\mathbb{C}[t]/t^{k+1}\) of \(\varphi\), the restriction of it to \(U_{p}\) gives a map between locally ringed spaces from \(U_{p}\times\operatorname{Spec}\mathbb{C}[t]/t^{k+1}\) to \(X\times\operatorname{Spec}\mathbb{C}[t]/t^{k+1}\). We call any function on \(U_{p}\times\operatorname{Spec}\mathbb{C}[t]/t^{k+1}\) which reduces to the given parameter \(s\) on \(U_{p}\) over \(\mathbb{C}[t]/t\) a parameter on \(U_{p}\times\operatorname{Spec}\mathbb{C}[t]/t^{k+1}\), and we will denote it again by \(s\). Similarly, we will also consider a parameter on \(U_{p}\) defined over \(\mathbb{C}[[t]]\).
The part \(t\sum_{i=0}^{a-2}c_{a-i}s^{i}\), which can be seen as an element of the \((a-1)\)-dimensional vector space \(V_{p}:=\{c_{a}+c_{a-1}s+\cdots+c_{2}s^{a-2}\,|\,c_{i}\in\mathbb{C}\}\), corresponds to the torsion part of \(\mathcal{N}_{\varphi}\) (precisely speaking, it is the sum of a torsion element and a non-torsion element, see Example 11 below). The part \(g_{1}\) corresponds to a section of the sheaf \(\bar{\mathcal{N}}_{\varphi}\). Similarly, given a \(k\)-th order deformation \(\varphi_{k}\) of \(\varphi\) whose restriction to \(U_{p}\times\operatorname{Spec}\mathbb{C}[t]/t^{k+1}\) is parameterized as
\[(z,w)=(s^{a}+\sum_{j=1}^{k}\sum_{i=0}^{a-2}t^{j}c_{a-i,j}s^{i},\;s^{b}+s^{b+1} g_{0}(s)+\sum_{j=1}^{k}t^{j}g_{j}(s)),\]
its local \((k+1)\)-th order deformations are given by
\[(z,w)=(s^{a}+\sum_{j=1}^{k+1}\sum_{i=0}^{a-2}t^{j}c_{a-i,j}s^{i},\;s^{b}+s^{b+1}g _{0}(s)+\sum_{j=1}^{k+1}t^{j}g_{j}(s)),\]
where \(c_{a-i,j}\) is a complex number and \(g_{j}(s)\) is an analytic function. Here, the parameter \(s\) on \(U_{p}\times\operatorname{Spec}\mathbb{C}[t]/t^{k+2}\) is chosen so that it reduces to a given parameter on \(U_{p}\times\operatorname{Spec}\mathbb{C}[t]/t^{k+1}\).
**Example 11**.: In the case of the map \(\varphi\colon\mathbb{C}\to\mathbb{C}^{2}\) given by \(s\mapsto(s^{2},s^{3})\), whose image is the ordinary cusp, the sheaf \(\mathcal{N}_{\varphi}\) is given by \(\mathcal{O}_{\mathbb{C}}\langle\partial_{z},\partial_{w}\rangle/(2s\partial _{z}+3s^{2}\partial_{w})\), where \(\{z,w\}\) is the standard coordinate system on \(\mathbb{C}^{2}\), and \(\partial_{z}\), \(\partial_{w}\) are the standard generators of the tangent sheaf of \(\mathbb{C}^{2}\). In this case, \(t\sum_{i=0}^{a-2}c_{a-i}s^{i}=tc_{2}\) and it corresponds to the section \(c_{2}\partial_{z}\) of \(\mathcal{N}_{\varphi}\). It is the sum of a torsion element \(c_{2}(\partial_{z}+\frac{3s}{2}\partial_{w})\) and a non-torsion element \(-\frac{3c_{2}s}{2}\partial_{w}\).
Let \(k\) be a non-negative integer and let \(\varphi_{k}\) be a \(k\)-th order deformation of \(\varphi\). Let \(\mathcal{G}_{k+1}\) be the group of automorphisms of \(\mathcal{O}_{U_{p}}\times\mathbb{C}[t]/t^{k+2}\) which are the identity over \(\mathbb{C}[t]/t^{k+1}\), where \(\mathcal{O}_{U_{p}}\) is the sheaf of functions on the neighborhood \(U_{p}\) of \(p\), considered as an analytic locally ringed space. The group \(\mathcal{G}_{k+1}\) acts on the ringed space \(U_{p}\times\operatorname{Spec}\mathbb{C}[t]/t^{k+2}\) as automorphisms. Consequently, it also acts on the set of \((k+1)\)-th order local deformations of \(\varphi\) on \(U_{p}\) which restricts to \(\varphi_{k}\) over \(\mathbb{C}[t]/t^{k+1}\). Let \(\varphi_{k+1}|_{U_{p}},\varphi^{\prime}_{k+1}|_{U_{p}}\) be such local deformations. We call them equivalent if there is some \(g\in\mathcal{G}_{k+1}\) such that \(\varphi_{k+1}|_{U_{p}}=\varphi^{\prime}_{k+1}|_{U_{p}}\circ g\). In the following statement, we use the same notation as above. In particular, \(V_{p}=\{c_{a}+c_{a-1}s+\cdots+c_{2}s^{a-2}\,|\,c_{i}\in\mathbb{C}\}\).
**Proposition 12**.: _Given a \(k\)-th order analytic deformation \(\varphi_{k}\) of \(\varphi\) on \(U_{p}\), the set of equivalence classes of \((k+1)\)-th order analytic deformations of \(\varphi\) on \(U_{p}\) which reduce to \(\varphi_{k}\) is naturally isomorphic to the set of sections of the sheaf_
\[\mathcal{O}_{U_{p}}\cdot\partial_{w}\oplus i_{*}V_{p}\cdot\partial_{z}\]
_on \(U_{p}\). Here, \(\{\partial_{z},\partial_{w}\}\) is the pullback of the natural basis of the tangent bundle of the coordinate neighborhood \(\{z,w\}\) on \(X\) by \(\varphi\). Also, the vector space \(V_{p}\) is regarded as the constant sheaf on the point \(p\), and \(i\colon\{p\}\to U_{p}\) is the inclusion._
Proof.: We define a map from the set of equivalence classes of deformations to the set of sections of \(\mathcal{O}_{U_{p}}\cdot\partial_{w}\oplus i_{*}V_{p}\cdot\partial_{z}\) by
\[\varphi_{k+1}|_{U_{p}}\mapsto g_{k+1}(s)\cdot\partial_{w}+g_{1,k+1}(s)\cdot \partial_{z},\]
where \(g_{1,k+1}(s)=\sum_{i=0}^{a-2}c_{a-i,k+1}s^{i}\). This is well-defined since if we apply an element of \(\mathcal{G}_{k+1}\), it changes the coordinate \(s\) to another one of the form \(s+t^{k+1}b_{k+1}(s)\), where \(b_{k+1}(s)\) is a holomorphic function. Then, it breaks the form of \(g_{1,k+1}(s)\). Namely, it produces a non-zero coefficient of \(s^{a^{\prime}}\), \(a^{\prime}\geq a-1\).
Conversely, given a section \(\alpha(s)\cdot\partial_{w}+\beta(s)\cdot\partial_{z}\) of \(\mathcal{O}_{U_{p}}\cdot\partial_{w}\oplus i_{*}V_{p}\cdot\partial_{z}\), we define a deformation of \(\varphi_{k}\) by taking \(g_{k+1}(s)=\alpha(s)\) and \(g_{1,k+1}(s)=\beta(s)\). Then, we obtain the inverse mapping.
The sheaf \(\mathcal{O}_{U_{p}}\cdot\partial_{w}\oplus i_{*}V_{p}\cdot\partial_{z}\) is naturally isomorphic to the sheaf \(\mathcal{N}_{\varphi}\) restricted to \(U_{p}\), though the direct sum decomposition does not coincide with the more natural
\(\mathcal{H}_{\varphi}\oplus\bar{\mathcal{N}}_{\varphi}\) that appeared in the previous subsection, as in Example 11. However, the part \(\mathcal{O}_{U_{p}}\cdot\partial_{w}\) is naturally isomorphic to \(\bar{\mathcal{N}}_{\varphi}\).
#### 2.3.1. Explicit presentation of the obstruction cocycle in the first non-trivial case
By Proposition 12, if a local deformation of \(\varphi\) associated with the summand \(i_{*}V_{p}\cdot\partial_{z}\) can be extended globally after modifying by sections of \(\bar{\mathcal{N}}_{\varphi}\) if necessary, it gives a non-trivial deformation of \(\varphi\). Moreover, it deforms the singularity of \(\varphi\) at \(p\), since it lowers the multiplicity \(a\) of the singularity. So, we study the obstruction to the deformations associated with this part. Take a covering \(\mathcal{U}=\{U_{1},\ldots,U_{m}\}\) of \(C\) as before. Recall that for each singular point \(p\in\{p_{1},\ldots,p_{e}\}\) of \(\varphi\), there is a unique open subset \(U_{p}\) belonging to \(\mathcal{U}\) which contains \(p\). Let \(s\) be a parameter on \(U_{p}\). We also denote by \(s\) a parameter on \(U_{p}\) defined over \(\mathbb{C}[[t]]\) which reduces to the given one over \(\mathbb{C}[t]/t\). Let \(S\) be a local parameter over \(\mathbb{C}[[t]]\) on a punctured neighborhood \(\mathring{U}_{p}=U_{p}\setminus\{p\}\) of \(p\) in \(C\) which satisfies
\[S^{a}=s^{a}+t\sum_{i=0}^{a-2}c_{a-i}s^{i}.\]
Given a polynomial \(\beta(s)\), since one of the \(a\)-th roots of \(s^{a}+t\beta(s)\) is given by \(s(1+\sum_{i=1}^{\infty}\prod_{j=0}^{i-1}(\frac{1}{a}-j)\frac{1}{i!}(t\frac{ \beta(s)}{s^{a}})^{i})\), we can take
\[S=s(1+\sum_{i=1}^{\infty}\prod_{j=0}^{i-1}(\frac{1}{a}-j)\frac{1}{i!}(t\sum_{ l=2}^{a}\frac{c_{l}}{s^{l}})^{i}).\]
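The following computational sketch is not part of the argument; assuming the Python library sympy is available, it checks this expansion for the hypothetical values \(a=3\), \(\beta(s)=c_{2}s+c_{3}\): the binomial series for \((1+x)^{1/a}\), truncated at order \(t^{N}\), reproduces \(S^{a}=s^{a}+t\beta(s)\) modulo \(t^{N}\).

```python
import sympy as sp

s, t, c2, c3 = sp.symbols('s t c2 c3')
a, N = 3, 4
beta = c2*s + c3                 # the perturbation: S^a = s^a + t*beta(s)
x = t*beta/s**a
# truncated binomial series for (1 + x)^(1/a); binomial(1/a, i) = prod_{j<i}(1/a - j)/i!
S = s*sum(sp.binomial(sp.Rational(1, a), i) * x**i for i in range(N))
err = sp.expand(S**a - (s**a + t*beta))
# every surviving term has order >= t^N, so the relation holds modulo t^N
assert err.series(t, 0, N).removeO() == 0
print("S^a = s^a + t*beta(s) holds modulo t^%d for a = %d" % (N, a))
```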
Consider a first order local deformation of \((z,w)=(s^{a},s^{b}+s^{b+1}g_{0}(s))\) (now this is considered over \(\mathbb{C}[t]/t\)) of the form
\[(z,w)=(S^{a},\;S^{b}+S^{b+1}g_{0}(S)),\]
defined on \(\mathring{U}_{p}\). Evidently, we have \(S^{a}=s^{a}+t\sum_{i=0}^{a-2}c_{a-i}s^{i}\), which holds up to any order with respect to \(t\). One also sees that \(S^{b}+S^{b+1}g_{0}(S)\), expanded in terms of \(s\) as a function defined over \(\mathbb{C}[t]/t^{2}\), can be extended to \(p\). In other words, the expansion does not include any term with negative powers of \(s\). This means that the deformed curve is also extended to \(U_{p}\), and is still locally defined by the same equation as that of \((z,w)=(s^{a},s^{b}+s^{b+1}g_{0}(s))\), now considered over \(\mathbb{C}[t]/t^{2}\). On an open subset \(U_{j}\) which does not contain a singular point of \(\varphi\), we can regard \(\varphi\) itself as a local deformation of \(\varphi\) over \(\mathbb{C}[t]/t^{2}\).
The obstruction to the existence of a global deformation is given by the difference of local deformations on overlaps like \(U_{p}\cap U_{j}\). If we take local deformations as in the previous paragraph, the difference gives the zero section of the sheaf \(\bar{\mathcal{N}}_{\varphi}\) on each \(U_{p}\cap U_{j}\). Thus, there is no obstruction to the existence of a first order deformation. Note that a deformation obtained in this way has the image whose local defining equations are the same as those of \(\varphi\), but regarded as equations over \(\mathbb{C}[t]/t^{2}\).
**Remark 13**.: _Note that although the image is the same, the map over \(\mathbb{C}[t]/t^{2}\) may not be a trivial deformation of \(\varphi\), since the difference between local deformations can give a nontrivial cocycle with values in the tangent sheaf of \(C\). In this case, the domain curve of the deformed map is a nontrivial deformation of \(C\), and consequently, the map is also nontrivially deformed._
The same holds until a singular term appears in the expansion of \(S^{b}+S^{b+1}g_{0}(S)\) at some \(p\in\{p_{1},\ldots,p_{e}\}\). Let \(t^{k}\) be the minimal order where such a term appears. Let \(\varphi_{k-1}\) be the \((k-1)\)-th order deformation of \(\varphi\) obtained in the way described above. Thus, the image of \(\varphi_{k-1}\) is the same as that of \(\varphi\). In particular, on \(U_{p}\), the map \(\varphi_{k-1}\) has the parameterization \((z,w)=(S^{a},S^{b}+S^{b+1}g_{0}(S))\).
Now, we take local deformations of \(\varphi_{k-1}\) as follows. On an open subset \(U_{i}\), which does not contain a singular point \(p\) at which a singular term appears in \(S^{b}+S^{b+1}g_{0}(S)\), we take the trivial local deformation of \(\varphi_{k-1}\) as above, whose image is given by the same defining equation as the image of \(\varphi\) restricted to \(U_{i}\), but considered over \(\mathbb{C}[t]/t^{k+1}\). On the open subset \(U_{p}\) which contains \(p\), we have a parameterization of \(\varphi_{k-1}\) given by \((z,w)=(S^{a},S^{b}+S^{b+1}g_{0}(S))\) defined over \(\mathbb{C}[t]/t^{k}\). If we expand \(S^{b}+S^{b+1}g_{0}(S)\) after substituting \(S=s(1+\sum_{i=1}^{\infty}\prod_{j=0}^{i-1}(\frac{1}{a}-j)\frac{1}{i!}(t\sum_{l=2}^{a}\frac{c_{l}}{s^{l}})^{i})\), it contains singular terms of the order \(t^{k}\), and we take a local deformation on \(U_{p}\) by simply discarding these singular terms. We write it by \((z,w)=(S^{a},\overline{S^{b}+S^{b+1}g_{0}(S)})\).
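To make the appearance of singular terms concrete, the following sketch (not part of the argument; it assumes sympy and uses the hypothetical values \(a=3\), \(b=4\), \(g_{0}=0\)) expands \(S^{b}\) in powers of \(t\) and prints the singular part of each coefficient. The first singular terms show up at order \(t^{2}\), which is the minimal order \(t^{k}\) in the discussion above.

```python
import sympy as sp

s, t, c2, c3 = sp.symbols('s t c2 c3')
a, b, N = 3, 4, 4
x = t*(c2/s**2 + c3/s**3)                      # so that S^b = s^b * (1 + x)^(b/a)
Sb = sp.expand(s**b * sum(sp.binomial(sp.Rational(b, a), i) * x**i for i in range(N)))

for m in range(N):
    cm = sp.expand(Sb.coeff(t, m))
    # keep only the terms with a positive power of s in the denominator
    singular = sum(term for term in sp.Add.make_args(cm)
                   if sp.degree(sp.denom(sp.together(term)), s) > 0)
    print("order t^%d, singular part: %s" % (m, singular))
```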
The difference between the local deformations gives the obstruction Cech 1-cocycle to deforming \(\varphi_{k-1}\). In terms of the formulation in Section 2.1, this is described as follows. By taking a refinement of the open covering \(\{U_{i}\}\) if necessary, we assume that if each \(U_{i}\) and \(U_{j}\) (\(i\neq j\)) contains a singular point of \(\varphi\), we have \(U_{i}\cap U_{j}=\emptyset\).
**Proposition 14**.: _The obstruction cocycle associated with the local deformations of \(\varphi_{k-1}\) described above is represented by the following set of local meromorphic sections of \(\bar{\mathcal{N}}_{\varphi}\) on each \(U_{i}\). Namely, give the zero section to all the open subsets \(U_{i}\) which do not contain a singular point of the map \(\varphi\). On the open subset \(U_{p}\) containing a singular point \(p\), attach a meromorphic section \((\overline{S^{b}+S^{b+1}g_{0}(S)}-(S^{b}+S^{b+1}g_{0}(S)))\partial_{w}\) (precisely speaking, the coefficient of \(t^{k}\) of it) using the notation above._
Note that if a singular term does not appear in the expansion of \(S^{b}+S^{b+1}g_{0}(S)\) at some \(q\in\{p_{1},\ldots,p_{e}\}\), the meromorphic section attached to \(U_{q}\) is the zero section.
Proof.: The coefficient of \(t^{k}\) of the meromorphic function \(S^{b}+S^{b+1}g_{0}(S)\), which we write by \((S^{b}+S^{b+1}g_{0}(S))_{k}\), corresponds to a holomorphic section \((S^{b}+S^{b+1}g_{0}(S))_{k}\partial_{w}\) of \(\bar{\mathcal{N}}_{\varphi}\) on the punctured disc \(\mathring{U}_{p}\) under the correspondence in Proposition 12 (precisely speaking, its slightly modified version over \(\mathring{U}_{p}\)). Note that this corresponds to a deformation of \(\varphi_{k-1}\) on \(\mathring{U}_{p}\) whose image is defined by the same equation as the image of \(\varphi\). In particular, on the intersection \(\mathring{U}_{p}\cap U_{j}\) with another open subset, the images of the local deformation given by \((S^{b}+S^{b+1}g_{0}(S))_{k}\partial_{w}\) on \(\mathring{U}_{p}\) and that given by the zero section on \(U_{j}\) coincide, since they are both defined by the same equation. It follows that the difference on \(\mathring{U}_{p}\cap U_{j}\) between these local deformations belongs to the tangent sheaf of \(\varphi(C)\), which is zero in \(\bar{\mathcal{N}}_{\varphi}\).
Therefore, the section of \(\bar{\mathcal{N}}_{\varphi}\) on the intersection \(U_{p}\cap U_{j}\) given by the difference of the local deformations corresponding to \((z,w)=(S^{a},\overline{S^{b}+S^{b+1}g_{0}(S)})\) on \(U_{p}\) and to the zero section on \(U_{j}\) is equal to \((\overline{S^{b}+S^{b+1}g_{0}(S)}-(S^{b}+S^{b+1}g_{0}(S)))\partial_{w}\) (precisely speaking, the coefficient of \(t^{k}\) of it). This proves the claim.
In general, such a section makes a non-trivial residue pairing with elements of \(H^{0}(C,\bar{\mathcal{N}}_{\varphi}^{\vee}\otimes\omega_{C})\), which calculates a contribution to the obstruction to deforming \(\varphi_{k-1}\), by Proposition
7. At higher orders, the calculation of a representative of the obstruction class is more involved. See Section 4.2. In the rest of this paper, we study this contribution to the obstruction in more detail.
#### 2.3.2. Coefficients of singular terms
Take a deformation of the form \((z,w)=(S^{a},S^{b}+S^{b+1}g_{0}(S))\) of \((z,w)=(s^{a},s^{b}+s^{b+1}g_{0}(s))\) around a singular point \(p\) of \(\varphi\) as before. It is primarily defined only over \(\mathring{U}_{p}\). Here, \(S\) satisfies \(S^{a}=s^{a}+t\sum_{i=0}^{a-2}c_{a-i}s^{i}\). Although we took \(c_{i}\) to be complex numbers in the above argument, in general we can take them to be elements of \(\mathbb{C}[[t]]\). For notational simplicity, we rewrite \(S^{a}\) as \(S^{a}=s^{a}+\sum_{i=0}^{a-2}c_{a-i}s^{i}\), where \(c_{i}\in t\mathbb{C}[[t]]\). Moreover, it will be convenient to take \(c_{i}\in t^{i}\mathbb{C}[[t]]\) in view of a certain homogeneity property, see Definition 29. We assume this hereafter.
Recall that we can assume \(b\) is not a multiple of \(a\). Then, we have
\[S^{b}=s^{b}(1+\sum_{k=2}^{a}\frac{c_{k}}{s^{k}})^{\frac{b}{a}}=s^{b}(1+\sum_{ i=1}^{\infty}\prod_{l=0}^{i-1}(\frac{b}{a}-l)\frac{1}{i!}(\sum_{k=2}^{a} \frac{c_{k}}{s^{k}})^{i}).\]
We write this in the form
\[S^{b}=s^{b}(1+\sum_{i=1}^{\infty}f_{i}^{(b)}(\mathbf{c})\frac{1}{s^{i}}),\]
where \(\mathbf{c}=(c_{2},\ldots,c_{a})\). The coefficients \(f_{i}^{(b)}\) are given as follows.
**Lemma 15**.: _We have_
\[f_{b+j}^{(b)}(\mathbf{c})=\sum_{\lambda\in\mathcal{P}(b+j;[2,a])}\begin{pmatrix} &\frac{b}{a}\\ \lambda(2)&\cdots&\lambda(a)\end{pmatrix}c_{2}^{\lambda(2)}\cdots c_{a}^{ \lambda(a)},\]
_(this is the coefficient of \(s^{-j}\)); here, \(\mathcal{P}(b+j;[2,a])\) is the set of partitions of \(b+j\) using only integers in \([2,a]\), and \(\lambda(h)\) is the multiplicity of the integer \(h\) in the partition \(\lambda\). Also,_
\[\begin{pmatrix}\alpha&\\ \beta_{1}&\cdots&\beta_{k}\end{pmatrix}=\frac{\prod_{i=0}^{\beta_{1}+\cdots+ \beta_{k}-1}(\alpha-i)}{\beta_{1}!\cdots\beta_{k}!}.\]
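The formula of Lemma 15 can be checked by machine in small cases. The sketch below (not part of the argument; it assumes sympy, and the helper names `f_partition` and `f_series` are our own) compares the partition expression with the coefficient extracted from a direct series expansion for the hypothetical values \(a=3\), \(b=4\).

```python
import sympy as sp
from sympy.utilities.iterables import partitions

a, b = 3, 4
u, c2, c3 = sp.symbols('u c2 c3')       # u plays the role of 1/s
c = {2: c2, 3: c3}

def f_partition(b, j, a):
    """f^{(b)}_{b+j} via the partition formula of Lemma 15."""
    total = sp.Integer(0)
    for lam in [p.copy() for p in partitions(b + j)]:
        if any(part < 2 or part > a for part in lam):
            continue
        size = sum(lam.values())
        coeff = sp.prod([sp.Rational(b, a) - i for i in range(size)]) \
            / sp.prod([sp.factorial(m) for m in lam.values()])
        total += coeff * sp.prod([c[k]**m for k, m in lam.items()])
    return sp.expand(total)

def f_series(b, j, a):
    """The same coefficient read off from (1 + sum_k c_k u^k)^{b/a}."""
    expr = (1 + sum(c[k]*u**k for k in range(2, a + 1)))**sp.Rational(b, a)
    return sp.expand(sp.series(expr, u, 0, b + a).removeO().coeff(u, b + j))

for j in range(1, a):
    assert sp.simplify(f_partition(b, j, a) - f_series(b, j, a)) == 0
print("Lemma 15 agrees with the direct expansion for a=3, b=4, j=1,...,a-1")
```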
The coefficients of the expansions of the other terms in \(S^{b}+S^{b+1}g_{0}(S)\) can be expressed similarly. We write it as \(S^{b^{\prime}}=s^{b^{\prime}}(1+\sum_{i=1}^{\infty}f_{i}^{(b^{\prime})}( \mathbf{c})\frac{1}{s^{i}})\), \(b^{\prime}\geq b\).
### Comparison of parameterizations
In later sections, we will construct deformations of \(\varphi\). In general, given an \(N\)-th order deformation \(\varphi_{N}\) of \(\varphi\) for some positive integer \(N\), there is a non-trivial obstruction to deforming \(\varphi_{N}\) one step further. It means that the map \(\varphi_{N}\) itself does not deform any more. To overcome this problem, we will change the values of \(c_{i}\) at each singular point of \(\varphi\) at the orders lower than \(t^{N+1}\) to eliminate the obstruction associated with \(\varphi_{N}\). This results in changing the map \(\varphi_{N}\) in lower orders. More precisely, we construct a new map \(\bar{\varphi}_{N}\) which is equal to \(\varphi_{N}\) only up to some order \(t^{N^{\prime}}\), \(N^{\prime}<N\), but the obstruction to deforming \(\bar{\varphi}_{N}\) vanishes. Thus, there is a map \(\bar{\varphi}_{N+1}\) extending \(\bar{\varphi}_{N}\). We will do this in a way that \(N^{\prime}\) goes to \(\infty\) when \(N\) goes to \(\infty\). Thus, eventually we will obtain a formal deformation of \(\varphi\). Then, by applying an appropriate algebraization theorem [1], we will have an actual deformation.
In this argument, knowing the relation between the parameterizations on a neighborhood of a singular point of \(\varphi\) associated with different values of \(c_{i}\) is important. In this section, we will study this issue.
At a singular point \(p\) of \(\varphi\), write
\[S^{b}+S^{b+1}g_{0}(S)=\sum_{l=-\infty}^{\infty}\sigma_{-l}(\mathbf{c})s^{l},\]
where \(\sigma_{-l}\) is a series, which is the sum of polynomials of the form \(f_{b^{\prime}-l}^{(b^{\prime})}\) in Lemma 15, with \(b^{\prime}\geq b\). Here, \(S=s(1+\sum_{i=1}^{\infty}\prod_{j=0}^{i-1}(\frac{1}{a}-j)\frac{1}{i!}(\sum_{k= 2}^{a}\frac{c_{k}}{s^{k}})^{i})\) as in Section 2.3.1. As in the previous subsection, we take \(c_{i}\in t^{i}\mathbb{C}[[t]]\). Assume that we have an \(N\)-th order deformation \(\varphi_{N}\) of \(\varphi\). Let \(\{z,w\}\) be a local coordinate system on \(X\) around \(\varphi(p)\). We can take it so that the pull back of it by \(\varphi_{N}\) is given in the form
\[(z,w)=(s^{a}+c_{2}(N)s^{a-2}+\cdots+c_{a}(N),\sum_{l=0}^{\infty}\sigma_{-l}(c _{2}(N),\ldots,c_{a}(N))s^{l}+h_{N}(s,t)),\ \ \mbox{mod}\ t^{N+1},\]
where \(c_{i}(N)\in t^{i}\mathbb{C}[[t]]\) and \(h_{N}(s,t)\) is a holomorphic function. From now on, we will write \((c_{2}(N),\ldots,c_{a}(N))=\mathbf{c}(N)\) for notational simplicity.
We will perturb \(\mathbf{c}(N)\) to \(\mathbf{c}(N+1)\) in the form
\[c_{i}(N+1)=c_{i}(N)+\delta_{i}, \tag{1}\]
where \(\delta_{i}\in t^{i+1}\mathbb{C}[[t]]\). We compare the parameterizations
\[(z,w)=(s^{a}+c_{2}(N)s^{a-2}+\cdots+c_{a}(N),\sum_{l=0}^{\infty}\sigma_{-l}( \mathbf{c}(N))s^{l})\]
and
\[(z,w)=(s^{a}+c_{2}(N+1)s^{a-2}+\cdots+c_{a}(N+1),\sum_{l=0}^{\infty}\sigma_{-l }(\mathbf{c}(N+1))s^{l}).\]
Note that we have dropped the part \(h_{N}(s,t)\). A calculation needed for this part will be provided later in this subsection.
We first note the following.
**Lemma 16**.: _On a punctured neighborhood of \(p\), there is a change of the parameter \(s\) defined over \(\mathbb{C}[[t]]\) which transforms \(s^{a}+c_{2}(N+1)s^{a-2}+\cdots+c_{a}(N+1)\) into \(s^{a}+c_{2}(N)s^{a-2}+\cdots+c_{a}(N)\)._
Proof.: Put
\[s(N+1)=s-\frac{1}{a}\sum_{i=2}^{a}\frac{\delta_{i}^{\prime}}{s^{i-1}}+\sum_{i= a+1}^{\infty}\frac{\varepsilon_{i}}{s^{i-1}},\]
where \(\delta_{i}^{\prime}\) and \(\varepsilon_{i}\) are unknown series in \(\mathbb{C}[[t]]\). The equation
\[s(N+1)^{a}+c_{2}(N+1)s(N+1)^{a-2}+\cdots+c_{a}(N+1)=s^{a}+c_{2}(N)s^{a-2}+ \cdots+c_{a}(N)\]
can be solved order by order with respect to \(s\) so that the unknown variables \(\delta_{i}^{\prime}\) and \(\varepsilon_{i}\) are uniquely determined. Explicitly, in the above equation, the condition that the coefficients
of \(s^{a-i}\) on the left and the right hand sides coincide for \(2\leq i\leq a\), is equivalent to the equation of the form
\[\delta^{\prime}_{i}=F_{i}(\delta_{2},\ldots,\delta_{i},\delta^{\prime}_{2}, \ldots,\delta^{\prime}_{i-1},c_{2}(N),\ldots,c_{a}(N)),\]
where \(F_{i}\) is a polynomial. This fixes \(\delta^{\prime}_{i}\) uniquely, since by induction we can assume \(\delta^{\prime}_{2},\ldots,\delta^{\prime}_{i-1}\) are already fixed. Similarly, for \(l<0\), the condition that the coefficient of \(s^{l}\) in the above equation vanishes is equivalent to the equation
\[\varepsilon_{a-l}=F_{a-l}(\delta_{2},\ldots,\delta_{a},\delta^{\prime}_{2}, \ldots,\delta^{\prime}_{a},\varepsilon_{a+1},\ldots,\varepsilon_{a-l-1},c_{2} (N),\ldots,c_{a}(N)),\]
for some polynomial \(F_{a-l}\). Again, this determines \(\varepsilon_{a-l}\) uniquely.
**Definition 17**.: For an element \(\delta\in\mathbb{C}[[t]]\), let \(ord(\delta)\) be the maximal integer \(k\) such that \(\delta\) is divisible by \(t^{k}\).
**Lemma 18**.: _The coefficient \(\delta^{\prime}_{i}\) in the proof of Lemma 16 is given by_
\[\delta^{\prime}_{i}=\delta_{i}-\sum_{j=2}^{i-2}\frac{a-j}{a}\delta_{i-j}c_{j}( N)+O(\delta^{2}),\]
_for \(4\leq i\leq a\), where \(O(\delta^{2})\) is the sum of terms which are quadratic or more with respect to \(\delta_{2},\ldots,\delta_{a}\). Also, we have \(\delta^{\prime}_{2}=\delta_{2}\) and \(\delta^{\prime}_{3}=\delta_{3}\). Moreover, we have_
\[ord(\delta^{\prime}_{i})\geq i+\min_{j\in\{2,3,\ldots,i-2,i\}}\{ord(\delta_{j })-j\}.\]
Proof.: This follows from direct calculation. For the last claim, we note that if a monomial \(\prod_{j=2}^{a}\delta_{j}^{p_{j}}\prod_{k=2}^{a}(\delta^{\prime}_{k})^{q_{k}} \prod_{l=2}^{a}c_{l}(N)^{r_{l}}\) is contained in the part \(O(\delta^{2})\), we have
\[\sum_{j=2}^{a}jp_{j}+\sum_{k=2}^{a}kq_{k}+\sum_{l=2}^{a}lr_{l}=i,\]
and at least two of \(p_{j},q_{k}\) are not zero. Then, the claim follows by induction using the fact \(ord(c_{l}(N))\geq l\).
Note that since we assume \(ord(\delta_{i})\geq i+1\), we have \(ord(\delta^{\prime}_{i})\geq i+1\). Similarly, we have the following.
**Lemma 19**.: _For the coefficient \(\varepsilon_{i}\) in the proof of Lemma 16, we have_
\[ord(\varepsilon_{i})\geq i+\min_{2\leq j\leq a}\{ord(\delta_{j})-j\}.\]
Proof.: As in the proof of Lemma 16, \(\varepsilon_{a-l}\) can be expressed as a polynomial \(F_{a-l}\) of \(\delta_{i},\delta^{\prime}_{i},c_{i}(N)\) and \(\varepsilon_{a+1},\ldots,\varepsilon_{a-l-1}\). Also, if \(\prod_{i=2}^{a}\delta_{i}^{p_{i}}\prod_{j=2}^{a}(\delta^{\prime}_{j})^{q_{j}} \prod_{k=2}^{a}c_{k}(N)^{r_{k}}\prod_{m=a+1}^{a-l-1}\varepsilon_{m}^{s_{m}}\) is a monomial in \(F_{a-l}\), we have
\[\sum_{i=2}^{a}ip_{i}+\sum_{j=2}^{a}jq_{j}+\sum_{k=2}^{a}kr_{k}+\sum_{m=a+1}^{a -l-1}ms_{m}=a-l.\]
We can inductively assume \(ord(\varepsilon_{i})\geq i+\min_{2\leq j\leq a}\{ord(\delta_{j})-j\}\) for \(i<a-l\). Also, at least one of \(p_{i},q_{j},s_{m}\) is not zero. From these observations and Lemma 18, we obtain the claim.
By the definition of \(\mathbf{c}(N+1)\), we have the following.
**Proposition 20**.: _We have_
\[\sum_{l=-\infty}^{\infty}\sigma_{-l}(\mathbf{c}(N+1))s(N+1)^{l}=\sum_{l=-\infty }^{\infty}\sigma_{-l}(\mathbf{c}(N))s^{l},\]
_which holds over \(\mathbb{C}[[t]]\). Here, \(s(N+1)\) is given in Lemma 16._
Proof.: The parameterization
\[(z,w)=(s^{a}+c_{2}(N+1)s^{a-2}+\cdots+c_{a}(N+1),\sum_{l=-\infty}^{\infty} \sigma_{-l}(\mathbf{c}(N+1))s^{l})\]
is obtained by substituting \(S=s(1+\sum_{i=1}^{\infty}\prod_{j=0}^{i-1}(\frac{1}{a}-j)\frac{1}{i!}(\sum_{k=2 }^{a}\frac{c_{k}(N+1)}{s^{k}})^{i})\) to \((z,w)=(S^{a},S^{b}+S^{b+1}g_{0}(S))\). Substituting \(s(N+1)\) to \(s\), we have
\[(z,w)=(s^{a}+c_{2}(N)s^{a-2}+\cdots+c_{a}(N),\sum_{l=-\infty}^{\infty}\sigma_{ -l}(\mathbf{c}(N+1))s(N+1)^{l})\]
according to the definition of \(s(N+1)\). However, looking at \(z\), this must be identical to the substitution of \(S=s(1+\sum_{i=1}^{\infty}\prod_{j=0}^{i-1}(\frac{1}{a}-j)\frac{1}{i!}(\sum_{k =2}^{a}\frac{c_{k}(N)}{s^{k}})^{i})\) to \((z,w)=(S^{a},S^{b}+S^{b+1}g_{0}(S))\). Then, by comparing \(w\), we obtain the claimed identity.
Thus, we have the following.
**Corollary 21**.: _The equality_
\[\sum_{l=0}^{\infty}\sigma_{-l}(\mathbf{c}(N+1))s(N+1)^{l}-\sum_{l=0}^{\infty} \sigma_{-l}(\mathbf{c}(N))s^{l}=\sum_{l=-\infty}^{-1}\sigma_{-l}(\mathbf{c}(N ))s^{l}-\sum_{l=-\infty}^{-1}\sigma_{-l}(\mathbf{c}(N+1))s(N+1)^{l}\]
_holds. \(\square\)_
We also have the following by direct calculation.
**Lemma 22**.: _The equality_
\[\begin{array}{ll}\sum_{l=-\infty}^{-1}\sigma_{-l}(\mathbf{c}(N+1))s(N+1)^{l}&=\sum_{l=-\infty}^{-1}\sigma_{-l}(\mathbf{c}(N+1))(s-\frac{1}{a}\sum_{i=2}^{a}\frac{\delta_{i}^{\prime}}{s^{i-1}}+\sum_{i=a+1}^{\infty}\frac{\varepsilon_{i}}{s^{i-1}})^{l}\\ &=\sum_{l=-\infty}^{-1}(\sigma_{-l}(\mathbf{c}(N+1))-\sum_{i=2}^{a}\frac{l+i}{a}\delta_{i}^{\prime}\bar{\sigma}_{-l-i}(\mathbf{c}(N+1))+\sum_{i=a+1}^{\infty}(l+i)\varepsilon_{i}\bar{\sigma}_{-l-i}(\mathbf{c}(N+1))+\nu_{l})s^{l}\end{array}\]
_holds. Here, \(\nu_{l}\) is the sum of terms which are quadratic or more with respect to \(\delta_{i}\) and \(\varepsilon_{i}\). Also, \(\bar{\sigma}_{m}=\sigma_{m}\) for \(m>0\) and \(0\) otherwise. \(\square\)_
Finally, we give a calculation for the part \(h_{N}(s,t)\). Let \(g(s,t)\) be any holomorphic function. Substituting \(s(N+1)\) to \(s\), we obtain a meromorphic function \(g(s(N+1),t)\). Let \(g(s(N+1),t)_{reg}\) be its regular part and \(g(s(N+1),t)_{sing}=g(s(N+1),t)-g(s(N+1),t)_{reg}\) be its singular part with respect to the expansion using \(s\).
**Lemma 23**.: _There is a holomorphic function \(\bar{h}_{N}(s,t)\) such that_
\[\bar{h}_{N}(s(N+1),t)_{reg}=h_{N}(s,t),\ \ \text{mod}\ t^{N+2}.\]
Proof.: Substituting \(s(N+1)\) to \(s\) in \(h_{N}(s,t)\), the difference
\[H_{1}(s,t)=h_{N}(s(N+1),t)_{reg}-h_{N}(s,t)\]
is divisible by \(t^{m}\), where \(m\) is the minimal order of \(\{\delta^{\prime}_{i},\varepsilon_{i}\}\). This follows from the definition of \(s(N+1)\), see also Lemmas 18 and 19. Applying a similar process to \(h_{N}-H_{1}\), we see that the difference
\[\begin{array}{ll}H_{2}(s,t)&=(h_{N}-H_{1})(s(N+1),t)_{reg}-h_{N}(s,t)\\ &=(h_{N}(s(N+1),t)_{reg}-h_{N}(s,t))-H_{1}(s(N+1),t)_{reg}\\ &=H_{1}(s,t)-H_{1}(s(N+1),t)_{reg}\end{array}\]
is divisible by \(t^{2m}\). Similarly, \(H_{3}(s,t)=(h_{N}-H_{1}-H_{2})(s(N+1),t)_{reg}-h_{N}(s,t)\) is divisible by \(t^{3m}\). Repeating this, if we put \(\bar{h}_{N}(s,t)=(h_{N}-H_{1}-\cdots-H_{k})(s,t)\) so that \((k+1)m>N+1\), it satisfies the requirement of the claim.
### The dual space of obstructions
Recall that the obstruction to deforming the map \(\varphi\) lies in \(H^{1}(C,\bar{\mathcal{N}}_{\varphi})\cong H^{0}(C,\varphi^{*}\omega_{X}(Z))^ {\vee}\). There is a natural map
\[i\colon H^{0}(X,\omega_{X})\to H^{0}(C,\varphi^{*}\omega_{X}(Z))\]
given by the pullback. The argument in [16] (see also [6, 14]) shows the following.
**Proposition 24**.: _Let \(\varphi_{N}\) be an \(N\)-th order deformation of \(\varphi\) and let \(o_{N}\in H^{1}(C,\bar{\mathcal{N}}_{\varphi})\) be the obstruction class to deforming it one step further. Then, under the pairing between \(H^{1}(C,\bar{\mathcal{N}}_{\varphi})\) and \(H^{0}(C,\varphi^{*}\omega_{X}(Z))\), elements in \(\operatorname{Im}i\) pair with \(o_{N}\) trivially. In particular, to identify the obstruction class \(o_{N}\), it suffices to consider the pairing between it and elements in \(\operatorname{Coker}i\)._
Proof.: We have a natural inclusion \(\bar{\mathcal{N}}_{\varphi}\to\varphi^{*}\mathcal{N}_{\varphi(C)}\). Therefore, we have a natural map
\[\varphi_{*}\bar{\mathcal{N}}_{\varphi}\to\varphi_{*}\varphi^{*}\mathcal{N}_{ \varphi(C)}\cong\mathcal{N}_{\varphi(C)}\otimes\varphi_{*}\mathcal{O}_{C}.\]
By the Leray spectral sequence, we have an isomorphism \(H^{1}(C,\bar{\mathcal{N}}_{\varphi})\cong H^{1}(\varphi(C),\varphi_{*}\bar{\mathcal{N}}_{\varphi})\). Thus, we have a natural map
\[\pi\colon H^{1}(C,\bar{\mathcal{N}}_{\varphi})\to H^{1}(\varphi(C),\varphi_{*} \varphi^{*}\mathcal{N}_{\varphi(C)})\cong H^{1}(\varphi(C),\mathcal{N}_{ \varphi(C)}).\]
The latter isomorphism is due to the natural exact sequence
\[0\to\mathcal{N}_{\varphi(C)}\to\varphi_{*}\varphi^{*}\mathcal{N}_{\varphi(C)} \to\mathcal{Q}\to 0,\]
where \(\mathcal{Q}\) is a torsion sheaf. Under this map, the obstruction class \(o_{N}\) gives the obstruction class \(\bar{o}_{N}\) to deforming the image \(\varphi(C)\). The dual of it gives a map \(H^{1}(\varphi(C),\mathcal{N}_{\varphi(C)})^{\vee}\to H^{1}(C,\bar{\mathcal{N }}_{\varphi})^{\vee}\). By the Serre duality and the adjunction, we have a natural isomorphism
\[H^{1}(\varphi(C),\mathcal{N}_{\varphi(C)})^{\vee}\cong H^{0}(\varphi(C), \omega_{X}|_{\varphi(C)}).\]
Then, the composition of natural maps
\[H^{0}(X,\omega_{X})\to H^{0}(\varphi(C),\omega_{X}|_{\varphi(C)})\cong H^{1} (\varphi(C),\mathcal{N}_{\varphi(C)})^{\vee}\to H^{1}(C,\bar{\mathcal{N}}_{ \varphi})^{\vee}\cong H^{0}(C,\varphi^{*}\omega_{X}(Z))\]
is the map \(i\).
The argument in [16] shows that in the natural pairing between \(H^{1}(\varphi(C),\mathcal{N}_{\varphi(C)})\) and \(H^{0}(\varphi(C),\omega_{X}|_{\varphi(C)})\), the class of \(H^{1}(\varphi(C),\mathcal{N}_{\varphi(C)})\) which is the obstruction to deforming \(\varphi(C)\) pairs trivially with those classes of \(H^{0}(\varphi(C),\omega_{X}|_{\varphi(C)})\) coming from \(H^{0}(X,\omega_{X})\). In particular, the classes in \(H^{0}(X,\omega_{X})\) annihilate the class \(\bar{o}_{N}\). It follows that the image of the map \(i\) annihilates the class \(o_{N}\).
**Definition 25**.: We call a map \(\varphi\)_semiregular_ if the map \(i\colon H^{0}(X,\omega_{X})\to H^{0}(C,\varphi^{*}\omega_{X}(Z))\) induces a surjection onto the subspace \(H^{0}(C,\varphi^{*}\omega_{X})\).
Classically, the semiregularity was defined for subvarieties [6, 14, 21, 22]. Namely, in the case of curves on surfaces, a curve \(C\subset X\) is semiregular if the embedding \(i\colon C\to X\) is semiregular in the above sense.
**Example 26**.: If the surface \(X\) is Fano or Calabi-Yau, any map \(\varphi\colon C\to X\) is semiregular.
In general, we have an exact sequence on the closed subvariety \(\varphi(C)\) of \(X\)
\[0\to\omega_{X}|_{\varphi(C)}\to\varphi_{*}\varphi^{*}\omega_{X}\to\mathcal{Q} \to 0,\]
where \(\mathcal{Q}\) is a torsion sheaf defined by this sequence. Taking the cohomology, we have
\[\begin{array}{rl}0\to H^{0}(\varphi(C),\omega_{X}|_{\varphi(C)})&\to H^{0}( \varphi(C),\varphi_{*}\varphi^{*}\omega_{X})\to H^{0}(\mathcal{Q})\\ &\to H^{1}(\varphi(C),\omega_{X}|_{\varphi(C)})\to H^{1}(\varphi(C),\varphi_{* }\varphi^{*}\omega_{X})\to 0.\end{array}\]
Then, the map \(\varphi\) is semiregular in the above sense if
* the curve \(\varphi(C)\) is semiregular in the classical sense (that is, the inclusion \(\varphi(C)\to X\) is semiregular), and
* the map \(H^{0}(\varphi(C),\omega_{X}|_{\varphi(C)})\to H^{0}(\varphi(C),\varphi_{*} \varphi^{*}\omega_{X})\) is surjective.
Taking the dual, we have
\[0\to\operatorname{Hom}_{\mathcal{O}_{\varphi(C)}}(\varphi_{*}\varphi^{*} \omega_{X},\omega_{\varphi(C)})\to\operatorname{Hom}_{\mathcal{O}_{\varphi(C )}}(\omega_{X}|_{\varphi(C)},\omega_{\varphi(C)})\to H^{0}(\mathcal{Q})^{ \vee},\]
where \(\omega_{\varphi(C)}\) is the dualizing sheaf of \(\varphi(C)\). The map \(H^{0}(\varphi(C),\omega_{X}|_{\varphi(C)})\to H^{0}(\varphi(C),\varphi_{*} \varphi^{*}\omega_{X})\) is surjective if and only if the map \(\operatorname{Hom}_{\mathcal{O}_{\varphi(C)}}(\omega_{X}|_{\varphi(C)}, \omega_{\varphi(C)})\to H^{0}(\mathcal{Q})^{\vee}\) is surjective. Note that we have
\[\operatorname{Hom}_{\mathcal{O}_{\varphi(C)}}(\omega_{X}|_{\varphi(C)}, \omega_{\varphi(C)})\cong\operatorname{Hom}_{\mathcal{O}_{\varphi(C)}}( \mathcal{O}_{\varphi(C)},\omega_{X}^{\vee}|_{\varphi(C)}\otimes\omega_{ \varphi(C)})\cong H^{0}(\varphi(C),\mathcal{N}_{\varphi(C)}).\]
The pairing between \(H^{0}(\mathcal{Q})\) and the image of \(H^{0}(\varphi(C),\mathcal{N}_{\varphi(C)})\) in \(H^{0}(\mathcal{Q})^{\vee}\) is given as follows. Namely, an element of \(H^{0}(\mathcal{Q})\) is represented by a germ \(\xi\) of the sheaf \(\varphi_{*}\varphi^{*}\omega_{X}\) (modulo the germs of the sheaf \(\omega_{X}|_{\varphi(C)}\)), which is a section of \(\omega_{X}\) whose coefficient belongs to \(\varphi_{*}\mathcal{O}_{C}\). Given a section \(\alpha\) of \(H^{0}(\varphi(C),\mathcal{N}_{\varphi(C)})\), since we have \(\omega_{X}|_{\varphi(C)}\otimes\mathcal{N}_{\varphi(C)}\cong\omega_{\varphi(C)}\), \(\xi\) pairs naturally with \(\alpha\) to give a germ of a section of \(\omega_{\varphi(C)}\) with coefficient in \(\varphi_{*}\mathcal{O}_{C}\). Pulling it back to \(C\) gives a germ of a meromorphic \(1\)-form on \(C\), and its residue is the value of the pairing.
Thus, if \(\varphi(C)\) is sufficiently ample and the normal sheaf \(\mathcal{N}_{\varphi(C)}\) has plenty of global sections, the map \(\operatorname{Hom}_{\mathcal{O}_{\varphi(C)}}(\omega_{X}|_{\varphi(C)}, \omega_{\varphi(C)})\to H^{0}(\mathcal{Q})^{\vee}\) will be surjective. Also, if \(\varphi(C)\) is sufficiently positive, it is semiregular in the classical sense. Thus, if \(\varphi(C)\) is sufficiently positive (compared to the number of singular points), the map \(\varphi\) is semiregular in the sense of Definition 25.
## 3. Deformation of singular curves
Let \(\varphi\colon C\to X\) be a map from a complete integral curve to a smooth surface as before. Let \(\{p_{1},\,\ldots,p_{e}\}\) be the set of points on \(C\) where \(\varphi\) is singular. Namely, we assume \(C\) is non-singular at \(p_{i}\) and \(d\varphi=0\) there, as in Section 2.2. At each \(p_{j}\), the image of \(\varphi\) is parameterized as \((z,w)=(s^{a_{j}},s^{b_{j}}+s^{b_{j}+1}g_{0}(s))\) for some integers \(1<a_{j}<b_{j}\). Let us write \(Z=(d\varphi)=\sum_{j=1}^{e}(a_{j}-1)p_{j}\). From each singular point \(p\in\{p_{1}\,\ldots,p_{e}\}\), there is a contribution to the obstruction controlled by the functions \(\sigma_{-l}(\mathbf{c})\) in the notation of the
previous section. We write the parameter \(c_{i}\) at \(p_{j}\) by \(c_{i}^{(j)}\) and \(\mathbf{c}\) by \(\mathbf{c}^{(j)}\) for clarity. Also, we write the functions \(\sigma_{-l}\) defined on a neighborhood of \(p_{j}\) by \(\sigma_{-l}^{(j)}\). We assume that each \(c_{i}^{(j)}\) belongs to \(t^{d_{j}i}\mathbb{C}[[t]]\), where \(d_{j}\) is a positive integer (to be fixed later, see Definition 34), and write \(c_{i}^{(j)}=t^{d_{j}i}\bar{c}_{i}^{(j)}\). Assume we have constructed an \(N\)-th order deformation \(\varphi_{N}\) of \(\varphi\) for some non-negative integer \(N\).
Recall that the function \(\sigma_{-l}^{(j)}(\mathbf{c}^{(j)})\) is a sum of polynomials of the form \(f_{b^{\prime}-l}^{(b^{\prime})}\), \(b^{\prime}\geq b_{j}\), \(l<b^{\prime}\), in the notation of Section 2.3.2. Also, recall that we assume \(b_{j}\) is not a multiple of \(a_{j}\). Among these functions, the ones with \(l<0\) are relevant to the calculation of the obstruction. Substituting \(c_{i}^{(j)}=t^{d_{j}i}\bar{c}_{i}^{(j)}\) to \(f_{b_{j}-l}^{(b_{j})}\), we have
\[f_{b_{j}-l}^{(b_{j})}(\mathbf{c}^{(j)})=t^{d_{j}(b_{j}-l)}f_{b_{j}-l}^{(b_{j}) }(\mathbf{\bar{c}}^{(j)}).\]
Properties of the functions \(f_{b_{j}-l}^{(b_{j})}\) are crucial to the study of obstructions.
### Functions \(F_{-n}^{(j)}\)
We introduce some functions related to \(f_{b_{j}-l}^{(b_{j})}\) for later purposes, see Definition 40. Recall that around a singular point \(p_{j}\) of \(\varphi\), we introduced a function \(S=s_{j}(1+\sum_{i=1}^{\infty}\prod_{l=0}^{i-1}(\frac{1}{a_{j}}-l)\frac{1}{i!}( \sum_{k=2}^{a_{j}}\frac{c_{k}}{s_{j}^{k}})^{i})\) by solving \(S^{a_{j}}=s_{j}^{a_{j}}+\sum_{i=0}^{a_{j}-2}c_{a_{j}-i}s_{j}^{i}\). We write it as
\[S=s_{j}(1+\sum_{i=-\infty}^{-1}\gamma_{-i}^{(j)}s_{j}^{i}).\]
Note that we have
\[\gamma_{-i}^{(j)}=f_{-i}^{(1)}(\mathbf{c}^{(j)})=\sum_{\lambda\in\mathcal{P}(- i;[2,a_{j}])}\begin{pmatrix}&\frac{1}{a_{j}}\\ \lambda(2)&\cdots&\lambda(a_{j})\end{pmatrix}c_{2}^{\lambda(2)}\cdots c_{a_{j} }^{\lambda(a_{j})}\]
in the notation of Section 2.3.2. We can solve this and express \(s_{j}\) in terms of \(S\),
\[s_{j}=S(1+\sum_{i=-\infty}^{-1}\theta_{-i}^{(j)}S^{i}).\]
Furthermore, for a negative integer \(l\), we write
\[s_{j}^{l}=S^{l}\sum_{i=-\infty}^{0}\Theta_{-i}^{(j;l)}S^{i}.\]
Using this notation, we introduce the following functions.
**Definition 27**.: For a positive integer \(n\), we define the function \(F_{-n}^{(j)}\) by
\[F_{-n}^{(j)}=\sum_{i=-n}^{-1}\Theta_{i+n}^{(j;i)}f_{b_{j}-i}^{(b_{j})}.\]
The function \(\Theta_{-i}^{(j;l)}\) is given by
\[\Theta_{-i}^{(j;l)}=\sum_{\lambda\in\mathcal{P}(-i;[2,\infty))}\begin{pmatrix}&l&\\ \lambda(2)&\lambda(3)&\cdots\end{pmatrix}\prod_{k=2}^{\infty}(\theta_{k}^{(j)})^{\lambda(k)}.\]
On the other hand, \(\theta_{k}\) is determined by the condition
\[\begin{array}{ll}S&=S(1+\sum_{i=-\infty}^{-1}\theta_{-i}^{(j)}S^{i})(1+\sum_{i=- \infty}^{-1}\gamma_{-i}^{(j)}(S(1+\sum_{n=-\infty}^{-1}\theta_{-n}^{(j)}S^{n})) ^{i})\\ &=S+\sum_{i=-\infty}^{-1}\theta_{-i}^{(j)}S^{i+1}+\sum_{i=-\infty}^{-1}\gamma_ {-i}^{(j)}S^{i+1}(1+\sum_{n=-\infty}^{-1}\theta_{-n}^{(j)}S^{n})^{i+1}.\end{array}\]
Thus, we have
\[\sum_{i=-\infty}^{-1}\theta_{-i}^{(j)}S^{i+1}=-\sum_{i=-\infty}^{-1}\gamma_{- i}^{(j)}S^{i+1}(1+\sum_{n=-\infty}^{-1}\theta_{-n}^{(j)}S^{n})^{i+1}. \tag{2}\]
Comparing the coefficients of \(S^{i+1}\) on the left and right hand sides, we see
\[\theta_{-i}^{(j)}=-\gamma_{-i}^{(j)}-\sum_{k=i+2}^{-2}\gamma_{-k}^{(j)}\Big(\sum_{n=i-k}^{-1}\frac{(k+1)\cdot k\cdot\cdots\cdot(k+n+2)}{(-n)!}\sum_{\lambda\in\mathcal{P}_{-n}(k-i;[2,\infty))}\begin{pmatrix}&-n&\\ \lambda(2)&\lambda(3)&\cdots\end{pmatrix}\prod_{l=2}^{\infty}(\theta_{l}^{(j)})^{\lambda(l)}\Big),\]
where \(\mathcal{P}_{-n}(k-i;[2,\infty))\) is the set of partitions of \(k-i\) of length \(-n\), using integers larger than one.
Note that \(\gamma_{-i}^{(j)}\) is weighted homogeneous of degree \(-i\) in the sense of Definition 29 below. Also, we have \(\gamma_{1}^{(j)}=0\). We can recursively solve Eq.(2), and it is easy to see that we can write
\[\theta_{-i}^{(j)}=-\gamma_{-i}^{(j)}+O(\gamma^{2}),\]
where \(O(\gamma^{2})\) is the sum of monomials in the \(\gamma_{-k}^{(j)}\) which are quadratic or more, and \(\theta_{-i}^{(j)}\) is also weighted homogeneous of degree \(-i\). It follows that \(\Theta_{-i}^{(j;l)}\) is also weighted homogeneous of degree \(-i\). Therefore, the function \(F_{-n}^{(j)}\) is weighted homogeneous of degree \(b_{j}+n\).
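As an illustration of how the coefficients \(\theta_{-i}^{(j)}\) can be computed in practice, the following sketch (not part of the argument; it assumes sympy, and the truncation order \(K\) and the symbol names are our own choices) inverts \(S^{3}=s^{3}+c_{2}s+c_{3}\) order by order, writing \(s=S(1+\sum_{i\geq 2}\theta_{i}S^{-i})\). The leading terms \(\theta_{2}=-c_{2}/3\) and \(\theta_{3}=-c_{3}/3\) agree with \(\theta_{-i}^{(j)}=-\gamma_{-i}^{(j)}+O(\gamma^{2})\), since here \(\gamma_{2}=c_{2}/3\) and \(\gamma_{3}=c_{3}/3\).

```python
import sympy as sp

c2, c3, S = sp.symbols('c2 c3 S')
K = 5                                          # how many coefficients theta_i to determine
thetas = sp.symbols('theta2:%d' % (K + 2))     # theta2, ..., theta{K+1}
s_of_S = S*(1 + sum(th*S**(-i) for i, th in zip(range(2, K + 2), thetas)))

# s must satisfy s^3 + c2*s + c3 = S^3; theta_i is fixed by the coefficient of S^{3-i}
expr = sp.expand(s_of_S**3 + c2*s_of_S + c3 - S**3)
sol = {}
for idx, th in enumerate(thetas):
    i = idx + 2
    coeff = sp.expand(expr.coeff(S, 3 - i).subs(sol))
    sol[th] = sp.solve(sp.Eq(coeff, 0), th)[0]

for th, val in sol.items():
    print(th, '=', sp.factor(val))
```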
### The condition (T)
In this subsection, we introduce the condition (T), which ensures a certain transversality property of the set of polynomials \(\{f_{b+i}^{(b)}\}\) that appeared in Lemma 15. Our final goal is to prove the existence of deformations of \(\varphi\) when these conditions are met at the singular points \(p_{1},\ldots,p_{e}\).
**Definition 28**.: Let \(2<a<b\) be integers where \(b\) is not a multiple of \(a\). We say that the polynomials \(f_{b+1}^{(b)},\ldots,f_{b+a-1}^{(b)}\) satisfy the condition (T) at a point \(\tilde{\mathbf{c}}=(\tilde{c}_{2},\ldots,\tilde{c}_{a})\in\mathbb{C}^{a-1}\) if the hypersurfaces in \(\mathbb{C}^{a-1}\) defined by
\[\bar{f}_{b+j}^{(b)}(\mathbf{c})=f_{b+j}^{(b)}(\tilde{\mathbf{c}}),\ \ j\in\{1,\ldots,a-1\}\]
have a transversal intersection at \(\tilde{\mathbf{c}}\). Here,
\[\bar{f}_{b+j}^{(b)}(\mathbf{c})=f_{b+j}^{(b)}(\mathbf{c})+\sum_{k=2}^{j-1} \frac{(j-k)(c_{k}-\tilde{c}_{k})}{a}f_{b+j-k}^{(b)}(\tilde{\mathbf{c}})-\sum_{ k=2}^{j-1}\frac{j-k}{a}\sum_{l=2}^{k-2}\frac{a-l}{a}(c_{k-l}-\tilde{c}_{k-l}) \tilde{c}_{l}f_{b+j-k}^{(b)}(\tilde{\mathbf{c}}).\]
When \(a=2\), we say \(f_{b+1}^{(b)}\) satisfies the condition (T) at any \(\tilde{c}\in\mathbb{C}^{\times}\) by definition.
The additional terms \(\sum_{k=2}^{j-1}\frac{(j-k)(c_{k}-\tilde{c}_{k})}{a}f_{b+j-k}^{(b)}(\tilde{ \mathbf{c}})-\sum_{k=2}^{j-1}\frac{j-k}{a}\sum_{l=2}^{k-2}\frac{a-l}{a}(c_{k-l }-\tilde{c}_{k-l})\tilde{c}_{l}f_{b+j-k}^{(b)}(\tilde{\mathbf{c}})\) reflect the calculation in Section 2.4, see Lemma 59.
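For \(a=3\) the correction sums in Definition 28 are empty (the index \(k\) runs over \(2\le k\le j-1\le 1\)), so \(\bar{f}^{(b)}_{b+j}=f^{(b)}_{b+j}\) and the condition (T) amounts to the differentials of \(f^{(b)}_{b+1}\) and \(f^{(b)}_{b+2}\) being linearly independent at \(\tilde{\mathbf{c}}\). The sketch below (not part of the argument; it assumes sympy and uses the hypothetical values \(b=4\), \(\tilde{\mathbf{c}}=(1,2)\)) tests this by evaluating the Jacobian determinant.

```python
import sympy as sp

a, b = 3, 4
u, c2, c3 = sp.symbols('u c2 c3')

def f(j):
    """f^{(b)}_{b+j}: the coefficient of u^{b+j} in (1 + c2 u^2 + c3 u^3)^{b/a}."""
    expr = (1 + c2*u**2 + c3*u**3)**sp.Rational(b, a)
    return sp.series(expr, u, 0, b + a).removeO().coeff(u, b + j)

J = sp.Matrix([[sp.diff(f(j), v) for v in (c2, c3)] for j in (1, 2)])
det = sp.simplify(J.det().subs({c2: 1, c3: 2}))
print("Jacobian determinant at (1, 2):", det)   # nonzero, so condition (T) holds there
```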
Note that \(f_{b+j}^{(b)}\) is the sum of the monomials of the lowest degree of \(\sigma_{j}^{(b)}\) in view of the following Definition 29.
**Definition 29**.: We call a holomorphic function \(f\) of \(\mathbf{c}\in\mathbb{C}^{a-1}\) satisfying \(f(\alpha^{2}c_{2},\ldots,\alpha^{a}c_{a})=\alpha^{d}f(\mathbf{c})\) weighted homogeneous of the degree \(d\), where \(\alpha\) is any constant and \(d\) is a non-negative integer. In particular, the function \(f^{(b)}_{b+j}\) is weighted homogeneous of the degree \(b+j\).
In Proposition 31 below, the variables \(c_{2},\ldots,c_{a}\) of \(f^{(b)}_{b+j}\) will take values in \(\mathbb{C}[[t]]\) as before. More precisely, we will take \(c_{i}\in t^{di}\mathbb{C}[[t]]\) for a fixed positive integer \(d\). Accordingly, in the above definition of \(\bar{f}^{(b)}_{b+j}\), we may replace the constant \(\tilde{c}_{i}\) by some \(c_{i}(-\infty)\in t^{di}\mathbb{C}[[t]]\) satisfying \(c_{i}(-\infty)=t^{di}\tilde{c}_{i}\) mod \(t^{di+1}\). In this case, \(\bar{f}^{(b)}_{b+j}\) becomes
\[\begin{array}{c}\bar{f}^{(b)}_{b+j}(\mathbf{c})=f^{(b)}_{b+j}(\mathbf{c})+ \sum_{k=2}^{j-1}\frac{(j-k)(c_{k}-c_{k}(-\infty))}{a}f^{(b)}_{b+j-k}(\mathbf{c} (-\infty))\\ \hskip 14.226378pt-\sum_{k=2}^{j-1}\frac{j-k}{a}\sum_{l=2}^{k-2}\frac{a-l}{a}( c_{k-l}-c_{k-l}(-\infty))c_{l}(-\infty)f^{(b)}_{b+j-k}(\mathbf{c}(-\infty)), \end{array} \tag{3}\]
where \(\mathbf{c}(-\infty)=(c_{1}(-\infty),\ldots,c_{a-1}(-\infty))\). In the construction of deformations of \(\varphi\), we need to eliminate obstructions, which essentially involves solving a system of polynomial equations. These equations have solutions, primarily thanks to the transversality property guaranteed by the condition (T). However, the actual equations contain additional higher order terms, though these terms do not significantly affect the properties of the equations. Proposition 31 is formulated in such a way that it can be applied to cases with these extra terms. To simplify the description, we introduce the following notation.
**Definition 30**.: Fix \(\mathbf{c}(-\infty)=(c_{2}(-\infty),\ldots,c_{a}(-\infty))\in\mathbb{C}[[t]]\). We symbolically write \(o_{b+j}(\mathbf{c})=o_{b+j}(c_{2},\ldots,c_{a})\in\mathbb{C}[c_{2},\ldots,c_{ a}][[t]]\) for any series which can be expressed as a sum of terms of the following form
1. \(\alpha t^{l}c_{2}^{l_{2}}\cdots c_{a}^{l_{a}}\), \(l+d\sum_{p=2}^{a}pl_{p}\geq d(b+j)+1\), or
2. \(\alpha t^{l}c_{2}^{l_{2}}\cdots c_{a}^{l_{a}}(c_{k}-c_{k}(-\infty))(c_{l}-c_{ l}(-\infty))\), \(l+d\sum_{p=2}^{a}pl_{p}\geq d(b+j)-d(k+l)\),
where \(\alpha\) is a complex number, \(l,l_{2},\ldots,l_{a}\) are non-negative integers. The explicit form of \(o_{b+j}(\mathbf{c})\) may vary depending on the equations in which they appear.
In particular, a polynomial weighted homogeneous of the degree larger than \(b+j\) can be a summand of \(o_{b+j}\). The following calculation is crucial for solving the relevant polynomial equations.
**Proposition 31**.: _Fix a positive integer \(k\). Assume \(f^{(b)}_{b+1},\ldots,f^{(b)}_{b+a-1}\) satisfy the condition_ (T) _at \(\tilde{\mathbf{c}}\in\mathbb{C}^{a-1}\). Assume the system of equations_
\[\bar{f}^{(b)}_{b+j}(\mathbf{c})=t^{d(b+j)}f^{(b)}_{b+j}(\tilde{\mathbf{c}})+ o_{b+j}(\mathbf{c}),\ \ \text{mod}\ t^{d(b+j)+k},\ \ j\in\{1,\ldots,a-1\}\]
_has a solution \(\mathbf{c}(k-1)=(c_{2}(k-1),\ldots,c_{a}(k-1))\in(\mathbb{C}[[t]])^{a-1}\) satisfying \(c_{i}(k-1)=t^{di}\tilde{c}_{i}\) mod \(t^{di+1}\), \(i=1,\ldots,a-1\). Here, the functions \(\bar{f}^{(b)}_{b+j}(\mathbf{c})\) are defined by Eq.(3) with respect to some fixed \((c_{2}(-\infty),\ldots,c_{a}(-\infty))\) satisfying \(c_{i}(-\infty)=t^{di}\tilde{c}_{i}\) mod \(t^{di+1}\). Then, the same system of equations with \(k\) replaced by \(k+1\) has a solution \(\mathbf{c}(k)\) which extends the given solution in the sense that \(c_{i}(k)-c_{i}(k-1)=0\) mod \(t^{di+k}\) holds._
Proof.: In the case \(a=2\), it is easy to see that we have \(\bar{f}^{(b)}_{b+1}=f^{(b)}_{b+1}=c_{2}^{\frac{b+1}{2}}\). In this case, the proof of the claim is easy and will be omitted.
So, assume we have \(a>2\). First, we will perturb \(c_{2}(k-1),\ldots,c_{a}(k-1)\) so that the equation
\[\bar{f}^{(b)}_{b+1}(\mathbf{c})=t^{d(b+1)}f^{(b)}_{b+1}(\tilde{\mathbf{c}})+o_ {b+1}(\mathbf{c}),\ \ \text{mod}\ t^{d(b+1)+k+1}\]
holds. Let \(\tilde{h}_{d(b+1)+k}\) be the coefficient of \(t^{d(b+1)+k}\) in \(o_{b+1}(\mathbf{c}(k-1))-\bar{f}_{b+1}^{(b)}(\mathbf{c}(k-1))\). By the condition (T), we can take a complex vector
\[\tilde{\mathbf{c}}_{1}=(c_{2,1}\ldots,c_{a,1})\in\cap_{j\in\{2,\ldots,a-1\}} \ker d\bar{f}_{b+j}^{(b)}(\tilde{\mathbf{c}})\subset\mathbb{C}^{a-1}\]
such that
\[\bar{f}_{b+1}^{(b)}(\tilde{\mathbf{c}}+\varepsilon\tilde{\mathbf{c}}_{1})=f_{ b+1}^{(b)}(\tilde{\mathbf{c}})+\varepsilon\tilde{h}_{d(b+1)+k}+O(\varepsilon^{2})\]
holds for any small positive real number \(\varepsilon\). Then, we have
\[\begin{array}{l}\bar{f}_{b+1}^{(b)}(t^{2d}(\bar{c}_{2}(k-1)+t^{k}c_{2,1}), \ldots,t^{ad}(\bar{c}_{a}(k-1)+t^{k}c_{a,1}))\\ =t^{d(b+1)}f_{b+1}^{(b)}(\tilde{\mathbf{c}})+o_{b+1}(t^{2d}(\bar{c}_{2}(k-1)+ t^{k}c_{2,1}),\ldots,t^{ad}(\bar{c}_{a}(k-1)+t^{k}c_{a,1}))\mod t^{d(b+1)+k+1}, \end{array}\]
as required. Here, we write \(c_{l}(k-1)=t^{dl}\bar{c}_{l}(k-1)\). On the other hand, the equalities
\[\begin{array}{l}\bar{f}_{b+j}^{(b)}(t^{2d}(\bar{c}_{2}(k-1)+t^{k}c_{2,1}), \ldots,t^{ad}(\bar{c}_{a}(k-1)+t^{k}c_{a,1}))\\ =t^{d(b+j)}f_{b+j}^{(b)}(\tilde{\mathbf{c}})+o_{b+j}(t^{2d}(\bar{c}_{2}(k-1)+t^ {k}c_{2,1}),\ldots,t^{ad}(\bar{c}_{a}(k-1)+t^{k}c_{a,1})),\mod t^{d(b+j)+k}, \quad j\geq 2,\end{array}\]
still hold by the homogeneity of \(f_{b+j}^{(b)}\) and definition of \(o_{b+j}\).
To make the equations
\[\bar{f}_{b+j}^{(b)}(\mathbf{c})=t^{d(b+j)}f_{b+j}^{(b)}(\tilde{\mathbf{c}})+o_ {b+j}(\mathbf{c}),\mod t^{d(b+j)+k+1},\;\;j\geq 2\]
hold, we add appropriate vectors \(t^{k}\tilde{\mathbf{c}}_{j}\), where \(\tilde{\mathbf{c}}_{j}\in\cap_{l\in\{1,\ldots,a-1\}\setminus\{j\}}\ker d\bar{ f}_{b+l}^{(b)}(\tilde{\mathbf{c}})\), to \(\mathbf{c}(k-1)+t^{k}\tilde{\mathbf{c}}_{1}\) for all \(j\in\{2,\ldots,a-1\}\), as in the case of \(j=1\) above. By the condition \(\tilde{\mathbf{c}}_{j}\in\cap_{l\in\{1,\ldots,a-1\}\setminus\{j\}}\ker d\bar{ f}_{b+l}^{(b)}\), adding \(t^{k}\tilde{\mathbf{c}}_{j}\) changes \(\bar{f}_{b+l}^{(b)}\), \(l\in\{1,\ldots,a-1\}\setminus\{j\}\), only by quadratic or higher terms with respect to \(t^{k}\tilde{\mathbf{c}}_{j}\). It follows that adding \(t^{k}\tilde{\mathbf{c}}_{j}\) changes \(\bar{f}_{b+l}^{(b)}\), \(l\in\{1,\ldots,a-1\}\setminus\{j\}\), and \(o_{b+l}\) only at the orders higher than \(t^{d(b+l)+k}\). This shows that \(\mathbf{c}(k-1)+\sum_{j=1}^{a-1}t^{k}\tilde{\mathbf{c}}_{j}\) solves the given system of equations for \(k\) replaced by \(k+1\).
### Basis of the dual space of obstructions
Take an element \(\eta\) of \(H^{0}(C,\varphi^{*}\omega_{X}(Z))\). Fix a local coordinate system \(\{z_{j},w_{j}\}\) around \(\varphi(p_{j})\) on \(X\). Also, fix a local parameter \(s_{j}\) on \(C\) around \(p_{j}\) as in Section 2.3.
By Proposition 7, an obstruction cocycle to deforming \(\varphi\) can be represented by a set of local meromorphic sections of the sheaf \(\bar{\mathcal{N}}_{\varphi}\) on a suitable covering of \(C\). We take a covering \(\mathcal{U}\) as in Section 2.2. In particular, for each \(p_{j}\), there is a unique open subset \(U_{p_{j}}\) containing it.
The natural pairing between such a representative of the obstruction class and a section \(\eta\in H^{0}(C,\varphi^{*}\omega_{X}(Z))\) is also given by Proposition 7. Explicitly, recall that the sheaf \(\bar{\mathcal{N}}_{\varphi}\) is isomorphic to \(\varphi^{*}\omega_{X}^{-1}\otimes\omega_{C}(-Z)\). Therefore, \(\varphi^{*}\omega_{X}(Z)\) is isomorphic to \(\bar{\mathcal{N}}_{\varphi}^{\vee}\otimes\omega_{C}\). Write \(\eta\) in the form
\[\eta=\varphi^{*}(dz_{j}\wedge dw_{j})\widetilde{\eta},\]
on a neighborhood of \(p_{j}\), where \(\widetilde{\eta}\) is a local section of \(\mathcal{O}_{C}(Z)\). Using the notation in Section 2.3, the fiberwise pairing between \(\bar{\mathcal{N}}_{\varphi}\) and \(\bar{\mathcal{N}}_{\varphi}^{\vee}\otimes\omega_{C}\) is explicitly given by
\[\varphi^{*}(dz_{j}\wedge dw_{j})\widetilde{\eta}\otimes\xi\partial_{w_{j}} \mapsto\xi\widetilde{\eta}\varphi^{*}dz_{j}=a_{j}\xi\widetilde{\eta}s_{j}^{a_{j }-1}ds_{j},\]
where \(\xi\) is a meromorphic function on a neighborhood of \(p_{j}\) representing the obstruction class. We write this by \((\eta,\xi\partial_{w_{j}})\).
**Definition 32**.: For an element \(\eta\in H^{0}(C,\varphi^{*}\omega_{X}(Z))\), we define a subset \(psupp(\eta)\subset\{1,\ldots,e\}\times\mathbb{Z}_{>0}\) by the property that \((j,m)\in\{1,\ldots,e\}\times\mathbb{Z}_{>0}\) belongs to \(psupp(\eta)\) if and only if
\[Res_{p_{j}}(\eta,\frac{1}{s_{j}^{a_{j}-m}}\partial_{w_{j}})\neq 0.\]
This is equivalent to the condition that if we expand \(\widetilde{\eta}\) in terms of \(s_{j}\), its coefficient of \(\frac{1}{s_{j}^{m}}\) is non-zero.
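For concreteness, this equivalence can be checked directly from the fiberwise pairing above; the coefficients \(e_{k}\) below are introduced only for this verification. Writing \(\widetilde{\eta}=\sum_{k}e_{k}s_{j}^{k}\) near \(p_{j}\), we have
\[(\eta,\frac{1}{s_{j}^{a_{j}-m}}\partial_{w_{j}})=a_{j}\widetilde{\eta}s_{j}^{m-1}ds_{j},\qquad Res_{p_{j}}(\eta,\frac{1}{s_{j}^{a_{j}-m}}\partial_{w_{j}})=a_{j}e_{-m},\]
which is non-zero exactly when \(e_{-m}\neq 0\).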
**Remark 33**.: _The parameter \(s\) on the domain curve \(C\) was chosen so that the map \(\varphi\) is locally represented in the form \((z,w)=(s^{a},\;s^{b}+s^{b+1}g_{0}(s))\) as in Section 2.3. Here \(b>a\) and \(a\) does not divide \(b\). When we fix local coordinates \(z,w\) on the target \(X\), the choice of \(s\) with this property is unique. When we choose other coordinates \(z^{\prime},w^{\prime}\) on \(X\), with the point \(z^{\prime}=w^{\prime}=0\) corresponding to the image of a singularity of \(\varphi\), we will need to reparameterize the curve \(C\) to represent the map \(\varphi\) in the above form. However, if we write the new parameter as \(s^{\prime}\), it is related to the original by_
\[s^{\prime}=\alpha s+O(s^{a}),\]
_where \(\alpha\) is a nonzero constant. This replacement does not affect the condition \(Res_{p}(\eta,\frac{1}{s^{a-m}}\partial_{w})\neq 0\), so the definition of \(psupp(\eta)\) does not depend on the choice of the coordinates, as long as we choose them so that \((z,w)=(s^{a},\;s^{b}+s^{b+1}g_{0}(s))\) holds, as we always do in this paper._
The residue of the meromorphic 1-form \((\eta,\xi\partial_{w_{j}})\) is the local contribution at the point \(p_{j}\) to the obstruction paired with \(\eta\). In our case, the coefficient of \(s_{j}^{-m}\) of the section \(\xi\) has the form \(F_{-m}^{(j)}\) as defined in Definition 27, with some modification in higher order terms, see Section 4.2.
We prove our main theorem (Theorem 41 below) by explicitly constructing a deformation of \(\varphi\). First, we construct a basis of the space \(H^{0}(C,\varphi^{*}\omega_{X}(Z))\) suitable for our calculations. We introduce the integers \(M\) and \(d_{j}\), \(j=1,\ldots,e\), as follows.
**Definition 34**.: Let \(M\) be the least common multiple of \(b_{1}+1,\ldots,b_{e}+1\). Also, at each \(p_{j}\), we take \(d_{j}=\frac{M}{b_{j}+1}\). Note that with this definition, we have
\[f_{b_{j}+1}^{(b_{j})}(t^{2d_{j}}\tilde{c}_{2},\ldots,t^{a_{j}d_{j}}\tilde{c}_ {a_{j}})=t^{M}f_{b_{j}+1}^{(b_{j})}(\tilde{\mathbf{c}}).\]
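For instance, with illustrative values not taken from the text: if \(e=2\) and \((b_{1}+1,b_{2}+1)=(4,6)\), then
\[M=\operatorname{lcm}(4,6)=12,\qquad d_{1}=\frac{12}{4}=3,\qquad d_{2}=\frac{12}{6}=2,\]
so that \(t^{d_{j}(b_{j}+1)}=t^{M}\) for each \(j\).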
**Definition 35**.: We introduce a total order to the set \(\{1,\ldots,e\}\times\mathbb{Z}_{>0}\) by the rule that \((j,m)>(j^{\prime},m^{\prime})\) if and only if
1. \(d_{j}(b_{j}+a_{j}-m)<d_{j^{\prime}}(b_{j^{\prime}}+a_{j^{\prime}}-m^{\prime})\), or
2. \(d_{j}(b_{j}+a_{j}-m)=d_{j^{\prime}}(b_{j^{\prime}}+a_{j^{\prime}}-m^{\prime})\) and \(j>j^{\prime}\).
In particular, if \(j=j^{\prime}\), we have \((j,m)>(j,m^{\prime})\) if and only if \(m>m^{\prime}\).
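Continuing the illustrative values above, take \(a_{1}=2\) and \(a_{2}=3\) (so that \(b_{j}>a_{j}\) and \(a_{j}\) does not divide \(b_{j}\)). Then \(d_{1}(b_{1}+a_{1}-m)=3(5-m)\) and \(d_{2}(b_{2}+a_{2}-m)=2(8-m)\), and for example
\[(1,2)>(2,3)\ \text{since}\ 3\cdot 3=9<2\cdot 5=10,\qquad(2,2)>(1,1)\ \text{since}\ 2\cdot 6=3\cdot 4=12\ \text{and}\ 2>1.\]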
**Definition 36**.: For \(\eta\in H^{0}(C,\varphi^{*}\omega_{X}(Z))\), let \(P(\eta)=(j(\eta),m(\eta))\) be the maximal element of \(psupp(\eta)\) with respect to the above order. Using this notation, let us define \(ord(\eta)\in\mathbb{Z}\) by
\[ord(\eta)=d_{j(\eta)}(b_{j(\eta)}+a_{j(\eta)}-m(\eta)).\]
Note that we have \(ord(\eta)=\min_{(j,m)\in psupp(\eta)}\{d_{j}(b_{j}+a_{j}-m)\}\). We set \(ord(\eta)=\infty\) if \(\eta\) belongs to \(H^{0}(C,\varphi^{*}\omega_{X})\).
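With the same illustrative values, if \(psupp(\eta)=\{(1,2),(2,2)\}\), then \(P(\eta)=(1,2)\) and
\[ord(\eta)=d_{1}(b_{1}+a_{1}-2)=9=\min\{9,12\}.\]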
For a positive integer \(N\), let us define the subspace \(V_{N}\) of \(H^{0}(C,\varphi^{*}\omega_{X}(Z))\) by
\[V_{N}=\{\eta\in H^{0}(C,\varphi^{*}\omega_{X}(Z))\mid ord(\eta)\geq N\}.\]
Also, define \(V_{\infty}=H^{0}(C,\varphi^{*}\omega_{X})\). These form a decreasing sequence of subspaces
\[V_{\infty}\subset\cdots\subset V_{N+1}\subset V_{N}\subset\cdots\]
of \(H^{0}(C,\varphi^{*}\omega_{X}(Z))\). Let
\[V_{\infty}\subset V_{i_{k}}\subset\cdots\subset V_{i_{2}}\subset V_{i_{1}}=H^{0}(C,\varphi^{*}\omega_{X}(Z))\]
be the maximal strictly increasing subsequence. That is, we have \(V_{\infty}=V_{i_{k}+1}\neq V_{i_{k}}\) and \(V_{i_{j+1}}=V_{i_{j+1}-1}=\cdots=V_{i_{j}+1}\neq V_{i_{j}}\), for \(j=1,\ldots,k-1\).
We have a refinement of the above sequence
\[V_{i_{j+1}}\subset V_{i_{j},1}\subset V_{i_{j},2}\subset\cdots\subset V_{i_{ j},e}=V_{i_{j}},\]
where
\[V_{i_{j},n}=\{\eta\in H^{0}(C,\varphi^{*}\omega_{X}(Z))\mid ord(\eta)\geq i_{ j},\,\text{and}\,\,j(\eta)\leq n\,\,\text{if}\,\,ord(\eta)=i_{j}\},\]
using the notation \(P(\eta)=(j(\eta),m(\eta))\). Let
\[V_{i_{j+1}}\subset V_{i_{j},n_{1}}\subset V_{i_{j},n_{2}}\subset\cdots\subset V _{i_{j},n_{u_{j}}}=V_{i_{j}}\]
be the subsequence such that
\[\dim V_{i_{j},n_{r+1}}=\dim V_{i_{j},n_{r}}+1,\,\,\,r=0,\ldots,u_{j}-1,\]
where we define \(V_{i_{j},n_{0}}=V_{i_{j+1}}\), and \(u_{j}=\dim V_{i_{j}}-\dim V_{i_{j+1}}\).
For \(r=1,\ldots,u_{j}\), let \(\eta_{r}^{(i_{j})}\in V_{i_{j},n_{r}}\setminus V_{i_{j},n_{r-1}}\) be any vector. Let \(W_{i_{j}}\) be the subspace of \(H^{0}(C,\varphi^{*}\omega_{X}(Z))\) spanned by \(\{\eta_{1}^{(i_{j})},\ldots,\eta_{u_{j}}^{(i_{j})}\}\). Then, the following is obvious.
**Lemma 37**.: _There is a direct sum decomposition_
\[H^{0}(C,\varphi^{*}\omega_{X}(Z))=H^{0}(C,\varphi^{*}\omega_{X})\oplus W_{i_{ 1}}\oplus\cdots\oplus W_{i_{k}}.\]
_Also, the set \(\{\eta_{1}^{(i_{j})},\ldots,\eta_{u_{j}}^{(i_{j})}\}\) is a basis of \(W_{i_{j}}\). _
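In particular, an immediate dimension count from this decomposition (recorded here only as a consistency check) gives
\[\sum_{j=1}^{k}u_{j}=\dim H^{0}(C,\varphi^{*}\omega_{X}(Z))-\dim H^{0}(C,\varphi^{*}\omega_{X}).\]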
By Proposition 24, we have the following.
**Corollary 38**.: _Assume that we have an \(N\)-th order deformation \(\varphi_{N}\) of \(\varphi\) for some non-negative integer \(N\). Then, to prove the existence of deformations of \(\varphi_{N}\) over \(\mathbb{C}[t]/t^{N+2}\), it suffices to show that the pairings between the obstruction class and elements in \(\{\eta_{1}^{(i_{j})},\ldots,\eta_{u_{j}}^{(i_{j})}\}\), \(j=1,\ldots,k\), vanish. _
**Definition 39**.: We write \(\mathcal{I}=\cup_{j=1}^{k}\{\eta_{1}^{(i_{j})},\ldots,\eta_{u_{j}}^{(i_{j})}\}\).
### Main theorem
We use the same notation as in the previous subsection. Also, let \(\{(l_{1},m_{1}),\ldots,(l_{v},m_{v})\}\) be the subset of \(\{1,\ldots,e\}\times\mathbb{Z}_{>0}\) consisting of the elements satisfying \(d_{l_{q}}(b_{l_{q}}+a_{l_{q}}-m_{q})=i_{j}\), \(q=1,\ldots,v\). Let us take \(\eta\in\{\eta_{1}^{(i_{j})},\ldots,\eta_{u_{j}}^{(i_{j})}\}\). We write
\[\{(l_{1}^{\prime},m_{1}^{\prime}),\ldots,(l_{w}^{\prime},m_{w}^{\prime})\}= psupp(\eta)\cap\{(l_{1},m_{1}),\ldots,(l_{v},m_{v})\}.\]
Recall that we introduced the functions \(F_{-m}^{(j)}\) in Section 3.1.
**Definition 40**.: For \(\eta\in\{\eta_{1}^{(i_{j})},\ldots,\eta_{u_{j}}^{(i_{j})}\}\), define the equation \((\star_{\eta})\) on \(\prod_{j=1}^{e}\mathbb{C}^{a_{j}-1}\) by
\[(\star_{\eta})\quad\sum_{q=1}^{w}Res_{p_{l_{q}^{\prime}}}(\eta,F_{-(a_{l_{q}^{\prime}}-m_{q}^{\prime})}^{(l_{q}^{\prime})}(\tilde{\mathbf{c}}^{(l_{q}^{\prime})})s^{-(a_{l_{q}^{\prime}}-m_{q}^{\prime})}\partial_{w_{l_{q}^{\prime}}})=0,\]
where \(\tilde{\mathbf{c}}^{(l_{q}^{\prime})}\in\mathbb{C}^{a_{l_{q}^{\prime}}-1}\). We will be dealing with those constants of the form \(\mathbf{c}^{(j)}=(c_{2}^{(j)},\ldots,c_{a_{j}}^{(j)})\in(\mathbb{C}[[t]])^{a_{j }-1}\), where \(c_{i}^{(j)}=t^{d_{j}i}\tilde{c}_{i}^{(j)}+o(t^{d_{j}i+1})\), \(\tilde{c}_{i}^{(j)}\in\mathbb{C}\). We say that \(\{\mathbf{c}^{(j)}\}_{j=1,\ldots,e}\) satisfies the equation \((\star_{\eta})\) if \(\{\tilde{c}_{i}^{(j)}\}_{j=1,\ldots,e;i=2,\ldots,a_{j}}\in\prod_{j=1}^{e} \mathbb{C}^{a_{j}-1}\) does.
The following is the main theorem of this paper.
**Theorem 41**.: _Assume \(\varphi\) is semiregular in the sense of Definition 25. If there is a point \(\tilde{\mathbf{c}}^{(j)}\in\mathbb{C}^{a_{j}-1}\) at each \(p_{j}\in\{p_{1},\ldots,p_{e}\}\) satisfying the condition_ (T) _for \(j=1,\ldots,e\), and also the equations \((\star_{\eta})\) for all \(\eta\in\mathcal{I}\), then there is a deformation of \(\varphi\) which deforms the singularity of \(\varphi\) at each \(p_{j}\) non-trivially._
The equations \((\star_{\eta})\) can usually be replaced by much simpler conditions, see Section 5. Note that if \(\tilde{\mathbf{c}}\) satisfies (T), it is not a zero vector, because if \(\tilde{\mathbf{c}}=0\), we have \(\bar{f}_{b+j}^{(b)}=f_{b+j}^{(b)}\), and these polynomials are singular at \(\tilde{\mathbf{c}}=0\).
## 4. Proof of the main theorem
### Deformation up to the order \(t^{M-1}\)
Now, we begin the proof of Theorem 41. It is easy to show that the map \(\varphi\) has a deformation up to the order \(t^{M-1}\), as we will see shortly. For each \(j\in\{1,\ldots,e\}\), take \(\tilde{\mathbf{c}}^{(j)}\in\mathbb{C}^{a_{j}-1}\) at which the condition of Theorem 41 is satisfied.
Let \(M\) be the integer introduced in Definition 34. Recall that up to the order \(t^{M-1}\), there is no singular term in the expansion of \(S^{b_{j}}+S^{b_{j}+1}g_{0}(S)\) at each \(p_{j}\), see Section 2.3.1. This means that the expression \((z_{j},w_{j})=(S^{a_{j}},S^{b_{j}}+S^{b_{j}+1}g_{0}(S))\) still makes sense, and defines a curve whose image is the same as that of \(\varphi\). In particular, we can take local deformations such that the difference between them gives a section of the tangent sheaf of \(C\), which maps to zero in the sheaf \(\bar{\mathcal{N}}_{\varphi}\) in which the obstruction takes values. Thus, there is no obstruction to deforming \(\varphi\) up to the order \(t^{M-1}\).
### Leading terms of the obstruction
To construct higher order deformations, we need to know and control the obstruction classes. As we discussed in Section 2.3.1, at low orders where the expression \(S^{b}+S^{b+1}g_{0}(S)\) does not contain singular terms, the obstruction trivially vanishes. Then, at the order when \(S^{b}+S^{b+1}g_{0}(S)\) gives a singular term for the first time, the obstruction is contributed only from the singular points, and is calculated as in the proof of Proposition 14. At higher orders, there are additional contributions to the obstruction. In general, finding a representative of the obstruction class using meromorphic sections on open subsets, as discussed in Section 2.1 will be very hard. The point is that we can still find it for the leading order terms. In this subsection, we will calculate them.
We fix an open covering \(\{U_{i}\}\) of the domain \(C\) of the map \(\varphi\) as before. Let \(\{p_{1},\ldots,p_{e}\}\) be the set of singular points of the map \(\varphi\). Recall that the curve \(C\) is regular at \(p_{i}\), while \(C\) may have singular points elsewhere. Assume we have an \(N\)-th order deformation \(\varphi_{N}\colon C_{N}\to X\) of \(\varphi\) for some non-negative integer \(N\). The obstruction class is calculated
by the difference of local \((N+1)\)-th order deformations. As we saw in Section 2.1, such a class can be represented by a set of local meromorphic sections \(\{\xi_{i}\}\) of \(\bar{\mathcal{N}}_{\varphi}\) associated with the covering \(\{U_{i}\}\). We write by \(C_{N}\setminus\{p_{1},\ldots,p_{e}\}\) the locally ringed space, which is the restriction of the structure of a locally ringed space on \(C_{N}\) to the topological space underlying \(C\setminus\{p_{1},\ldots,p_{e}\}\). We first show that the restriction of \(\varphi_{N}\) to \(C_{N}\setminus\{p_{1},\ldots,p_{e}\}\) is an immersion. To see this, it suffices to prove the following.
**Lemma 42**.: _Let \(C^{\circ}_{N,i}\), \(i=1,2\), be flat deformations of \(C\setminus\{p_{1},\ldots,p_{e}\}\) over \(\mathbb{C}[t]/t^{N+1}\). Assume that there is a map \(\tau^{\circ}_{N}\colon C^{\circ}_{N,1}\to C^{\circ}_{N,2}\) over \(\operatorname{Spec}\mathbb{C}[t]/t^{N+1}\) which reduces to \(id_{C\setminus\{p_{1},\ldots,p_{e}\}}\) over \(\mathbb{C}[t]/t\). Then, \(\tau^{\circ}_{N}\) is an isomorphism._
Proof.: When \(N=0\), the claim is trivial. So, we assume \(N\) is a positive integer. It suffices to show that for each \(q\in C\setminus\{p_{1},\ldots,p_{e}\}\), there is an open neighborhood of it such that the restriction of \(\tau^{\circ}_{N}\) to it is an isomorphism. Note that \(q\) may be a singular point of the curve \(C\). Consider an affine open neighborhood \(U_{q}=\operatorname{Spec}R_{q}\) of \(q\) in \(C\setminus\{p_{1},\ldots,p_{e}\}\), where \(R_{q}\) is some ring. Let \(U_{q,N,i}=\operatorname{Spec}\mathcal{R}_{q,N,i}\), \(i=1,2\), be the restriction of the structure of locally ringed spaces on \(C^{\circ}_{N,i}\) to the topological space underlying \(U_{q}\). The restriction of \(\tau^{\circ}_{N}\) to \(U_{q,N,1}\) gives a map
\[\mathcal{R}_{q,N,2}\to\mathcal{R}_{q,N,1}.\]
We have a commutative diagram
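The following is only a natural candidate for this diagram, consistent with the induction below (writing \(\mathcal{R}_{q,N-1,i}\) for the reduction \(\mathcal{R}_{q,N,i}/t^{N}\mathcal{R}_{q,N,i}\)); the rows are exact by flatness, with the first horizontal map multiplication by \(t^{N}\) and the second the reduction, the left vertical map the identity and the other vertical maps induced by \(\tau^{\circ}_{N}\):
\[\begin{array}{ccccccccc}0&\to&R_{q}&\to&\mathcal{R}_{q,N,2}&\to&\mathcal{R}_{q,N-1,2}&\to&0\\ &&\downarrow&&\downarrow&&\downarrow&&\\ 0&\to&R_{q}&\to&\mathcal{R}_{q,N,1}&\to&\mathcal{R}_{q,N-1,1}&\to&0\end{array}\]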
By assumption, when \(N=1\), the map \(\mathcal{R}_{q,0,2}\to\mathcal{R}_{q,0,1}\) is the identity map of \(R_{q}\). Therefore, the map \(\mathcal{R}_{q,1,2}\to\mathcal{R}_{q,1,1}\) is also an isomorphism. By induction, it follows that the map \(\mathcal{R}_{q,N,2}\to\mathcal{R}_{q,N,1}\) is an isomorphism, too.
**Corollary 43**.: _The restriction of the map \(\varphi_{N}\) to \(C_{N}\setminus\{p_{1},\ldots,p_{e}\}\) is an immersion._
Proof.: For any point \(q\in C\setminus\{p_{1},\ldots,p_{e}\}\), there is an open subset \(U_{q}\subset C\setminus\{p_{1},\ldots,p_{e}\}\) such that the restriction of \(\varphi\) to \(U_{q}\) is an embedding. Apply Lemma 42 to the case where \(C^{\circ}_{N,1}\) is the restriction of the structure of a locally ringed space on \(C_{N}\) to \(U_{q}\) (which we write by \(U_{q}\) again), and \(C^{\circ}_{N,2}\) is the image (in the sense of Section 1.1) of the restriction of \(\varphi_{N}\) to it. Then, the restriction of \(\varphi_{N}\) to \(U_{q}\) is an isomorphism by Lemma 42. The claim follows from this.
**Corollary 44**.: _In the range \(N<M\), the curve \(C_{N}\setminus\{p_{1},\ldots,p_{e}\}\) is isomorphic to the product \(C\setminus\{p_{1},\ldots,p_{e}\}\times\operatorname{Spec}\mathbb{C}[t]/t^{N+1}\)._
Proof.: This follows from the fact that the image of \(\varphi_{N}\) is the same as that of \(\varphi\) for \(N<M\). In particular, we have a map \(\psi\colon C_{N}\setminus\{p_{1},\ldots,p_{e}\}\to C\setminus\{p_{1},\ldots,p_{e}\}\times\operatorname{Spec}\mathbb{C}[t]/t^{N+1}\) which restricts to the identity map over \(\mathbb{C}[t]/t\). Then, the claim follows from Lemma 42.
**Remark 45**.: _If \(U\) is an open subset of \(C\), we will often use the same letter to denote the locally ringed space which is the restriction of \(C_{N}\) to \(U\), if any confusion would not happen._
We observe the following generalization of Proposition 14.
**Proposition 46**.: _In the range where \(N<2M-1\) holds, a representative \(\{\xi_{i}\}\) of the obstruction class to deforming \(\varphi_{N}\) can be taken so that \(\xi_{i}=0\) unless \(U_{i}=U_{p_{j}}\) for some \(p_{j}\), where \(U_{p_{j}}\) is the unique open subset in the covering \(\{U_{i}\}\) containing \(p_{j}\)._
Proof.: Let \(U_{1}\) and \(U_{2}\) be open subsets in \(C\setminus\{p_{1},\ldots,p_{e}\}\). Let \(F_{1}=0\) and \(F_{2}=0\) be defining equations, defined over \(\mathbb{C}[t]/t^{N+1}\), of the images of \(\varphi_{N}|_{U_{1}}\) and \(\varphi_{N}|_{U_{2}}\) on suitable open subsets \(W_{1}\) and \(W_{2}\) of \(X\), respectively. By taking these open subsets sufficiently small, we can assume
\[\varphi(U_{1}\cup U_{2})\cap(W_{1}\cap W_{2})=\varphi(U_{1}\cap U_{2})\]
holds. Recall that the image of \(\varphi_{N}\) is the same as that of \(\varphi\) over \(\mathbb{C}[t]/t^{M}\). That is, we can assume \(F_{1}=0\) and \(F_{2}=0\) reduce to the defining equations of the images of \(\varphi|_{U_{1}}\) and \(\varphi|_{U_{2}}\) over \(\mathbb{C}[t]/t^{M}\), respectively. In particular, \(F_{1}\) and \(F_{2}\) do not contain terms of the order lower than \(M\) with respect to \(t\). On the intersection \(U_{1}\cap U_{2}\), we have
\[F_{1}=g_{12}F_{2}\ \bmod t^{N+1},\]
where \(g_{12}\) is an invertible holomorphic function on an open subset of \(X\). The function \(g_{12}\) does not contain terms of the order lower than \(M\) with respect to \(t\), either.
One obtains local deformations of \(\varphi_{N}\) on \(U_{1}\) and \(U_{2}\) by considering the equations \(F_{1}=0\) and \(F_{2}=0\) as defined over \(\mathbb{C}[t]/t^{N+2}\). Note that, since \(\varphi\) is an immersion on \(C\setminus\{p_{1},\ldots,p_{e}\}\) by Corollary 43, this automatically fixes a local deformation of the domain curve \(C_{N}\) over \(U_{1}\) and \(U_{2}\). By the above observation, the equality
\[F_{1}=g_{12}F_{2}\ \bmod t^{N+2},\]
still holds in the range \(N<2M-1\). This implies that there is no contribution to the obstruction cocycle from the difference of these local deformations at these orders.
Now, take any local deformation of \(\varphi_{N}\) on the open subset \(U_{p_{j}}\) containing \(p_{j}\). Let \(U_{i}\) and \(U_{j}\) be open subsets of \(C\) that intersect \(U_{p_{j}}\), but do not contain \(p_{j}\). The difference between local deformations of \(\varphi_{N}\) on \(U_{p_{j}}\) and on such open subsets as \(U_{i}\) and \(U_{j}\) gives contributions to the obstruction cocycle. Recall that the difference between local deformations on \(U_{p_{j}}\) and on \(U_{i}\) gives a section of \(\bar{\mathcal{N}}_{\varphi}|_{U_{p_{j}}\cap U_{i}}\). Then, by the above observation, the restriction of such a section to \(U_{p_{j}}\cap U_{i}\cap U_{j}\) coincides with that associated with the open subsets \(U_{p_{j}}\) and \(U_{j}\), in the range \(N<2M-1\). It follows that the difference between local deformations on \(U_{p_{j}}\) and \(U_{i}\) can be extended to a meromorphic section of \(\bar{\mathcal{N}}_{\varphi}\) on \(U_{p_{j}}\). On the other hand, assign zero sections to the other open subsets of \(\{U_{i}\}\). Then, by construction, this represents the obstruction class in the sense of Proposition 7.
Now, we assume \(N\geq M-1\). Let \(\xi_{j}\) be the meromorphic section of \(\bar{\mathcal{N}}_{\varphi}\) on \(U_{p_{j}}\) constructed in the above proposition. For later purposes, we need to calculate \(\{\xi_{j}\}\) more precisely. Namely, we will show that under the condition \((\star_{\eta})\), \(\eta\in\mathcal{I}\), in Theorem 41, the obstruction contributed by \(\{\xi_{j}\}\) can be absorbed in the term \(o_{b+j}(\mathbf{c})\) in the system of equations of Proposition 31, which we will need to solve in later sections to construct
deformations of \(\varphi_{N}\). The calculation is rather subtle, since when we compare local deformations at higher orders, full non-linearity of various coordinate changes comes into play. We can achieve this by using the special nature of the deformation \(\varphi_{N}\). Namely, the fact that it has the same image as \(\varphi\) up to the order \(t^{M-1}\).
We use the same notation as in the proof of Proposition 46. The map \(\varphi_{N}\) has a parameterization on \(U_{p_{j}}\) of the form
\[\begin{array}{ll}(z_{j},w_{j})&=(s_{j}^{a_{j}}+c_{2}^{(j)}(N)s_{j}^{a_{j}-2}+\cdots+c_{a_{j}}^{(j)}(N),\sum_{l=0}^{\infty}\sigma_{-l}^{(j)}(\mathbf{c}^{(j)}(N))s_{j}^{l}+t^{M}H_{N}(s_{j},t))\\ &=(S^{a_{j}},S^{b_{j}}+S^{b_{j}+1}g_{0}(S)-\sum_{l=-\infty}^{-1}\sigma_{-l}^{(j)}(\mathbf{c}^{(j)}(N))s_{j}^{l}+t^{M}H_{N}(s_{j},t)),\ \ \mathrm{mod}\ t^{N+1},\end{array} \tag{4}\]
as in Section 2.4. Here, \(S=s_{j}(1+\sum_{i=1}^{\infty}\prod_{l=0}^{i-1}(\frac{1}{a_{j}}-l)\frac{1}{i!}( \sum_{k=2}^{a_{j}}\frac{c_{k}^{(j)}(N)}{s_{j}^{k}})^{i})\) and \(H_{N}\) is a holomorphic function. Also, \(\mathbf{c}^{(j)}(N)=(c_{2}^{(j)}(N),\ldots,c_{aj}^{(j)}(N))\), where \(c_{i}^{(j)}(N)\in t^{d_{j}i}\mathbb{C}[[t]]\). Considering this as an expression over \(\mathbb{C}[t]/t^{N+2}\), we obtain a local deformation of \(\varphi_{N}\) on \(U_{p_{j}}\). Precisely speaking, we first regard \(-\sum_{l=-\infty}^{-1}\sigma_{-l}^{(j)}(\mathbf{c}^{(j)}(N))s_{j}^{l}\) as an expression over \(\mathbb{C}[[t]]\), and reduce it over \(\mathbb{C}[t]/t^{N+2}\). Then, the term \(S^{b_{j}}+S^{b_{j}+1}g_{0}(S)-\sum_{l=-\infty}^{-1}\sigma_{-l}^{(j)}(\mathbf{ c}^{(j)}(N))s_{j}^{l}\) does not contain a pole up to the order \(N+1\) with respect to \(t\) after substituting \(S=s_{j}(1+\sum_{i=1}^{\infty}\prod_{l=0}^{i-1}(\frac{1}{a_{j}}-l)\frac{1}{i!}( \sum_{k=2}^{a_{j}}\frac{c_{k}^{(j)}(N)}{s_{j}^{k}})^{i})\). On the other hand, we regard \(t^{M}H_{N}(s_{j},t)\) simply as an expression over \(\mathbb{C}[t]/t^{N+2}\). In other words, it does not contain a term of the order \(N+1\) with respect to \(t\).
By taking a refinement of the open covering \(\{U_{i}\}\) if necessary, we assume that if \(U_{i}\) contains a singular point of the domain curve \(C\), it does not intersect any \(U_{p_{j}}\). Note that if \(U_{i}\) does not contain a singular point of \(C\), the restriction of the structure of a locally ringed space on \(C_{N}\) to \(U_{i}\) is isomorphic to the product \(U_{i}\times\operatorname{Spec}\mathbb{C}[t]/t^{N+1}\). The same holds for \(U_{p_{j}}\), since it does not contain a singular point of \(C\), either. Then, we have the following.
**Lemma 47**.: _On an open subset \(U_{i}\) which intersects \(U_{p_{j}}\) but does not contain \(p_{j}\), we have a local parameterization of \(\varphi_{N}\) of the form_
\[(z_{i},w_{i})=(f_{i}(s_{i})+t^{M}f_{i,1}(s_{i},t),g_{i}(s_{i})+t^{M}g_{i,1}(s_ {i},t)),\ \ \mathrm{mod}\ t^{N+1}.\]
_Here, \(f_{i},g_{i}\), \(f_{i,1}\) and \(g_{i,1}\) are holomorphic functions, \(s_{i}\) is a suitable local parameter on \(U_{i}\times\operatorname{Spec}\mathbb{C}[t]/t^{N+1}\) (that is, a function which reduces to a parameter on \(U_{i}\) over \(\mathbb{C}[t]/t\)), and \(\{z_{i},w_{i}\}\) is a suitable local coordinate system on \(X\)._
Proof.: By Corollaries 43 and 44, the reduction over \(\mathbb{C}[t]/t^{M}\) of the map \(\varphi_{N}|_{C_{N}\setminus\{p_{1},\ldots,p_{e}\}}\) is an immersion. Also, the image of \(\varphi_{N}|_{C_{N}\setminus\{p_{1},\ldots,p_{e}\}}\) is the same as that of \(\varphi|_{C\setminus\{p_{1},\ldots,p_{e}\}}\) over \(\mathbb{C}[t]/t^{M}\). Thus, by pulling back a fixed parameter on \(\varphi(U_{i})\), we obtain a parameter \(s_{i}\) on \(U_{i}\times\operatorname{Spec}\mathbb{C}[t]/t^{M}\). By extending this arbitrarily, we obtain a parameter on \(U_{i}\times\operatorname{Spec}\mathbb{C}[t]/t^{N+1}\). Using this parameter, it is clear that the map \(\varphi_{N}\) has a parameterization of the given form.
Regarding this parameterization as defined over \(\mathbb{C}[t]/t^{N+2}\), we have a local deformation of \(\varphi_{N}\) on \(U_{i}\). In the range \(N<2M-1\), this is a type of local deformations taken in the proof of Proposition 46. Thus, we can use it for the calculation of \(\xi_{j}\).
We now have local deformations of \(\varphi_{N}\) on \(U_{p_{j}}\) and on open subsets \(U_{i}\) which intersect \(U_{p_{j}}\). Since the obstruction is represented by the difference between them, we need to study their properties. First, let us clarify the relation between the coordinate functions used in these expressions. On the curve \(C_{N}\), we have a coordinate change between \(s_{i}\) and \(s_{j}\) defined over \(\mathbb{C}[t]/t^{N+1}\). When we compare local deformations of \(\varphi_{N}\) on \(U_{p_{j}}\) and \(U_{i}\), the relation between \(s_{i}\) and \(s_{j}\) is given simply by regarding this coordinate change over \(\mathbb{C}[t]/t^{N+1}\) as a relation over \(\mathbb{C}[t]/t^{N+2}\), so that \(s_{i}\) is described as a function of \(s_{j}\) which does not contain terms of the order \(t^{N+1}\) (in fact, any other choice which reduces to the given relation over \(\mathbb{C}[t]/t^{N+1}\) will suffice). Recall that the function \(S\) is related to \(s_{j}\) by \(S=s_{j}(1+\sum_{i=1}^{\infty}\prod_{l=0}^{i-1}(\frac{1}{a_{j}}-l)\frac{1}{i!} (\sum_{k=2}^{a_{j}}\frac{c_{k}^{(j)}(N)}{s_{j}^{k}})^{i})\). By solving this, we can express \(s_{j}\) in terms of \(S\) (see Section 3.1). Substituting it into \(s_{j}\), we obtain an expression of \(s_{i}\) in terms of \(S\), defined over \(\mathbb{C}[t]/t^{N+2}\).
On the other hand, on the target space \(X\), the functions \(z_{j}\) and \(w_{j}\) are expressed as holomorphic functions of \(z_{i}\) and \(w_{i}\) on the intersection of local charts. Note that
\[(z_{i},w_{i})=(f_{i}(s_{i}),g_{i}(s_{i})),\ \ \text{mod}\ t^{M}\]
gives a parameterization of the image of the map \(\varphi|_{U_{i}}\). Also, recall that
\[(z_{j},w_{j})=(S^{a},S^{b}+S^{b+1}g_{0}(S))\]
gives a parameterization of the image of the map \(\varphi\) restricted to \(U_{p_{j}}\setminus\{p_{j}\}\), the punctured neighborhood of \(p_{j}\). This holds at any order with respect to \(t\).
From these observations, we see the following. The point of this lemma is that although the relation between \(s_{i}\) and \(s_{j}\) may depend on \(t\) at the orders lower than \(t^{M}\) since the domain curve \(C\) may deform in general, the relation between \(s_{i}\) and \(S\) does not.
**Lemma 48**.: _The relation between \(s_{i}\) and \(S\) is given by_
\[s_{i}=a(S)+t^{M}b(S,t),\ \ \text{mod}\ t^{N+1},\]
_where \(a(S)\) and \(b(S,t)\) are holomorphic functions._
Proof.: By Corollary 43, the map \(\varphi_{N}\) restricted to \(C\setminus\{p_{1},\dots,p_{e}\}\) is an immersion. Then, by taking \(U_{i}\) small enough, we can assume that the image \(\varphi_{N}(U_{i})\) is isomorphic to \(U_{i}\). Note that we use the convention of Remark 45.
Then, \(s_{i}\) gives a local coordinate on the image \(\varphi_{N}(U_{i})\). On the other hand, the function \(S\) on \(C_{N}\), when reduced over \(\mathbb{C}[t]/t^{M}\), is the pull back of a fixed parameter on the punctured neighborhood of \(\varphi(p_{j})\) on \(\varphi(C)\) by the map \(\varphi_{N}\) reduced over \(\mathbb{C}[t]/t^{M}\). Since the reduction of \(\varphi_{N}\) over \(\mathbb{C}[t]/t^{M}\), when restricted to \(C\setminus\{p_{1},\dots,p_{e}\}\), does not depend on \(t\) by Corollaries 43 and 44, the coordinate change between \(s_{i}\) and \(S\) does not contain terms of the order lower than \(M\) with respect to \(t\). This proves the claim.
Now, let us calculate the section \(\xi_{j}\) associated with the open subset \(U_{p_{j}}\) in Proposition 46. First, let us compute the coordinate transformation of
\[(z_{i},w_{i})=(f_{i}(s_{i})+t^{M}f_{i,1}(s_{i},t),g_{i}(s_{i})+t^{M}g_{i,1}(s_ {i},t)),\ \ \text{mod}\ t^{N+2},\]
into \(\{z_{j},w_{j}\}\) in terms of \(S\). By the observation so far, it will be of the form
\[(z_{j},w_{j})=(S^{a}+t^{M}F(S,t),S^{b}+S^{b+1}g_{0}(S)+t^{M}G(S,t))\ \ \text{mod}\ t^{N+2},\]
where \(F,G\) are holomorphic functions.
**Lemma 49**.: _In the range \(N<2M-1\), the functions \(t^{M}F(S,t)\) and \(t^{M}G(S,t)\) do not contain terms of the order \(N+1\) with respect to \(t\)._
Proof.: Note that, by construction, the expression \((z_{i},w_{i})=(f_{i}(s_{i})+t^{M}f_{i,1}(s_{i},t),g_{i}(s_{i})+t^{M}g_{i,1}(s_{i },t))\), mod \(t^{N+2}\), does not contain terms of the order \(N+1\) with respect to \(t\). From this and the relation \(s_{i}=a(S)+t^{M}b(S,t)\), and the fact that the coordinate change between \(\{z_{i},w_{i}\}\) and \(\{z_{j},w_{j}\}\) does not depend on \(t\), the claim follows.
Also, note that the above expression for \((z_{j},w_{j})\) coincides with \((z_{j},w_{j})=(S^{a},S^{b}+S^{b+1}g_{0}(S)-\sum_{l=-\infty}^{-1}\sigma_{-l}^{( j)}(\mathbf{c}^{(j)}(N))s_{j}^{l}+t^{M}H_{N}(s_{j},t))\) over \(\mathbb{C}[t]/t^{N+1}\), see Eq.(4). Combined with the above lemma, we have the following.
**Lemma 50**.: _We have_
\[t^{M}F(S,t)=0,\ \ mod\ t^{N+2},\]
_in the range \(N<2M-1\). _
It follows that the local deformations of \(\varphi_{N}\) on \(U_{p_{j}}\) and \(U_{i}\) have the same \(z_{j}\)-part. Therefore, the difference between these local deformations is given by the coefficient of \(t^{N+1}\) of the difference of the \(w_{j}\)-part, namely,
\[t^{M}G(S,t)-(-\sum_{l=-\infty}^{-1}\sigma_{-l}^{(j)}(\mathbf{c}^{(j)}(N))s_{j }^{l}+t^{M}H_{N}(s_{j},t)) \tag{5}\]
after substituting \(S=s_{j}(1+\sum_{i=1}^{\infty}\prod_{l=0}^{i-1}(\frac{1}{a_{j}}-l)\frac{1}{i!} (\sum_{k=2}^{a_{j}}\frac{c_{k}^{(j)}(N)}{s_{j}^{k}})^{i})\). Note that it has no term lower than the order \(t^{N+1}\) by the remark before Lemma 50. Recall that \(t^{M}G(S,t)\) does not contain terms of the order \(N+1\) with respect to \(t\) before substituting \(S=s_{j}(1+\sum_{i=1}^{\infty}\prod_{l=0}^{i-1}(\frac{1}{a_{j}}-l)\frac{1}{i!} (\sum_{k=2}^{a_{j}}\frac{c_{k}^{(j)}(N)}{s_{j}^{k}})^{i})\), while \(-\sum_{l=-\infty}^{-1}\sigma_{-l}^{(j)}(\mathbf{c}^{(j)}(N))s_{j}^{l}+t^{M}H_ {N}(s_{j},t)\) does. We will evaluate (5) using these observations. Since the calculation is a little tricky, we outline the strategy:
* Although our ultimate goal is to evaluate (5) after substituting \(S=s_{j}(1+\sum_{i=1}^{\infty}\prod_{l=0}^{i-1}(\frac{1}{a_{j}}-l)\frac{1}{i!} (\sum_{k=2}^{a_{j}}\frac{c_{k}^{(j)}(N)}{s_{j}^{k}})^{i})\) to \(G(S,t)\), we begin by comparing \(G(S,t)\) with the result of substituting \(s_{j}=S(1+\sum_{i=-\infty}^{-1}\theta_{-i}^{(j)}S^{i})\) (see below) to \(-\sum_{l=-\infty}^{-1}\sigma_{-l}^{(j)}(\mathbf{c}^{(j)}(N))s_{j}^{l}+t^{M}H_ {N}(s_{j},t)\) (Lemma 51).
* Then, by studying the result of substituting \(S=s_{j}(1+\sum_{i=1}^{\infty}\prod_{l=0}^{i-1}(\frac{1}{a_{j}}-l)\frac{1}{i!} (\sum_{k=2}^{a_{j}}\frac{c_{k}^{(j)}(N)}{s_{j}^{k}})^{i})\) back to the result of Lemma 51 carefully (Lemmas 52, 53 and Proposition 54), we obtain the desired result (Corollary 55).
Now, we begin the study of the term \(t^{M}G(S,t)\) defined over \(\mathbb{C}[t]/t^{N+2}\). We recall some notations from Section 3.1. We can solve \(S=s_{j}(1+\sum_{i=1}^{\infty}\prod_{l=0}^{i-1}(\frac{1}{a_{j}}-l)\frac{1}{i!} (\sum_{k=2}^{a_{j}}\frac{c_{k}^{(j)}(N)}{s_{j}^{k}})^{i})\), and express \(s_{j}\) as a Laurent series of \(S\). Note that under the condition \(c_{k}^{(j)}(N)\in t^{d_{j}k}\mathbb{C}[[t]]\), if we write
\[S=s_{j}(1+\sum_{i=-\infty}^{-1}\gamma_{-i}^{(j)}s_{j}^{i}),\]
we have \(\gamma_{-i}^{(j)}\in t^{-d_{j}i}\mathbb{C}[[t]]\). Also, note that \(\gamma_{-i}^{(j)}=f_{-i}^{(1)}(\mathbf{c}^{(j)}(N))\) in the notation of Section 2.3.2. From this, it is not difficult to see that if we write
\[s_{j}=S(1+\sum_{i=-\infty}^{-1}\theta_{-i}^{(j)}S^{i}),\]
we also have \(\theta_{-i}^{(j)}\in t^{-d_{j}i}\mathbb{C}[[t]]\).
The following is easy to see.
**Lemma 51**.: _The term \(t^{M}G(S,t)\) is obtained by substituting \(s_{j}=S(1+\sum_{i=-\infty}^{-1}\theta_{-i}^{(j)}S^{i})\) to \(-\sum_{l=-\infty}^{-1}\sigma_{-l}^{(j)}(\mathbf{c}^{(j)}(N))s_{j}^{l}+t^{M}H_{ N}(s_{j},t)\) and discarding all the terms whose order is higher than \(N\) with respect to \(t\)._
Proof.: Since we have \(t^{M}G(S,t)-(-\sum_{l=-\infty}^{-1}\sigma_{-l}^{(j)}(\mathbf{c}^{(j)}(N))s_{j }^{l}+t^{M}H_{N}(s_{j},t))=0\), mod \(t^{N+1}\), the claim is obvious over \(\mathbb{C}[t]/t^{N+1}\). Then, the claim follows from Lemma 49.
To compute Eq.(5), we need to calculate the term of the order \(N+1\) with respect to \(t\) when we substitute \(S=s_{j}(1+\sum_{i=-\infty}^{-1}\gamma_{-i}^{(j)}s_{j}^{i})\) to \(t^{M}G(S,t)\). By substituting \(s_{j}=S(1+\sum_{i=-\infty}^{-1}\theta_{-i}^{(j)}S^{i})\) to \(s_{j}^{l}\), \(l<0\), we have
\[s_{j}^{l}=S^{l}(1+\sum_{i=-\infty}^{-1}\theta_{-i}^{(j)}S^{i})^{l}.\]
As in Section 3.1, we write this as
\[s_{j}^{l}=S^{l}\sum_{m=-\infty}^{0}\Theta_{-m}^{(j;l)}S^{m},\]
where \(\Theta_{-m}^{(j;l)}\in t^{-d_{j}m}\mathbb{C}[[t]]\). We write \(N-(M-1)=d_{j}n_{j}+r\), where \(0\leq r<d_{j}\). Since we have \(M=d_{j}(b_{j}+1)\), we can write it as
\[N+1=d_{j}(b_{j}+n_{j}+1)+r.\]
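As a quick numerical check with illustrative values not taken from the text: for \(d_{j}=3\), \(b_{j}=3\) (so \(M=12\)) and \(N=13\), we have \(N-(M-1)=2=3\cdot 0+2\), hence \(n_{j}=0\) and \(r=2\), and indeed
\[N+1=14=3(3+0+1)+2.\]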
For \(l<0\), let
\[(\sigma_{-l}^{(j)}(\mathbf{c}^{(j)}(N))S^{l}\sum_{m=-\infty}^{0}\Theta_{-m}^{ (j;l)}S^{m})^{\leq N}\]
be the sum of terms of \(\sigma_{-l}^{(j)}(\mathbf{c}^{(j)}(N))s_{j}^{l}=\sigma_{-l}^{(j)}(\mathbf{c}^{ (j)}(N))S^{l}\sum_{m=-\infty}^{0}\Theta_{-m}^{(j;l)}S^{m}\) whose order with respect to \(t\) is at most \(N\), in the expression using \(S\).
**Lemma 52**.: _Any term of \((\sigma_{-l}^{(j)}(\mathbf{c}^{(j)}(N))S^{l}\sum_{m=-\infty}^{0}\Theta_{-m}^{ (j;l)}S^{m})^{\leq N}\) has the order at least \(-n_{j}-1\) with respect to \(S\). When \(N+1\) is a multiple of \(d_{j}\), the bound is given by \(-n_{j}\). In particular, if we have \(-n_{j}-1>l\), \((\sigma_{-l}^{(j)}(\mathbf{c}^{(j)}(N))S^{l}\sum_{m=-\infty}^{0}\Theta_{-m}^{ (j;l)}S^{m})^{\leq N}\) is equal to zero._
Proof.: A term of \(S^{l}\sum_{m=-\infty}^{0}\Theta_{-m}^{(j;l)}S^{m}\) which is of the order \(n\leq l\) with respect to \(S\) has the coefficient in \(t^{d_{j}(l-n)}\mathbb{C}[[t]]\). Therefore, if a term of the order \(n\) with respect to \(S\) is contained in \((\sigma_{-l}^{(j)}(\mathbf{c}^{(j)}(N))S^{l}\sum_{m=-\infty}^{0}\Theta_{-m}^{ (j;l)}S^{m})^{\leq N}\), we have
\[d_{j}(b_{j}-l)+d_{j}(l-n)\leq N=d_{j}(b_{j}+n_{j}+1)+r-1.\]
It follows that the inequality
\[n\geq-n_{j}-1-\frac{r-1}{d_{j}}\]
holds. Since \(n\) is an integer and \(0\leq r<d_{j}\), the right-hand side is larger than \(-n_{j}-2\), so \(n\geq-n_{j}-1\); when \(r=0\), that is, when \(N+1\) is a multiple of \(d_{j}\), the right-hand side equals \(-n_{j}-1+\frac{1}{d_{j}}>-n_{j}-1\), so \(n\geq-n_{j}\). The claim follows from this.
**Lemma 53**.: _Assume we have \(-n_{j}-1\leq l\). If we substitute \(S=s_{j}(1+\sum_{i=-\infty}^{-1}\gamma_{-i}^{(j)}s_{j}^{i})\) to \((\sigma_{-l}^{(j)}(\mathbf{c}^{(j)}(N))S^{l}\sum_{m=-\infty}^{0}\Theta_{-m}^{( j;l)}S^{m})^{\leq N}\), then we have_
\[\sigma_{-l}^{(j)}(\mathbf{c}^{(j)}(N))s_{j}^{l}+\sum_{m=-n_{j}-1}^{l}o_{b_{j}-m }s_{j}^{m},\ \ \text{mod}\ t^{N+2},\]
_when \(N+1\) is not a multiple of \(d_{j}\), and_
\[\sigma_{-l}^{(j)}(\mathbf{c}^{(j)}(N))s_{j}^{l}-\sigma_{-l}^{(j)}(\mathbf{c}^{ (j)}(N))\Theta_{l+n_{j}+1}^{(j;l)}s_{j}^{-n_{j}-1}+\sum_{m=-n_{j}}^{l}o_{b_{j}- m}s_{j}^{m},\ \ \text{mod}\ t^{N+2},\]
_when \(N+1=d_{j}(b_{j}+n_{j}+1)\). Note that we have \(\Theta_{0}^{(j;l)}=1\). Here, \(o_{b_{j}-m}=o_{b_{j}-m}(\mathbf{c}^{(j)}(N))\) is the notation in Definition 30._
Proof.: Let \(J_{l}(s_{j},t)\) be the series obtained by substituting \(S=s_{j}(1+\sum_{i=-\infty}^{-1}\gamma_{-i}^{(j)}s_{j}^{i})\) to \((\sigma_{-l}^{(j)}(\mathbf{c}^{(j)}(N))S^{l}\sum_{m=-\infty}^{0}\Theta_{-m}^{ (j;l)}S^{m})^{\leq N}\). Then, \(J_{l}(s_{j},t)\) is the sum of \(\sigma_{-l}^{(j)}(\mathbf{c}^{(j)}(N))s_{j}^{l}\) and terms of the order at least \(N+1\) with respect to \(t\). We write it as
\[J_{l}(s_{j},t)=\sigma_{-l}^{(j)}(\mathbf{c}^{(j)}(N))s_{j}^{l}+\rho(s_{j},t).\]
Assume \(N+1\) is not a multiple of \(d_{j}\). Then, we have \(J_{l}(s_{j},t)=0\), mod \(t^{N+2}\), if \(N+1<d_{j}(b_{j}-l)\). If we have \(N+1>d_{j}(b_{j}-l)\), we can write \(J_{l}(s_{j},t)\) in the form
\[J_{l}(s_{j},t)=\sigma_{-l}^{(j)}(\mathbf{c}^{(j)}(N))s_{j}^{l}+\sum_{m=-n_{j} -1}^{l}o_{b_{j}-m}s_{j}^{m},\ \ \text{mod}\ t^{N+2}.\]
Namely, since we have \(d_{j}(b_{j}+n_{j}+1)<N+1\), the coefficient of \(s_{j}^{m}\), \(m\geq-n_{j}-1\), is absorbed in \(o_{b_{j}-m}\) when it is of the order \(N+1\) with respect to \(t\).
Assume \(N+1\) is a multiple of \(d_{j}\), that is, \(N+1=d_{j}(b_{j}+n_{j}+1)\). If \(-n_{j}-1=l\), so that \(N+1=d_{j}(b_{j}-l)\), it is clear that we have \((\sigma_{-l}^{(j)}(\mathbf{c}^{(j)}(N))S^{l}\sum_{m=-\infty}^{0}\Theta_{-m}^{ (j;l)}S^{m})^{\leq N}=0\). Assume \(-n_{j}-1<l\). Let us write by
\[(\sigma_{-l}^{(j)}(\mathbf{c}^{(j)}(N))S^{l}(\sum_{m=-\infty}^{0}\Theta_{-m}^{ (j;l)}S^{m}))^{=N+1}\]
the sum of the terms of \((\sigma_{-l}^{(j)}(\mathbf{c}^{(j)}(N))S^{l}(\sum_{m=-\infty}^{0}\Theta_{-m}^{ (j;l)}S^{m}))^{\leq N+1}\) whose coefficient is of the order \(N+1\) with respect to \(t\). In this case, if we substitute \(S=s_{j}(1+\sum_{i=-\infty}^{-1}\gamma_{-i}^{(j)}s_{j}^{i})\) to \((\sigma_{-l}^{(j)}(\mathbf{c}^{(j)}(N))S^{l}(\sum_{m=-\infty}^{0}\Theta_{-m}^{ (j;l)}S^{m}))^{\leq N+1}\), the result is \(\sigma_{-l}^{(j)}(\mathbf{c}^{(j)}(N))s_{j}^{l}\), mod \(t^{N+2}\). It follows that the term \(\rho(s_{j},t)\) in \(J_{l}(s_{j},t)\) and the term \((\sigma_{-l}^{(j)}(\mathbf{c}^{(j)}(N))S^{l}(\sum_{m=-\infty}^{0}\Theta_{-m}^{ (j;l)}S^{m}))^{=N+1}\) cancel, mod \(t^{N+2}\), after substituting \(S=s_{j}(1+\sum_{i=-\infty}^{-1}\gamma_{-i}^{(j)}s_{j}^{i})\) to the latter. On the other
hand, the term \((\sigma^{(j)}_{-l}(\mathbf{c}^{(j)}(N))S^{l}(\sum_{m=-\infty}^{0}\Theta^{(j:l)}_{ -m}S^{m}))^{=N+1}\) is written in the form
\[\sigma^{(j)}_{-l}(\mathbf{c}^{(j)}(N))\Theta^{(j:l)}_{l+n_{j}+1}S^{-n_{j}-1}+ \sum_{m=-n_{j}}^{l}o_{b_{j}-m}S^{m},\ \ \mathrm{mod}\ t^{N+2}.\]
The claim follows from this.
Now, we can show the following.
**Proposition 54**.: _If \(N+1=d_{j}(b_{j}+n_{j}+1)+r\) is not a multiple of \(d_{j}\), after substituting \(S=s_{j}(1+\sum_{i=-\infty}^{-1}\gamma_{-i}^{(j)}s_{j}^{i})\), we can write \(t^{M}G(S,t)\) in the form_
\[-\sum_{m=-n_{j}-1}^{-1}\sigma^{(j)}_{-m}(\mathbf{c}^{(j)}(N))s_{j}^{m}+\sum_{m =-\infty}^{-1}o_{b_{j}-m}s_{j}^{m}+t^{M}h(s_{j},t),\]
_over \(\mathbb{C}[t]/t^{N+2}\). Here, \(h(s_{j},t)\) is a holomorphic function which does not contain a negative power of \(s_{j}\). If \(N+1=d_{j}(b_{j}+n_{j}+1)\), we can write it in the form_
\[-\sum_{m=-n_{j}}^{-1}\sigma^{(j)}_{-m}(\mathbf{c}^{(j)}(N))s_{j}^{m}+\sum_{m=- n_{j}}^{-1}\sigma^{(j)}_{-m}(\mathbf{c}^{(j)}(N))\Theta^{(j;m)}_{m+n_{j}+1}s_{j}^ {-n_{j}-1}+\sum_{m=-\infty}^{-1}o_{b_{j}-m}s_{j}^{m}+t^{M}h(s_{j},t),\]
_over \(\mathbb{C}[t]/t^{N+2}\)._
Proof.: Recall that \(G(S,t)\) is given by
\[(-\sum_{l=-\infty}^{-1}\sigma^{(j)}_{-l}(\mathbf{c}^{(j)}(N))S^{l}\sum_{m=- \infty}^{0}\Theta^{(j:l)}_{-m}S^{m}+t^{M}H_{N}(S\sum_{m=-\infty}^{0}\Theta^{(j :1)}_{-m}S^{m},t))^{\leq N},\]
by Lemma 51. Note that \(H_{N}(s_{j},t)\) is a holomorphic function, that is, it does not contain a singular term with respect to \(s_{j}\). Therefore, for \(l<0\), the coefficient of \(S^{l}\) in \(t^{M}H_{N}(S\sum_{m=-\infty}^{0}\Theta^{(j:1)}_{-m}S^{m},t)\) belongs to \(t^{M+d_{j}(-l+1)}\mathbb{C}[[t]]\). Since we have \(M+d_{j}(-l+1)=d_{j}(b_{j}-l+2)>d_{j}(b_{j}-l)\), such a term can be absorbed in \(o_{b_{j}-l}S^{l}\). Thus, we can write
\[(t^{M}H_{N}(S\sum_{m=-\infty}^{0}\Theta^{(j:1)}_{-m}S^{m},t))^{\leq N}=\sum_{m =-\infty}^{-1}o_{b_{j}-m}S^{m}+t^{M}\tilde{h}(S,t),\]
for any \(N\). Here, \(\tilde{h}\) does not contain a negative power of \(S\). By the same argument, it is easy to see that after substituting \(S=s_{j}(1+\sum_{i=-\infty}^{-1}\gamma_{-i}^{(j)}s_{j}^{i})\), this is written in the form \(\sum_{m=-\infty}^{-1}o_{b_{j}-m}s_{j}^{m}+t^{M}h(s_{j},t)\).
On the other hand, by Lemmas 52 and 53, we have
\[(-\sum_{l=-\infty}^{-1}\sigma^{(j)}_{-l}(\mathbf{c}^{(j)}(N))S^{l}\sum_{m=- \infty}^{0}\Theta^{(j:l)}_{-m}S^{m})^{\leq N}=-\sum_{m=-n_{j}-1}^{-1}\sigma^{(j )}_{-m}(\mathbf{c}^{(j)}(N))s_{j}^{m}+\sum_{m=-\infty}^{-1}o_{b_{j}-m}s_{j}^{m },\ \ \mathrm{mod}\ t^{N+2},\]
when \(N+1\) is not a multiple of \(d_{j}\), and
\[(-\sum_{l=-\infty}^{-1}\sigma^{(j)}_{-l}(\mathbf{c}^{(j)}(N))S^{l}\sum_{m=- \infty}^{0}\Theta^{(j:l)}_{-m}S^{m})^{\leq N}\]
\[=-\sum_{m=-n_{j}}^{-1}\sigma^{(j)}_{-m}(\mathbf{c}^{(j)}(N))s_{j}^{m}+\sum_{m=- n_{j}}^{-1}\sigma^{(j)}_{-m}(\mathbf{c}^{(j)}(N))\Theta^{(j;m)}_{m+n_{j}+1}s_{j}^{-n_{j}- 1}+\sum_{m=-\infty}^{-1}o_{b_{j}-m}s_{j}^{m},\ \ \mathrm{mod}\ t^{N+2},\]
when \(N+1=d_{j}(b_{j}+n_{j}+1)\), after substituting \(S=s_{j}(1+\sum_{i=-\infty}^{-1}\gamma_{-i}^{(j)}s_{j}^{i})\). Note that in the latter case, \(\sigma_{n_{j}+1}^{(j)}(\mathbf{c}^{(j)}(N))s_{j}^{-n_{j}-1}\) is excluded. The claim follows from the observation so far.
Recall that we are calculating the difference \(t^{M}G(S,t)-(-\sum_{l=-\infty}^{-1}\sigma_{-l}^{(j)}(\mathbf{c}^{(j)}(N))s_{j}^ {l}+t^{M}H_{N}(s_{j},t))\), after substituting \(S=s_{j}(1+\sum_{i=-\infty}^{-1}\gamma_{-i}^{(j)}s_{j}^{i})\). By the above proposition, we have the following.
**Corollary 55**.: _If \(N+1=d_{j}(b_{j}+n_{j}+1)+r\) is not a multiple of \(d_{j}\), we can write_
\[t^{M}G(S,t)-(-\sum_{l=-\infty}^{-1}\sigma_{-l}^{(j)}(\mathbf{c}^{(j)}(N))s_{j} ^{l}+t^{M}H_{N}(s_{j},t))=\sum_{m=-\infty}^{-1}o_{b_{j}-m}s_{j}^{m}+t^{M}k(s_{j },t),\ \ \text{mod}\ t^{N+2}.\]
_If \(N+1=d_{j}(b_{j}+n_{j}+1)\), we can write_
\[\begin{array}{l}t^{M}G(S,t)-(-\sum_{l=-\infty}^{-1}\sigma_{-l}^{(j)}( \mathbf{c}^{(j)}(N))s_{j}^{l}+t^{M}H_{N}(s_{j},t))\\ =\sum_{m=-n_{j}-1}^{-1}\sigma_{-m}^{(j)}(\mathbf{c}^{(j)}(N))\Theta_{m+n_{j}+1} ^{(j;m)}s_{j}^{-n_{j}-1}+\sum_{m=-\infty}^{-1}o_{b_{j}-m}s_{j}^{m}+t^{M}k(s_{j },t),\ \ \text{mod}\ t^{N+2}.\end{array}\]
_Here, \(k(s_{j},t)\) is a holomorphic function which does not contain a negative power of \(s_{j}\). _
By Proposition 46, the functions in Corollary 55 give a representative \(\{\xi_{j}\}\) of the obstruction class.
Recall that the obstruction is calculated by the pairing between the local meromorphic section \(\xi_{j}\) on the open subset \(U_{p_{j}}\) and elements in \(\mathcal{I}\) of Definition 39 as in Proposition 7, in the range \(N<2M-1\). Now, let us consider these pairings. Note that the fiberwise pairing between elements of \(\mathcal{I}\) and a meromorphic section of \(\bar{\mathcal{N}}_{\varphi}|_{U_{p_{j}}}\) gives a meromorphic 1-form on \(U_{p_{j}}\).
Let us recall some notations introduced in Sections 3.3 and 3.4. Recall that we introduced the notation \(P(\eta)=(j(\eta),m(\eta))\) and \(ord(\eta)=d_{j(\eta)}(b_{j(\eta)}+a_{j(\eta)}-m(\eta))\) in Definition 36. We write \(d_{j(\eta)}(b_{j(\eta)}+a_{j(\eta)}-m(\eta))=i_{n}\) for some \(n\in\{1,\ldots,k\}\). Let \(\{(l_{1},m_{1}),\ldots,(l_{v},m_{v})\}\) be the subset of \(\{1,\ldots,e\}\times\mathbb{Z}_{>0}\) consisting of the elements satisfying \(d_{l_{q}}(b_{l_{q}}+a_{l_{q}}-m_{q})=i_{n}\), \(q=1,\ldots,v\). We write \(psupp(\eta)\cap\{(l_{1},m_{1}),\ldots,(l_{v},m_{v})\}=\{(l_{1}^{\prime},m_{1}^ {\prime}),\ldots,(l_{w}^{\prime},m_{w}^{\prime})\}\). Recall that when we have a deformation \(\varphi_{N}\) of \(\varphi\) over \(\mathbb{C}[t]/t^{N+1}\), we have constants \(\mathbf{c}^{(j)}(N)=(c_{2}^{(j)}(N),\ldots,c_{a_{j}}^{(j)}(N))\) at each singular point \(p_{j}\) of \(\varphi\). The term \(c_{i}^{(j)}(N)\) is of the form \(c_{i}^{(j)}(N)=t^{d_{j}i}\bar{c}_{i}^{(j)}(N)\), where \(\bar{c}_{i}^{(j)}(N)\in\bar{c}_{i}^{(j)}+t\mathbb{C}[[t]]\). We can consider the condition \((\star_{\eta})\) for the constants \((\tilde{c}_{2}^{(j)},\ldots,\tilde{c}_{a_{j}}^{(j)})\), \(j=1,\ldots,e\), see Definition 40.
**Corollary 56**.: _Assume that the condition \((\star_{\eta})\) holds for any \(\eta\in\mathcal{I}\). Then, for any \(\eta\in\mathcal{I}\) and \(N<2M-1\), the pairing between \(\eta\) and the obstruction cocycle to deforming \(\varphi_{N}\) in Proposition 46 is of the form \(o_{b_{j(\eta)}+a_{j(\eta)}-m(\eta)}(\mathbf{c}^{(j(\eta))}(N))\)._
Proof.: According to the definition of \(psupp(\eta)\) and \(P(\eta)=(j(\eta),m(\eta))\), when we examine the summands of \(\sum_{m=-\infty}^{-1}o_{b_{j}-m}(\mathbf{c}^{(j)}(N))s_{j}^{m}\) in Corollary 55 at each \(j=1,\ldots,e\), only the terms \(o_{b_{j}-m}(\mathbf{c}^{(j)}(N))s_{j}^{m}\), that satisfy the condition \(d_{j}(b_{j}-m)\geq d_{j(\eta)}(b_{j(\eta)}+a_{j(\eta)}-m(\eta))\), pair with \(\eta\) in a non-trivial manner. Consequently, the contribution from the part \(\sum_{m=-\infty}^{-1}o_{b_{j}-m}(\mathbf{c}^{(j)}(N))s_{j}^{m}\) of \(\xi_{j}\) to the pairing between \(\eta\) and the obstruction cocycle takes the form \(o_{b_{j(\eta)}+a_{j(\eta)}-m(\eta)}(\mathbf{c}^{(j(\eta))}(N))\). Note that the part \(t^{M}k(s_{j},t)\) pairs with \(\eta\) trivially,
for any \(j=1,\ldots,e\). This observation confirms the claim when \(N+1\) is not a multiple of \(d_{j(\eta)}\).
Now, let us assume that \(N+1\) is a multiple of \(d_{j(\eta)}\). If the equality \(N+1=d_{j(\eta)}(b_{j(\eta)}+a_{j(\eta)}-m(\eta))\) does not hold, it is easy to see that the pairing between \(\eta\) and the obstruction class takes the form \(o_{b_{j(\eta)}+a_{j(\eta)}-m(\eta)}(\mathbf{c}^{(j(\eta))}(N))\) by the construction of \(\mathcal{I}\). So, let us assume that the equality \(N+1=d_{j(\eta)}(b_{j(\eta)}+a_{j(\eta)}-m(\eta))\) holds. We use the notation before this corollary. Thus, we write \(d_{j(\eta)}(b_{j(\eta)}+a_{j(\eta)}-m(\eta))=i_{n}\) for some \(n\), and define the pairs \(\{(l^{\prime}_{1},m^{\prime}_{1}),\ldots,(l^{\prime}_{w},m^{\prime}_{w})\}\) as above. Then, the pairing between \(\eta\) and the obstruction cocycle is the sum of terms of the form \(o_{b_{j(\eta)}+a_{j(\eta)}-m(\eta)}(\mathbf{c}^{(j(\eta))}(N))\), and the terms contributed from
\[\sum_{m=-(a_{l^{\prime}_{r}}-m^{\prime}_{r})}^{-1}f_{b_{l^{\prime}_{r}}-m}^{(b _{l^{\prime}_{r}})}(\mathbf{c}^{(l^{\prime}_{r})}(N))\Theta_{m+a_{l^{\prime}_{ r}}-m^{\prime}_{r}}^{(l^{\prime}_{r};m)}s_{l^{\prime}_{r}}^{-(a_{l^{\prime}_{r}}-m^{ \prime}_{r})}=F_{-(a_{l^{\prime}_{r}}-m^{\prime}_{r})}^{(l^{\prime}_{r})}( \mathbf{c}^{(l^{\prime}_{r})}(N))s_{l^{\prime}_{r}}^{-(a_{l^{\prime}_{r}}-m^{ \prime}_{r})},\ \ r=1,\ldots,w.\]
Here, recall that \(f_{b_{l^{\prime}_{r}}-m}^{(b_{l^{\prime}_{r}})}\) is the leading term of \(\sigma_{-m}^{(l^{\prime}_{r})}\). This contribution is nothing but the one that appeared in the condition \((\star_{\eta})\). Thus, under the condition \((\star_{\eta})\), the pairing between \(\eta\) and the obstruction cocycle is of the form \(o_{b_{j(\eta)}+a_{j(\eta)}-m(\eta)}(\mathbf{c}^{(j(\eta))}(N))\).
In the range \(N\geq 2M-1\), the obstruction class may not be written in the form as in Proposition 46. However, the pairing between any section \(\eta\in\mathcal{I}\) and the obstruction cocycle is of the order \(t^{N+1}\). Since \(N+1\geq 2M>\max_{j=1,\ldots,e}\{d_{j}(b_{j}+a_{j}-1)\}\), the pairing is of the form \(o_{b_{j(\eta)}+a_{j(\eta)}-m(\eta)}(\mathbf{c}^{(j)}(N))\). Thus, we can conclude the following.
**Proposition 57**.: _For any \(N\geq M-1\), assume we have constructed a deformation \(\varphi_{N}\) of \(\varphi\) over \(\mathbb{C}[t]/t^{N+1}\). Also, assume that the condition \((\star_{\eta})\) holds for any \(\eta\in\mathcal{I}\). Let \(\xi_{N+1}\) be an obstruction cocycle to deforming \(\varphi_{N}\) obtained by the difference of local deformations of \(\varphi_{N}\). Then, we can take \(\xi_{N+1}\) so that the pairing between \(\eta\in\mathcal{I}\) and \(\xi_{N+1}\) is of the form \(o_{b_{j(\eta)}+a_{j(\eta)}-m(\eta)}(\mathbf{c}^{(j(\eta))}(N))\), where \(\mathbf{c}^{(j)}(N)=(c_{2}^{(j)}(N),\ldots,c_{a_{j}}^{(j)}(N))\) is given by Eq.(4) above. _
Note that \(c_{i}^{(j)}(N)\in t^{d_{j}i}\mathbb{C}[[t]]\), and the pairing between \(\eta\) and \(\xi_{N+1}\) gives an element of \(t^{N+1}\mathbb{C}[t]/t^{N+2}\). The coefficient of \(t^{N+1}\) is the value of the pairing between \(\eta\) and the obstruction cohomology class \(o_{N+1}\in H^{1}(C,\bar{\mathcal{N}}_{\varphi})\) represented by \(\xi_{N+1}\).
### Deformation at higher orders
Assume that we have constructed a deformation \(\varphi_{N}\) of \(\varphi\) up to the order \(t^{N}\) for some \(N\geq M-1\). In particular, we have fixed the constants \(\mathbf{c}^{(j)}(N)=(c_{2}^{(j)}(N),\ldots,c_{a_{j}}^{(j)}(N))\), for each \(j=1,\ldots,e\). Here, \(\{p_{1},\ldots,p_{e}\}\) is the set of singular points of \(\varphi\). We write \(c_{i}^{(j)}(N)=t^{d_{j}i}\bar{c}_{i}^{(j)}\), \(\bar{c}_{i}^{(j)}\in\hat{c}_{i}^{(j)}+t\mathbb{C}[[t]]\), and we assume that \((\hat{c}_{2}^{(j)},\ldots,\hat{c}_{a_{j}}^{(j)})\) satisfies the condition of Theorem 41.
Let us consider the deformation at the order \(t^{N+1}\). At this order, there is an obstruction cocycle of the order \(t^{N+1}\) defined by the difference of local deformations. Our goal is to make its cohomology class zero by making further modification to \(c_{i}^{(j)}(N)\), and to construct a deformation of \(\varphi\) of the order \(t^{N+1}\). In this process, we have two primary tasks. The first task is to describe the system of equations whose solution corresponds to the value of \(c_{2}^{(j)},\ldots,c_{a_{j}}^{(j)}\), at which the obstruction at the order \(t^{N+1}\) vanishes. We need to check that this system is of the form described in Proposition 31 in order to obtain a solution. This will be done in Proposition 60. Second, we need to verify that the solution actually
corresponds to some curve. Namely, to eliminate the obstruction at the order \(t^{N+1}\), we have to change the values of \(c_{2}^{(j)},\ldots,c_{a_{j}}^{(j)}\), at lower orders. Thus, it is unclear that using the new values of \(c_{2}^{(j)},\ldots,c_{a_{j}}^{(j)}\), we can deform \(\varphi\) even up to the order \(t^{N}\). We will show that with the new values of \(c_{2}^{(j)},\ldots,c_{a_{j}}^{(j)}\), we can construct a new deformation \(\bar{\varphi}_{N}\) of \(\varphi\) of the order \(t^{N}\), and the obstruction to deforming it vanishes. This is the content of Section 4.3.4.
#### 4.3.1. Local and virtual local deformations
Locally on a neighborhood \(U_{p_{j}}\) of \(p_{j}\), the map \(\varphi_{N}\) has a parameterization of the form
\[(z_{j},w_{j})=(s_{j}^{a_{j}}+c_{2}^{(j)}(N)s_{j}^{a_{j}-2}+\cdots+c_{a_{j}}^{(j)}(N),\sum_{l=0}^{\infty}\sigma_{-l}^{(j)}(\mathbf{c}^{(j)}(N))s_{j}^{l}+t^{M}H_{j}(s_{j},t)),\ \ \mathrm{mod}\ t^{N+1}, \tag{6}\]
where \(H_{j}(s_{j},t)\) is a holomorphic function on a neighborhood of \(p_{j}\).
To calculate the obstruction to deforming \(\varphi_{N}\) to the next order, we considered specific local deformations of \(\varphi_{N}\) in the previous subsection. Namely, regarding Eq.(6) as an expression over \(\mathbb{C}[t]/t^{N+2}\), it gives a local deformation of \(\varphi_{N}\) around \(p_{j}\). Away from the singular points of \(\varphi_{N}\), we take local deformations as in the proof of Proposition 46 in the range \(N<2M-1\). In the range \(N\geq 2M-1\), we take any local deformation. Let \(o_{N+1}\in H^{1}(C,\bar{\mathcal{N}}_{\varphi})\) be the obstruction class associated with these local deformations.
We will compare this with a curve given by the parameterization
\[(z_{j},w_{j})=(s_{j}^{a_{j}}+c_{2}^{(j)}(N+1)s_{j}^{a_{j}-2}+\cdots+c_{a_{j}}^ {(j)}(N+1),\sum_{l=0}^{\infty}\sigma_{-l}^{(j)}(\mathbf{c}^{(j)}(N+1))s_{j}^{ l}+t^{M}\bar{H}_{N}(s_{j},t)),\ \ \mathrm{mod}\ t^{N+2}, \tag{7}\]
around each \(p_{j}\), and find values of \(c_{i}^{(j)}(N+1)\) in a way that the map \(\varphi\) can be deformed up to the order \(t^{N+1}\). Here, we take
\[c_{i}^{(j)}(N+1)=c_{i}^{(j)}(N)+\delta_{i},\]
to be a modification of \(c_{i}^{(j)}(N)\), where \(\delta_{i}\in t^{d_{j}i+1}\mathbb{C}[[t]]\) is to be determined. By Lemma 16, for any choice of \(c_{i}^{(j)}(N+1)\), there is a change of parameters from \(s_{j}\) to \(s_{j}(N+1)\) on a punctured neighborhood of \(p_{j}\) such that
\[s_{j}(N+1)^{a_{j}}+c_{2}^{(j)}(N+1)s_{j}(N+1)^{a_{j}-2}+\cdots+c_{a_{j}}^{(j)} (N+1)=s_{j}^{a_{j}}+c_{2}^{(j)}(N)s_{j}^{a_{j}-2}+\cdots+c_{a_{j}}^{(j)}(N).\]
This holds over \(\mathbb{C}[[t]]\). Moreover, \(\bar{H}_{N}(s_{j},t)\) is a holomorphic function determined by \(H_{N}(s_{j},t)\) and \(c_{i}^{(j)}(N+1)\) by Lemma 23. In particular, we have \(\bar{H}_{N}(s_{j}(N+1),t)_{reg}=H_{N}(s_{j},t)\), \(\mathrm{mod}\ t^{N+2}\). Note that the parameterization Eq.(7) may not coincide with the restriction of \(\varphi_{N}\) over \(\mathbb{C}[t]/t^{N+1}\). That is, it is not a local deformation of \(\varphi_{N}\) in general, contrary to the one associated with Eq.(6) above. Therefore, we call the curve given by the parameterization Eq.(7) a _virtual local deformation_.
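To illustrate the change of parameters provided by Lemma 16 in the simplest case \(a_{j}=2\) (this expansion is only an illustration and is not used in the argument): solving \(s_{j}(N+1)^{2}+c_{2}^{(j)}(N+1)=s_{j}^{2}+c_{2}^{(j)}(N)\) gives
\[s_{j}(N+1)=(s_{j}^{2}-\delta_{2})^{\frac{1}{2}}=s_{j}(1-\frac{\delta_{2}}{2s_{j}^{2}}-\frac{\delta_{2}^{2}}{8s_{j}^{4}}-\cdots),\qquad\delta_{2}=c_{2}^{(j)}(N+1)-c_{2}^{(j)}(N)\in t^{2d_{j}+1}\mathbb{C}[[t]],\]
where the successive correction terms have strictly increasing order with respect to \(t\), so the expression makes sense over \(\mathbb{C}[[t]]\).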
#### 4.3.2. Comparison between local and virtual local deformations
Substituting \(s_{j}(N+1)\) to \(s_{j}\) on the right hand side of Eq.(7), we obtain a parameterization
\[\begin{array}{l}(z_{j},w_{j})\\ =(s_{j}^{a_{j}}+c_{2}^{(j)}(N)s_{j}^{a_{j}-2}+\cdots+c_{a_{j}}^{(j)}(N),\\ \sum_{l=0}^{\infty}\sigma_{-l}^{(j)}(\mathbf{c}^{(j)}(N+1))s_{j}(N+1)^{l}+t^{M }\bar{H}_{N}(s_{j}(N+1),t)),\\ =(s_{j}^{a_{j}}+c_{2}^{(j)}(N)s_{j}^{a_{j}-2}+\cdots+c_{a_{j}}^{(j)}(N),\\ \sum_{l=0}^{\infty}\sigma_{-l}^{(j)}(\mathbf{c}^{(j)}(N+1))s_{j}(N+1)^{l}+t^{M }(\bar{H}_{N}(s_{j}(N+1),t)_{sing}+H_{N}(s_{j},t))),\end{array}\]
\(\bmod t^{N+2}\), by Lemma 23. Now, the difference between local and virtual local deformations is given by the difference of the coordinate \(w_{j}\):
\[\begin{array}{l}\sum_{l=0}^{\infty}\sigma_{-l}^{(j)}(\mathbf{c}^{(j)}(N))s_ {j}^{l}+t^{M}H_{N}(s_{j},t)\\ \qquad\qquad-(\sum_{l=0}^{\infty}\sigma_{-l}^{(j)}(\mathbf{c}^{(j)}(N+1))s_{j} (N+1)^{l}+t^{M}(\bar{H}_{N}(s_{j}(N+1),t)_{sing}+H_{N}(s_{j},t)))\\ =-\sum_{l=-\infty}^{-1}\sigma_{-l}^{(j)}(\mathbf{c}^{(j)}(N))s_{j}^{l}+\sum_{ l=-\infty}^{-1}\sigma_{-l}^{(j)}(\mathbf{c}^{(j)}(N+1))s_{j}(N+1)^{l}-t^{M}\bar{H} _{N}(s_{j}(N+1),t)_{sing},\end{array} \tag{8}\]
\(\bmod t^{N+2}\), by Corollary 21. Note that for fixed \(\mathbf{c}^{(j)}(N)\), the parameter \(s_{j}(N+1)\) is determined by \(c_{i}^{(j)}(N+1)\). Thus, the coefficient of \(s_{j}^{l}\) in \(\sum_{l=-\infty}^{-1}\sigma_{-l}^{(j)}(\mathbf{c}^{(j)}(N+1))s_{j}(N+1)^{l}-t ^{M}\bar{H}_{N}(s_{j}(N+1),t)_{sing}\) is a function of \(\mathbf{c}^{(j)}(N+1)\), and we write it as \(\tilde{\sigma}_{-l}^{(j)}\), that is,
\[\sum_{l=-\infty}^{-1}\sigma_{-l}^{(j)}(\mathbf{c}^{(j)}(N+1))s_{j}(N+1)^{l}-t ^{M}\bar{H}_{N}(s_{j}(N+1),t)_{sing}=\sum_{l=-\infty}^{-1}\tilde{\sigma}_{-l}^ {(j)}(\mathbf{c}^{(j)}(N+1))s_{j}^{l}\]
over \(\mathbb{C}[t]/t^{N+2}\). We note the following equality.
**Lemma 58**.: _When \(c_{i}^{(j)}(N)=c_{i}^{(j)}(N+1)\), we have_
\[\tilde{\sigma}_{-l}^{(j)}(\mathbf{c}^{(j)}(N))=\sigma_{-l}^{(j)}(\mathbf{c}^ {(j)}(N)).\]
_This holds over \(\mathbb{C}[[t]]\)._
Proof.: Recall that when \(c_{i}^{(j)}(N)\) is fixed, both \(s_{j}(N+1)\) and \(\bar{H}_{N}\) are determined by \(c_{i}^{(j)}(N+1)=c_{i}^{(j)}(N)+\delta_{i}\), and when \(\delta_{i}=0\), we have \(s_{j}(N+1)=s_{j}\) and \(\bar{H}_{N}=H_{N}\). The claim follows from this.
In the argument below, the notation \(o_{b_{j}-l}\) symbolically refers to functions of the form described in Definition 30, and their explicit values may vary in different equations. Recall that \(o_{b_{j}-l}\) is an element of \(\mathbb{C}[c_{2}^{(j)},\ldots,c_{a_{j}}^{(j)}][[t]]\). In the following argument, \(c_{i}^{(j)}\) is regarded as a variable, while \(c_{i}^{(j)}(N)\) is a fixed element in \(\mathbb{C}[[t]]\) which plays the role of \(c_{i}(-\infty)\) in Definition 30. Based on Lemma 22, we observe the following.
**Lemma 59**.: _In the range \(-(a_{j}-1)\leq l\leq-1\), the term \(\tilde{\sigma}_{-l}^{(j)}(\mathbf{c}^{(j)})=\tilde{\sigma}_{-l}^{(j)}(c_{2}^{ (j)},\ldots,c_{a_{j}}^{(j)})\) can be written as_
\[\tilde{\sigma}_{-l}^{(j)}(\mathbf{c}^{(j)})=\bar{f}_{b_{j}-l}^{(b_{j})}( \mathbf{c}^{(j)})+o_{b_{j}-l}(\mathbf{c}^{(j)})\]
_in the notation of Proposition 31. Here \(c_{i}^{(j)}\) is a variable which takes values in \(c_{i}^{(j)}(N)+t^{d_{j}i+1}\mathbb{C}[[t]]\), and_
\[\bar{f}_{b_{j}+\alpha}^{(b_{j})}(\mathbf{c}^{(j)})=f_{b_{j}+\alpha}^{(b_{j})}( \mathbf{c}^{(j)})+\sum_{k=2}^{\alpha-1}\frac{(\alpha-k)(c_{k}^{(j)}-c_{k}^{(j) }(N))}{a_{j}}f_{b_{j}+\alpha-k}^{(b_{j})}(\mathbf{c}^{(j)}(N))\\ -\sum_{k=2}^{\alpha-1}\frac{\alpha-k}{a_{j}}\sum_{l=2}^{k-2}\frac{a_{j}-l}{a _{j}}(c_{k-l}^{(j)}-c_{k-l}^{(j)}(N))c_{l}^{(j)}(N)f_{b_{j}+\alpha-k}^{(b_{j}) }(\mathbf{c}^{(j)}(N)),\]
_as in Proposition 31._
Proof.: By definition, the term \(\tilde{\sigma}_{-l}^{(j)}(\mathbf{c}^{(j)})\) has contributions from \(\sum_{l=-\infty}^{-1}\sigma_{-l}^{(j)}(\mathbf{c}^{(j)})s_{j}(N+1)^{l}\) and from \(t^{M}\bar{H}_{N}(s_{j}(N+1),t)_{sing}\). First, we study the contribution from \(\sum_{l=-\infty}^{-1}\sigma_{-l}^{(j)}(\mathbf{c}^{(j)})s_{j}(N+1)^{l}\).
By Lemma 22, we have
\[\sum_{l=-\infty}^{-1}\sigma_{-l}^{(j)}(\mathbf{c}^{(j)})s_{j}(N+1)^{l}=\sum_{ l=-\infty}^{-1}(\sigma_{-l}^{(j)}(\mathbf{c}^{(j)})-\sum_{k=2}^{a_{j}}\frac{l+k}{a _{j}}\delta_{k}^{\prime}\bar{\sigma}_{-l-k}^{(j)}(\mathbf{c}^{(j)})+\sum_{k=a _{j}+1}^{\infty}(l+k)\varepsilon_{k}\bar{\sigma}_{-l-k}^{(j)}(\mathbf{c}^{(j) })+\nu_{l})s_{j}^{l},\]
using the notation there. Here, we write \(c_{i}^{(j)}-c_{i}^{(j)}(N)=\delta_{i}\). The constants \(\delta_{i}^{\prime}\) and \(\varepsilon_{i}\) are determined by \(\delta_{i}\) as in the proof of Lemma 16, and these constants in turn determine \(s_{j}(N+1)\). Moreover, \(\nu_{l}\) is the sum of terms which depend on \(\delta_{i}\), \(\delta_{i}^{\prime}\) and \(\varepsilon_{i}\) quadratically or more, and it is easy to see that \(\nu_{l}\) can be written in the form \(o_{b_{j}-l}(\mathbf{c}^{(j)})\). Note that in the sum \(\sum_{k=a_{j}+1}^{\infty}(l+k)\varepsilon_{k}\bar{\sigma}_{-l-k}^{(j)}( \mathbf{c}^{(j)}(N))\), we have \(-l-k<0\) for \(-(a_{j}-1)\leq l\leq-1\). Thus, \(\bar{\sigma}_{-l-k}^{(j)}=0\) by definition and we do not need to deal with this sum.
Note that we have
\[-\sum_{k=2}^{a_{j}}\frac{l+k}{a_{j}}\delta_{k}^{\prime}\bar{\sigma}_{-l-k}^{(j )}(\mathbf{c}^{(j)})=-\sum_{k=2}^{-l-1}\frac{l+k}{a_{j}}\delta_{k}^{\prime} \bar{\sigma}_{-l-k}^{(j)}(\mathbf{c}^{(j)}),\]
by definition of \(\bar{\sigma}_{-l}\). The term \(\frac{l+k}{a_{j}}\delta_{k}^{\prime}\bar{\sigma}_{-l-k}(\mathbf{c}^{(j)})\) can be written in the form
\[\begin{array}{l}\frac{l+k}{a_{j}}\delta_{k}^{\prime}\bar{\sigma}_{-l-k}( \mathbf{c}^{(j)})\\ =\frac{l+k}{a_{j}}(\delta_{k}-\sum_{m=2}^{k-2}\frac{a_{j}-m}{a_{j}}\delta_{k-m }c_{m}^{(j)}(N)+O(\delta^{2}))\bar{\sigma}_{-l-k}^{(j)}(\mathbf{c}^{(j)})\\ =(\frac{(l+k)(c_{k}^{(j)}-c_{k}^{(j)}(N))}{a_{j}}-\frac{l+k}{a_{j}}\sum_{m=2}^ {k-2}\frac{a_{j}-m}{a_{j}}(c_{k-m}^{(j)}-c_{k-m}^{(j)}(N))c_{m}^{(j)}(N)+O( \delta^{2}))\bar{\sigma}_{-l-k}^{(j)}(\mathbf{c}^{(j)}).\end{array}\]
Here, \(O(\delta^{2})\) is the sum of terms which depends on \(\delta_{i}\) quadratically or more, and it is easy to see that \(O(\delta^{2})\bar{\sigma}_{-l-k}^{(j)}(\mathbf{c}^{(j)})\) can be written in the form \(o_{b_{j}-l}(\mathbf{c}^{(j)})\).
Also, we have
\[\bar{\sigma}_{-l-k}(\mathbf{c}^{(j)})=f_{b_{j}-l-k}^{(b_{j})}(\mathbf{c}^{(j)} (N))+o_{b_{j}-l-k}^{(b_{j})}(\mathbf{c}^{(j)}(N)),\]
for \(-l-k>0\). Thus, we have
\[\begin{array}{l}\frac{l+k}{a_{j}}\delta_{k}^{\prime}\bar{\sigma}_{-l-k}( \mathbf{c}^{(j)})\\ =(\frac{(l+k)(c_{k}^{(j)}-c_{k}^{(j)}(N))}{a_{j}}-\frac{l+k}{a_{j}}\sum_{m=2}^ {k-2}\frac{a_{j}-m}{a_{j}}(c_{k-m}^{(j)}-c_{k-m}^{(j)}(N))c_{m}^{(j)}(N))f_{b_ {j}-l-k}^{(b_{j})}(\mathbf{c}^{(j)}(N))+o_{b_{j}-l}^{(b_{j})}(\mathbf{c}^{(j)} (N)).\end{array}\]
From these observations, we can write
\[\sum_{l=-\infty}^{-1}(\sigma_{-l}^{(j)}(\mathbf{c}^{(j)})-\sum_{k=2}^{-l-1} \frac{l+k}{a_{j}}\delta_{k}^{\prime}\bar{\sigma}_{-l-k}^{(j)}(\mathbf{c}^{(j)})+ \nu_{l})=\bar{f}_{b_{j}-l}^{(b_{j})}(\mathbf{c}^{(j)})+o_{b_{j}-l}(\mathbf{c}^{( j)}).\]
Finally, the coefficient of \(s_{j}^{l}\) in \(t^{M}\bar{H}_{N}(s_{j}(N+1),t)_{sing}\) has order at least \(M+d_{j}(-l+1)\) with respect to \(t\), and it can be absorbed into \(o_{b_{j}-l}\), since \(M>d_{j}b_{j}\). This proves the claim.
#### 4.3.3. Fixing a virtual local deformation
Let us recall some notations. At \(p_{j}\in\{p_{1},\ldots,p_{e}\}\), we write
\[S^{b_{j}}+S^{b_{j}+1}g_{0}(S)=\sum_{l=-\infty}^{\infty}\sigma_{-l}^{(j)}( \mathbf{c}^{(j)})s_{j}^{l},\]
as before. Here, \(S=s_{j}(1+\sum_{l=1}^{\infty}\prod_{i=0}^{l-1}(\frac{1}{a_{j}}-i)\frac{1}{l!}( \sum_{k=2}^{a_{j}}\frac{c_{k}^{(j)}}{s_{j}^{k}})^{l})\) and \(s_{j}\) is a local coordinate on \(C\) around \(p_{j}\). In particular, for \(-(a_{j}-1)\leq l\leq-1\), \(\sigma_{-l}^{(j)}\) has the form \(f_{b_{j}-l}^{(b_{j})}+o_{b_{j}-l}\) in the notation of Proposition 31.
Recall that we have a subset \(\mathcal{I}\) of \(H^{0}(C,\varphi^{*}\omega_{X}(Z))\), see Definition 39. The obstruction class associated with the map \(\varphi_{N}\) pairs with these sections. The map \(\varphi_{N}\) determines the constants \(\mathbf{c}^{(j)}(N)=(c_{2}^{(j)}(N),\ldots,c_{a_{j}}^{(j)}(N))\), \(c_{i}^{(j)}(N)\in t^{d_{j}i}\mathbb{C}[[t]]\) for each \(j=1,\ldots,e\). Of course, terms of \(c_{i}^{(j)}(N)\) of sufficiently high order with respect to \(t\) do not affect \(\varphi_{N}\). So, the map \(\varphi_{N}\) does not determine \(\mathbf{c}^{(j)}(N)\) uniquely, but this does not matter in the following argument. We need to change the values of \(\mathbf{c}^{(j)}(N)\) to \(\mathbf{c}^{(j)}(N+1)\) to cancel the obstruction.
In this subsection, we prove the following. Recall that \(o_{N+1}\in H^{1}(C,\bar{\mathcal{N}}_{\varphi})\) denotes the obstruction class to deforming \(\varphi_{N}\).
**Proposition 60**.: _Given \(\mathbf{c}^{(j)}(N)\), \(j=1,\ldots,e\), satisfying \((\star_{\eta})\) for any \(\eta\) (see Definition 40), there is a set of series \(\mathbf{c}^{(j)}(N+1)\in\mathbb{C}[[t]]^{a_{j}-1}\) which satisfy the following conditions._
1. _The equality_ \[c_{i}^{(j)}(N+1)-c_{i}^{(j)}(N)=0\ \begin{cases}\text{mod }t^{d_{j}i+N+1-d_{j}(b_{j} +a_{j}-1)}\text{, when }N\geq d_{j}(b_{j}+a_{j}-1),\\ \text{mod }t^{d_{j}i+1}\text{, otherwise,}\end{cases}\] _holds for_ \(j=1,\ldots,e\)_,_ \(i=2,\ldots,a_{j}\)_._
2. _The equality_ \[\tilde{\sigma}_{-l}^{(j)}(\mathbf{c}^{(j)}(N+1))=\sigma_{-l}^{(j)}(\mathbf{c} ^{(j)}(N)),\ \text{ mod }t^{N+1},\] _holds for_ \(l<0\)_,_ \(j=1,\ldots,e\)_._
3. _The equality_ \[\sum_{j=1}^{e}Res_{p_{j}}(\eta,\sum_{l=-\infty}^{-1}(\sigma_{-l}^{(j)}( \mathbf{c}^{(j)}(N))-\tilde{\sigma}_{-l}^{(j)}(\mathbf{c}^{(j)}(N+1)))s_{j}^{l }\partial_{w_{j}})=t^{N+1}(\eta,o_{N+1}),\ \text{ mod }t^{N+2},\] _holds for any_ \(\eta\in\mathcal{I}\)_. Here,_ \((\eta,o_{N+1})\) _is the pairing between_ \(H^{0}(C,\varphi^{*}\omega_{X}(Z))\) _and_ \(H^{1}(C,\bar{\mathcal{N}}_{\varphi})\cong H^{0}(C,\varphi^{*}\omega_{X}(Z))^{ \vee}\)_._
Proof.: We will construct such \(c_{i}^{(j)}(N+1)\) by a bootstrapping type argument based on Proposition 31. To use Proposition 31, we consider the equation
\[(\star_{n,\eta})\quad\sum_{j=1}^{e}Res_{p_{j}}(\eta,\sum_{l=-\infty}^{-1}( \sigma_{-l}^{(j)}(\mathbf{c}^{(j)}(N))-\tilde{\sigma}_{-l}^{(j)}(\mathbf{c}^{( j)}))s_{j}^{l}\partial_{w_{j}})=t^{N+1}(\eta,o_{N+1}),\ \text{ mod }t^{n+1},\]
for any non-negative integer \(n\), not just for \(n=N+1\). Here, \(\mathbf{c}^{(j)}=(c_{2}^{(j)},\ldots,c_{a_{j}}^{(j)})\) are variables such that \(c_{i}^{(j)}\) takes values in \(c_{i}^{(j)}(N)+t^{d_{j}i+1}\mathbb{C}[[t]]\). Recall that \(P(\eta)=(j(\eta),m(\eta))\) is the largest element in \(psupp(\eta)\) with respect to the order on the set \(\{1,\ldots,e\}\times\mathbb{Z}_{>0}\) introduced in Definition 35. In the following lemma, we regard \(\mathbf{c}^{(j(\eta))}\) as variables and \(\mathbf{c}^{(j)}\), \(j\neq j(\eta)\), as constants in \((*_{n,\eta})\). Also, recall that \(c_{i}^{(j)}(N)\) is of the form \(c_{i}^{(j)}(N)=t^{d_{j}i}\tilde{c}_{i}^{(j)}+t^{d_{j}i+1}\mathbb{C}[[t]]\), where \(\tilde{c}_{i}^{(j)}\in\mathbb{C}\).
**Lemma 61**.: _The equation \((*_{n,\eta})\) can be written in the form_
\[\bar{f}_{b_{j(\eta)}+a_{j(\eta)}-m(\eta)}^{b_{j(\eta)}}(\mathbf{c}^{(j(\eta)) })=t^{d_{j(\eta)}(b_{j(\eta)}+a_{j(\eta)}-m(\eta))}f_{b_{j(\eta)}+a_{j(\eta)}- m(\eta)}^{(b_{j(\eta)})}(\tilde{\mathbf{c}}^{(j(\eta))})+o_{b_{j(\eta)}+a_{j( \eta)}-m(\eta)}(\mathbf{c}^{(j(\eta))}),\ \ \text{mod}\ t^{n+1}. \tag{9}\]
Proof.: Consider the summand \(Res_{p_{j}}(\eta,\sum_{l=-\infty}^{-1}(\sigma_{-l}^{(j)}(\mathbf{c}^{(j)}(N))- \tilde{\sigma}_{-l}^{(j)}(\mathbf{c}^{(j)}))s_{j}^{l}\partial_{w_{j}})\), \(j\neq j(\eta)\), in \((*_{n,\eta})\). Since \(c_{i}^{(j)}\) takes values in \(c_{i}^{(j)}(N)+t^{d_{j}i+1}\mathbb{C}[[t]]\), by Lemma 58, it is easy to see that \(Res_{p_{j}}(\eta,\sum_{l=-\infty}^{-1}(\sigma_{-l}^{(j)}(\mathbf{c}^{(j)}(N)) -\tilde{\sigma}_{-l}^{(j)}(\mathbf{c}^{(j)}))s_{j}^{l}\partial_{w_{j}})\) is contained in \(t^{d_{j}(b_{j}+a_{j}-k_{j})+1}\mathbb{C}[[t]]\), where \(k_{j}=\max\{k\mid(j,k)\in psupp(\eta)\}\) if \(\{k\mid(j,k)\in psupp(\eta)\}\) is non-empty and \(k_{j}=0\) otherwise. Here, we have \(d_{j}(b_{j}+a_{j}-k_{j})\geq d_{j(\eta)}(b_{j(\eta)}+a_{j(\eta)}-m(\eta))\) by the construction of \(\mathcal{I}\). Thus, \(Res_{p_{j}}(\eta,\sum_{l=-\infty}^{-1}(\sigma_{-l}^{(j)}(\mathbf{c}^{(j)}(N)) -\tilde{\sigma}_{-l}^{(j)}(\mathbf{c}^{(j)}))s_{j}^{l}\partial_{w_{j}})\) can be written in the form \(o_{b_{j(\eta)}+a_{j(\eta)}-m(\eta)}(\mathbf{c}^{(j(\eta))})\).
On the other hand, the summand
\[Res_{p_{j(\eta)}}(\eta,\sum_{l=-\infty}^{-1}(\sigma_{-l}^{(j(\eta))}(\mathbf{c }^{(j(\eta))}(N))-\tilde{\sigma}_{-l}^{(j(\eta))}(\mathbf{c}^{(j(\eta))}))s_{ j(\eta)}^{l}\partial_{w_{j(\eta)}})\]
can be written in the form \(-\bar{f}_{b_{j(\eta)}+a_{j(\eta)}-m(\eta)}^{(b_{j(\eta)})}(\mathbf{c}^{(j(\eta))})+f_{b_{j(\eta)}+a_{j(\eta)}-m(\eta)}^{(b_{j(\eta)})}(\mathbf{c}^{(j(\eta))}(N))+o_{b_{j(\eta)}+a_{j(\eta)}-m(\eta)}(\mathbf{c}^{(j(\eta))})\) up to a multiplicative constant, by the same argument as in Lemma 59. Note that the term \(f_{b_{j(\eta)}+a_{j(\eta)}-m(\eta)}^{(b_{j(\eta)})}(\mathbf{c}^{(j(\eta))}(N))\) comes from \(\sigma_{-l}^{(j(\eta))}(\mathbf{c}^{(j(\eta))}(N))\), \(l=-a_{j(\eta)}+m(\eta)\), and it can be written in the form \(t^{d_{j(\eta)}(b_{j(\eta)}+a_{j(\eta)}-m(\eta))}f_{b_{j(\eta)}+a_{j(\eta)}-m(\eta)}^{(b_{j(\eta)})}(\tilde{\mathbf{c}}^{(j(\eta))})+o_{b_{j(\eta)}+a_{j(\eta)}-m(\eta)}(\mathbf{c}^{(j(\eta))})\).
Finally, the term \(t^{N+1}(\eta,o_{N+1})\) can be written in the form \(o_{b_{j(\eta)}+a_{j(\eta)}-m(\eta)}(\mathbf{c}^{(j(\eta))})\) by Proposition 57.
Now, we define a system of equations of the form in Proposition 31 for each \(p_{j}\). For those \((j,k)\), \(j=1,\ldots,e\), \(1\leq k\leq a_{j}-1\), such that there is some \(\eta\in\mathcal{I}\) where \(P(\eta)=(j(\eta),m(\eta))=(j,k)\), we assign the equation given in Lemma 61 above.
For other \((j,k)\), we assign the equation
\[\tilde{\sigma}_{-(a_{j}-k)}^{(j)}(\mathbf{c}^{(j)})=\sigma_{-(a_{j}-k)}^{(j)}( \mathbf{c}^{(j)}(N)), \tag{10}\]
mod \(t^{n+1}\). This can be written in the form
\[\bar{f}_{b_{j}+a_{j}-k}^{(b_{j})}(\mathbf{c}^{(j)})=t^{d(b_{j}+a_{j}-k)}f_{b_{j }+a_{j}-k}^{(b_{j})}(\tilde{\mathbf{c}}^{(j)})+o_{b_{j}+a_{j}-k}(\mathbf{c}^{( j)}),\ \ \text{mod}\ t^{n+1},\ \ k\in\{1,\ldots,a_{j}-1\}.\]
**Definition 62**.: We write the equation (9) by \((S_{j(\eta),m(\eta)})_{n}\), when it is considered over \(\mathbb{C}[t]/t^{n+1}\). Similarly, we write the equation (10) by \((S_{j,k})_{n}\), when it is considered over \(\mathbb{C}[t]/t^{n+1}\).
The equations (9) and (10) give a system of equations of the form described in Proposition 31 for each \(j=1,\ldots,e\). For example, for the case \(j=1\), let \(\eta_{1}^{(1)},\ldots,\eta_{q_{1}}^{(1)}\) be the
subset of \(\mathcal{I}\) consisting of those \(\eta\) satisfying \(j(\eta)=1\), where \(P(\eta)=(j(\eta),m(\eta))\). Then, the system of equations consists of
\[(S_{1,m(\eta_{i}^{(1)})})_{d_{1}(b_{1}+a_{1}-m(\eta_{i}^{(1)}))+\beta},\ \ i=1, \ldots,q_{1},\]
coming from (9) and
\[(S_{1,k})_{d_{1}(b_{1}+a_{1}-k)+\beta},\ \ k\in\{1,\ldots,a_{1}-1\}\setminus\{m(\eta_{1}^{(1)}),\ldots,m(\eta_{q_{1}}^{(1)})\},\]
coming from (10). Here, \(\beta\) is an integer. Note that if we have \(i\neq j\), then \(m(\eta_{i}^{(1)})\neq m(\eta_{j}^{(1)})\). Similarly, let \((S_{j,k})_{d_{j}(b_{j}+a_{j}-k)+\beta}\), \(k=1,\ldots,a_{j}-1\), be the system of equations for the other \(j=2,\ldots,e\).
As we noted above, in general, the equation \((S_{j,k})_{n}\) associated with the point \(p_{j}\) depends on the variables \(\mathbf{c}^{(i)}=(c_{2}^{(i)},\ldots,c_{a_{i}}^{(i)})\), \(i\neq j\), attached to the other singular points of \(\varphi\). Here, each \(c_{l}^{(i)}\) takes values in \(t^{d_{i}l}\bar{c}_{l}^{(i)}+t^{d_{i}l+1}\mathbb{C}[[t]]\). Thus, we need to be careful when we apply Proposition 31 to the system of equations \((S_{j,k})_{n}\), \(k=1,\ldots,a_{j}-1\), for each \(j\), so that the solutions provided for each \(j\) are compatible with the system of equations \((S_{j^{\prime},k})_{n}\), \(k=1,\ldots,a_{j^{\prime}}-1\), associated with the other singular points. This point is addressed in Lemma 63 below. In fact, Lemma 63 asserts that if we modify the values of \(c_{l}^{(i)}\) as demonstrated in the proof of Proposition 31, the equations \((S_{j,k})_{n}\), \(j\neq i\), remain independent of how they are modified. This is important for the bootstrapping argument used below to work.
**Lemma 63**.: _Let \(l\in\{1,\ldots,e\}\) and let \(\beta\) be a positive integer. Let \(n\geq M-1\) be an integer. If we modify \(c_{i}^{(j)}\), \(j>l\), by adding a term \(\delta_{i}\in t^{d_{j}i+\beta}\mathbb{C}[[t]]\), the terms in the system of equations \((S_{l,k})_{n}\), \(k=1,\ldots,a_{l}-1\), are modified only at the orders higher than \(t^{d_{l}(b_{l}+a_{l}-k)+\beta}\)._
Proof.: If the equation \((S_{l,k})_{n}\) does not correspond to some \((*_{n,\eta})\), \(\eta\in\mathcal{I}\), the claim is obvious because the equation does not depend on \(c_{i}^{(j)}\), \(j\neq l\). Therefore, assume that \((S_{l,k})_{n}\) corresponds to some \((*_{n,\eta})\). If we write \(P(\eta)=(j(\eta),m(\eta))\), we have \(j(\eta)=l\) and \(m(\eta)=k\). In the equation \((*_{n,\eta})\), the part \(\tilde{\sigma}_{-q}^{(j)}(\mathbf{c}^{(j)})\) depends on \(c_{i}^{(j)}\).
By the construction of \(\{\eta_{1},\ldots,\eta_{\sigma}\}\), if \(j>l\), we have \(d_{l}(b_{l}+a_{l}-k)<d_{j}(b_{j}+a_{j}-k_{j})\), here \(k_{j}=\max\{\kappa\mid(j,\kappa)\in psupp(\eta)\}\) if \(\{k\mid(j,k)\in psupp(\eta)\}\) is non-empty and \(k_{j}=0\) otherwise. Thus, if we modify \(c_{i}^{(j)}\), \(j>l\), by adding a term \(\delta_{i}\in t^{d_{j}i+\beta}\mathbb{C}[[t]]\), the part of the term \(\tilde{\sigma}_{-q}^{(j)}(\mathbf{c}^{(j)})\) which pairs with \(\eta\) nontrivially will be modified only at the orders at least \(d_{j}(b_{j}+a_{j}-k_{j})+\beta>d_{l}(b_{l}+a_{l}-k)+\beta\). This proves the claim.
We also remark the following.
**Lemma 64**.: _The family \(\mathbf{c}^{(j)}=\mathbf{c}^{(j)}(N)\), \(j=1,\ldots,e\), gives a solution to all \((S_{l,k})_{d_{l}(b_{l}+a_{l}-k)}\), \(l=1,\ldots,e\), \(k=1,\ldots,a_{l}-1\)._
Proof.: If the equation \((S_{l,k})_{d_{l}(b_{l}+a_{l}-k)}\) does not come from an element of \(\mathcal{I}\), that is, it is of the form Eq.(10), the claim follows from Lemma 58. Assume that the equation \((S_{l,k})_{d_{l}(b_{l}+a_{l}-k)}\) is of the form \((*_{d_{l}(b_{l}+a_{l}-k),\eta})\) for some \(\eta\in\mathcal{I}\). In this case, we have \(l=j(\eta)\) and \(k=m(\eta)\), where \(P(\eta)=(j(\eta),m(\eta))\) as we noted above. Recall that the term \(t^{N+1}(\eta,o_{N+1})\) can be written in the form \(o_{b_{j(\eta)}+a_{j(\eta)}-m(\eta)}(\mathbf{c}^{(j(\eta))})\). In particular,
if \((\eta,o_{N+1})\) does not vanish, we have \(N+1\geq d_{j(\eta)}(b_{j(\eta)}+a_{j(\eta)}-m(\eta))+1\). Thus, over \(\mathbb{C}[t]/t^{d_{j(\eta)}(b_{j(\eta)}+a_{j(\eta)}-m(\eta))+1}\), \(t^{N+1}(\eta,o_{N+1})\) is zero. The claim follows from this observation and Lemma 58.
Now, we will construct a solution to these systems of equations over \(\mathbb{C}[t]/t^{N+2}\) by a bootstrapping type argument. Let us consider the case \(j=1\). Consider \((S_{1,k})_{d_{1}(b_{1}+a_{1}-k)+1}\), \(k=1,\ldots,a_{1}-1\), as equations with the variables \(\mathbf{c}^{(1)}\), and with \(\mathbf{c}^{(j)}=\mathbf{c}^{(j)}(N)\), \(j\neq 1\), regarded as constants. Then, by applying Proposition 31 to the solution given in Lemma 64, this system of equations has a solution \(\mathbf{c}^{(1)}[1]\).
Next, consider the case \(j=2\). As in Lemma 64, \(\mathbf{c}^{(j)}(N)\), \(j=1,\ldots,e\), give a solution to \((S_{2,k})_{d_{2}(b_{2}+a_{2}-k)}\), \(k=1,\ldots,a_{2}-1\). This is true even after we change the value of \(\mathbf{c}^{(1)}\) from \(\mathbf{c}^{(1)}(N)\) to \(\mathbf{c}^{(1)}[1]\). Namely, we have \(c_{i}^{(1)}(N)-c_{i}^{(1)}[1]\in t^{d_{1}i+1}\mathbb{C}[[t]]\), and this implies that modifying \(\mathbf{c}^{(1)}(N)\) to \(\mathbf{c}^{(1)}[1]\) possibly changes the equation \((S_{2,k})_{d_{2}(b_{2}+a_{2}-k)}\) only at orders higher than \(t^{d_{2}(b_{2}+a_{2}-k)}\). It follows that in fact it does not change the equation at all, since \((S_{2,k})_{d_{2}(b_{2}+a_{2}-k)}\) is an equation defined over \(\mathbb{C}[t]/t^{d_{2}(b_{2}+a_{2}-k)+1}\).
Now, consider the equations \((S_{2,k})_{d_{2}(b_{2}+a_{2}-k)+1}\), \(k=1,\ldots,a_{2}-1\), as equations with the variables \(\mathbf{c}^{(2)}\), and with \(\mathbf{c}^{(j)}=\mathbf{c}^{(j)}(N)\), \(j\neq 1,2\), and \(\mathbf{c}^{(1)}=\mathbf{c}^{(1)}[1]\) regarded as constants. Again, by Proposition 31, this system of equations has a solution \(\mathbf{c}^{(2)}[1]\). Here, an important point is that since \(c_{i}^{(2)}[1]\) is obtained by adding some element in \(t^{d_{2}i+1}\mathbb{C}[[t]]\) to \(c_{i}^{(2)}(N)\), it does not change the equation \((S_{1,k})_{d_{1}(b_{1}+a_{1}-k)+1}\), which is defined over \(\mathbb{C}[t]/t^{d_{1}(b_{1}+a_{1}-k)+2}\), by Lemma 63. In particular, \(\mathbf{c}^{(1)}[1]\) remains as a solution to the system \((S_{1,k})_{d_{1}(b_{1}+a_{1}-k)+1}\), \(k=1,\ldots,a_{1}-1\), although the term \(\mathbf{c}^{(2)}\) is replaced from \(\mathbf{c}^{(2)}(N)\) to \(\mathbf{c}^{(2)}[1]\). Repeating the same procedure for \(j=1,\ldots,e\), we obtain a solution \(\mathbf{c}^{(j)}[1]\) to all \((S_{j,k})_{d_{j}(b_{j}+a_{j}-k)+1}\), \(k=1,\ldots,a_{j}-1\).
Then, let us return to the case \(j=1\) and construct a solution \(\mathbf{c}^{(1)}[2]\) to \((S_{1,k})_{d_{1}(b_{1}+a_{1}-k)+2}\) by adding elements in \(t^{d_{1}i+2}\mathbb{C}[[t]]\) to \(c_{i}^{(1)}[1]\). Here, the elements \(c_{i}^{(j)}=c_{i}^{(j)}[1]\), \(j\geq 2\), are considered as constants. This can be achieved through Proposition 31. Since modifying \(c_{i}^{(1)}[1]\) in this way changes the equations \((S_{j,k})_{d_{j}(b_{j}+a_{j}-k)+1}\), \(j\geq 2\), only at the orders higher than \(d_{j}(b_{j}+a_{j}-k)+1\), it does not affect the validity of the solution \(\mathbf{c}^{(j)}=\mathbf{c}^{(j)}[1]\) to \((S_{j,k})_{d_{j}(b_{j}+a_{j}-k)+1}\), \(j\geq 2\). Then, repeat the procedure for all \(j=1,\ldots,e\), and return to \(j=1\) again. Iterating this \(q\) times until
\[\min_{j\in\{1,\ldots,e\},k\in\{1,\ldots,a_{j}-1\}}\{d_{j}(b_{j}+a_{j}-k)+q\}= \min_{j\in\{1,\ldots,e\}}\{d_{j}(b_{j}+1)+q\}=N+1\]
holds, we obtain the required solution \(\mathbf{c}^{(j)}(N+1)=\mathbf{c}^{(j)}[q]\). When we have \(N\geq d_{j}(b_{j}+a_{j}-1)\), instead of \((S_{j,k})_{d_{j}(b_{j}+a_{j}-k)}\), we can start solving the system of equations from \((S_{j,k})_{d_{j}(b_{j}+a_{j}-k)+N-d_{j}(b_{j}+a_{j}-1)}\). Then, by Proposition 31, the solution satisfies (1) of the claim of Proposition 60.
Finally, we show that this solution satisfies the claim of Proposition 60 (2). We note the following.
**Lemma 65**.: _For those \(l\) satisfying \(-(a_{j}-1)\leq l\leq-1\), which are not contained in the set \(\{-(a_{j(\eta)}-m(\eta))\,|\,\eta\in\mathcal{I}\text{ satisfies }j(\eta)=j\}\), the claim (2) holds._
Proof.: This is clear since for such \(l\), we have \(\tilde{\sigma}_{-l}^{(j)}(\mathbf{c}^{(j)}(N+1))=\sigma_{-l}^{(j)}(\mathbf{c}^{( j)}(N)),\mod t^{N+2}\), by construction.
Consider the case when \(l=-(a_{j(\eta_{0})}-m(\eta_{0}))\), where \(P(\eta_{0})=(j(\eta_{0}),m(\eta_{0}))\) is the smallest among \(P(\eta)\), \(\eta\in\mathcal{I}\), with respect to the order introduced in Definition 35. In this case, by substituting \(c_{i}^{(j)}=c_{i}^{(j)}(N+1)\) to the equation \((*_{N,\eta_{0}})\), we have
\[\sum_{j=1}^{e}Res_{p_{j}}(\eta_{0},\sum_{l=-\infty}^{-1}(\sigma_{-l}^{(j)}( \mathbf{c}^{(j)}(N))-\tilde{\sigma}_{-l}^{(j)}(\mathbf{c}^{(j)}(N+1)))s_{j}^{l }\partial_{w_{j}})=0,\mod t^{N+1}.\]
Let us write \(k_{j}=\max\{k\mid(j,k)\in psupp(\eta_{0})\}\). If there is no positive integer \(k\) such that \((j,k)\) belongs to \(psupp(\eta_{0})\), we put \(k_{j}=0\) as before. Then, we have
\[\begin{array}{l}Res_{p_{j}}(\eta_{0},\sum_{l=-\infty}^{-1}(\sigma_{-l}^{(j)} (\mathbf{c}^{(j)}(N))-\tilde{\sigma}_{-l}^{(j)}(\mathbf{c}^{(j)}(N+1)))s_{j}^{ l}\partial_{w_{j}})\\ =Res_{p_{j}}(\eta_{0},\sum_{l=-\infty}^{-(a_{j}-k_{j})}(\sigma_{-l}^{(j)}( \mathbf{c}^{(j)}(N))-\tilde{\sigma}_{-l}^{(j)}(\mathbf{c}^{(j)}(N+1)))s_{j}^{ l}\partial_{w_{j}}),\mod t^{N+1}.\end{array}\]
Note that we have \(k_{j(\eta_{0})}=m(\eta_{0})\).
**Lemma 66**.: _Any integer \(l\) satisfying \(-(a_{j}-1)\leq l\leq-(a_{j}-k_{j})\) is contained in the range of Lemma 65 for any \(j\neq j(\eta_{0})\). That is, for any \(j\neq j(\eta_{0})\), the set of integers \(\{-(a_{j}-1),\ldots,-(a_{j}-k_{j})\}\) is disjoint from the set \(\{-(a_{j(\eta)}-m(\eta))\,|\,\eta\in\mathcal{I}\text{ satisfies }j(\eta)=j\}\)._
Proof.: By definition of \(P(\eta_{0})\), we have
\[d_{j(\eta_{0})}(b_{j(\eta_{0})}+a_{j(\eta_{0})}-m(\eta_{0}))\leq d_{j}(b_{j}+ a_{j}-m),\]
for any \((j,m)\in psupp(\eta_{0})\). Moreover, the inequality is strict when \(j>j(\eta_{0})\) holds. If there is an \(\eta_{1}\in\mathcal{I}\) satisfying
\[j(\eta_{1})>j(\eta_{0})\]
and
\[m(\eta_{1})\leq k_{j(\eta_{1})}\]
so that \(l=-(a_{j(\eta_{1})}-m(\eta_{1}))\) is in the range
\[-(a_{j(\eta_{1})}-1)\leq l\leq-(a_{j(\eta_{1})}-k_{j(\eta_{1})}),\]
we have
\[\begin{array}{ll}ord(\eta_{0})&=d_{j(\eta_{0})}(b_{j(\eta_{0})}+a_{j(\eta_{ 0})}-m(\eta_{0}))\\ &<d_{j(\eta_{1})}(b_{j(\eta_{1})}+a_{j(\eta_{1})}-k_{j(\eta_{1})})\\ &\leq d_{j(\eta_{1})}(b_{j(\eta_{1})}+a_{j(\eta_{1})}-m(\eta_{1}))=ord(\eta_{1 }).\end{array}\]
However, this contradicts the assumption that \(P(\eta_{0})\) is the smallest among \(P(\eta)\), \(\eta\in\mathcal{I}\). This proves the lemma for \(j>j(\eta_{0})\).
If there is an \(\eta_{2}\in\mathcal{I}\) satisfying
\[j(\eta_{0})>j(\eta_{2})\]
and
\[-(a_{j(\eta_{2})}-1)\leq-(a_{j(\eta_{2})}-m(\eta_{2}))\leq-(a_{j(\eta_{2})}-k_ {j(\eta_{2})}),\]
we have
\[\begin{array}{ll}ord(\eta_{0})&=d_{j(\eta_{0})}(b_{j(\eta_{0})}+a_{j(\eta_{ 0})}-m(\eta_{0}))\\ &\leq d_{j(\eta_{2})}(b_{j(\eta_{2})}+a_{j(\eta_{2})}-k_{j(\eta_{2})})\\ &\leq d_{j(\eta_{2})}(b_{j(\eta_{2})}+a_{j(\eta_{2})}-m(\eta_{2}))=ord(\eta_{2 }).\end{array}\]
If this inequality is strict, we have \(P(\eta_{2})<P(\eta_{0})\). If the equality \(ord(\eta_{0})=ord(\eta_{2})\) holds, since we have \(j(\eta_{0})>j(\eta_{2})\), again we have \(P(\eta_{2})<P(\eta_{0})\). This contradicts the assumption, and the proof is complete.
Thus, we have
\[\begin{array}{l}Res_{p_{j}}(\eta_{0},\sum_{l=-\infty}^{-1}(\sigma_{-l}^{(j)} (\mathbf{c}^{(j)}(N))-\tilde{\sigma}_{-l}^{(j)}(\mathbf{c}^{(j)}(N+1)))s_{j}^{ l}\partial_{w_{j}})\\ =Res_{p_{j}}(\eta_{0},\sum_{l=-\infty}^{-a_{j}}(\sigma_{-l}^{(j)}(\mathbf{c}^{( j)}(N))-\tilde{\sigma}_{-l}^{(j)}(\mathbf{c}^{(j)}(N+1)))s_{j}^{l}\partial_{w_{j}}), \ \ \mbox{mod}\ t^{N+1},\end{array}\]
for \(j\neq j(\eta_{0})\). Now, we note the following.
**Lemma 67**.: _For any \(j=1,\ldots,e\), we have_
\[\sigma_{-l}^{(j)}(\mathbf{c}^{(j)}(N))-\tilde{\sigma}_{-l}^{(j)}(\mathbf{c}^{ (j)}(N+1))=0,\ \ \mbox{mod}\ t^{N+1},\]
_if \(l<-(a_{j}-1)\)._
Proof.: In the range \(N<d_{j}(b_{j}+a_{j}-1)\), since \(\sigma_{-l}^{(j)}(\mathbf{c}^{(j)}(N))\) and \(\tilde{\sigma}_{-l}^{(j)}(\mathbf{c}^{(j)}(N+1))\) have order at least \(d_{j}(b_{j}-l)>d_{j}(b_{j}+a_{j}-1)\geq N+1\), the equality is obvious. In the range \(N\geq d_{j}(b_{j}+a_{j}-1)\), we have
\[c_{i}^{(j)}(N+1)-c_{i}^{(j)}(N)\in t^{d_{j}i+N+1-d_{j}(b_{j}+a_{j}-1)}\mathbb{ C}[[t]]\]
by the claim (1), and it follows that
\[\sigma_{-l}^{(j)}(\mathbf{c}^{(j)}(N))-\tilde{\sigma}_{-l}^{(j)}(\mathbf{c}^{ (j)}(N+1))\in t^{N+1-d_{j}(b_{j}+a_{j}-1)+d_{j}(b_{j}-l)}\mathbb{C}[[t]].\]
Since we have
\[\begin{array}{l}N+1-d_{j}(b_{j}+a_{j}-1)+d_{j}(b_{j}-l)=N+1-d_{j}(a_{j}+l-1) >N+1,\end{array}\]
the equality follows.
Thus, we have
\[\begin{array}{l}\sum_{j=1}^{e}Res_{p_{j}}(\eta_{0},\sum_{l=-\infty}^{-1}( \sigma_{-l}^{(j)}(\mathbf{c}^{(j)}(N))-\tilde{\sigma}_{-l}^{(j)}(\mathbf{c}^{ (j)}(N+1)))s_{j}^{l}\partial_{w_{j}})\\ =Res_{p_{j(\eta_{0})}}(\eta_{0},\sum_{l=-\infty}^{-(a_{j(\eta_{0})}-m(\eta_{0}) )}(\sigma_{-l}^{(j(\eta_{0}))}(\mathbf{c}^{(j(\eta_{0}))}(N))-\tilde{\sigma}_ {-l}^{(j(\eta_{0}))}(\mathbf{c}^{(j(\eta_{0}))}(N+1)))s_{j(\eta_{0})}^{l} \partial_{w_{j(\eta_{0})}})\\ =0,\ \ \mbox{mod}\ t^{N+1}.\end{array}\]
Moreover, for those \(l\) in \(\{-(a_{j(\eta_{0})}-1),\ldots,-(a_{j(\eta_{0})}-m(\eta_{0}))-1\}\), we have \(\tilde{\sigma}_{-l}^{(j(\eta_{0}))}(\mathbf{c}^{(j(\eta_{0}))}(N+1))=\sigma_{-l }^{(j(\eta_{0}))}(\mathbf{c}^{(j(\eta_{0}))}(N)),\ \ \mbox{mod}\ t^{N+1}\), by Lemma 65. Thus, we have
\[\begin{array}{l}Res_{p_{j(\eta_{0})}}(\eta_{0},\sum_{l=-\infty}^{-(a_{j(\eta_ {0})}-m(\eta_{0}))}(\sigma_{-l}^{(j(\eta_{0}))}(\mathbf{c}^{(j(\eta_{0}))}(N))- \tilde{\sigma}_{-l}^{(j(\eta_{0}))}(\mathbf{c}^{(j(\eta_{0}))}(N+1)))s_{j( \eta_{0})}^{l}\partial_{w_{j(\eta_{0})}})\\ =Res_{p_{j(\eta_{0})}}(\eta_{0},\sigma_{a_{j(\eta_{0})}-m(\eta_{0})}^{(j(\eta_ {0}))}(\mathbf{c}^{(j(\eta_{0}))}(N))-\tilde{\sigma}_{a_{j(\eta_{0})}-m(\eta_{ 0})}^{(j(\eta_{0}))}(\mathbf{c}^{(j(\eta_{0}))}(N+1))s_{j(\eta_{0})}^{-(a_{j( \eta_{0})}-m(\eta_{0}))}\partial_{w_{j(\eta_{0})}})\\ =0,\ \ \mbox{mod}\ t^{N+1},\end{array}\]
Since \((j(\eta_{0}),m(\eta_{0}))\in psupp(\eta_{0})\), this equality implies
\[\sigma_{a_{j(\eta_{0})}-m(\eta_{0})}^{(j(\eta_{0}))}(\mathbf{c}^{(j(\eta_{0}))} (N))-\tilde{\sigma}_{a_{j(\eta_{0})}-m(\eta_{0})}^{(j(\eta_{0}))}(\mathbf{c}^{ (j(\eta_{0}))}(N+1))=0,\ \ \mbox{mod}\ t^{N+1}.\]
We can apply the same argument to other elements in \(\mathcal{I}\) in order from the smallest with respect to the order of \(P(\eta)\), and conclude that \(\tilde{\sigma}_{-l}^{(j)}(\mathbf{c}^{(j)}(N+1))=\sigma_{-l}^{(j)}(\mathbf{c}^ {(j)}(N))\ \mbox{mod}\ t^{N+1}\) holds for all \(l\in\{-(a_{j}-1),\ldots,-1\}\). This finishes the proof of Proposition 60.
#### 4.3.4. Proof of Theorem 41
Finally, we can prove the following.
**Proposition 68**.: _There is an \((N+1)\)-th order deformation \(\varphi_{N+1}\) of \(\varphi\) which reduces to \(\varphi_{N}\) over \(\mathbb{C}[t]/t^{N^{\prime}}\) for some \(N^{\prime}\leq N+1\). Moreover, in the range \(N\geq\max_{j}\{d_{j}(b_{j}+a_{j}-1)\}\), we can take \(N^{\prime}=N+3-\max_{j}\{d_{j}(b_{j}+a_{j}-1)\}\)._
Proof.: First, we will show that using the parameters \(c_{i}^{(j)}(N+1)\) determined in Proposition 60, we can construct a deformation \(\bar{\varphi}_{N}\) of \(\varphi\) over \(\mathbb{C}[t]/t^{N+1}\). On a neighborhood of \(p_{j}\), we take a curve defined by the parameterization
\[\begin{array}{c}(z_{j},w_{j})=(s_{j}^{a_{j}}+c_{2}^{(j)}(N+1)s_{j}^{a_{j}-2} +\cdots+c_{a_{j}}^{(j)}(N+1),\\ \sum_{l=0}^{\infty}\sigma_{-l}^{(j)}(\mathbf{c}^{(j)}(N+1))s_{j}^{l}+t^{M}\bar {H}_{N}(s_{j},t)),\ \ \mathrm{mod}\ t^{N+2},\end{array} \tag{11}\]
where \(c_{i}^{(j)}(N+1)\) is the one satisfying the equations of Proposition 60, and \(\bar{H}_{N}(s_{j},t)\) is determined from it as in Section 4.3.1.
**Lemma 69**.: _The image of the curve given by the above parameterization Eq.(11) coincides with the restriction of the image of \(\varphi_{N}\) to a neighborhood of \(p_{j}\) over \(\mathbb{C}[t]/t^{N+1}\)._
Proof.: This follows from the calculation Eq.(8) in Section 4.3.2 and Proposition 60 (2).
Moreover, for some \(N^{\prime}\) with \(N^{\prime}\leq N+1\), we have \(c_{i}^{(j)}(N+1)=c_{i}^{(j)}(N)\ \mathrm{mod}\ t^{N^{\prime}}\), \(i=2,\ldots,a_{j}\). In this case, we have \(s_{j}(N+1)=s_{j}\), \(\mathrm{mod}\ t^{N^{\prime}}\), and \(\bar{H}_{N}(s_{j},t)=H_{N}(s_{j},t)\), \(\mathrm{mod}\ t^{N^{\prime}}\). Thus, over \(\mathbb{C}[t]/t^{N^{\prime}}\), the above parameterization and \(\varphi_{N}\) are the same even as maps. If \(N^{\prime}=N+1\) for all \(j=1,\ldots,e\), we can take \(\bar{\varphi}_{N}=\varphi_{N}\). So, let us assume we have \(N^{\prime}\leq N\) for some \(j\). We write this map as \(\varphi_{N^{\prime}-1}|_{U_{p_{j}}}\).
Now, on a neighborhood \(U_{p_{j}}\) of \(p_{j}\), we have two deformations of the map \(\varphi_{N^{\prime}-1}|_{U_{p_{j}}}\). The first one \(\varphi_{N^{\prime},p_{j}}\) is obtained by restricting \(\varphi_{N}\) to \(U_{p_{j}}\) and reducing it to a map over \(\mathbb{C}[t]/t^{N^{\prime}+1}\). The other one \(\bar{\varphi}_{N^{\prime},p_{j}}\) is derived from the above parameterization using \(c_{i}^{(j)}(N+1)\) by reducing it to a map over \(\mathbb{C}[t]/t^{N^{\prime}+1}\). These two deformations can be different as maps, but they have the same image by Lemma 69. On the other hand, on the complement of the singular points \(\{p_{1},\ldots,p_{e}\}\) of \(\varphi\), we have a deformation \(\varphi_{N^{\prime},c}\) of \(\varphi_{N^{\prime}-1}\) given by the restriction of \(\varphi_{N}\).
The deformations \(\varphi_{N^{\prime},p_{j}}\) and \(\varphi_{N^{\prime},c}\) clearly glue to a map \(\varphi_{N^{\prime}}\), since they are the restrictions of the given map \(\varphi_{N}\). On the other hand, since \(\varphi_{N^{\prime},p_{j}}\) and \(\bar{\varphi}_{N^{\prime},p_{j}}\) have the same image, the difference between \(\bar{\varphi}_{N^{\prime},p_{j}}\) and \(\varphi_{N^{\prime},c}\) on the overlap gives a section of the tangent sheaf of the curve \(C\). In particular, it is zero as a section of \(\bar{\mathcal{N}}_{\varphi}\) where the obstruction takes value. Therefore, possibly after deforming the domain curve of \(\varphi_{N^{\prime}}\), the local deformations \(\bar{\varphi}_{N^{\prime},p_{j}}\) and \(\varphi_{N^{\prime},c}\) also glue and give a global map \(\bar{\varphi}_{N^{\prime}}\). Note that the images of \(\varphi_{N^{\prime}}\) and \(\bar{\varphi}_{N^{\prime}}\) are the same. Note also that the map \(\bar{\varphi}_{N^{\prime}}\) still has the parameterization (11) around \(p_{j}\). Namely, we have not changed the locally defined maps \(\bar{\varphi}_{N^{\prime},p_{j}}\) and \(\varphi_{N^{\prime},c}\), up to automorphisms, but just changed the gluing of the domain.
Now, we will deform \(\bar{\varphi}_{N^{\prime}}\). Although the domain curve of \(\bar{\varphi}_{N^{\prime}}\) may be different from that of \(\varphi_{N^{\prime}}\), the restrictions of the maps \(\varphi_{N^{\prime}}\) and \(\bar{\varphi}_{N^{\prime}}\) to the complement of \(\{p_{1},\ldots,p_{e}\}\) are equivalent up to automorphisms by construction. Then, again by Lemma 69, the argument in the above paragraph still applies, and we obtain a map \(\bar{\varphi}_{N^{\prime}+1}\) deforming \(\bar{\varphi}_{N^{\prime}}\). By repeating this process, we obtain a map \(\bar{\varphi}_{N}\). The image of \(\bar{\varphi}_{N}\) is again the same as that of \(\varphi_{N}\).
Now, we prove that there is a deformation \(\varphi_{N+1}\) of \(\bar{\varphi}_{N}\). The above parameterization using \(c_{i}^{(j)}(N+1)\) gives a local deformation of \(\bar{\varphi}_{N}\) on a neighborhood of \(p_{j}\). By the above construction, the restrictions of \(\bar{\varphi}_{N}\) and \(\varphi_{N}\) to the complement of \(\{p_{1},\dots,p_{e}\}\) are identical. Thus, on the complement of \(\{p_{1},\dots,p_{e}\}\), we can take the same local deformation of \(\bar{\varphi}_{N}\) as that of \(\varphi_{N}\). Recall that the obstruction to deforming \(\varphi_{N}\) was given by the class \(o_{N+1}\). The local deformations of \(\bar{\varphi}_{N}\) and \(\varphi_{N}\) differ only on a neighborhood of \(p_{j}\), and their difference is given by the calculation (8). Then, Proposition 60 (3) shows that the difference between these local deformations of \(\bar{\varphi}_{N}\) and \(\varphi_{N}\) gives a meromorphic section of \(\bar{\mathcal{N}}_{\varphi}\) which, through the calculation of Proposition 7, gives a local contribution to the obstruction cancelling the given \(o_{N+1}\). Thus, the obstruction to deforming \(\bar{\varphi}_{N}\) vanishes.
Finally, the assertion about \(N^{\prime}\) follows from Proposition 60 (1) concerning the order of \(c_{i}(N+1)-c_{i}(N)\).
Assume that we have constructed a deformation \(\varphi_{N}\) of \(\varphi\), for some \(N\geq M-1\). Let \(\psi_{N^{\prime}-1}\) be the reduction of \(\varphi_{N}\) to a map over \(\mathbb{C}[t]/t^{N^{\prime}}\), where \(N^{\prime}=3\) when \(N\leq\max_{j}\{d_{j}(b_{j}+a_{j}-1)\}\), and \(N^{\prime}=N+3-\max_{j}d_{j}(b_{j}+a_{j}-1)\) when \(N\geq\max_{j}\{d_{j}(b_{j}+a_{j}-1)\}\). By Proposition 68, there is a map \(\varphi_{N+1}\); let \(\psi_{N^{\prime}}\) be the reduction of \(\varphi_{N+1}\) to a map over \(\mathbb{C}[t]/t^{N^{\prime}+1}\). By Proposition 68, \(\psi_{N^{\prime}}\) is a deformation of \(\psi_{N^{\prime}-1}\), though \(\varphi_{N+1}\) may not be a deformation of \(\varphi_{N}\). Thus, we obtain deformations \(\psi_{n}\) of \(\varphi\), \(n\in\mathbb{N}\), up to any order. Applying a suitable algebraization result [1] finishes the proof of Theorem 41.
## 5. Application of the main theorem
### Condition \((\mathrm{G})\) and deformations in general
Let \(\varphi\colon C\to X\) be a semiregular map from a curve to a surface as before, and let \(\{p_{1},\dots,p_{e}\}\) be the set of singular points of \(\varphi\). Since there is no specific reason that the functions \(f^{(b_{j})}_{b_{j}+i}\) expressing the condition in Theorem 41 have non-trivial relations, it would not be too optimistic to expect that these and the associated functions \(\bar{f}^{(b_{j})}_{b_{j}+i}\), \(F_{-n}\) behave like generic ones. If this is true, the criterion for the existence of deformations becomes largely independent of the surface \(X\) and the map \(\varphi\). Namely, we can formulate the condition for the existence of deformations as follows.
**Definition 70**.: Using the notation of Definition 28, we say that the polynomials \(f^{(b)}_{b+1},\dots,f^{(b)}_{b+a-1}\) satisfy the condition \((\mathrm{G})\) if for any \(i\), \(1\leq i\leq a-1\), there is a point \(\tilde{\mathbf{c}}\in\mathbb{C}^{a-1}\) such that
\[F_{-n}(\tilde{\mathbf{c}})=0,\ \ \forall n\in\{1,\dots,a-1\}\setminus\{i\},\ \ F_{-i}(\tilde{\mathbf{c}})\neq 0,\]
and \(f^{(b)}_{b+1},\dots,f^{(b)}_{b+a-1}\) satisfy the condition \((\mathrm{T})\) at \(\tilde{\mathbf{c}}\). Since this condition only depends on the pair of integers \((a,b)\), where \(b>a\) and \(a\) does not divide \(b\), we also say that the pair \((a,b)\) satisfies the condition \((\mathrm{G})\). Recall that for each singular point \(p\) of \(\varphi\), such a pair of integers \((a,b)\) is determined. If this pair satisfies the condition \((\mathrm{G})\), we will say that the singular point \(p\) satisfies the condition \((\mathrm{G})\).
**Definition 71**.: We will say that a point \(p_{j}\in\{p_{1}\,\dots,p_{e}\}\) satisfies the condition \((\mathrm{D})\) if the inequality
\[\dim H^{0}(C,\varphi^{*}\omega_{X}((a_{j}-1)p_{j}))<\dim H^{0}(C,\varphi^{*} \omega_{X})+a_{j}-1\]
holds. Here, \(a_{j}-1\) is the coefficient of \(p_{j}\) in the ramification divisor \(Z\) of \(\varphi\).
**Theorem 72**.: _If the conditions_ (D) _and_ (G) _hold at each \(p_{j}\), the map \(\varphi\) deforms._
Proof.: First, we define the set of \(e\)-tuple of positive integers
\[\mathcal{L}=\{(l_{1},\ldots,l_{e})\mid l_{j}\in\{1,\ldots,a_{j}-1\}\}\]
in the following way. Namely, \((l_{1},\ldots,l_{e})\in\mathcal{L}\) if and only if there is a direct sum decomposition
\[H^{0}(C,\varphi^{*}\omega_{X}(Z))=H^{0}(C,\varphi^{*}\omega_{X})\oplus H_{0} \oplus H_{1}\]
such that,
1. if \(\eta\in H_{1}\) and \((j,m)\in psupp(\eta)\), we have \(m<l_{j}\),
2. if \(\eta\in H_{0}\) and \((j,m)\in psupp(\eta)\), we have \(m\geq l_{j}\), and
3. there is no element \(\eta\in H_{0}\) such that \(psupp(\eta)=\{(j,l_{j})\}\) for some \(j\in\{1,\ldots,e\}\).
Using this notation, we prove the following.
**Lemma 73**.: _If the condition_ (D) _holds at each \(p_{j}\), the set \(\mathcal{L}\) is not empty._
Proof.: Let \(\mathcal{I}_{j}\) be the subset of \(\{1,\ldots,a_{j}-1\}\) such that \(k\in\mathcal{I}_{j}\) if and only if there is a section \(\eta\in H^{0}(C,\varphi^{*}\omega_{X}(Z))\) satisfying \(psupp(\eta)=\{(j,k)\}\). By the condition (D), we have \(\mathcal{I}_{j}\neq\{1,\ldots,a_{j}-1\}\) for each \(j\). If \(\min_{l\in\mathcal{I}_{j}}l>1\) or \(\mathcal{I}_{j}=\emptyset\), take \(l_{j}=1\). If \(\min_{l\in\mathcal{I}_{j}}l=1\), take \(l_{j}=\min\{\{1,\ldots,a_{j}-1\}\setminus\mathcal{I}_{j}\}\). Then, there is a direct sum decomposition \(H^{0}(C,\varphi^{*}\omega_{X}(Z))=H^{0}(C,\varphi^{*}\omega_{X})\oplus H_{0}\oplus H_{1}\) associated with \((l_{1},\ldots,l_{e})\) which satisfies the conditions (i), (ii) and (iii) above. Namely, \(H_{0}\) and \(H_{1}\) are uniquely determined, modulo \(H^{0}(C,\varphi^{*}\omega_{X})\), by the properties (i) and (ii).
We fix an element \((l_{1},\ldots,l_{e})\) of \(\mathcal{L}\). Also, fix \(\tilde{\mathbf{c}}^{(j)}\in\mathbb{C}^{a_{j}-1}\setminus\{0\}\) for each \(j=1,\ldots,e\), which satisfies
\[F_{-n}(\tilde{\mathbf{c}}^{(j)})=0,\ \ n\in\{1,\ldots,a_{j}-1\}\setminus\{a_{j}- l_{j}\},\ \ F_{-(a_{j}-l_{j})}(\tilde{\mathbf{c}}^{(j)})\neq 0,\]
and \(f^{(b_{j})}_{b_{j}+1},\ldots,f^{(b_{j})}_{b_{j}+a_{j}-1}\) satisfy the condition (T) at \(\tilde{\mathbf{c}}^{(j)}\). Let \(M\) be the least common multiple of \(b_{j}+a_{j}-l_{j}\), \(j=1,\ldots,e\), and define the integer \(d_{j}\) by \(d_{j}=\frac{M}{b_{j}+a_{j}-l_{j}}\). Introduce a total order on the set \(\{1,\ldots,e\}\times\mathbb{Z}_{>0}\) by the rule that \((j,m)>(j^{\prime},m^{\prime})\) if and only if
1. \(d_{j}(b_{j}+a_{j}-m)<d_{j^{\prime}}(b_{j^{\prime}}+a_{j^{\prime}}-m^{\prime})\), or
2. \(d_{j}(b_{j}+a_{j}-m)=d_{j^{\prime}}(b_{j^{\prime}}+a_{j^{\prime}}-m^{\prime})\) and \(j>j^{\prime}\).
For \(\eta\in H^{0}(C,\varphi^{*}\omega_{X}(Z))\), let \(P(\eta)=(j(\eta),m(\eta))\) be the maximal element of \(psupp(\eta)\) with respect to this order. Using this notation, let us define \(ord(\eta)\in\mathbb{Z}\) by
\[ord(\eta)=d_{j(\eta)}(b_{j(\eta)}+a_{j(\eta)}-m(\eta)).\]
We set \(ord(\eta)=\infty\) if \(\eta\) belongs to \(H^{0}(C,\varphi^{*}\omega_{X})\).
We take a basis \(\{\lambda_{1},\ldots,\lambda_{a},\mu_{1},\ldots,\mu_{b},\nu_{1},\ldots,\nu_{c}\}\) of \(H^{0}(C,\varphi^{*}\omega_{X}(Z))\), where \(\{\lambda_{1},\ldots,\lambda_{a}\}\), \(\{\mu_{1},\ldots,\mu_{b}\}\) and \(\{\nu_{1},\ldots,\nu_{c}\}\) are bases of \(H^{0}(C,\varphi^{*}\omega_{X})\), \(H_{1}\) and \(H_{0}\), respectively. We take \(\{\lambda_{1},\ldots,\lambda_{a}\}\) and \(\{\mu_{1},\ldots,\mu_{b}\}\) arbitrarily. We take \(\{\nu_{1},\ldots,\nu_{c}\}\) as in Section 3.3. Namely, for a positive integer \(N\leq M\), let \(V_{N}\) be the subspace of \(H^{0}(C,\varphi^{*}\omega_{X}(Z))\) defined by
\[V_{N}=\{\eta\in H^{0}(C,\varphi^{*}\omega_{X}(Z))\mid ord(\eta)\geq N\}.\]
Let
\[H^{0}(C,\varphi^{*}\omega_{X})\oplus H_{1}\subset V_{i_{k}}\subset\cdots V_{i _{2}}\subset V_{i_{1}}=H^{0}(C,\varphi^{*}\omega_{X}(Z))\]
be the maximal strictly increasing subsequence, here \(i_{k}=M\).
We have a refinement
\[V_{i_{j+1}}\subset V_{i_{j},n_{1}}\subset V_{i_{j},n_{2}}\subset\cdots\subset V_{i_ {j},n_{u_{j}}}=V_{i_{j}}\]
such that
\[\dim V_{i_{j},n_{r+1}}=\dim V_{i_{j},n_{r}}+1,\ \ r=0,\ldots,u_{j}-1,\]
as in Section 3.3. Then, we define \(\{\nu_{1},\ldots,\nu_{c}\}\) by successively choosing a general element of \(V_{i_{j},n_{k}}\) as in Section 3.3.
For any \(\mu_{i}\), the condition \((\star_{\mu_{i}})\) of Definition 40 obviously holds, since all the relevant \(F_{-n}(\tilde{\mathbf{c}})\) are zero. Take any \(\nu_{i}\). By the conditions (i), (ii) and (iii) above, \(\nu_{i}\) satisfies either
(a) \(ord(\nu_{i})<M\), or
(b) \(ord(\nu_{i})=M\), and \(\sharp(psupp(\nu_{i})\cap\{(1,l_{1}),\ldots,(e,l_{e})\})\geq 2\).
In the case (a), the condition \((\star_{\nu_{i}})\) obviously holds again. In the case (b), we note that the condition \((\star_{\nu_{i}})\) is of the form
\[\sum_{\{j\;|\;(j,l_{j})\in psupp(\nu_{i})\}}Res_{p_{j}}(\nu_{i},f^{(b_{j})}_{b_ {j}+a_{j}-l_{j}}(\tilde{\mathbf{c}}^{(j)})s^{-(a_{j}-l_{j})}\partial_{w_{j}}) =0.\]
Since we have \(\sharp(psupp(\nu_{i})\cap\{(1,l_{1}),\ldots,(e,l_{e})\})\geq 2\), we can rescale each \(\tilde{\mathbf{c}}_{j}\) so that the condition \((\star_{\nu_{i}})\) holds. Moreover, by the way we defined \(\nu_{i}\), we can perform this rescaling simultaneously for all \(\nu_{i}\), \(i=1,\ldots,c\). Then, we can apply the proof of Theorem 41 and obtain a deformation of \(\varphi\).
The condition (G) in Theorem 72 can be restated as follows. Namely, the polynomials \(f^{(b_{j})}_{b_{j}+1},\ldots,f^{(b_{j})}_{b_{j}+a_{j}-1}\) satisfy the condition (G) if and only if for each \(i=1,\ldots,a_{j}-1\), the element \(F_{-i}\cdot\overline{Jac}_{j}\) is not contained in \(rad(F_{-1},\ldots,\bar{F}_{-i},\ldots,F_{-(a_{j}-1)})\), where \(\bar{F}_{-i}\) means we remove \(F_{-i}\), \(rad(\cdots)\) means the radical of the ideal generated by \((\cdots)\), and \(\overline{Jac}_{j}\) is the Jacobian of the map
\[(\bar{f}^{(b_{j})}_{b_{j}+1},\ldots,\bar{f}^{(b_{j})}_{b_{j}+a_{j}-1})\colon \mathbb{C}^{a_{j}-1}\to\mathbb{C}^{a_{j}-1}.\]
Recall that \(F_{-i}\) is of the form \(F^{(j)}_{-i}=\sum_{k=-i}^{-1}\Theta^{(j;k)}_{k+i}f^{(b_{j})}_{b_{j}-k}\) and \(\Theta^{(j;k)}_{0}=1\), \(\Theta^{(j;k)}_{1}=0\). Therefore, the ideal \((F_{-1},\ldots,\bar{F}_{-i},\ldots,F_{-(a_{j}-1)})\) can be written in the form
\[(f^{(b_{j})}_{b_{j}+1},\ldots,f^{(b_{j})}_{b_{j}+i-1},f^{(b_{j})}_{b_{j}+i+1}, f^{(b_{j})}_{b_{j}+i+2}+\Theta^{(j;-i)}_{2}f^{(b_{j})}_{b_{j}+i},\ldots,f^{(b_{ j})}_{b_{j}+a_{j}-1}+\Theta^{(j;-i)}_{a_{j}-1-i}f^{(b_{j})}_{b_{j}+i}),\]
and we can replace \(F_{-i}\cdot\overline{Jac}_{j}\) by \(f^{(b_{j})}_{b_{j}+i}\cdot\overline{Jac}_{j}\).
Recall that the functions \(f^{(b)}_{b+i}\) (and so the functions \(\bar{f}^{(b)}_{b+i}\) and \(F_{-i}\), too) are determined solely by the pair of positive integers \(b>a\geq 2\). The most optimistic (but not overly optimistic) expectation is that the condition (G) holds for any pair \(b>a\geq 2\) of positive integers. Computation by Macaulay2 [10] suggests that the condition (G) holds in most cases, see Table 1.
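Purely as an illustration of the test being performed (not of the Macaulay2 scripts actually used), the radical-membership criterion above can be sketched in Python/SymPy as follows; the function names are ours, and the naive Groebner-basis computation can become slow as \(a\) and \(b\) grow.

```python
import sympy as sp

def in_radical(f, generators, variables):
    """Radical membership via the Rabinowitsch trick: f lies in rad(generators)
    iff 1 lies in the ideal generated by the generators together with 1 - y*f."""
    y = sp.Symbol('_rabinowitsch_y')  # auxiliary variable, assumed not among `variables`
    basis = sp.groebner(list(generators) + [1 - y * f], *variables, y, order='grevlex')
    # The reduced Groebner basis of the unit ideal is [1], i.e. it contains a nonzero constant.
    return any(g.is_number and g != 0 for g in basis.exprs)

def condition_G_holds(F, jac, variables):
    """F = [F_{-1}, ..., F_{-(a-1)}], jac = the Jacobian from the text;
    condition (G) holds iff, for every i, F_{-i}*jac is NOT contained in the
    radical of the ideal generated by the remaining F_{-n}."""
    for i, Fi in enumerate(F):
        others = [g for n, g in enumerate(F) if n != i]
        if in_radical(Fi * jac, others, variables):
            return False
    return True
```

Such a routine expresses, for a given pair \((a,b)\), the kind of check that Table 1 summarizes.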
The only exceptional case occurs when \(a=4\) and \(b=6\). This is attributed to an accidental factorization of a function due to the smallness of the degree. Namely, when we have \(a=4\) and \(b=6\), the relevant functions are given by the following:
\[\begin{array}{l}F_{-1}=-\frac{3}{16}c_{2}^{2}c_{3}+\frac{3}{4}c_{3}c_{4},\\ F_{-2}=\frac{3}{128}c_{2}^{4}-\frac{3}{16}c_{2}c_{3}^{2}-\frac{3}{16}c_{2}^{2} c_{4}+\frac{3}{8}c_{4}^{2},\\ F_{-3}=\frac{3}{64}c_{2}^{3}c_{3}-\frac{1}{16}c_{3}^{3}-\frac{3}{16}c_{2}c_{3}c_ {4},\end{array}\]
\[\overline{Jac}=\frac{27}{16384}c_{2}^{6}c_{3}+\frac{27}{2048}c_{2}^{3}c_{3}^{3}+ \frac{27}{1024}c_{3}^{5}-\frac{81}{4096}c_{2}^{4}c_{3}c_{4}-\frac{27}{512}c_{2}c _{3}^{3}c_{4}+\frac{81}{1024}c_{2}^{2}c_{3}c_{4}^{2}-\frac{27}{256}c_{3}c_{4}^{3}.\]
The condition \(F_{-1}=F_{-3}=0\) implies
1. \(c_{3}=0\), or
2. \(-c_{2}^{2}+4c_{4}=0\) and \(3c_{2}^{3}-4c_{3}^{2}-12c_{2}c_{4}=0\).
In the first case, \(\overline{Jac}\) is also zero, and (G) does not hold. Moreover, in the second case, the equations imply \(c_{3}=0\), too. Thus, in this case the condition (G) fails.
Note that even in this case, a part of the condition (G) holds. Namely, for \(i=1\) and \(3\), the condition of Definition 70 holds. This implies that the conclusion of Theorem 72 applies to these cases, too.
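For the exceptional pair above, the factorizations involved can be confirmed directly with a computer algebra system. The following is a minimal SymPy sketch (ours, not the Macaulay2 computation referred to above), taking the displayed expressions for \(F_{-1}\), \(F_{-3}\) and \(\overline{Jac}\) as input:

```python
import sympy as sp

c2, c3, c4 = sp.symbols('c2 c3 c4')

F1 = sp.Rational(-3, 16) * c2**2 * c3 + sp.Rational(3, 4) * c3 * c4
F3 = (sp.Rational(3, 64) * c2**3 * c3 - sp.Rational(1, 16) * c3**3
      - sp.Rational(3, 16) * c2 * c3 * c4)
Jac = (sp.Rational(27, 16384) * c2**6 * c3 + sp.Rational(27, 2048) * c2**3 * c3**3
       + sp.Rational(27, 1024) * c3**5 - sp.Rational(81, 4096) * c2**4 * c3 * c4
       - sp.Rational(27, 512) * c2 * c3**3 * c4 + sp.Rational(81, 1024) * c2**2 * c3 * c4**2
       - sp.Rational(27, 256) * c3 * c4**3)

# Both F_{-1} and F_{-3} pick up the factor c3:
print(sp.factor(F1))   # a constant times c3*(c2**2 - 4*c4)
print(sp.factor(F3))   # a constant times c3*(3*c2**3 - 4*c3**2 - 12*c2*c4)

# Case (2): if c3 != 0, then F_{-1} = 0 forces c4 = c2**2/4, and then F_{-3} = 0 forces c3 = 0:
print(sp.simplify((F3 / c3).subs(c4, c2**2 / 4)))   # -c3**2/16

# The Jacobian vanishes on c3 = 0:
print(sp.simplify(Jac.subs(c3, 0)))                 # 0
```

Since every common zero of \(F_{-1}\) and \(F_{-3}\) therefore lies on \(c_{3}=0\), where \(\overline{Jac}\) vanishes, the requirement of Definition 70 cannot be met for \(i=2\), exactly as stated above.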
Under the condition (G), the problem of whether deformations of \(\varphi\) exist is entirely reduced to verifying the cohomological condition (D), which is much easier than the obstruction calculation.
### Deformation of double points
If \(p\in C\) is a double point of the semiregular map \(\varphi\colon C\to X\), that is, when the multiplicity \(a=2\), the function \(f_{b+1}^{(b)}=\bar{f}_{b+1}^{(b)}=F_{-1}\) takes the simple form \(c_{2}^{\frac{b+1}{2}}\). In this case, Theorem 72 can be significantly strengthened. Let \(\{p_{1},\ldots,p_{l}\}\) be the set of points on \(C\) at which \(\varphi\) has singularities, and assume each of them is a double point. Then, the following holds.
**Theorem 74**.: _The semiregular map \(\varphi\) deforms if and only if at least one of the following conditions holds._
1. _There is at least one_ \(p_{i}\) _such that there is no section of_ \(H^{0}(C,\varphi^{*}\omega_{X}(Z))\) _whose polar support is_ \(\{p_{i}\}\)_. Here,_ \(Z=p_{1}+\cdots+p_{l}\) _is the ramification divisor of_ \(\varphi\)_._
2. _The set_ \(H^{0}(C,\bar{\mathcal{N}}_{\varphi})\) _is not zero._
Proof.: Assume that condition (1) is satisfied. First, assume that for any \(p_{i}\), there is a section of \(H^{0}(C,\varphi^{*}\omega_{X}(Z))\) which contains \(p_{i}\) in its polar support. By changing the numbering if necessary, we can assume that \(p_{1}\) is a point satisfying condition (1). Also, we assume that among the points \(\{p_{1},\ldots,p_{l}\}\), \(p_{m},\ldots,p_{l}\) are the points for which there is a section of \(H^{0}(C,\varphi^{*}\omega_{X}(Z))\) whose polar support is \(\{p_{i}\}\), \(i=m,\ldots,l\). Here, \(1<m\leq l+1\), and when \(m=l+1\), the set \(\{p_{m},\ldots,p_{l}\}\) is empty. Then, we can take a basis \(\{\eta_{1},\ldots,\eta_{k}\}\) of \(H^{0}(C,\varphi^{*}\omega_{X}(Z))\) so that \(p_{1}\) is contained in the polar support of \(\eta_{i}\) only when \(i=1\). We can assume that
\[psupp(\eta_{k-l+i})=\{p_{i}\},\]
for \(i=m,\ldots,l\).

| \(a\) | condition (G) |
| --- | --- |
| 3 | (G) holds for \(4\leq b\leq 30\) |
| 4 | (G) holds for \(5\leq b\leq 30\) except \(b=6\) |
| 5 | (G) holds for \(6\leq b\leq 30\) |
| 6 | (G) holds for \(7\leq b\leq 30\) |
| 7 | (G) holds for \(8\leq b\leq 20\) |
| 8 | (G) holds for \(9\leq b\leq 20\) |
| 9 | (G) holds for \(10\leq b\leq 20\) |
| 10 | (G) holds for \(11\leq b\leq 15\) |

Table 1.

Also, following the construction in Section 3.3, we can assume that \(psupp(\eta_{i})\), \(1\leq i\leq k-l+m-1\), is disjoint from \(\{p_{m},\ldots,p_{l}\}\), and that for each \(1\leq i\leq k-l+m-1\),
\[psupp(\eta_{i})\nsubseteq\bigcup_{j<i}psupp(\eta_{j})\]
holds.
For each \(p_{j}\), we have a pair of integers \((a_{j},b_{j})=(2,b_{j})\). Let \(M\) be the least common multiple of the integers
\[\frac{b_{1}+1}{2},\ldots,\frac{b_{l}+1}{2}.\]
Define the integer \(d_{j}\) by \(d_{j}=\frac{2M}{b_{j}+1}\). For each \(p_{j}\), we have an element \(c_{2}^{(j)}\), and we take it in \(t^{d_{j}}\mathbb{C}[[t]]\). Then, as in Section 4.1, the map \(\varphi\) can be deformed up to the order \(M-1\) with respect to \(t\).
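For instance, if \(\varphi\) has exactly two double points with \(b_{1}=3\) and \(b_{2}=5\), then \(M=\mathrm{lcm}(2,3)=6\), so \(d_{1}=3\) and \(d_{2}=2\).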
Now, consider the deformation of \(\varphi\) of the order \(M\). It is easy to see that we can take \(c_{2}^{(j)}=t^{d_{j}}\tilde{c}_{2}^{(j)}\), \(j=1,\ldots,m-1\), so that they satisfy the conditions \((\star_{\eta_{i}})\), \(1\leq i\leq k-l+m-1\), in the sense of Definition 40. Here, \(\tilde{c}_{2}^{(j)}\) is a nonzero complex number. Also, we take \(c_{2}^{(j)}=0\), \(j=m,\ldots,l\). Thus, the conditions \((\star_{\eta_{i}})\), \(k-l+m\leq i\leq k\), also hold obviously. Then, since the equation \((\star_{\eta_{i}})\) is itself the condition for the vanishing of the obstruction at the order \(t^{M}\), the map \(\varphi\) deforms up to the order \(t^{M}\) for these values of \(c_{2}^{(j)}\). Let us write this map as \(\varphi_{M}\).
Let us consider the deformations of the higher order. Following the argument in Section 4.2, by the above choice of \(c_{2}^{(j)}\), we see that extending the map \(\varphi_{M}\) does not have an obstruction up to the order \(t^{2M-1}\), and let \(\varphi_{2M-1}\) be the map obtained in this manner. At the order \(t^{2M}\), there may be an obstruction \(o_{2M}\) to deforming \(\varphi_{2M-1}\). When \(o_{2M}\) pairs non-trivially with \(\eta_{i}\), \(1\leq i\leq k-l+m-1\), we can follow the argument in Section 4 to modify \(c_{2}^{(j)}\), \(j=1,\ldots,m-1\), to cancel the obstruction. If \(o_{2M}\) pairs non-trivially with some \(\eta_{k-l+i}\), \(i=m,\ldots,l\), we can choose \(\tilde{c}_{2}^{(i)}\) so that by taking \(c_{2}^{(i)}=t^{2d_{i}}\tilde{c}_{2}^{(i)}\), we cancel the obstruction.
With these values of \(c_{2}^{(j)}\), we can apply the argument of Section 4.3 to construct a map \(\varphi_{2M}\) which deforms \(\varphi_{M^{\prime}}\) for some \(M^{\prime}\leq 2M-1\). Repeating this, again by the argument of Section 4.3, we eventually obtain a formal deformation of \(\varphi\), and by applying a suitable algebraization result, we finish the proof.
Let us assume that there is a point \(p_{i}\) which is not contained in the polar support of any element of \(H^{0}(C,\varphi^{*}\omega_{X}(Z))\). Let \(p_{1}\) be such a point. Then, by taking any non-zero \(c_{2}^{(1)}=t^{d_{1}}\tilde{c}_{2}^{(1)}\), and taking all the other \(c_{2}^{(j)}\) to be zero, we obtain a non-trivial deformation \(\varphi_{M}\) of \(\varphi\). Then, we can apply the above argument to \(\varphi_{M}\) to obtain a formal deformation of \(\varphi\).
Finally, assume the condition (2) is satisfied. Then, a non-zero section of \(H^{0}(C,\bar{\mathcal{N}}_{\varphi})\) gives a first order deformation of the map \(\varphi\). By base change, we can assume that this deformation is defined at the order \(t^{M}\). Then, we can again apply the above argument to cancel the potential obstructions at the higher order by modifying the values of \(c_{2}^{(j)}\), and we obtain a deformation of \(\varphi\).
Conversely, suppose that neither of the conditions (1) nor (2) holds. This means that for each \(p_{j}\), there is a section \(\eta_{j}\) of \(H^{0}(C,\varphi^{*}\omega_{X}(Z))\) whose polar support is \(p_{j}\), and
\(H^{0}(C,\bar{\mathcal{N}}_{\varphi})=0\). By the latter condition, to have a non-trivial deformation, we need to take some \(c_{2}^{(j)}\) to be non-zero at some order with respect to \(t\). However, it couples with \(\eta_{j}\) non-trivially and produces a non-trivial obstruction. Thus, we cannot extend the deformation further.
**Acknowledgments.** The author was supported by JSPS KAKENHI Grant Number 18K03313.
|
2302.03754 | Augmenting Zero-Shot Dense Retrievers with Plug-in Mixture-of-Memories | In this paper we improve the zero-shot generalization ability of language
models via Mixture-Of-Memory Augmentation (MoMA), a mechanism that retrieves
augmentation documents from multiple information corpora ("external memories"),
with the option to "plug in" new memory at inference time. We develop a joint
learning mechanism that trains the augmentation component with latent labels
derived from the end retrieval task, paired with hard negatives from the memory
mixture. We instantiate the model in a zero-shot dense retrieval setting by
augmenting a strong T5-based retriever with MoMA. Our model, MoMA, obtains
strong zero-shot retrieval accuracy on the eighteen tasks included in the
standard BEIR benchmark. It outperforms systems that seek generalization from
increased model parameters and computation steps. Our analysis further
illustrates the necessity of augmenting with mixture-of-memory for robust
generalization, the benefits of augmentation learning, and how MoMA utilizes
the plug-in memory at inference time without changing its parameters. We plan
to open source our code. | Suyu Ge, Chenyan Xiong, Corby Rosset, Arnold Overwijk, Jiawei Han, Paul Bennett | 2023-02-07T20:59:31Z | http://arxiv.org/abs/2302.03754v1 | # Augmenting Zero-Shot Dense Retrievers with Plug-in Mixture-of-Memories
###### Abstract
In this paper we improve the zero-shot generalization ability of language models via Mixture-Of-Memory Augmentation (MoMA), a mechanism that retrieves augmentation documents from multiple information corpora ("external memories"), with the option to "plug in" new memory at inference time. We develop a joint learning mechanism that trains the augmentation component with latent labels derived from the end retrieval task, paired with hard negatives from the memory mixture. We instantiate the model in a zero-shot dense retrieval setting by augmenting a strong T5-based retriever with MoMA. Our model, MoMA, obtains strong zero-shot retrieval accuracy on the eighteen tasks included in the standard BEIR benchmark. It outperforms systems that seek generalization from increased model parameters and computation steps. Our analysis further illustrates the necessity of augmenting with mixture-of-memory for robust generalization, the benefits of augmentation learning, and how MoMA utilizes the plug-in memory at inference time without changing its parameters. We plan to open source our code.
## 1 Introduction
Scaling up language models--with more parameters, compute, and annotation data--improves model generalization ability on downstream applications (Raffel et al., 2019; Brown et al., 2020; Smith et al., 2022), but with diminishing return: _linear_ improvements on downstream metrics often require _exponentially_ more parameters and computing cost (Kaplan et al., 2020; Hoffmann et al., 2022). Hence, scaling pretrained language models in this way is economically unsustainable (Strubell et al., 2020; Bender et al., 2021; Zhang et al., 2022).
Retrieval augmented language models provide a promising alternative. They allow language models to efficiently access vast resources from an external corpus (Guu et al., 2020; Borgeaud et al., 2022) that serves as a kind of "memory" they can refer to when making predictions, alleviating the need to memorize as much information in their own network parameters (Roberts et al., 2020). This open-book approach helps language models to better generalize on token prediction tasks and machine translation (Khandelwal et al., 2019; Borgeaud et al., 2022), and tasks which already involve a first-stage retrieval component, e.g., OpenQA (Borgeaud et al., 2022; Izacard et al., 2022). Existing retrieval augmentation methods usually stick to one single retrieval corpus throughout training and inference so that the retrieval component can be indirectly guided by the supervision from end tasks.
In this paper we improve the zero-shot generalization ability of language models using "mixture-of-memory" (MoMA), a new retrieval augmentation mechanism. Instead of a single corpus, MoMA retrieves documents from a "mixture" of multiple external corpora and enjoys the merits of a larger and more comprehensive source of knowledge. This mechanism also allows removing and/or "plugging-in" new corpora during inference time, when more information from the target task is revealed, or as an additional way for users to control the model. Specifically, we apply MoMA on the zero-shot dense retrieval task, which is the foundation of many important real-world applications (Thakur et al., 2021; Kim, 2022) and also the retrieval component of recent retrieval augmented language models (Guu et al., 2020; Izacard et al., 2022). However, it is not trivial to guide a retrieval model to leverage multiple corpora. We need to jointly train the augmentation component and dense retriever using supervised relevance signals and self-mined hard negatives.
We instantiate MoMA with a T5 encoder-decoder model (Ni et al., 2022) and apply it to the dense retrieval task (Karpukhin et al., 2020). Our end task retriever uses a set of augmenting documents from the mixture-of-memories to enhance its representation of the query with important context; the retriever then uses the enhanced query representation to retrieve a final candidate set. At inference time, we plug in the target task's corpus to the memory mixture to introduce in-domain context information, without updating any parameter.
We experimented on eighteen zero-shot dense retrieval tasks included in BEIR (Thakur et al., 2021), the standard ZeroDR benchmark. The results demonstrate the improved zero-shot ability of MoMA. When paired with the ANCE (Xiong et al., 2020) training framework
on a T5 model, it outperforms counterparts without the MoMA augmentation component, as well as recent state-of-the-art dense retrieval systems of the same scale, by large margins. To validate its effectiveness when paired with advanced models, we further instantiate MoMA with a contrastively pretrained T5 model. MoMA then achieves comparable or even stronger performance to ZeroDR systems with larger model scales and heavier computation costs.
Our analysis reveals that large and diverse corpora in the memory leads to the best performance; while only using a single corpus during training does not improve performance on unseen target tasks. The learning of augmentation component is also important for MoMA to utilize the diverse information from the mixture. Our analysis and case studies illustrate how MoMA leverages the plug-in memory at testing time to enrich its query representations with in-domain information that was not available in training.
## 2 Related Work
### Retrieval Augmentation
Recent research has explored two common ways to construct the external memory in retrieval-augmented language models. The first is to retrieve similar tokens for language models to copy from when predicting the next token (Khandelwal et al., 2019; Zhong et al., 2022). The second is to retrieve the related documents (text sequences) from an in-domain corpus as additional input (Guu et al., 2020; Borgeaud et al., 2022). Our work falls into this category as document-based models better align with knowledge-intensive tasks (Petroni et al., 2020), such as retrieval and OpenQA (Chen et al., 2017).
Learning to retrieve useful documents to augment the language model is a challenging task, since human annotations on the usefulness of augmentation documents are costly and seldom available. The most straightforward way is to use representations from raw pretrained language models to find documents similar to the task input, i.e., as unsupervised dense retrieval (Guu et al., 2020; Borgeaud et al., 2022). Adapting dense retrieval models trained for relevance matching is another common choice (Izacard and Grave, 2020; Lewis et al., 2020; Yu et al., 2021). A more formal solution is to jointly learn the augmentation components end-to-end using supervision from the final task, for example, treating the augmentation as latent variables and applying EM (Zhao et al., 2021), or distilling the augmentation component from feedback of the final model (Izacard and Grave, 2020). In a parallel work, Izacard et al. (2022) found the most effective one is attention distillation method (ADist), which trains the augmentation component using soft labels derived from the end model's attention on augmentation documents.
The motivation for query augmentation coincides with the query expansion methods in the traditional IR community, whereby the user's original query is augmented by new features with similar meanings (Carpineto and Romano, 2012). As feature selection usually requires additional semantic analysis, the efficiency and usability of traditional query expansion methods remain limited when faced with a new domain. To overcome this, recent work relies on dense retrieval results to expand the query (Yu et al., 2021). The retrieved relevant documents serve as pseudo relevance feedback signals for the model, which are concatenated with the original query as the augmented model input. Our work augments queries with feedback from multiple corpora and learns to select important augmentation documents automatically.
### Zero-shot Dense Retrieval
Dense retrieval models trained on resource-rich source tasks, e.g., web search, usually do not perform as well when zero-shot transferred to other domains (Thakur et al., 2021). This is concerning since many important real-world scenarios do not have the luxury of web corpus training signals and must rely on near zero-shot transfer, e.g., the medical domains (Kim, 2022). Xin et al. (2021) analyzed the challenge of shifting between training and testing domains, and leveraged domain-invariant learning to mitigate the gap. Another common approach is to first generate domain-specific pseudo labels for each task, and then use them to train a dense retriever (Thakur et al., 2021; Wang et al., 2022). Additionally, continuously pretraining the language model also improves its generalization ability in ZeroDR (Izacard et al., 2021; Gao and Callan, 2022; Yu et al., 2022). Follow-up works (Izacard et al., 2021; Yu et al., 2022) further contrastively pretrained the retriever on the source or target corpus with a sentence matching loss. Other methods seek better generalization ability in ZeroDR from various resources, for example, combining with sparse retrieval to introduce exact match signals (Formal et al., 2021), using multiple vectors per document for term-level matching (Khattab and Zaharia, 2020), or scaling up the retrieval model using larger language models (Ni et al., 2021; Neelakantan et al., 2022).
## 3 Method
In this section we first describe our Mixture-of-Memory Augmentation. Then we discuss how it is jointly learned with the end system and enables plug-in memory at inference time.
### Mixture-of-Memory Augmentation
Before going to the details of MoMA, we first recap some preliminaries in ZeroDR.
**Preliminaries.** The dense retrieval (DR) task aims to find relevant documents \(d\) from a corpus \(C\) for the given query \(q\) by representing them in a shared embedding space. Specifically, the retrieval score in DR is often calculated as:
\[f(q,d)=\mathbf{q}\cdot\mathbf{d};\mathbf{q}=g(q);\mathbf{d}=g(d). \tag{1}\]
It uses the dot product as the scoring function to match the embeddings \(\mathbf{q}\) and \(\mathbf{d}\), which is known to support efficient approximate nearest neighbor (ANN) search Johnson et al. (2019). A pretrained language model is often the encoder of choice for \(g()\). We use the ST5-EncDec variant of Sentence-T5 Ni et al. (2022):
\[g(x)=\text{Dec}(\text{Enc}(x)), \tag{2}\]
which feeds in the text sequence (prepended with a special [CLS] token) to the encoder of T5, \(\text{Enc}()\), and uses the output representation of the [CLS] token from the decoder, \(\text{Dec}()\), as the text representation. This naturally leverages the attention from the decoder to the encoder at all Transformer layers Raffel et al. (2019), as a fine-grained information gathering mechanism.
The _training_ of dense retrieval systems often applies standard ranking loss and pairs the relevant documents \(d^{+}\in D^{+}\) for each query q with hard negatives \(d^{-}\in D^{-}\):
\[\mathcal{L}=\sum_{q}\sum_{d^{+}\in D^{+}}\sum_{d^{-}\in D^{-}}l( f(q,d^{+}),f(q,d^{-}));\] \[D^{-}\sim\text{ANN}^{C}_{f(q,\circ)}\setminus D^{+}. \tag{3}\]
Eqn. 3 uses ANCE hard negatives, which are the top-retrieved documents from \(C\) using the retriever itself Xiong et al. (2020). The loss function \(l()\) can be any standard ranking loss such as cross entropy. A ZeroDR model is trained on \(q^{s}\) and documents \(d^{s}\in C^{s}\) from a _source task_, often web search, and tested on _target_ tasks \(q^{t}\) and \(C^{t}\); supervision signals are only present from the source.
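For concreteness, a minimal PyTorch sketch of the dot-product scoring in Eqn. 1 and the ranking loss with hard negatives in Eqn. 3 is shown below. The random tensors stand in for encoder outputs \(g(\cdot)\), and cross-entropy over the candidate set is one common choice of \(l()\); this is only an illustration of the objective, not the exact training code.

```python
import torch
import torch.nn.functional as F

def dense_retrieval_loss(q_emb, pos_emb, neg_emb):
    """One common instantiation of the ranking loss l() in Eqn. 3.

    q_emb:   [B, H]     query embeddings q = g(q)
    pos_emb: [B, H]     embeddings of relevant documents d+
    neg_emb: [B, N, H]  embeddings of hard negatives d- (e.g., ANCE negatives)
    """
    pos_score = (q_emb * pos_emb).sum(-1, keepdim=True)          # [B, 1], dot-product f(q, d+)
    neg_score = torch.einsum("bh,bnh->bn", q_emb, neg_emb)       # [B, N], dot-product f(q, d-)
    scores = torch.cat([pos_score, neg_score], dim=-1)           # [B, 1+N]
    labels = torch.zeros(scores.size(0), dtype=torch.long)       # positive sits at index 0
    return F.cross_entropy(scores, labels)

# toy usage with random embeddings standing in for g(q) and g(d)
B, N, H = 4, 7, 768
loss = dense_retrieval_loss(torch.randn(B, H), torch.randn(B, H), torch.randn(B, N, H))
print(loss.item())
```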
**Mixture-of-Memory Augmentation.** The key idea of (document-based) retrieval augmented language models is to enrich the representation \(g(q)\) with additional contextual input for the model, i.e., augmentation documents \(d^{a}\) retrieved from an external memory \(\mathcal{M}\). Instead of using a single document corpus, MoMA uses multiple corpora to provide richer and more diverse external resources for augmentation. For example, \(\mathcal{M}\) can be composed by the source corpus \(C^{s}\), a general encyclopedia, a domain specific knowledge graph, etc. Then we can retrieve the augmentation documents \(D^{a}\) :
\[D^{a}=\text{ANN}^{\mathcal{M}}_{f^{a}(x,\circ)};\ \mathcal{M}=\{C_{1},...,C_{M}\}. \tag{4}\]
This augmentation component uses another dense retriever \(f^{a}()\) (also a Sentence T5 model), with parameters distinct from those in \(g()\). Note that instead of retrieving \(D^{a}\) separately from \(M\) different ANN memory sources and merging results, Eqn. 4 combines them into one ANN index. This requires the augmentation component \(f^{a}()\) to be flexible enough to handle various corpora in the mixture.
Using the encoder-decoder architecture for \(g()\) in Eqn. 2 enables a simple extension to incorporate the augmentation documents using the fusion-in-decoder (FiD) mechanism Izacard and Grave (2020):
\[g^{\text{MoMA}}(q) =\text{Dec}(\text{Enc}(q),\text{Enc}(d^{a}_{1}),...,\text{Enc}(d ^{a}_{K}));\] \[D^{a} =\{d^{a}_{1},...,d^{a}_{K}\}. \tag{5}\]
It feeds in the \(K\) augmentation documents separately to the T5 encoder of \(g()\). Then it fuses the encoded documents together with \(\text{Enc}(q)\) using one decoder that attends to all encoded vectors, as illustrated in Figure 1.
The FiD approach in Eqn 5 is a nice balance of efficiency and capacity when modeling multiple text sequences Izacard and Grave (2020). It is more efficient than concatenating all text pieces together, while also remaining expressive enough to model the nuances from many sequences Izacard and Grave (2020); Izacard et al. (2022).
When instantiating MoMA in the dense retrieval setting, we focus on augmenting the query representation \(\mathbf{q}\), as queries are often short, ambiguous, and benefit more from additional contextual information Lavrenko and Croft (2017); Yu et al. (2021). This leads to the following definition of MoMA:
\[f^{\text{MoMA}}(q,d)= \mathbf{q}^{a}\cdot\mathbf{d};\] \[\mathbf{q}^{a}=g^{\text{MoMA}}(q), \mathbf{d}=g(d), \tag{6}\]
using the construction of \(g^{\text{MoMA}}()\) in Eqn. 5 upon the augmentation documents defined in Eqn. 4.
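The sketch below illustrates the fusion-in-decoder query encoder \(g^{\text{MoMA}}()\) of Eqn. 5 at the tensor level. It is not the actual T5 implementation: generic Transformer encoder/decoder layers and pre-embedded token tensors are used as stand-ins, and only the encode-separately-then-fuse structure and the [CLS] readout are meant to carry over.

```python
import torch
import torch.nn as nn

class FiDQueryEncoder(nn.Module):
    """Minimal stand-in for g_MoMA() in Eqn. 5 (not the actual T5-based model).

    Each of the K augmentation documents is encoded separately; the decoder then
    attends over the concatenation of all encoder outputs (fusion-in-decoder).
    """
    def __init__(self, hidden=256, nhead=4):
        super().__init__()
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(hidden, nhead, batch_first=True), num_layers=2)
        self.decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(hidden, nhead, batch_first=True), num_layers=2)
        self.cls = nn.Parameter(torch.randn(1, 1, hidden))  # stand-in for the [CLS] query token

    def forward(self, query_tokens, aug_docs_tokens):
        # query_tokens: [B, Lq, H]; aug_docs_tokens: list of K tensors of shape [B, Ld, H]
        encoded = [self.encoder(query_tokens)] + [self.encoder(d) for d in aug_docs_tokens]
        memory = torch.cat(encoded, dim=1)                   # fuse all encoded sequences
        tgt = self.cls.expand(query_tokens.size(0), -1, -1)
        out = self.decoder(tgt, memory)                      # decoder attends over everything
        return out[:, 0]                                     # q^a, the augmented query embedding

B, Lq, Ld, H, K = 2, 16, 64, 256, 3
g_moma = FiDQueryEncoder(hidden=H)
q_emb = g_moma(torch.randn(B, Lq, H), [torch.randn(B, Ld, H) for _ in range(K)])
print(q_emb.shape)  # torch.Size([2, 256])
```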
### Joint Learning in MoMA and Inference with Plug In Memory
MoMA has two sets of parameters to learn, in the main model \(f^{\text{MoMA}}()\) and the augmentation component \(f^{a}()\). Both have their own T5 encoder-decoder parameters. The two components are bridged by the augmentation documents, which are retrieved by \(f^{a}()\) from \(\mathcal{M}\) and used by \(f^{\text{MoMA}}()\) to produce query representation \(\mathbf{q}^{a}\).
**Main Model Learning.** Given the relevance labels from the source task and an augmentation model, training \(f^{\text{MoMA}}()\) is straightforward. We can use the standard dense retrieval training to finetune the enriched query encoder \(g^{\text{MoMA}}()\) and the document encoder \(g()\):
\[\mathcal{L}^{\text{MoMA}}=\sum_{q^{s}}\sum_{d^{+}}\sum_{d^{-}}l( f^{\text{MoMA}}(q^{s},d^{+}),f^{\text{MoMA}}(q^{s},d^{-}));\] \[d^{+}\in D^{s+},d^{-}\in D^{s-} \tag{7}\] \[D^{s-}\sim\text{ANN}^{C^{s}}_{f^{\text{MoMA}}(q^{s},\circ)} \setminus D^{s+}. \tag{8}\]
The training signals come from the source task, including \(q^{s}\), its relevant documents \(D^{s+}\), and ANCE hard negatives \(D^{s-}\) retrieved from the source corpus \(C^{s}\).
Figure 1: Illustration of the Mixture-of-Memory Augmentation.
**Augmentation Learning.** Training \(f^{a}()\) is challenging as it is hard to label whether an augmentation document is useful. Propagating gradients from the final loss to \(f^{a}()\) is also prohibitive as the retrieval operation in Eqn. 4 is discrete. Fortunately, recent research found the attention scores from the FiD decoder to each encoded inputs (Eqn. 5) are good approximations to the usefulness of augmentation documents Izacard and Grave (2020):
\[\text{FidAtt}(d^{a}_{i})=\sum_{\text{layers}}\sum_{\text{positions}}\sum_{\text{heads}}\text{Att}_{\text{Dec}\to\text{Enc}}(\text{Enc}(d^{a}_{i})). \tag{9}\]
It sums the attentions from \(g^{\text{MoMA}}()\)'s special token at the decoder's [CLS] position over all layers, input positions, and attention heads. Ideally, higher \(\text{FidAtt}()\) is assigned to \(d^{a}_{i}\) that provides useful contextual information.
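A minimal sketch of this aggregation is given below, assuming the decoder-to-encoder attention maps have been stacked across layers and that the token span of each augmentation document within the fused encoder sequence is known; both assumptions are illustrative rather than part of the original description.

```python
import torch

def fid_attention_scores(cross_attn, doc_spans, cls_pos=0):
    """Minimal sketch of FidAtt() in Eqn. 9, an approximation of document usefulness.

    cross_attn: [layers, B, heads, tgt_len, src_len] decoder-to-encoder attention
    doc_spans:  list of (start, end) index ranges of each augmentation document
                inside the fused encoder sequence
    Returns:    [B, K] attention mass received by each augmentation document
    """
    # attention from the decoder [CLS] position, summed over layers and heads
    att = cross_attn[:, :, :, cls_pos, :].sum(dim=(0, 2))            # [B, src_len]
    # then summed over the encoder positions belonging to each document
    return torch.stack([att[:, s:e].sum(-1) for (s, e) in doc_spans], dim=-1)

# toy example: 2 layers, batch of 2, 4 heads, 1 decoder position, 3 docs of 10 tokens each
cross_attn = torch.rand(2, 2, 4, 1, 30)
scores = fid_attention_scores(cross_attn, doc_spans=[(0, 10), (10, 20), (20, 30)])
print(scores.shape)  # torch.Size([2, 3])
```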
Previously, FidAtt scores were often used as soft labels for the augmentation model Izacard and Grave (2020); Izacard et al. (2022). Doing so with memory mixtures is risky, as the signal is sparse and overfits to the memory resources that appear earlier in training, which are the only ones available for the decoder to attend to. To improve learning robustness, we introduce ANCE-style hard negative mining to train the augmentation component as well.
First, we formulate the positive set of augmentation documents as:
\[D^{a+}=D^{s+}\cup\text{Top-N}_{\text{FidAtt}(d^{a}_{i}),D^{a}}. \tag{10}\]
which combines relevant documents \(D^{s+}\) and the augmenting ones that received N-highest attention scores from \(g^{\text{MoMA}}()\). Then we pair them with hard negatives to formulate the training of \(f^{a}()\) as:
\[\mathcal{L}^{a}=\sum_{q^{s}}\sum_{d^{+}\in D^{a+}}\sum_{d^{-}\in D^{a-}}l(f^{a }(q^{s},d^{+}),f^{a}(q^{s},d^{-})); \tag{11}\]
\[D^{a-}\sim\text{ANN}^{\mathcal{M}}_{f^{a}(q^{s},\circ)}\setminus D^{a+}. \tag{12}\]
Notice the negatives for \(f^{a}()\) have comprehensive coverage from multiple corpora.
**Iterative Training.** The learning of \(f^{\text{MoMA}}()\) and \(f^{a}()\) is an iterative process that fits naturally into the training procedure of dense retrieval training with hard negatives. We follow the standard iterations in ANCE and construct the \(t\)-th training episode of MoMA:
1. Construct hard negatives \(D^{s-}\) via Eqn. 8 using weights \(f^{\text{MoMA}}_{t-1}()\) from the last episode;
2. Retrieve augmentation \(D^{a}\) via Eqn. 4 using weights \(f^{a}_{t-1}()\) from the last episode;
3. Train \(f^{\text{MoMA}}_{t}()\) as Eqn. 7;
4. Formulate new positive augmentation documents \(D^{a+}\), using updated attention scores from \(f^{\text{MoMA}}_{t}()\), and mine negative augmentation documents \(D^{a-}\) using \(f^{a}_{t-1}()\);
5. Train \(f^{a}_{t}()\) following Eqn. 11.
Both \(f^{\text{MoMA}}_{0}()\) and \(f^{a}_{0}()\) can be initialized with a BM25 warmed-up T5 retriever. Steps 1 and 3 above are inherited from standard dense retrieval training; the rest are introduced by MoMA. The additional computation on the training side mainly resides in updating the index for the memory mixture, a standard cost in retrieval-augmented language models Guu et al. (2020); Izacard et al. (2022).
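The toy skeleton below traces the five steps of one training run. Retrieval, training, and attention scoring are replaced by trivial placeholders (the "retrievers" are just name tags), so it only illustrates the control flow, not the actual implementation.

```python
import random

def ann_search(retriever, queries, corpus, exclude=frozenset(), k=2):
    # placeholder for ANN retrieval over an index built from `corpus`
    return [d for d in corpus if d not in exclude][:k]

def train_retriever(retriever, positives, negatives, augmentation=None):
    return retriever  # placeholder for one episode of contrastive training (Eqn. 7 / 11)

def top_attended(end_model, aug_docs, n=1):
    # placeholder for selecting the Top-N FidAtt-scored augmentation documents (Eqn. 10)
    return set(random.sample(aug_docs, min(n, len(aug_docs))))

queries, positives = ["q1"], {"d+"}
source_corpus = ["d+", "d1", "d2", "d3"]
memory_mixture = source_corpus + ["wiki1", "wiki2", "mesh1"]
f_moma, f_aug = "end-retriever", "augmentation-retriever"

for episode in range(3):
    d_neg = ann_search(f_moma, queries, source_corpus, exclude=positives)     # step 1
    d_aug = ann_search(f_aug, queries, memory_mixture)                        # step 2
    f_moma = train_retriever(f_moma, positives, d_neg, augmentation=d_aug)    # step 3
    a_pos = positives | top_attended(f_moma, d_aug, n=1)                      # step 4
    a_neg = ann_search(f_aug, queries, memory_mixture, exclude=a_pos)
    f_aug = train_retriever(f_aug, a_pos, a_neg)                              # step 5
```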
**Zero-Shot Retrieval with Plug in Memories.** To perform zero-shot retrieval on unseen tasks, MoMA first retrieves augmented documents using \(f^{a}()\) from \(\mathcal{M}\) for the target query \(q^{t}\), and retrieves target documents \(d^{t}\in C^{t}\) with the augmented model \(f^{\text{MoMA}}()\) without changing any model parameters. MoMA allows \(f^{a}()\) to attend over the target corpus as well if it is plugged in: \(\mathcal{M}=\mathcal{M}\cup C^{t}\setminus C^{s}\), which conveys in-domain information. The augmenting corpus can also be engineered by users manually to inject their preference or domain knowledge, e.g., as "memory engineering". In this work we focus on swapping out the source corpus for the target corpus; we leave other explorations for future work.
## 4 Experimental Methodologies
**Datasets.** We choose the MS MARCO passage dataset Bajaj et al. (2016) as the source-domain dataset, whereas the target domains come from the 18 datasets in the BEIR benchmark Thakur et al. (2021), which include biomedical, scientific, and financial texts. More details can be found in Appendix A.1. The evaluation metric, NDCG@10, is the same as in the BEIR benchmark; it measures the Normalized Discounted Cumulative Gain Wang et al. (2013) of the top 10 predictions, and a higher NDCG@10 indicates better performance.
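For reference, a minimal sketch of NDCG@10 for a single query is given below. It uses one common formulation of the gain and discount and computes the ideal ranking from the labels passed in; the official BEIR toolkit relies on standard TREC evaluation tooling, so this is only illustrative.

```python
import math

def ndcg_at_k(ranked_relevances, k=10):
    """NDCG@k for one query; `ranked_relevances` are graded relevance labels of the
    retrieved documents, listed in rank order."""
    def dcg(rels):
        return sum((2 ** r - 1) / math.log2(i + 2) for i, r in enumerate(rels[:k]))
    ideal = dcg(sorted(ranked_relevances, reverse=True))
    return dcg(ranked_relevances) / ideal if ideal > 0 else 0.0

print(round(ndcg_at_k([1, 0, 2, 0, 1], k=10), 3))
```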
**Augmenting Corpora.** During training, the mixture-of-memory is composed of the source training corpus (MARCO), Wikipedia, and a medical knowledge graph. We use the Wikipedia chunks preprocessed by Karpukhin et al. (2020) without further processing1. The medical knowledge graph is extracted from the Medical Subject Headings (MeSH)2, an open-source database for indexing and cataloging biomedical and health-related information. Since it is hierarchical in structure, we linearize it by concatenating spans with text information. During testing, we directly replace MARCO with the corresponding document sets from BEIR. Each task from BEIR is augmented independently. More dataset and preprocessing details can be found in Appendix A.1.
Footnote 1: [https://huggingface.co/datasets/wiki_dpr](https://huggingface.co/datasets/wiki_dpr)
Footnote 2: [https://www.ncbi.nlm.nih.gov/mesh/](https://www.ncbi.nlm.nih.gov/mesh/)
**Baselines and Model Choices.** We compare our MoMA with standard sparse and dense retrieval models on BEIR. We also compare MoMA with advanced
approaches that are specifically designed for zero-shot generalization. They involve techniques that are not directly comparable with this paper, including pretraining on extra data, in-domain continuous pretraining, and generating target pairs using another pretrained generative model. In addition, some baselines use larger-scale language models as their backbones. We list the details of baselines in Appendix A.2.
As a plug-and-play method, MoMA can be combined with other techniques. We instantiate MoMA on two versions of T5 model checkpoints. The primitive **MoMA (T5-ANCE)** is built on the original T5 model checkpoint. By comparing it with T5-ANCE, we can clearly observe the performance gain brought by MoMA. To demonstrate that it can integrate techniques from other models to achieve higher performance, we apply MoMA with a better pretrained T5-based model. Following previous work Gao and Callan (2022); Yu et al. (2022), we continuously trained the T5 model on the MARCO corpus using a sentence-level contrastive loss, combined with the original masked language modeling loss. We then performed the same MoMA training on top of the continuously pretrained T5 checkpoint and denote it as **MoMA (COCO)**. Both **MoMA (T5-ANCE)** and **MoMA (COCO)** are trained iteratively with ANCE-style Xiong et al. (2020) hard negatives; the only difference is the initialization checkpoint. We compare their pretraining details with other models in Table 2. Unlike previous work Yu et al. (2022), we did not include target datasets and augmenting corpora in the COCO pretraining stage. Since MARCO contains only 0.5M documents, it adds less computational overhead compared to other methods listed in the table, e.g., Contriever.
**Implementation Details.** For MoMA, we use the T5-base Raffel et al. (2019) architecture (12-layer Transformer, 768 hidden size) by directly loading the checkpoint from HuggingFace3. To warm up the language model for dense retrieval, we followed Xiong et al. (2020) to further train it using BM25 negatives for 10 epochs. After warming up, we jointly trained the two components for three episodes, each episode including three training epochs. After three joint episodes, the end retriever reaches the best performance on MSMARCO, so we select this checkpoint for evaluation. The ratio between positive and hard negative pairs is 1:7 for both models. The main hyperparameters in MoMA include the total number of grounding documents \(K\) and the attention threshold number N in Equation 10. We directly set \(K\)=10 and N=5 without any parameter tuning. More details on hyperparameters and experimental settings can be found in Appendix A.3.
Footnote 3: [https://huggingface.co/t5-base](https://huggingface.co/t5-base)
## 5 Evaluation Results
Our experiments evaluate the zero-shot ability of MoMA, its performance with different memory sources, the influence of memory mixture learning, and the benefits of plug-in memory.
### Zero-Shot Retrieval Accuracy and Efficiency
The retrieval accuracy of MoMA and baselines are listed in Table 1. Besides baselines of similar parameter count, we also include larger models (GTR\({}_{\text{large}}\)) or those using multiple vectors per document (ColBERT). MoMA (COCO) shows the strongest zero-shot accuracy against previous state-of-the-art methods that do continuous contrastive pretraining (coCondenser), generate pseudo labels (GenQ), or consume additional training signals
\begin{table}
\begin{tabular}{l|c c c c c c c c c c c c} \hline \hline & BM25 & DPR & ANCE & T5-ANCE & coCondenser & GenQ\({}^{\dagger}\) & ColBERT & Contriever & GTR\({}_{\text{base}}\) & GTR\({}_{\text{large}}\) & **MoMA (T5-ANCE)** & **MoMA (COCO)** \\ \hline
**Parameter1** & — & 110M & 110M*2 & 110M & 66M*18 & 110M & 110M & 353M & 110M*2 & 110M*2 \\ \hline TREC-COVID & 0.656 & 0.575 & 0.654 & 0.653 & 0.715 & 0.619 & 0.677 & 0.596 & 0.539 & 0.557 & **0.672** & 0.761 \\ BioSAO & 0.465 & 0.232 & 0.306 & 0.322 & 0.318 & 0.398 & **0.474** & — & 0.271 & 0.320 & 0.372 & 0.371 \\ NPC-corpus & 0.325 & 0.210 & 0.237 & 0.275 & 0.307 & 0.319 & 0.305 & 0.328 & 0.308 & 0.329 & 0.307 & **0.333** \\ NQ & 0.329 & 0.398 & 0.446 & 0.452 & 0.494 & 0.358 & 0.524 & 0.498 & 0.495 & **0.547** & 0.490 & 0.544 \\ HotperQA & 0.603 & 0.371 & 0.456 & 0.487 & 0.566 & 0.534 & 0.593 & **0.638** & 0.535 & 0.579 & 0.539 & 0.589 \\ FiQA-2018 & 0.236 & 0.274 & 0.295 & 0.294 & 0.285 & 0.308 & 0.317 & 0.329 & 0.349 & **0.424** & 0.320 & 0.329 \\ Signal-1M & **0.330** & 0.238 & 0.249 & 0.246 & 0.274 & 0.281 & 0.274 & — & 0.261 & 0.265 & 0.258 & 0.264 \\ TREC-NEWS & 0.398 & 0.366 & 0.382 & 0.379 & 0.389 & 0.396 & 0.393 & — & 0.337 & 0.343 & 0.413 & **0.453** \\ Robust4 & 0.408 & 0.344 & 0.392 & 0.412 & 0.399 & 0.362 & 0.391 & — & 0.437 & 0.470 & 0.469 & **0.475** \\ ArgusAn & 0.414 & 0.414 & 0.415 & 0.415 & 0.411 & 0.493 & 0.233 & 0.446 & 0.511 & **0.525** & 0.438 & 0.463 \\ Touch-2020 & **0.367** & 0.208 & 0.240 & 0.312 & 0.190 & 0.182 & 0.202 & 0.230 & 0.205 & 0.219 & 0.271 & 0.299 \\ Qoar & 0.789 & 0.824 & 0.852 & 0.836 & 0.863 & 0.830 & 0.584 & 0.865 & 0.881 & **0.890** & 0.847 & 0.843 \\ DBPedia-entity & 0.313 & 0.236 & 0.281 & 0.290 & 0.356 & 0.328 & 0.392 & **0.413** & 0.347 & 0.391 & 0.347 & 0.383 \\ SICDODS & 0.158 & 0.170 & 0.122 & 0.115 & 0.140 & 0.143 & 0.145 & **0.165** & 0.149 & 0.158 & 0.143 & 0.145 \\ Fever & 0.753 & 0.589 & 0.669 & 0.655 & 0.678 & 0.669 & **0.771** & 0.758 & 0.660 & 0.712 & 0.723 & 0.745 \\ Climate-Fewer & 0.213 & 0.176 & 0.198 & 0.194 & 0.184 & 0.175 & 0.184 & 0.237 & 0.241 & **0.262** & 0.235 & 0.233 \\ SeifAet & 0.655 & 0.475 & 0.507 & 0.566 & 0.660 & 0.664 & 0.671 & **0.677** & 0.600 & 0.639 & 0.632 & 0.630 \\ OQADapStack & 0.299 & 0.281 & 0.296 & 0.283 & 0.330 & 0.347 & 0.350 & 0.345 & 0.357 & **0.384** & 0.283 & 0.294 \\ \hline Contriever Sub Avg & 0.437 & 0.368 & 0.408 & 0.416 & 0.438 & 0.425 & 0.445 & 0.460 & 0.422 & 0.471 & 0.453 & **0.471** \\ Avg & 0.428 & 0.352 & 0.391 & 0.399 & 0.417 & 0.410 & 0.431 & — & 0.416 & 0.444 & 0.436 & **0.453** \\ \hline \hline \end{tabular}
\end{table}
Table 1: NDCG@10 on the BEIR benchmark. We also include an averaged score on the datasets used by Contriever for a fair comparison. The best result on each task is marked in bold. An \({}^{*}\) denotes an unfair comparison, as NQ is used in training for GTR. \(\dagger\): GenQ generated pseudo labels to train an independent model for each task. \(\ddagger\): Larger models
in both the continuous pretraining and finetuning phases (GTRbase). MoMA (T5-ANCE) also achieved nearly comparable zero-shot accuracy against larger models like GTRlarge, and ColBERT, which scales up the number of vectors per document (one per token). This confirms that retrieval augmentation provides another path to improve language models' generalization ability besides scaling up. MoMA (T5-ANCE) also outperforms T5-ANCE, which MoMA (T5-ANCE) uses as a subroutine for retrieval augmentation, on all but one retrieval task, showing the robustly improved generalization ability from the plug-in mixture of memory.
We evaluate the efficiency of MoMA in two stages: offline model training and online inference. For offline training (Table 2), MoMA (T5-ANCE) is **significantly cheaper** than other methods as it does not require pretraining on large external corpora, which saves hundreds of hours of training time. MoMA (COCO) additionally pretrains on MARCO for 50k steps, far fewer than the other compared methods. For online inference, like other retrieval-enhanced language models, MoMA imposes the additional cost of retrieval augmentation on top of the baseline T5-ANCE. We provide a detailed efficiency analysis of MoMA in Table 3. The online latency is measured on one query and 100 retrieved documents. Due to the query augmentation, query encoding is more costly and takes about 55ms per query. Even with this augmentation cost, the total online inference cost of dense retrieval is 64ms, only slightly above the BM25 retrieval latency. The ANN retrieval itself is very efficient, taking only 9ms. In addition, the complexity of ANN retrieval is sub-linear in the corpus size in most ANN frameworks, such as FAISS. Thus the extra round of ANN retrieval in MoMA is not the bottleneck even when the size of the memory mixture scales up.
### Performance with Different Memories
Table 4 evaluates how MoMA behaves under different combinations of external memories. Compared with MoMA (T5-ANCE), MoMA (COCO) may lean towards the MARCO corpus since it is continuously pretrained on it. To avoid an unfair comparison between MARCO and other corpora, we choose MoMA (T5-ANCE) as the _Full_ model version for ablation studies. Unsurprisingly, using a single out-of-domain memory for retrieval augmentation does not help: for example, even though MARCO is the source-domain corpus, grounding solely on it reduces zero-shot accuracy. MeSH as the sole augmenting corpus also lowers performance, even on some medical retrieval tasks such as BioASQ. Interestingly, when we expand the memory to include MARCO, Wiki, and MeSH, but keep the target corpus excluded (_w/o Target_), MoMA exhibits better accuracy than the no-memory T5-ANCE. Our conclusion is that more memory sources achieve better generalization, especially when no target-domain information is available.
In the _Full_ setting, the 3-memory mixture of MARCO, Wiki, and MeSH is jointly learned with final task at training time. At test time, MARCO is swapped out for the target corpus. The _Full_ improves zero-shot accuracy over both the _w/o Target_ setting (where the target corpus is excluded at test time), and the _w/o Learning_ setting (wherein the augmentation component is not learned). As expected, plugging in the target corpus at test time is the most valuable source of generalization power. It is also the most realistic, as access to the target corpus may only be available at testing time.
### Effect of Memory Mixture Learning
To study the effect of our joint learning mechanism on the memory mixture, we compare it with the recent state-of-the-art Attention Distillation (ADist), which was first used in Izacard and Grave (2020) and recently updated in a parallel work Izacard et al. (2022). It jointly trains the augmentation model using attention scores from the end language model as pseudo-labels. We also enrich ADist with relevance labels from MARCO for more direct supervision, which was shown to be effective in distilling a dense retriever from a stronger cross-encoder ranking model Hofstatter et al. (2021). Similar to the previous section, to exclude the performance gain brought by contrastive pretraining, we choose MoMA (T5-ANCE) as our own method for comparison. The performances of these joint learning methods are listed in Table 5. We pick six BEIR tasks whose domains are closely related to the augmentation corpora: TREC-COVID, BIOASQ, and NFCorpus are medical search tasks closely related to MeSH; NQ, HotpotQA, and FEVER are all Wikipedia-based. The results show that ADist, either standalone or enriched with MARCO labels, does not improve the final accuracy compared to using a supervised dense retriever as the augmentation component without joint learning. The main difference is that the supervised retriever has been trained effectively using hard negative sampling Xiong et al. (2020); jointly learning with soft labels but without hard negatives degrades the augmentation accuracy. Hence, MoMA is a simple technique to learn the end-task signals via the attention scores together with hard negatives, which improves quality over a supervised retriever alone.
To further illustrate the joint training process, we track the attention scores of documents from different
\begin{table}
\begin{tabular}{l|c c} \hline \hline
**Operation** & **Offline** & **Online** \\ \hline BM25 Index Build & 1.8h & — \\ BM25 Retrieval Per Query & — & 43ms \\ \hline
**MoMA Inference** & & \\ Encoding of Corpus/Per Doc & 1.5h/4.5ms & — \\ Query Encoding & — & 55ms \\ ANN Retrieval (batched q) & — & 9ms \\ Dense Retrieval Total & — & 64ms \\ \hline
**MoMA Training** & & \\ Encoding of Corpus/Per Doc & 1.5h/4.5ms & — \\ ANN Index Build & 10s & — \\ Neg Construction Per Batch (32 queries) & 45ms & — \\ Back Propagation Per Batch (32 queries) & 330ms & — \\ \hline \hline \end{tabular}
\end{table}
Table 3: Efficiency of MoMA search and training.
memory sources as well as their ratio in the augmentation set in Figure 2. We also split MARCO documents by whether they are labeled as **Relevant (Rel)** for the corresponding query.
Firstly, MoMA learns to increasingly attend to, and retrieve, relevant documents from the memory mixture throughout training. In Figure 2(a), more attention is paid to MARCO Relevant documents than to any other type in the memory. Although the number of MARCO Relevant documents is not significant as a percentage of the augmenting set in Figure 2(c), a query-level analysis confirms that the percentage of queries having at least one relevant document in the augmenting set increases from 46% in Epi-0 to 62% in Epi-2.
This apparent discrepancy can be explained by the fact that MARCO has only one relevant label per query on average, leaving plenty of room for other types of documents to be included in the augmenting set.
Secondly, the amount of attention paid to certain types of documents by MoMA is positively correlated with their representation in the augmenting set. This confirms that the joint learning effectively conveys the feedback signals from the end model to the augmentation component. For instance, in Figure 2(a), MoMA pays a high level of attention to MARCO Other documents, a signal reflected in the composition of its augmentation set in Figure 2(c). Even though MARCO Other documents were not labeled relevant for the query, they can still prove valuable as augmenting documents because they may contain partial information that helps query understanding Lavrenko and Croft (2017), or they were simply not annotated in MARCO's sparse labels Bajaj et al. (2016). In comparison, the correlation of the two in ADist is weak, as the model seems to include 60% of augmenting documents from MeSH, far greater than the fraction of medical queries in MARCO.
### Generalization of Plug-In Memory
In the previous section, we observed how MoMA learns to attend to, and retrieve, informative documents from memories on which it was trained. In this section, we examine the zero-shot behavior of MoMA (T5-ANCE) on new corpora plugged-in at test time (keeping Wiki and MeSH as before).
Figure 3 compares documents from the plugged-in target versus the remaining memory mixture in terms of membership in the augmenting set (Doc Ratio) and attention. Again, on all tasks, MoMA (T5-ANCE) heavily attends to - and successfully retrieves - in-domain documents, even if those in-domain documents were only just plugged in. This confirms that the augmentation model achieves the zero-shot ability to capture relevant information from unseen corpora.
In the medical domain, the model pays more attention
\begin{table}
\begin{tabular}{l|c|c c c c|c c c} \hline \hline & **No Memory** & \multicolumn{4}{c|}{**Single Memory**} & \multicolumn{4}{c}{**Memory Mixture**} \\ \cline{2-9} & T5-ANCE & MARCO & Wiki & MeSH & Target & w/o Learning & w/o Target & Full \\ \hline TREC-COVID & 0.653 & 0.576 & 0.592 & 0.669 & 0.731 & 0.759 & 0.664 & **0.762** \\ BioASQ & 0.322 & 0.247 & 0.262 & 0.219 & 0.361 & 0.359 & 0.271 & **0.372** \\ NFCorpus & 0.275 & 0.295 & 0.302 & 0.282 & **0.319** & 0.317 & 0.301 & 0.307 \\ NO & 0.452 & 0.472 & 0.486 & 0.393 & 0.483 & **0.510** & 0.484 & 0.490 \\ HotpotQA & 0.487 & 0.481 & 0.519 & 0.462 & 0.538 & **0.539** & 0.520 & **0.539** \\ FiQA-2018 & 0.294 & 0.296 & 0.286 & 0.280 & **0.320** & 0.304 & 0.285 & **0.320** \\ Signal-1M & 0.246 & 0.239 & 0.225 & 0.238 & 0.250 & 0.248 & 0.240 & **0.258** \\ TREC-NEWS & 0.379 & 0.381 & 0.391 & 0.372 & **0.416** & 0.410 & 0.398 & 0.413 \\ Robust04 & 0.412 & 0.435 & 0.443 & 0.428 & **0.483** & 0.446 & 0.452 & 0.469 \\ Arguna & 0.415 & 0.439 & 0.438 & **0.442** & 0.429 & 0.427 & 0.438 & 0.438 \\ Touche-2020 & 0.312 & 0.281 & 0.281 & 0.252 & **0.331** & 0.275 & 0.272 & 0.271 \\ Quora & 0.836 & 0.809 & 0.798 & 0.835 & 0.781 & 0.813 & 0.812 & **0.847** \\ DBPedia-entity & 0.290 & 0.340 & 0.341 & 0.287 & 0.335 & 0.331 & 0.342 & **0.347** \\ SCIDOCS & 0.115 & 0.128 & 0.121 & 0.130 & **0.146** & 0.134 & 0.127 & 0.143 \\ Fever & 0.655 & 0.663 & 0.735 & 0.610 & 0.694 & 0.718 & **0.737** & 0.723 \\ Climate-Fever & 0.194 & 0.231 & 0.238 & 0.231 & 0.228 & 0.222 & **0.240** & 0.235 \\ SciFact & 0.566 & 0.583 & 0.587 & 0.585 & 0.624 & 0.618 & 0.598 & **0.632** \\ CQADupStack & **0.283** & 0.207 & 0.218 & 0.203 & **0.283** & 0.235 & 0.215 & **0.283** \\ \hline Avg & 0.399 & 0.395 & 0.403 & 0.384 & 0.431 & 0.426 & 0.411 & **0.436** \\ \hline \hline \end{tabular}
\end{table}
Table 4: NDCG@10 of MoMA (T5-ANCE) under different memory compositions: no memory, single memory, and a mixture of memories. _w/o Learning_ uses the end retriever to select augmenting documents without use of an augmentation component. _w/o Target_ excludes the target from memory.
\begin{table}
\begin{tabular}{l|c c c c c c c} \hline \hline Distillation Method & TREC-COVID & BIOASQ & NFCorpus & NQ & HotpotQA & FEVER & **Avg** \\ \hline
**Soft Attention Distill** & & & & & & & \\ ADist Izacard et al. (2022) & 0.609 & 0.185 & 0.227 & 0.351 & 0.387 & 0.615 & 0.396 \\ ADist + MSMARCO rel & 0.664 & 0.220 & 0.255 & 0.397 & 0.394 & 0.624 & 0.426 \\
**w/o Distilling (Fixed)** & 0.741 & 0.361 & 0.301 & 0.472 & 0.513 & 0.684 & 0.512 \\
**MoMA (T5-ANCE)** & **0.762** & **0.372** & **0.307** & **0.490** & **0.539** & **0.723** & **0.532** \\ \hline \hline \end{tabular}
\end{table}
Table 5: Zero-shot Performances of different distillation methods. We observe consistent trend on all BEIR datasets. We present results on 6 representative datasets from Wikipedia or medical domains.
to MeSH documents, especially on the TREC-COVID task, since MeSH includes high-quality, up-to-date information related to COVID-19. Wikipedia documents received more attention on Wiki-centric tasks like FEVER, as expected. Some tasks may need a small amount of precise information from Wikipedia to answer the detailed question, e.g., in HotpotQA. As in the training process, there is a non-trivial correspondence between the attention score of a memory and its membership in the augmentation set.
### Case Studies
Table 6 shows examples of how augmenting documents chosen by MoMA can provide valuable contextual information for the query. The first example is a training query from MARCO, where the augmenting documents help disambiguate the query word "rating". In the second one, documents from the official Wiki and HotpotQA's Wiki corpus are descriptions of the two entities in HotpotQA's comparison question. It illustrates how MoMA provides more comprehensive augmentation by incorporating information from different sources.
## 6 Conclusion
In this paper we propose a new plug-in mixture-of-memory mechanism for the retrieval augmented language models to improve their zero-shot ability on the dense retrieval task. To learn the memory mixture we develop a new joint learning approach that trains the augmentation component using the positive signals from the end task, the language model's attention scores, and hard negatives retrieved from the mixture of augmentation corpora. This leads to our final model MoMA (T5-ANCE) and MoMA (COCO) that achieve strong zero-shot accuracy on 18 retrieval tasks included in BEIR. Our analysis shows the importance of augmenting with diverse memory sources and in-domain information for robust generalization. We also share our observations and insights on how the model learns to leverage the augmentation information from multiple corpora during training and testing. We hope our findings and illustrations can inspire more future research in better augmenting language models, to provide other alternatives to achieve generalization ability beyond solely relying on model scale.
\begin{table}
\begin{tabular}{l|l} \hline \hline
**Queries** & **Augmentation Docs** \\ \hline
\multicolumn{2}{l}{**Training**} \\ \hline
**[Marco]** What is hotel transylvania rated & **[Marco]** It is rated PG for some scary images, action and rude humor. **[Wiki]** Another review aggregate calculated an average score of 47 out of 100, indicating “mixed or average reviews”. \\ \hline
\multicolumn{2}{l}{**Zero-Shot Testing**} \\ \hline
**[HotpotQA]** (comparison question involving Scott Derrickson and Ed Wood) & **[Wiki]** Scott Derrickson (born July 16, 1966) is an American director, screenwriter and producer. **[HotpotQA]** Edward Davis Wood Jr. (October 10, 1924 -- December 10, 1978) was an American filmmaker, actor, writer, producer, and director. \\ \hline \hline \end{tabular}
\end{table}
Table 6: MoMA retrieves augmenting documents during training (Marco) and testing (BEIR).
Figure 3: The inclusion of Plug-In memory during testing (grouped by the Wiki and Medical domains).
Figure 2: Grounding component breakdown for different distillation methods in each learning iteration. We display the regularized doc and att. score ratio of documents from different augmentation sources.
## Limitations
Although MoMA (T5-ANCE) and MoMA (COCO) achieve strong zero-shot performance, we mainly verify their efficacy through empirical performance on BEIR tasks, where the target corpora, Wiki, and MARCO serve as readily available retrieval sources. In real-world scenarios, the grounding corpora usually need to be customized according to query domains and user needs. Thus, how to choose effective grounding corpora and efficiently evaluate their relative contributions remains an open problem. Such analyses would go beyond our empirical setting and reveal wider application scenarios for MoMA.
## Ethics Statement
All data in this study are publicly available and used under ethical considerations. Text and figures in the paper are used for illustration only; they do not represent the ethical attitudes of the authors.
|
2306.00972 | Improving and Benchmarking Offline Reinforcement Learning Algorithms | Recently, Offline Reinforcement Learning (RL) has achieved remarkable
progress with the emergence of various algorithms and datasets. However, these
methods usually focus on algorithmic advancements, ignoring that many low-level
implementation choices considerably influence or even drive the final
performance. As a result, it becomes hard to attribute the progress in Offline
RL as these choices are not sufficiently discussed and aligned in the
literature. In addition, papers focusing on a dataset (e.g., D4RL) often ignore
algorithms proposed on another dataset (e.g., RL Unplugged), causing isolation
among the algorithms, which might slow down the overall progress. Therefore,
this work aims to bridge the gaps caused by low-level choices and datasets. To
this end, we empirically investigate 20 implementation choices using three
representative algorithms (i.e., CQL, CRR, and IQL) and present a guidebook for
choosing implementations. Following the guidebook, we find two variants CRR+
and CQL+ , achieving new state-of-the-art on D4RL. Moreover, we benchmark eight
popular offline RL algorithms across datasets under unified training and
evaluation framework. The findings are inspiring: the success of a learning
paradigm severely depends on the data distribution, and some previous
conclusions are biased by the dataset used. Our code is available at
https://github.com/sail-sg/offbench. | Bingyi Kang, Xiao Ma, Yirui Wang, Yang Yue, Shuicheng Yan | 2023-06-01T17:58:46Z | http://arxiv.org/abs/2306.00972v1 | # Improving and Benchmarking Offline Reinforcement Learning Algorithms
###### Abstract
Recently, Offline Reinforcement Learning (RL) has achieved remarkable progress with the emergence of various algorithms and datasets. However, these methods usually focus on algorithmic advancements, ignoring that many low-level implementation choices considerably influence or even drive the final performance. As a result, it becomes hard to attribute the progress in Offline RL as these choices are not sufficiently discussed and aligned in the literature. In addition, papers focusing on a dataset (_e.g._, D4RL) often ignore algorithms proposed on another dataset (_e.g._, RL Unplugged), causing isolation among the algorithms, which might slow down the overall progress. Therefore, this work aims to bridge the gaps caused by low-level choices and datasets. To this end, we empirically investigate 20 implementation choices using three representative algorithms (_i.e._, CQL, CRR, and IQL) and present a guidebook for choosing implementations. Following the guidebook, we find two variants \(\text{CRR}^{+}\) and \(\text{CQL}^{+}\), achieving new state-of-the-art on D4RL. Moreover, we benchmark eight popular offline RL algorithms across datasets under unified training and evaluation framework. The findings are inspiring: the success of a learning paradigm severely depends on the data distribution, and some previous conclusions are biased by the dataset used. Our code is available at [https://github.com/sail-sg/offbench](https://github.com/sail-sg/offbench).
## 1 Introduction
Deep Reinforcement Learning (RL) is of significant importance to solving sequential decision-making tasks, ranging from game playing [30; 37; 5] to robot control [28; 23; 32]. However, interacting with the environment is prohibitively expensive and dangerous in real-world safety-sensitive scenarios, which limits the applications of RL methods outside of simulators. Therefore, offline RL, which aims to learn agents from pre-collected experiences of arbitrary agents and thereby avoid online interaction, is receiving increasing attention. As a result, remarkable achievements have been made in recent years. Most of them aim to solve the distributional shift problem [29] by introducing constraints or regularizations in either the policy evaluation step [27; 46] or the policy improvement step [44; 13].
The rapid progress brings new challenges in benchmarking the advances in offline RL. First, offline RL algorithms contain many low-level design choices that are often not well-discussed or aligned in the literature. This makes it impossible to assess whether improvements are due to the algorithms or due to their implementations. Similar observations have been made by various studies [3; 11] in online RL that low-level choices play a critical role in driving performance and, thus, should not be overlooked. Second, multiple datasets are released to facilitate offline RL research, among which RL Unplugged [17] and D4RL [12] are the most popular ones. However, there is apparent isolation between them. That is, algorithms evaluated on one dataset [44; 34; 17] are often ignored by papers
focusing on another [27; 26; 46], and vice versa. As a result, the conclusions drawn in a paper might be highly biased by the dataset used. In addition, evaluation metrics might not be aligned and not directly comparable. For example, [43] considers the best score at training, while [26] reports the ending performance of the training process.
The key goal of this work is thus two-fold: 1) to investigate low-level algorithm choices in depth to better attribute the progress in offline RL; 2) to benchmark offline RL algorithms across datasets with a unified evaluation protocol to facilitate future research. We first summarize and implement 20 choices from the literature and select three representative algorithms for low-level implementation study, including Critic Regularized Regression (CRR) [44], Conservative Q-Learning (CQL) [27], and Implicit Q-Learning (IQL) [26]. Through careful alignment and ablation, we provide a guidebook for making low-level decisions in offline RL algorithms. Moreover, we develop two variants of CRR and CQL (which we refer to as CRR\({}^{+}\) and CQL\({}^{+}\)) based on the guidebook, which significantly improve upon their original implementations (CQL\({}^{+}\) by \(5\%\) and CRR\({}^{+}\) by \(33.8\%\)) and outperform the current state-of-the-art (SOTA) method.
Then, to benchmark and validate the generalization ability across datasets with different distributions, we select two algorithms (MuZero Unplugged [34] and CRR) from RL Unplugged, and six (BC, SAC [18], Onestep RL [7], TD3+BC [13], CQL, and IQL) commonly evaluated on D4RL. To eliminate the effect of codebases, we carefully re-implement all eight algorithms by strictly following their official implementations under a unified offline RL training framework. Then we conduct experiments on 26 tasks from 5 domains across D4RL and RL Unplugged. D4RL resembles the case where the data is generated by one or more _fixed_ agents, while RL Unplugged considers the abundant replays generated by prior RL agents that can be used to learn a new agent efficiently. Our findings are insightful: algorithms with policy constraints demonstrate better transferability across datasets, while value constraints, which produce lower-bounded values, generally yield worse performance on replay data. In addition, unconstrained off-policy algorithms, e.g., SAC and MuZero Unplugged, often fail on datasets generated by a mixture of agents, which contradicts the previous conclusion made by [34]. In summary, we make the following contributions:
* A unified training framework for various offline RL algorithms.
* A guidebook for low-level implementation choices in offline RL and two improved algorithms (CRR\({}^{+}\) and CQL\({}^{+}\)) that have never been recorded before.
* Insightful observations on dataset distributions and algorithmic designs and practical recommendations for algorithm selection.
## 2 Related Works
**Offline RL.** The biggest challenge of offline RL is the distribution shift between the learning policy and the behavior policy, _i.e._, the policy used to collect the offline dataset [29]. Due to the unconstrained Bellman update, the extrapolation error of unseen Q-values \(Q(s,a)\) will accumulate during training and eventually produce an erroneous policy. Therefore, most of the existing offline RL methods consider a conservative learning framework implemented as an additional soft constraint upon the RL objective. The key to conservative learning is encouraging the learning policy to stay close to the behavior policy so that it queries out-of-distribution (OOD) actions less frequently. Such a constraint can be imposed directly on the policy improvement step, _i.e._, on the policy [15; 45; 13; 36; 44]. For example, TD3+BC [13] adds an additional behavior cloning term as a regularizer for the policy to stay in the dataset manifold. Alternatively, the conservative constraints can be imposed indirectly on the policy evaluation step, _i.e._, on the Q-functions [27]. Differently, MuZero Unplugged has demonstrated its applicability to both online and offline RL settings without any modifications to its algorithmic structure [34]. However, as prior algorithms are evaluated only on either D4RL or RL Unplugged without intersection, it is hard to compare their performance directly. In this work, we unify eight popular algorithms under the same framework, ranging from relatively simple methods such as CQL and CRR [44] to the highly complex MuZero Unplugged [34], and experiment on both the D4RL and RL Unplugged datasets to provide a better understanding of the progress. We leave algorithms that require a significant change to the network architecture or computation cost, e.g., Decision Transformer [8] or SAC-n [2], for future study.
**Benchmarking RL Algorithms.** Reinforcement learning provides a principled way to solve sequential decision-making problems. However, it is notorious for its instability and sensitivity to
hyper-parameters and low-level implementation choices [40; 10; 22]. Such a phenomenon commonly exists in model-based RL [42], off-policy RL [41; 16], and on-policy RL [3; 11]. In addition, as discussed in [1], naive point estimation of RL returns might be statistically unstable. However, existing works on offline RL mainly compare their point estimations with results from prior published papers without careful tuning of hyper-parameters and implementation choices. Such a scheme may hinder understanding the real progress in the offline RL field. Hence, we present a comprehensive benchmarking for offline RL algorithms, covering eight popular algorithms ranging from constrained policy improvement [13; 44; 26] to constrained policy evaluation [27]. We evaluate all algorithms under a unified framework with three different metrics which report the extreme performance as well as the stability of algorithms.
## 3 Preliminaries
We consider a Markov Decision Process (MDP) denoted as a tuple \(\mathcal{M}=(\mathcal{S},\mathcal{A},p_{0}(s),p(s^{\prime}\mid s,a),r(s,a),\gamma)\), where \(\mathcal{S}\) and \(\mathcal{A}\) are the state and action spaces, \(p_{0}(s)\) is the initial state distribution, \(p(s^{\prime}\mid s,a)\) is the transition function, \(r(s,a)\) is the reward function, and \(\gamma\) is the discount factor. The target of reinforcement learning is to find a policy \(\pi^{*}(a\mid s)\) that maximizes the accumulative return
\[\pi^{*}=\arg\max_{\pi}\mathbb{E}\left[\sum_{t=0}^{\infty}\gamma^{t}r(s_{t},a_ {t})\right]\]
\[s_{0}\sim p_{0}(s),s^{\prime}\sim p(\cdot\mid s,a),a\sim\pi(\cdot\mid s). \tag{1}\]
In an actor-critic framework, policy optimization follows Bellman's expectation operator \(\mathcal{B}^{\pi}Q(s,a)=r(s,a)+\gamma\mathbb{E}_{s^{\prime}\sim p(\cdot\mid s,a),a^{\prime}\sim\pi(\cdot\mid s^{\prime})}\left[Q(s^{\prime},a^{\prime})\right]\), which alternates between policy evaluation and policy improvement. Given a policy \(\pi_{\theta}(a\mid s)\) and Q function \(Q_{\phi}(s,a)\), policy evaluation aims to learn the Q function by minimizing the prediction error \(\mathbb{E}_{\mu_{\pi_{\theta}}(s)\pi_{\theta}(a|s)}\left[(Q_{\phi}(s,a)-\mathcal{B}^{\pi_{\theta}}Q_{\phi}(s,a))^{2}\right]\), where \(\mu_{\pi_{\theta}}\) is the stationary distribution induced by \(\pi_{\theta}(a\mid s)\)[35]. On the other hand, policy improvement focuses on learning the optimal policy by maximizing the accumulative return approximated by the Q function, \(\mathbb{E}_{\mu_{\pi_{\theta}}(s)\pi_{\theta}(a|s)}\left[Q_{\phi}(s,a)\right]\).
However, as querying OOD actions is inevitable when sampling from \(\pi_{\theta}(a\mid s)\), both the policy improvement and evaluation steps are affected in offline RL setups. To alleviate this issue, conservative RL methods impose additional constraints on either the policy improvement step or the policy evaluation step to encourage the learning policy \(\pi_{\theta}\) to stay close to the behavior policy \(\pi_{\beta}\) that generates the dataset. Concretely, conservative policy improvement and conservative policy evaluation can be written as
\[\max_{\theta}\mathbb{E}_{\mu_{\pi_{\theta}}(s)\pi_{\theta}(a|s)} \left[Q_{\phi}(s,a)-\alpha_{1}C_{\theta,\phi,\beta}^{\pi}\right]\] \[\min_{\phi}\mathbb{E}_{\mu_{\pi_{\theta}}(s)\pi_{\theta}(a|s)} \left[(Q_{\phi}(s,a)-\mathcal{B}^{\pi_{\theta}}Q_{\phi}(s,a))^{2}+\alpha_{2}C_{ \theta,\phi,\beta}^{Q}\right]\]
where \(\alpha_{1}\) and \(\alpha_{2}\) are hyper-parameters, and \(C_{\theta,\phi,\beta}^{\pi}\) and \(C_{\theta,\phi,\beta}^{Q}\) are conservative constraints for the policy and value functions respectively. In the later part of this paper, we refer to these two approaches as the conservative policy evaluation and the conservative policy improvement.
## 4 Implementation Choices for Offline RL
RL algorithms are notorious for their instability and sensitivity to hyper-parameters and implementation choices [3; 11]. As a special case of RL, offline RL not only inherits this issue but also faces additional implementation difficulties around the conservative constraint terms during optimization. We first pick two baseline algorithms for case studies, CRR for conservative policy improvement and CQL for conservative policy evaluation, and investigate how better low-level choices enable these baselines to outperform the SOTA algorithm, IQL. In addition, we perform ablations on IQL. Our results demonstrate that the success of IQL highly depends on the choice of implementations.
### Study Design
We split the implementation choices into two categories: general RL choices and algorithm-specific choices. In this section, we discuss the general RL choices. Specifically, we focus on the gym
locomotion tasks (v2) of the D4RL dataset. For a fair comparison with reported results from [26], we report the average last step return over 3 seeds and 100 episodes. We pick a subset of the implementation choices for investigation, where the abbreviation used in Fig. 1 is bolded.
_Weight initialization scheme._ The initialization scheme of the last output layer of the network has a huge impact on the final performance [3]. We study three variants: orthogonal initialization with scale \(\sqrt{2}\) (**ORT-1.41**), orthogonal initialization with scale 0.01 (**ORT-0.01**), and the default Lecun normal initialization (**non-ORT**).
_Policy learning rate and scheduler._ For the Q function learning rate, we fix a commonly adopted value of \(3e^{-4}\). For the policy learning rate, we examine two configurations, \(1e^{-4}\) (**lr=1e-4**) and \(3e^{-4}\) (**lr=3e-4**), with a cosine learning rate scheduler.
_Reward normalization._ Reward normalization is one of the most important factors in RL [11]. We evaluate two settings: without reward normalization (**non-RN**) and reward normalization (**RN**) as \(r^{\prime}=r/(\max R-\min R)*1000\), where \(\max R\) and \(\min R\) denote the maximum and minimum trajectory returns of the dataset.
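A minimal sketch of this normalization over a dataset of trajectories (the dictionary layout is only illustrative):

```python
def normalize_rewards(trajectories):
    """Reward normalization (RN): r' = r / (max_R - min_R) * 1000, where max_R and
    min_R are the maximum and minimum trajectory returns in the dataset."""
    returns = [sum(traj["rewards"]) for traj in trajectories]
    scale = 1000.0 / (max(returns) - min(returns))
    for traj in trajectories:
        traj["rewards"] = [r * scale for r in traj["rewards"]]
    return trajectories

data = [{"rewards": [1.0, 2.0, 3.0]}, {"rewards": [0.5, 0.5]}]
print(normalize_rewards(data)[0]["rewards"])
```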
_Policy distribution parameterization._ We consider two variants of policy representation, tanh-squashed Gaussian \(a\sim\tanh(\mathcal{N}(\mu_{a},\sigma_{a}\mid s))\) (**TS**), or a clipped Gaussian distribution with tanh-squashed mean \(a\sim\operatorname{clip}(\mathcal{N}(\tanh(\mu_{a}),\sigma_{a}\mid s),-1,1)\) (**non-TS**). In addition, we also evaluate the influence of variance parameterization by either making it state-dependent (**SD**) or independent parameters (**non-SD**).
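The two parameterizations can be written with torch.distributions as in the sketch below; the policy head is a toy stand-in, and the state-independent standard deviation corresponds to the non-SD variant (an SD variant would instead predict the log-std from the state).

```python
import torch
import torch.nn as nn
from torch.distributions import Normal, TransformedDistribution
from torch.distributions.transforms import TanhTransform

hidden = torch.randn(4, 256)                     # stand-in for the policy trunk output
mu_head = nn.Linear(256, 6)
log_std = nn.Parameter(torch.zeros(6))           # state-independent std (non-SD)
mu = mu_head(hidden)

# TS: tanh-squashed Gaussian, a ~ tanh(N(mu, std))
ts_dist = TransformedDistribution(Normal(mu, log_std.exp()), [TanhTransform(cache_size=1)])

# non-TS: clipped Gaussian with tanh-squashed mean, a ~ clip(N(tanh(mu), std), -1, 1)
non_ts = Normal(torch.tanh(mu), log_std.exp())
action = non_ts.sample().clamp(-1.0, 1.0)

print(ts_dist.sample().shape, action.shape)
```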
_Layer normalization._ As Q-value over-estimation is a common issue in offline RL, we add Layer Normalization [4] to the policy and Q value networks (**LN**) and examine if it improves the numerical stability.
_Activation functions._ We choose two different activation functions, **relu** and **elu**[9]. The activation function is applied after layer normalization.
### Case Study: Conservative Q Learning
Conservative Q Learning (CQL) [27] focuses on directly regularizing the Q-value functions during optimization. CQL learns a lower-bound of the ground-truth Q values by implementing \(C^{Q}_{\theta,\phi,\beta}\) as
\[C^{Q}_{\theta,\phi,\beta}=\mathbb{E}_{s\sim\mathcal{D}}\left[\log\sum_{a}\exp Q (s,a)-\mathbb{E}_{a\sim\pi_{\beta}(a|s)}\left[Q(s,a)\right]\right]. \tag{2}\]
Intuitively, CQL encourages the agent to produce high Q-values for in-distribution actions (positive sample), while suppressing the Q-value of OOD actions (negative samples).
For CQL, we further ablate its _number of actions_ for negative examples. In practice, we use \(\pi_{\theta}(a\mid s)\), \(\pi_{\theta}(a\mid s^{\prime})\), and \(\mathcal{U}(-1,1)\) to generate negative samples, where \(s^{\prime}\) is the next state and \(\mathcal{U}\) is a uniform distribution. For each distribution \(\mathbf{a}=\mathbf{N}\) actions are sampled, where \(N\in\{10,30,50\}\).
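A minimal sketch of this penalty with sampled negative actions is shown below. The toy critic and policy head are placeholders, and the log-probability corrections and temperature used in practical CQL implementations are omitted for brevity; only the structure of Eqn. 2 with the three negative-sampling distributions is illustrated.

```python
import torch
import torch.nn as nn

obs_dim, act_dim, B, N = 17, 6, 32, 10
q_net = nn.Sequential(nn.Linear(obs_dim + act_dim, 256), nn.ReLU(), nn.Linear(256, 1))
policy_mu = nn.Linear(obs_dim, act_dim)          # crude stand-in for the policy mean

def q_values(states, actions):                   # states [B, D], actions [B, N, A] -> [B, N]
    s = states.unsqueeze(1).expand(-1, actions.size(1), -1)
    return q_net(torch.cat([s, actions], dim=-1)).squeeze(-1)

def sample_actions(states, n):                   # stand-in for a ~ pi(. | s)
    mu = torch.tanh(policy_mu(states)).unsqueeze(1).expand(-1, n, -1)
    return (mu + 0.1 * torch.randn_like(mu)).clamp(-1, 1)

s, s_next = torch.randn(B, obs_dim), torch.randn(B, obs_dim)
a_data = torch.rand(B, act_dim) * 2 - 1          # in-distribution (positive) dataset actions

# negative samples: N actions each from pi(.|s), pi(.|s'), and Uniform(-1, 1)
negs = torch.cat([sample_actions(s, N), sample_actions(s_next, N),
                  torch.rand(B, N, act_dim) * 2 - 1], dim=1)           # [B, 3N, A]

# Eqn. 2: push down logsumexp of Q over negatives, push up Q on dataset actions
penalty = (torch.logsumexp(q_values(s, negs), dim=1)
           - q_values(s, a_data.unsqueeze(1)).squeeze(1)).mean()
print(penalty.item())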
**CQL\({}^{+}\)**. The official implementation of CQL adopts \(\mathbf{a}=\mathbf{10}\), a policy learning rate of \(3e-4\), and relu activation for all networks. After sweeping, we observe that using \(\mathbf{a}=\mathbf{50}\), a policy learning rate of \(1e-4\), and elu activation, CQL\({}^{+}\) significantly improves the performance of CQL. In Fig. 1(a), CQL\({}^{+}\) achieves a total score of **731.9** on the gym-locomotion-v2 tasks of the D4RL benchmark, while the original implementation has a score of 698.5 [27]. Fig. 1(b) also details the ablation results of CQL. We observe that the tanh-squashed distribution is the most critical component of CQL, without which CQL would suffer a significant 17% performance drop. In addition, reward normalization hurts the performance of CQL. Moreover, a proper number of sampled actions, learning rate, and activation function also contribute significantly to the performance of CQL\({}^{+}\), while layer-norm and weight initialization schemes have a relatively minor impact on its performance.
### Case Study: Critic Regularized Regression
Critic Regularized Regression (CRR) [44] handles offline RL with conservative policy improvement. Concretely, it learns the policy by
\[\arg\max_{\pi}\mathbb{E}_{s,a\sim\mathcal{D}}\left[f(Q_{\phi},\pi,s,a)\log\pi_{\theta}(a\mid s)\right] \tag{3}\]
where \(\mathcal{D}\) is the dataset, and \(f\) is a non-negative scalar function whose value is monotonically increasing in \(Q_{\phi}\). One common choice for \(f\) is \(f(Q_{\phi},\pi,s,a)=\exp\left[Q_{\phi}(s,a)-\mathbb{E}_{a^{\prime}\sim\pi(\cdot\mid s)}Q_{\phi}(s,a^{\prime})\right]\). Such a formulation follows Advantage Weighted Regression (AWR) [31], where Eqn. (3) is derived as the closed-form solution to an optimization problem with \(C^{\pi}_{\theta,\phi,\beta}:D_{KL}\left[\pi_{\theta}(\cdot\mid s)\parallel\pi_{\beta}(\cdot\mid s)\right]\leq\epsilon\) as the constraint.
For CRR, we additionally ablate the following two components:
_Double Q learning._ As the original CRR adopts a single Q value structure, we additionally implement the double Q learning (**DQ**). Specifically, we use \(\min(Q_{1},Q_{2})\) as the final prediction of the Q value.
_Advantage normalization._ Advantage normalization (**AN**) normalizes the advantages in a batch [3] for numerical stability of advantages. In our CRR implementation, we implement this by normalizing the exponential advantage \(\exp A(s,a)\) over a batch.
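A sketch of the resulting weighted-regression loss, including the double Q (DQ) and advantage normalization (AN) choices above, is given below. The Gaussian policy head, the Monte-Carlo baseline, and the weight clipping are illustrative stand-ins rather than the exact CRR implementation.

```python
import torch
import torch.nn as nn

obs_dim, act_dim, B, m = 17, 6, 32, 4
q1 = nn.Sequential(nn.Linear(obs_dim + act_dim, 256), nn.ReLU(), nn.Linear(256, 1))
q2 = nn.Sequential(nn.Linear(obs_dim + act_dim, 256), nn.ReLU(), nn.Linear(256, 1))
policy = nn.Linear(obs_dim, 2 * act_dim)        # outputs mean and log-std

def min_q(s, a):                                 # double Q (DQ): pessimistic estimate
    x = torch.cat([s, a], dim=-1)
    return torch.min(q1(x), q2(x)).squeeze(-1)

s = torch.randn(B, obs_dim)
a = torch.rand(B, act_dim) * 2 - 1               # dataset actions

mu, log_std = policy(s).chunk(2, dim=-1)
dist = torch.distributions.Normal(torch.tanh(mu), log_std.exp())
logp = dist.log_prob(a).sum(-1)

# V(s) ~= mean_j Q(s, a_j), a_j ~ pi(.|s), used as the advantage baseline
a_samples = dist.sample((m,)).clamp(-1, 1)                       # [m, B, A]
v = torch.stack([min_q(s, a_samples[j]) for j in range(m)]).mean(0)

adv = min_q(s, a) - v
w = torch.exp(adv).clamp(max=20.0)               # exp-advantage weight f(), clipped for stability
w = w / w.mean()                                 # advantage normalization (AN) over the batch
crr_loss = -(w.detach() * logp).mean()           # weighted regression of Eqn. 3
print(crr_loss.item())
```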
**CRR\({}^{+}\)**. The original CRR implementation considers only a single Q-value network with ResNet [19]-based architectures. For a fair comparison with other baselines, we focus on a simple 3-layer network. After sweeping, we observe double Q learning and layer normalization are the two most critical items for its performance boost, without which the performance would drop by 20.8% and 16.5%. On the other hand, the choices of activation functions, weight initialization schemes, and policy representations are less important considerations for CRR. As suggested by Fig. 1(a), CRR\({}^{+}\) achieves a significant performance boost from a total score of 525.2 to **702.3** (33.8% improvement), and it even outperforms the SOTA method IQL (692.4).
### Case Study: Implicit Q Learning
Implicit Q-Learning (IQL) [26] is one of the SOTA methods in offline RL. Similar to OnestepRL [7], IQL learns the policy in SARSA [38] style without querying OOD actions during policy evaluation. To better approximate the maximum Q-value to allow multi-step dynamic programming, IQL performs
expectile regression during policy evaluation. Specifically, it introduces an additional value function \(V_{\psi}(s)\), and performs policy evaluation by
\[\min_{\psi}\mathbb{E}_{s,a\sim\mathcal{D}}\left[L_{2}^{\tau}\left(Q_{\phi}(s,a)-V_{\psi}(s)\right)\right]\] \[\min_{\phi}\mathbb{E}_{s,a,s^{\prime}\sim\mathcal{D}}\left[(r(s,a)+\gamma V_{\psi}(s^{\prime})-Q_{\phi}(s,a))^{2}\right], \tag{4}\]
where \(L_{2}^{\tau}\) is the expectile regression loss defined as \(L_{2}^{\tau}(x)=|\tau-\mathbb{1}(x<0)|x^{2}\) with hyper-parameter \(\tau\in(0,1)\) and the indicator function \(\mathbb{1}\). Intuitively, a larger \(\tau\) allows \(V_{\psi}(s)\) to better approximate \(\max_{a}Q(s,a)\). As a result, IQL performs Q learning without querying OOD actions. For policy improvement, IQL follows AWR as described in Eqn. (3).
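A minimal PyTorch-style sketch of the expectile-regression value update in Eqn. (4) is shown below; the network interfaces and names are assumptions for illustration.

```python
import torch

def expectile_loss(diff, tau=0.7):
    """L_2^tau(x) = |tau - 1(x < 0)| * x^2, averaged over a batch."""
    weight = torch.where(diff < 0, torch.full_like(diff, 1.0 - tau),
                         torch.full_like(diff, tau))
    return (weight * diff.pow(2)).mean()

def iql_value_loss(q_target, v_net, s, a, tau=0.7):
    with torch.no_grad():
        q = q_target(s, a)                    # target critic Q(s, a), held fixed here
    return expectile_loss(q - v_net(s), tau)  # fit V toward an upper expectile of Q
```

The critic is then regressed toward \(r(s,a)+\gamma V_{\psi}(s^{\prime})\), as in the second line of Eqn. (4).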
For IQL, we ablate its _expectile rate_ \(\tau\) (**exp=\(\tau\)**) and its _training scheme_. In particular, although the original paper describes IQL as following OnestepRL, performing policy improvement with a fixed, pre-learned value network (**non-JNT**), the official implementation jointly learns the policy and value networks (**JNT**).
We present the ablation results of IQL in Fig. 2(a). We observe that the official configuration of IQL already gives the best performance, and that IQL is highly sensitive to the choice of implementations. Five alternative implementations cause a significant performance drop of more than 100 points in its total return: the policy learning rate (drop of 14.4%), the use of state-independent variance in the policy distribution (drop of 14.6%), a high expectile rate \(\tau\) (drop of 16.2%), the absence of the joint policy and value training scheme (drop of 16.3%), and the absence of reward normalization (drop of 17.2%). Thus, careful implementation is critical to the strong performance of IQL.
### Recommendations of Implementation Choices
Through careful ablations over implementation choices and hyper-parameter configurations, we observe that all three algorithms require a careful choice of implementations. In addition to the algorithm-specific choices, for shared implementation choices as described in Sect. 4.1, we summarize their influences across three algorithms in Fig. 2(b). Specifically, CRR and CQL have a similar trend regarding implementation choices (6 out of 8), while IQL requires particular tuning on its own. We make a list of recommended implementation choices for prototyping new algorithms.
_Weight initialization schemes._ In contrast to on-policy algorithms [3], we observe that orthogonal initialization generally performs worse than Lecun initialization, regardless of the last layer weight scale (5 out of 6 cases).
_Policy learning rate._ In Fig. 2(b), we observe that although both CRR and CQL benefit from a smaller learning rate of 1e-4, IQL suffers from it with a sharp performance drop (-14.4%). We would recommend trying both learning rates (\(1e-4\) and \(3e-4\)) when implementing new algorithms.
_Reward normalization scheme._ Similar to learning rates, reward normalization has a divergent influence across algorithms. Both CRR and CQL show clear performance drops with normalized rewards (-8% and -8.7%), while it significantly improves the performance of IQL (17.2%). We would recommend trying out both choices for prototyping.
_Layer normalization._ Layer normalization contributes significantly to the success of CRR on continuous control tasks (18.8% improvement). Although both CQL and IQL encounter a performance drop, they are relatively minor (2.5% and 2.4%). Thus, we would recommend directly adding layer normalization to algorithms for prototyping.
_Activation function._ According to our ablation results, the choice of activation function has a relatively smaller impact than other choices, where only CQL gains 5% by switching from relu to elu. We would recommend starting with relu activation as a safer choice.
_Policy distribution parameterization._ We observe that using state-dependent variance generally improves CQL and CRR, and has a negligible influence on IQL. Nevertheless, tanh-squashed distributions have a strong impact on all three algorithms: CRR and CQL benefit from the tanh transformation, while IQL suffers from it. To this end, we would recommend starting with state-dependent variance, while trying both tanh-squashed and unsquashed distributions.
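For reference, the sketch below shows one way to implement a state-dependent, optionally tanh-squashed Gaussian policy head with torch.distributions; it is a generic illustration, not the exact parameterization used in any of the three codebases.

```python
import torch.nn as nn
from torch.distributions import Normal, TransformedDistribution
from torch.distributions.transforms import TanhTransform

class GaussianPolicy(nn.Module):
    def __init__(self, state_dim, action_dim, hidden=256, squash=True):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, action_dim)
        self.log_std = nn.Linear(hidden, action_dim)   # state-dependent variance
        self.squash = squash

    def forward(self, s):
        h = self.net(s)
        std = self.log_std(h).clamp(-5.0, 2.0).exp()
        dist = Normal(self.mu(h), std)
        if self.squash:                                # tanh-squashed variant
            dist = TransformedDistribution(dist, [TanhTransform(cache_size=1)])
        return dist
```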
We do not exhaust all the implementation choices but we believe the above-mentioned ablations could help prototype offline RL algorithms more easily.
## 5 Cross-Dataset Evaluation
One clear difference between offline RL and online RL is that environment interactions are replaced with a fixed dataset. An agent's performance depends heavily on the dataset used, and generalization across datasets should therefore be an essential consideration in performance evaluation.
### Evaluation Setups
**Dataset.** We further evaluate a wide range of algorithms across two datasets with distinct distributions, RL Unplugged [17] and D4RL [12]. RL Unplugged considers a wide range of tasks covering both continuous actions, _e.g._, the DeepMind Control Suite [39], and discrete actions, _e.g._, Atari games. The dataset is generated by downsampling the replay buffer of an online agent, and thus contains a continuous spectrum of the agent's past behavior, _i.e._, the complete exploration process during RL training. Such a setup is useful for fast iteration of new agents directly from previously collected experience when the environment is slow to interact with. In contrast, most tasks in D4RL focus on continuous control with data generated by one or more fixed agents, except for the full-replay tasks. Hence, the data distribution of D4RL is significantly different from that of RL Unplugged, and D4RL data tends to follow a multi-modal distribution.
| Task | BC | 10% BC | Onestep | MuZero | SAC | TD3+BC | IQL | CRR\({}^{+}\) | CQL\({}^{+}\) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| halfcheetah-medium-v2 | 42.6 | 41.6 | 47.9 | 25.0 | 36.3 | **48.1** | 47.5 | 47.8 | **48.1** |
| hopper-medium-v2 | 56.8 | 56.2 | 61.1 | 2.2 | 1.6 | 54.7 | 62.2 | 65.4 | **69.1** |
| walker2d-medium-v2 | 69.5 | 71.4 | 78.1 | 0.1 | 40.2 | 76.3 | 81.3 | **84.2** | 83.9 |
| halfcheetah-medium-replay-v2 | 36.6 | 37.1 | 37.1 | 40.8 | 24.7 | 43.8 | 43.6 | 45.2 | **46.0** |
| hopper-medium-replay-v2 | 45.1 | 70.0 | 91.1 | 30.3 | 18.1 | 45.5 | 94.2 | 72.3 | **97.1** |
| walker2d-medium-replay-v2 | 23.3 | 54.4 | 54.3 | 41.5 | 0.7 | 42.6 | 78.9 | **85.2** | 81.7 |
| halfcheetah-medium-expert-v2 | 47.0 | 86.4 | **93.7** | -1.2 | 7.9 | 92.4 | 89.3 | 91.6 | 83.7 |
| hopper-medium-expert-v2 | 55.4 | 10.2 | **102.4** | 1.8 | 1.7 | 87.6 | 84.1 | 101.7 | 98.2 |
| walker2d-medium-expert-v2 | 93.3 | 108.7 | 109.9 | -0.1 | 0.3 | 106.7 | 109.2 | **110.4** | 110.2 |
| total (gym-locomotion) | 469.6 | 627.9 | 675.6 | 140.2 | 91.1 | 597.7 | 690.2 | 703.9 | **717.9** |
| pen-human-v0 | **79.6** | 1.9 | 72.4 | 0.81 | 1.8 | 5.9 | 75.0 | 67.8 | 77.4 |
| pen-cloned-v0 | 33.5 | -0.7 | 26.9 | 7.13 | -0.4 | 17.2 | **36.9** | 36.8 | 22.9 |
| total (adroit) | **113.1** | 1.2 | 99.2 | 7.9 | 1.4 | 23.1 | 111.9 | 104.6 | 100.3 |
| kitchen-complete-v0 | 64.9 | 3.8 | 66.0 | 0 | 0.9 | 2.2 | 67.4 | **72.5** | 42.0 |
| kitchen-partial-v0 | 35.8 | 65.1 | 59.3 | 0.17 | 0.0 | 0.7 | 36.9 | 39.9 | **40.7** |
| kitchen-mixed-v0 | 49.7 | 46.0 | 56.5 | 0 | 0.4 | 0.0 | 49.2 | **50.1** | 45.7 |
| total (kitchen) | 150.4 | 114.9 | 181.7 | 0.2 | 1.4 | 2.9 | 153.4 | **162.4** | 128.5 |
| antmaze-umaze-v0 | 52.0 | 61.3 | 62.4 | 0.0 | 0.1 | 40.2 | **81.0** | 0.0 | 47.0 / 74.0 |
| antmaze-umaze-diverse-v0 | 46.1 | 52.4 | 44.7 | 0.0 | 0.0 | 58.0 | **59.6** | 41.9 | 41.4 / 84.0 |
| antmaze-medium-play-v0 | 0.1 | 8.2 | 5.4 | 0.0 | 0.0 | 0.2 | **75.4** | 0.0 | 0.3 / 61.2 |
| antmaze-medium-diverse-v0 | 0.3 | 3.2 | 1.8 | 0.0 | 0.0 | 0.0 | **74.8** | 0.0 | 0.13 / 53.7 |
| antmaze-large-play-v0 | 0.0 | 1.2 | 0.1 | 0.0 | 0.0 | 0.0 | **47.3** | 0.0 | 0.17 / 15.8 |
| antmaze-large-diverse-v0 | 0.0 | 2.1 | 0.9 | 0.0 | 0.0 | 0.0 | **45.9** | 0.0 | 0.0 / 14.9 |
| total (antmaze) | 98.5 | 128.5 | 115.2 | 0.0 | 0.1 | 98.4 | **384.0** | 41.9 | 89.0 / 303.6 |

Table 1: Running average returns on the D4RL dataset. We observe a discrepancy between the reproduced and the reported results of CQL on the antmaze-v0 tasks; those entries are formatted as reproduced / reported.
Figure 3: Reward distributions of (a) halfcheetah-medium-expert-v2 from D4RL dataset and (b) cheetah-run from RL Unplugged dataset. These two tasks are conceptually similar but possess distinct data distributions.
We visualize the reward and return distributions of the cheetah-run dataset from RL Unplugged and the halfcheetah-medium-expert-v2 task from D4RL in Fig. 3. We observe that halfcheetah-medium-expert-v2 has two modes, while cheetah-run is closer to a uniform distribution.
**Evaluation Protocol.** One critical issue in current offline RL research is that there is no standard evaluation protocol. Some prior works report the average of the last returns [26; 27; 7] as their metric, which often fails to capture the stability of an algorithm. Others report the best evaluation return throughout training; however, this often overestimates the real performance and is naturally infeasible for offline RL, as an evaluation environment is missing in realistic setups. In this work, we propose the _last running average return_ for offline RL evaluation. Specifically, we maintain a sliding window of size \(L\), and for time step \(t\) the average return is calculated as \(\hat{R}_{t}=\frac{1}{L}\sum_{t^{\prime}=t-L+1}^{t}R_{t^{\prime}}\). The running average return at the last step \(T\) is used as the final metric. The running average return captures the stability of an algorithm, and using its value at the last step better fits the offline setting.
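For clarity, a small sketch of this metric is given below; the list of per-evaluation returns and the window size are placeholders.

```python
from collections import deque

def last_running_average_return(eval_returns, window=10):
    """Sliding-window average of the evaluation returns, reported at the last step T."""
    buf, running = deque(maxlen=window), []
    for r in eval_returns:
        buf.append(r)
        running.append(sum(buf) / len(buf))
    return running[-1]
```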
### Algorithms for Evaluation
Nevertheless, there is a clear separation between algorithms evaluated on D4RL [27; 7; 13; 26] and those evaluated on RL Unplugged [44; 34]. As a result, it is still unclear how an algorithm generalizes across different data distributions.
We benchmark 8 algorithms across datasets. Specifically, in addition to the three algorithms discussed in the previous section, we consider the following algorithms in our empirical study: (1) Soft Actor-Critic (SAC) [18], a popular off-policy RL algorithm. (2) MuZero Unplugged [34], an offline RL algorithm without conservative constraints; it has demonstrated strong performance on RL Unplugged but remains untested on D4RL. For a fair comparison with other algorithms, we use flat MLPs for MuZero instead of the ResNets in its original implementation. (3) \(x\)% BC, which stands for behavior cloning on the trajectories with the top \(x\)% cumulative return; we choose \(x=10\) following [26]. (4) TD3+BC [13], a conservative policy improvement method that constrains the learning policy with a simple behavior cloning term. (5) OnestepRL [7], which implements implicit conservative policy improvement with SARSA-style Q-learning.
We strictly follow the official implementations and re-implement all algorithms in JAX [6] with the neural network library Flax [20]. We search the hyper-parameters for all algorithms and fix the parameters with the best overall performance. More details are available in the appendix.
### Results and Discussions: D4RL
We present the _last step running average return_ on the D4RL dataset in Tab. 1.
On gym-locomotion tasks, CQL\({}^{+}\) achieves the best total running average return (717.9) across all algorithms, outperforming the SOTA method, IQL, by a large margin. Interestingly, CRR\({}^{+}\), a direct variant of the off-policy algorithm AWR, produces stable and strong performance on locomotion tasks with careful implementation tuning and outperforms IQL. However, TD3+BC achieves a much worse performance (597.7) than its best achieved performance (737.8, in the appendix). Such a phenomenon is caused by the instability of TD3+BC. We suspect the underlying reason is that directly regularizing the learning policy with behavior cloning leads to a conflicting optimization objective and eventually hinders policy improvement. For adroit and kitchen, we observe that OnestepRL, CRR\({}^{+}\), and IQL are significantly better than the other algorithms. All three of these algorithms follow the policy improvement of AWR, _i.e._, weighted behavior cloning (Eqn. 3), which naturally avoids querying OOD actions and better fits these environments with sparser rewards.
We observe that on antmaze environments with long-delayed rewards, IQL consistently outperforms all other algorithms. Most algorithms suffer from a performance drop as training proceeds. In particular, we observe an apparent discrepancy between the reproduced and reported results of CQL on antmaze. We followed the official configurations and implementations and performed a careful hyper-parameter sweep but failed to reproduce the results. Interestingly, although MuZero Unplugged claims generality across both online and offline setups, we observe that MuZero generally fails on D4RL tasks, except for the medium-replay environments where replay data is present.
### Results and Discussions: RL Unplugged
The _last step running average return_ on the RL Unplugged dataset is presented in Tab. 2.
Results on RL Unplugged contradict those on D4RL. MuZero achieves relatively strong performance, even with a much smaller re-implemented network than its original implementation, whereas on D4RL it failed except for the medium-replay tasks. To our surprise, SAC, an off-policy algorithm, shows a similar trend: it fails on D4RL but achieves strong performance on RL Unplugged, outperforming all other algorithms except MuZero. CRR\({}^{+}\) and IQL also transfer their strong performance to RL Unplugged, while CQL\({}^{+}\) and OnestepRL achieve much worse performance than the others.
### Discussions and Recommendations
We have observed distinct results on D4RL and RL Unplugged datasets. Specifically, algorithms that succeed on D4RL, including CQL\({}^{+}\) and OnestepRL, fail to transfer their success to RL Unplugged. On the contrary, algorithms that fail on D4RL, including SAC and MuZero, achieve strong performance in RL Unplugged. By summarizing their shared properties, we draw the following conclusions.
_MuZero is closer to an off-policy algorithm_. On both the D4RL and RL Unplugged datasets, MuZero shows the same performance trend as SAC rather than as the other offline RL algorithms. It fails to handle data distributions generated by fixed agents, which provide no coverage of the exploration process of a normal RL agent. To fix this issue, conservative constraints could be considered during the Monte-Carlo Tree Search (MCTS) process of MuZero.
_Overly conservative constraints have adverse effects on replay data._ In contrast to D4RL, on replay datasets, overly conservative algorithms, _e.g._, CQL, work worse than standard off-policy RL algorithms, _e.g._, SAC. This is potentially because they estimate an overly loose lower bound of the actual policy value and thus fail to effectively exploit the rare but good data in the dataset.
_AWR-style policy improvement gives the best generalization across data distributions._ Different from standard Q-learning, algorithms with AWR-style policy improvement, _e.g._, IQL, and CRR, achieve stable performance on both RL Unplugged and D4RL datasets. This is potentially because the AWR-style policy improvement naturally encodes an implicit conservative constraint for policy and thus minimizes the need for additional constraints. As a result, they are minimally conservative and can adapt to different data distributions with minimal effort.
_Recommendation of algorithms._ When starting a practical project with offline RL, given the above analysis, we would recommend CRR and IQL as the go-to algorithms: they are efficient, general across different data distributions, and strong on most of the tasks. Although IQL gives the best overall performance in all our experiments, which demonstrates its algorithmic advancements, it is more sensitive to hyper-parameters and would require careful implementation and hyper-parameter tuning in practical use.
## 6 Conclusions
We conduct a large-scale empirical study on the low-level implementation choices of offline RL algorithms and benchmark 8 algorithms across the D4RL and RL Unplugged datasets. We show that low-level implementations are crucial to the performance of offline RL algorithms. By sweeping 20 low-level implementation choices, we present a guidebook for implementing new offline algorithms, as well as CRR\({}^{+}\) and CQL\({}^{+}\), which follow the guidebook and outperform the SOTA algorithm. Lastly, we benchmark 8 popular algorithms across RL Unplugged and D4RL and show that conservative policy improvement algorithms demonstrate the best transferability across datasets. We hope this work will shed light on future research in offline RL.
2305.06853 | Bell Polynomial Approach and Wronskian Technique to Good Boussinesq
Equation | The elementary and systematic binary Bell polynomial approach is applied to
the good Boussinesq equation. The bilinear representation, $n$-soliton
solutions, bilinear B\"acklund transformation, Lax pair and infinite
conservation laws of the good Boussinesq equation are obtained directly. In
addition, from the reduction conditions of the obtained Lax pairs, the $n$
order $\operatorname{Wronskian}$ determinant solution of the equation is also
constructed. | Xiaotian Dai, Zhenyun Qin | 2023-05-11T14:51:00Z | http://arxiv.org/abs/2305.06853v1 | # Bell Polynomial Approach and Wronskian Technique
###### Abstract
The elementary and systematic binary Bell polynomial approach is applied to the good Boussinesq equation. The bilinear representation, \(n\)-soliton solutions, bilinear Backlund transformation, Lax pair and infinite conservation laws of the good Boussinesq equation are obtained directly. In addition, from the reduction conditions of the obtained Lax pairs, the \(n\) order Wronskian determinant solution of the equation is also constructed.
**Keywords**: good Boussinesq equation, binary Bell polynomials, Lax pairs, infinite conservation laws, Wronskian determinant solution
## 1 Introduction
As soliton phenomena were first observed in 1834 [1] and the Korteweg-de Vries (KdV) equation was solved by the inverse scattering method [2], finding soliton solutions of nonlinear PDEs has become one of the most exciting and extremely active areas of research. Investigation of the integrability of a soliton equation can be regarded as a pretest and the first step toward its exact solvability. Among the direct algebraic methods employed to study the integrability of soliton equations, the Hirota method has been a particularly powerful one [3]. Once a given soliton equation is written in bilinear form, on one hand, results such as multi-soliton solutions, quasi-periodic wave solutions and other exact solutions are usually obtained, and on the other hand, integrable properties of the soliton equation, such as the Backlund transformation (BT) and Lax pair, can also be investigated. However, the construction of the bilinear form and bilinear BT of the original soliton equation is not as straightforward as one would wish; it relies on particular skill in choosing appropriate dependent variable transformations, exchange formulas and bilinear identities.
Recently, Lambert and his co-workers have proposed an alternative procedure based on the use of Bell polynomials to obtain bilinear forms, bilinear BTs, Lax pairs for soliton equations in a lucid and systematic way [4, 5, 6]. Fan developed this method to find infinite conservation laws of soliton equations and proposed the super Bell polynomials [7, 8]. Ma systematically analyzed the connection between Bell polynomials and new bilinear equation [9]. Chen et al. proposed the Maple automatic program to construct Backlund transformation, Lax pairs and the infinite conservation laws for nonlinear evolution equations [10].
In Hirota's method, the final step is to prove the general \(n\)-soliton solution [11]. However, this proof can be quite difficult, and since the early 1980s it has been more usual to re-express the solutions found in terms of a Wronskian or Grammian determinant, or Pfaffians of several types, and verify the solution in that form [3]. The Wronskian technique provides direct verification of solutions to bilinear equations by taking advantage of the fact that the special structure of a Wronskian yields simple forms for its derivatives [12]. However, the key element of this technique is to figure out a system of linear differential conditions which ensures that the Wronskian formulation presents solutions of the Hirota bilinear form.
In using the Wronskian technique to solve the KdV equation [14, 19]
\[u_{t}-6uu_{x}-u_{xxx}=0,\]
one usually starts from its Lax pair
\[\varphi_{xx}=\left(\lambda-u\right)\varphi,\quad\varphi_{t}=\varphi_{xxx}+3\left( \lambda+u\right)\varphi_{x}.\]
Choose \(u=0,\lambda=\frac{k^{2}}{4}\), then its Lax pair can be reduced to
\[\varphi_{xx}=\frac{1}{4}k^{2}\varphi,\quad\varphi_{t}=4\varphi_{xxx}.\]
Therefore, choose \(\varphi_{j}\in C^{\infty}\left(\Omega\right)\), satisfying
\[\varphi_{j,xx}=\frac{k_{j}^{2}}{4}\varphi_{j},\quad\varphi_{j,t}=4\varphi_{j,xxx},\quad j=1,2,\cdots,n,\]
which constitutes the Wronskian condition of KdV equation.
Inspired by Ma's work [14], the purpose of this paper is to extend the binary Bell polynomial approach and Wronskian technique to the good Boussinesq equation:
\[u_{tt}-u_{xx}+\left(u^{2}\right)_{xx}+\frac{1}{3}u_{xxxx}=0, \tag{1.1}\]
which is obviously equivalent to the following well-posed Boussinesq equation under the scaling transformation \(u\to U,x\rightarrow\frac{1}{\sqrt{3}}X,t\rightarrow\frac{1}{\sqrt{3}}T\).
\[U_{TT}-U_{XX}+U_{XXXX}+\left(U^{2}\right)_{XX}=0. \tag{1.2}\]
Equation (1.2) is of fundamental physical interest, and it occurs in a wide variety of physical systems [15, 16, 17, 18], because it describes the lowest order (in terms of wave amplitudes) nonlinear effects in the evolution of perturbations with a dispersion relation close to that of sound waves. It also appears in problems dealing with the propagation of nonlinear waves in a medium with positive dispersion, for example, electromagnetic waves interacting with transversal optical phonons in nonlinear dielectrics [16], magnetosound waves in plasmas [17] and magnetoelastic waves in antiferromagnets [18]. Furthermore, it is known that for perturbations near the sonic speed, one may neglect the interaction of waves moving in opposite directions.
So far, the soliton solutions and multi-collapse solutions have been investigated by using Darboux transformations [19] and Hirota's method [20]. Li et al. [21] have obtained solitons, negatons, positons and complexitons for Eq.(1.1). However, not much work has been done on the integrability of Eq.(1.1).
The problem we are concerned with in this paper is the integrability of Eq.(1.1); in addition, we work out a systematic way of constructing the Wronskian condition.
This paper is organized as follows. In Section 2, we briefly present the necessary notation on binary Bell polynomials that will be used in this paper. These results will then be applied to construct the bilinear representation, \(n\)-soliton solutions, bilinear BT, Lax pair and infinite conservation laws of Eq.(1.1) in Section 3. In Section 4, we construct the Wronskian condition of Eq.(1.1) by using the Lax pair obtained in Section 3.3, and the solutions in Wronskian form are verified. Finally, Section 5 presents our conclusions.
## 2 Binary Bell Polynomials
To make our presentation easy to understand and self-contained, we first fix some necessary notation on the Bell polynomials; for details, refer to the work of Bell, Lambert and Gilson [4, 5, 6].
**Definition 2.1**.: Suppose that \(y=y\left(x_{1},\cdots,x_{n}\right)\) is a \(C^{\infty}\) function with \(n\) independent variables and we denote
\[y_{r_{1}x_{1},\cdots,r_{l}x_{l}}=\partial_{x_{1}}^{r_{1}}\cdots\partial_{x_{l }}^{r_{l}}y,\quad r_{1}=0,\cdots,n_{1},\cdots,r_{l}=0,\cdots,n_{l},\]
where \(l\) denotes arbitrary integers, then
\[Y_{n_{1}x_{1},\cdots,n_{l}x_{l}}\left(y\right)\equiv Y_{n_{1},\cdots,n_{l}}\left( y_{r_{1}x_{1},\cdots,r_{l}x_{l}}\right)=e^{-y}\partial_{x_{1}}^{n_{1}}\cdots \partial_{x_{l}}^{n_{l}}e^{y} \tag{2.1}\]
is a polynomial in the partial derivatives of \(y\) with respect to \(x_{1},\cdots,x_{l}\), which called a multi-dimensional Bell polynomial. For the special case \(f=f\left(x,t\right)\), the associated two-dimensional Bell polynomials defined by (2.1) read
\[Y_{x,t}\left(f\right)=f_{x,t}+f_{x}f_{t},\quad Y_{2x,t}\left(f\right)=f_{2x,t} +f_{2x}f_{t}+2f_{x,t}f_{x}+f_{x}^{2}f_{t},\cdots \tag{2.2}\]
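As an aside (not part of the original text), definition (2.1) is easy to check symbolically; the short SymPy sketch below reproduces the two-dimensional examples in (2.2).

```python
import sympy as sp

x, t = sp.symbols('x t')
y = sp.Function('y')(x, t)

def Y(n_x, n_t):
    # Y_{n_x x, n_t t}(y) = exp(-y) * d^{n_x}_x d^{n_t}_t exp(y), as in (2.1)
    return sp.simplify(sp.exp(-y) * sp.diff(sp.exp(y), x, n_x, t, n_t))

print(Y(1, 1))   # expected: y_{x,t} + y_x * y_t
print(Y(2, 1))   # expected: y_{2x,t} + y_{2x} y_t + 2 y_{x,t} y_x + y_x^2 y_t
```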
By virtue of the above multi-dimensional Bell polynomials, the multi-dimensional binary Bell polynomials can be defined as follows.
**Definition 2.2**.: Suppose that \(w=f+g\), \(v=f-g\), then
\[\mathcal{Y}_{n_{1}x_{1},\cdots,n_{l}x_{l}}(v,w)=Y_{n_{1},\cdots,n_{l}}\left(y\right)\Bigg{|}_{y_{r_{1}x_{1},\cdots,r_{l}x_{l}}=}\begin{cases}v_{r_{1}x_{1},\cdots,r_{l}x_{l}},&r_{1}+\cdots+r_{l}\text{ is odd}\\ w_{r_{1}x_{1},\cdots,r_{l}x_{l}},&r_{1}+\cdots+r_{l}\text{ is even}\end{cases} \tag{2.3}\]
where the vertical line means that the elements on the left-hand side are chosen according to the rule on the right-hand side.
For example, the first few lowest-order binary Bell polynomials are
\[\mathcal{Y}_{x}\left(v,w\right)=v_{x},\quad\mathcal{Y}_{2x}\left( v,w\right)=w_{2x}+v_{x}^{2},\quad\mathcal{Y}_{x,t}\left(v,w\right)=w_{x,t}+v_{x}v_{ t}, \tag{2.4}\] \[\mathcal{Y}_{3x}\left(v,w\right)=v_{3x}+3w_{2x}v_{x}+v_{x}^{3}, \quad\mathcal{Y}_{2x,t}\left(v,w\right)=v_{2x,t}+2w_{x,t}v_{x}+v_{x}^{2}v_{t} +w_{2x}v_{t},\cdots\]
**Proposition 2.3**.: _The relations between the binary Bell polynomials and the standard Hirota D-operators can be given by the identity_
\[\mathcal{Y}_{n_{1}x_{1},\cdots,n_{l}x_{l}}\left(v=\ln F/G,w=\ln FG\ \right)= \left(FG\right)^{-1}D_{x_{1}}^{n_{1}}\cdots D_{x_{l}}^{n_{l}}F\cdot G, \tag{2.5}\]
_where \(n_{1}+n_{2}+\cdots+n_{l}\geq 1\), and the Hirota D-operators are defined by_
\[D_{x_{1}}^{n_{1}}\cdots D_{x_{l}}^{n_{l}}F\cdot G=\left(\partial_{x_{1}}-\partial_{x_{1}^{{}^{\prime}}}\right)^{n_{1}}\cdots\left(\partial_{x_{l}}-\partial_{x_{l}^{{}^{\prime}}}\right)^{n_{l}}F\left(x_{1},\cdots,x_{l}\right)G\left(x_{1}^{{}^{\prime}},\cdots,x_{l}^{{}^{\prime}}\right)\Big{|}_{x_{1}^{{}^{\prime}}=x_{1},\cdots,x_{l}^{{}^{\prime}}=x_{l}}. \tag{2.6}\]
In particular, when \(F=G\), formula (2.5) can be rewritten as
\[G^{-2}D_{x_{1}}^{n_{1}}\cdots D_{x_{l}}^{n_{l}}G\cdot G=\mathcal{Y}_{n_{1}x_{1 },\cdots,n_{l}x_{l}}\left(0,q=2\ln G\right)=\begin{cases}0&,\quad n_{1}+\cdots +n_{l}\text{ is odd},\\ P_{n_{1}x_{1},\cdots,n_{l}x_{l}}\left(q\right)&,\quad n_{1}+\cdots+n_{l}\text{ is even}.\end{cases} \tag{2.7}\]
which is also called a P-polynomial
\[P_{n_{1}x_{1},\cdots,n_{l}x_{l}}\left(q\right)=\mathcal{Y}_{n_{1}x_{1},\cdots,n_{l}x_{l}}\left(0,q=2\ln F\right). \tag{2.8}\]
which vanishes unless \(\sum\limits_{i=1}^{l}n_{i}\) is even.
The first few P-polynomials are
\[P_{x,t}\left(q\right)=q_{x,t},\quad P_{2x}\left(q\right)=q_{2x},\quad P_{3x,t} \left(q\right)=q_{3x,t}+3q_{x,t}q_{2x},\quad P_{4x}\left(q\right)=q_{4x}+3q_{2 x}^{2},\cdots \tag{2.9}\]
Formula (2.5) and (2.7) will prove particularly useful in connecting nonlinear equations with their corresponding bilinear equations. This means that once a nonlinear equation can be expressed as a linear combination of P-polynomials, then its bilinear equation can be established directly.
**Proposition 2.4**.: _The binary Bell polynomials \(\mathcal{Y}_{n_{1}x_{1},\cdots,n_{l}x_{l}}(v,w)\) can be written as a combination of P-polynomials and Y-polynomials_
\[\begin{split}(FG)^{-1}D_{x_{1}}^{n_{1}}\cdots D_{x_{l}}^{n_{l}}F \cdot G&=\mathcal{Y}_{n_{1}x_{1},\cdots,n_{l}x_{l}}\left(v,w \right)\Big{|}_{v=\ln\frac{F}{G},w=\ln FG}\\ &=\mathcal{Y}_{n_{1}x_{1},\cdots,n_{l}x_{l}}\left(v,v+q\right) \Big{|}_{v=\ln\frac{F}{G},q=2\ln G}\\ &=\sum\limits_{r_{1}=0}^{n_{1}}\cdots\sum\limits_{r_{l}=0}^{n_{l} }\prod\limits_{i=1}^{l}\binom{n_{i}}{r_{i}}P_{r_{1}x_{1},\cdots,r_{l}x_{l}} \left(q\right)\cdot Y_{\left(n_{1}-r_{1}\right)x_{1},\cdots,\left(n_{l}-r_{l} \right)x_{l}}\left(v\right),\end{split} \tag{2.10}\]
_where \(\sum\limits_{i=1}^{l}n_{i}\) is even._
**Proposition 2.5**.: _In order to obtain the Lax pairs of corresponding NLEEs, we introduce the Hopf-Cole transformation \(v=\ln\psi\), then the Y-polynomials can be written as_
\[Y_{n_{1}x_{1},\cdots,n_{l}x_{l}}\left(v\right)|_{v=\ln\psi}=\frac{\psi_{n_{1} x_{1},\cdots,n_{l}x_{l}}}{\psi}, \tag{2.11}\]
_which provides the shortest way to the associated Lax systems of NLEEs._
## 3 Integrability of good Boussinesq equation
### Bilinear representation
Using the homogeneous balance principle, in order to partially balance the nonlinear term \(u_{x}^{2}\) (or \(uu_{xx}\)) against the highest derivative term \(u_{xxxx}\) in equation (1.1), we suppose that the highest degree of the derivatives of \(u\left(x,t\right)\) with respect to \(x,t\) is \(m\); then the highest degrees of the derivatives of \(u_{xxxx}\) and \(u_{x}^{2}\) with respect to \(x,t\) are \(m+4\) and \(2m+2\), respectively. Let
\[m+4=2m+2, \tag{3.1}\]
we get \(m=2\), which shows that a dimensionless field \(q\) can be related to the field \(u\) by setting
\[u=cq_{xx}, \tag{3.2}\]
with \(c\) a free constant to be chosen appropriately so that equation (1.1) links with the P-polynomials. Substituting (3.2) into (1.1), we can write the resulting equation in the form
\[cq_{xxtt}-cq_{xxxx}+\left(c^{2}q_{xx}^{2}\right)_{xx}+\frac{1}{3}cq_{6x}=0. \tag{3.3}\]
Further integrating Eq.(3.3) with respect to x twice yields
\[E\left(q\right)\equiv q_{tt}-q_{xx}+cq_{xx}^{2}+\frac{1}{3}q_{4x}=0. \tag{3.4}\]
Comparing the third term of this equation with the formula (2.9) implies that we should require \(c=1\). The resulting equation is then cast into a combination of P-polynomials
\[E\left(q\right)\equiv P_{2t}(q)-P_{2x}(q)+\frac{1}{3}P_{4x}(q)=0. \tag{3.5}\]
Making a change of dependent variable
\[u=q_{2x}=2(\ln F)_{2x}\]
and noting Proposition 2.3, Eq.(3.5) gives the bilinear representation as follows
\[\left(D_{t}^{2}-D_{x}^{2}+\frac{1}{3}D_{x}^{4}\right)F\cdot F=0. \tag{3.6}\]
### \(n\)-soliton solutions
Once the bilinear representation of Eq.(1.1) is given, the associated soliton solutions are easily obtained with the help of the perturbation expansion method. Here we leave out the computational process and give the \(n\)-soliton solutions directly, because the verification process is complicated [22].
\[u=2\Bigg{[}\log\sum_{\mu_{i}\in\{0,1\}}\exp\left(\sum\mu_{i}\eta_{i}+\sum_{1\leq i <j}\mu_{i}\mu_{j}A_{ij}\right)\Bigg{]}_{xx}, \tag{3.7}\]
where \(\eta_{i}=k_{i}x+w_{i}t+\eta_{i}^{(0)},\quad w_{i}^{2}=k_{i}^{2}\left(1-\frac{1} {3}k_{i}^{2}\right),\) and
\[A_{ij}=\log\left|-\frac{\left(w_{i}-w_{j}\right)^{2}-\left(k_{i}-k_{j}\right)^ {2}+\frac{1}{3}(k_{i}-k_{j})^{4}}{\left(w_{i}+w_{j}\right)^{2}-\left(k_{i}+k_{ j}\right)^{2}+\frac{1}{3}(k_{i}+k_{j})^{4}}\right|,i<j,i,j=1,2,3,\cdots\]
For \(n=1\), the single-soliton solution of the good Boussinesq equation(1.1) can be written as
\[u=2[\ln\left(1+e^{\eta_{1}}\right)]_{xx}=\frac{k_{1}^{2}}{2}\text{sech}^{2} \frac{\eta_{1}}{2}, \tag{3.8}\]
where \(\eta_{1}=k_{1}x+w_{1}t+\eta_{1}^{0},\quad w_{1}^{2}=k_{1}^{2}\left(1-\frac{1} {3}k_{1}^{2}\right)\).
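As a quick sanity check (not in the original paper), one can verify symbolically that (3.8) indeed solves Eq.(1.1) under the dispersion relation; the SymPy sketch below takes \(\eta_{1}^{0}=0\) for brevity.

```python
import sympy as sp

x, t, k = sp.symbols('x t k', positive=True)
w = k * sp.sqrt(1 - k**2 / 3)              # dispersion relation w1^2 = k1^2 (1 - k1^2/3)
F = 1 + sp.exp(k * x + w * t)
u = 2 * sp.diff(sp.log(F), x, 2)           # u = 2 (ln F)_{xx}, the one-soliton (3.8)

residual = (sp.diff(u, t, 2) - sp.diff(u, x, 2)
            + sp.diff(u**2, x, 2) + sp.Rational(1, 3) * sp.diff(u, x, 4))
print(sp.simplify(residual))               # expected output: 0
```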
For \(n=2\), the two-soliton solution reads
\[u=2\big{[}\log\left(1+e^{\eta_{1}}+e^{\eta_{2}}+e^{\eta_{1}+\eta_{2}+A_{12}} \right)\big{]}_{xx}, \tag{3.9}\]
where \(\eta_{i}=k_{i}x+w_{i}t+\eta_{i}^{(0)}\), \(w_{i}^{2}=k_{i}^{2}\left(1-\frac{1}{3}k_{i}^{2}\right)\), \(i=1,2\), \(e^{A_{12}}=-\frac{\left(w_{1}-w_{2}\right)^{2}-\left(k_{1}-k_{2}\right)^{2}+ \frac{1}{3}\left(k_{1}-k_{2}\right)^{4}}{\left(w_{1}+w_{2}\right)^{2}-\left(k_ {1}+k_{2}\right)^{2}+\frac{1}{3}\left(k_{1}+k_{2}\right)^{4}}\).
For \(n=3\), the three-soliton solution reads
\[\begin{split} u=& 2[\log(1+e^{\eta_{1}}+e^{\eta_{2}}+e^{\eta_{3}}+e^{\eta_{1}+\eta_{2}+A_{12}}+e^{\eta_{1}+\eta_{3}+A_{13}}+e^{\eta_{2}+\eta_{3}+A_{23}} \tag{3.10}\] \[+e^{\eta_{1}+\eta_{2}+\eta_{3}+A_{12}+A_{13}+A_{23}})]_{xx}\end{split}\]
where \(\eta_{i}=k_{i}x+w_{i}t+\eta_{i}^{(0)}\), \(w_{i}^{2}=k_{i}^{2}\left(1-\frac{1}{3}k_{i}^{2}\right)\), \(A_{ij}=\log\left|-\frac{\left(w_{i}-w_{j}\right)^{2}-\left(k_{i}-k_{j}\right)^ {2}+\frac{1}{3}\left(k_{i}-k_{j}\right)^{4}}{\left(w_{i}+w_{j}\right)^{2}- \left(k_{i}+k_{j}\right)^{2}+\frac{1}{3}\left(k_{i}+k_{j}\right)^{4}}\right|\), \(i<j\), \(j=1,2,3\).
Figure 3.1: The process of overtaking collision for two solitary waves at times \(t=-100,-60,-20,20,60,100\)
### Bilinear Backlund Transformation and Lax Pair
Next, we search for the bilinear Backlund transformation and Lax pair of the good Boussinesq equation (1.1). Let \(\widetilde{q}=2\log F\) and \(q=2\log G\) be two different solutions of Eq.(3.4), and introduce two new variables,
\[w=\frac{\widetilde{q}+q}{2}=\log FG,\quad v=\frac{\widetilde{q}-q}{2}=\log\frac {F}{G}, \tag{3.11}\]
then we associate the two-field condition
\[\begin{split} E\left(\widetilde{q}\right)-E\left(q\right)&=\left(\widetilde{q}-q\right)_{2t}-\left(\widetilde{q}-q\right)_{2x}+\left(\widetilde{q}-q\right)_{2x}\left(\widetilde{q}+q\right)_{2x}+\frac{1}{3}(\widetilde{q}-q)_{4x}\\ &=2v_{2t}-2v_{2x}+2v_{2x}\cdot 2w_{2x}+\frac{1}{3}\cdot 2v_{4x}\\ &=2v_{2t}-2v_{2x}+4v_{2x}w_{2x}+\frac{2}{3}\left[\partial_{x}\mathcal{Y}_{xxx}\left(v,w\right)-3v_{2x}w_{2x}-3v_{x}w_{3x}-3v_{x}^{2}v_{2x}\right]\\ &=\frac{2}{3}\left[\partial_{x}\mathcal{Y}_{xxx}\left(v,w\right)+3v_{2x}w_{2x}-3v_{x}w_{3x}-3v_{x}^{2}v_{2x}+3v_{2t}-3v_{2x}\right]\\ &=\frac{2}{3}\partial_{x}\mathcal{Y}_{xxx}\left(v,w\right)-2\partial_{x}\mathcal{Y}_{x}\left(v\right)+R\left(v,w\right)\\ &=0,\end{split} \tag{3.12}\]
where
\[R\left(v,w\right)=2\,\text{Wronskian}\left[\mathcal{Y}_{xx}\left(v,w\right), \mathcal{Y}_{x}\left(v\right)\right]+2v_{2t}. \tag{3.13}\]
In order to decouple the two-field condition (3.12) into a pair of constraints, we impose such a constraint which enables us to express \(R\left(v,w\right)\) as the \(x\)-derivative of a combination of \(\mathcal{Y}\)-polynomials. The simplest possible choice of such a constraint may be
\[\mathcal{Y}_{2x}\left(v,w\right)+\mathcal{Y}_{t}\left(v,w\right)=\lambda, \tag{3.14}\]
where \(\lambda\) is an arbitrary constant, and later it will be used as the spectral parameter of the Lax pair. In terms of the constraint (3.14), \(R\left(v,w\right)\) can be expressed as
\[\begin{split} R\left(v,w\right)&=2\left|\begin{matrix} \lambda-v_{t}&-v_{xt}\\ v_{x}&v_{2x}\end{matrix}\right|+2v_{2t}\\ &=2\left(\lambda v_{2x}-v_{t}v_{2x}+v_{x}v_{xt}\right)+2\left(-w_{2x,t}- 2v_{x}v_{xt}\right)\\ &=2\left(\lambda v_{2x}-v_{t}v_{2x}-v_{x}v_{xt}-w_{2x,t}\right)\\ &=2\left(\lambda\partial_{x}\mathcal{Y}_{x}\left(v,w\right)- \partial_{x}\mathcal{Y}_{xt}\left(v,w\right)\right).\end{split} \tag{3.15}\]
Then, combining relations (3.12)-(3.15), we deduce a coupled system of \(\mathcal{Y}\)-polynomials,
\[\mathcal{Y}_{2x}\left(v,w\right)+\mathcal{Y}_{t}\left(v,w\right)=\lambda, \tag{3.16}\] \[\frac{2}{3}\mathcal{Y}_{3x}\left(v,w\right)-2\mathcal{Y}_{x}\left( v,w\right)+2\lambda\mathcal{Y}_{x}\left(v,w\right)-2\mathcal{Y}_{xt}\left(v,w \right)=0, \tag{3.17}\]
where the second equation is useful for constructing conservation laws later. By application of Proposition 2.4, the system (3.16)-(3.17) immediately leads to the bilinear Backlund transformation
\[\left(D_{x}^{2}+D_{t}-\lambda\right)F\cdot G=0, \tag{3.18}\] \[\left(\frac{2}{3}D_{x}^{3}-2D_{x}+2\lambda D_{x}-2D_{x}D_{t} \right)F\cdot G=0, \tag{3.19}\]
where \(\lambda\) is a spectral parameter.
By transformation \(v=\ln\psi\), it follows from the formula (2.10) and (2.11) that
\[\mathcal{Y}_{t}\left(v,w\right)=\frac{\psi_{t}}{\psi},\quad \mathcal{Y}_{x}\left(v,w\right)=\frac{\psi_{x}}{\psi},\quad\mathcal{Y}_{2x} \left(v,w\right)=q_{2x}+\frac{\psi_{2x}}{\psi},\] \[\mathcal{Y}_{3x}\left(v,w\right)=3q_{2x}\frac{\psi_{x}}{\psi}+ \frac{\psi_{3x}}{\psi},\quad\mathcal{Y}_{xt}\left(v,w\right)=q_{xt}+\frac{ \psi_{xt}}{\psi},\]
on account of which, the system (3.16)-(3.17) is then linearized into a Lax pair with one parameter \(\lambda\)
\[\psi_{2x}+\psi_{t}+\left(u-\lambda\right)\psi=0, \tag{3.20}\] \[\psi_{3x}+\left(3u+3\lambda-3\right)\psi_{x}-3\partial_{x}^{-1}u_ {t}\cdot\psi-3\psi_{xt}=0, \tag{3.21}\]
where \(q_{2x}\) has been replaced by \(u\). It is easy to check that the integrability condition \(\psi_{2x,t}=\psi_{t,2x}\) is satisfied if \(u\) is a solution of the good Boussinesq equation(1.1).
### Infinite Conservation Laws
In this section, we derive the infinitely many local conservation laws of the good Boussinesq equation (1.1) by using binary Bell polynomials. In fact, the conservation laws have already been hinted at in the two-field constraint system (3.16)-(3.17), which can be rewritten in the conserved form
\[\mathcal{Y}_{2x}\left(v,w\right)+\mathcal{Y}_{t}\left(v,w\right)- \lambda=0, \tag{3.22}\] \[\partial_{x}\left[\frac{1}{3}\mathcal{Y}_{3x}\left(v,w\right)- \mathcal{Y}_{x}\left(v,w\right)+\lambda\mathcal{Y}_{x}\left(v,w\right)\right] -\partial_{t}\mathcal{Y}_{2x}\left(v,w\right)=0. \tag{3.23}\]
By introducing a new potential function
\[\eta=\frac{\widetilde{q}_{x}-q_{x}}{2}. \tag{3.24}\]
it follows from the relation(3.11) that
\[w_{x}=q_{x}+\eta,\quad v_{x}=\eta,\quad v_{t}=\partial_{x}^{-1}\eta_{t}. \tag{3.25}\]
Substituting (3.25) into (3.22) and (3.23), we get a Riccati-type equation
\[\eta_{x}+\eta^{2}+q_{2x}+\partial_{x}^{-1}\eta_{t}=\lambda, \tag{3.26}\]
and a divergence-type equation
\[\frac{1}{3}\eta_{3x}+\left(2\lambda-1\right)\eta_{x}-2\eta^{2}\eta_{x}-\eta_{ x}\cdot\partial_{x}^{-1}\eta_{t}-\eta\eta_{t}+\partial_{x}^{-1}\eta_{tt}=0, \tag{3.27}\]
where we have used Eq.(3.26) to obtain Eq.(3.27). To proceed, suppose that \(\lambda=\varepsilon^{2}\) and insert the expansion
\[\eta=\varepsilon+\sum_{n=1}^{\infty}I_{n}\left(q,q_{x},\cdots\right)\varepsilon ^{-n} \tag{3.28}\]
into Eq.(3.26); equating the coefficients of each power of \(\varepsilon\), we then obtain the recursion relations for the conserved densities \(I_{n}\)
\[I_{1}=-\frac{1}{2}q_{2x}=-\frac{1}{2}u, \tag{3.29}\] \[I_{2}=-\frac{1}{2}I_{1,x}-\frac{1}{2}\partial_{x}^{-1}I_{1,t}=\frac{1}{4}u_{x}+\frac{1}{4}\partial_{x}^{-1}u_{t},\] (3.30) \[\cdots,\] \[I_{n+1}=-\frac{1}{2}\left(I_{n,x}+\sum_{k=1}^{n-1}I_{k}\cdot I_{n-k}\right)-\frac{1}{2}\partial_{x}^{-1}I_{n,t}. \tag{3.31}\]
Substituting Eq.(3.28) into Eq.(3.27) provides an infinite sequence of conservation laws
\[I_{n,t}+F_{n,x}=0,\quad n=1,2,\cdots \tag{3.32}\]
where the conserved densities \(I_{n}\) (\(n=1,2,\cdots\)) are given by formula (3.31) and the fluxes \(F_{n}\left(n=1,2,\cdots\right)\) are given by recursion formulas explicitly
\[F_{1}=2I_{2}=\frac{1}{2}u_{x}+\frac{1}{2}\partial_{x}^{-1}u_{t}, \tag{3.33}\]
\[F_{2} =-\frac{1}{3}I_{1,xx}-\left(2\lambda-1\right)I_{1}+2\left(I_{1}^{2}+I_{3}\right)-\partial_{x}^{-2}I_{1,tt}\] \[=-\frac{1}{12}u_{xx}+\left(\lambda-\frac{1}{2}\right)u+\frac{1}{4}u^{2}-\frac{1}{2}u_{t}+\frac{1}{4}\partial_{x}^{-2}u_{tt}, \tag{3.34}\] \[\cdots,\] \[F_{n+1} =-\frac{1}{3}I_{n,xx}-\left(2\lambda-1\right)I_{n}+2\left[I_{n+2}+\sum_{k=1}^{n}I_{k}\cdot I_{n+1-k}+\frac{1}{3}\sum_{i+j+k=n}I_{i}I_{j}I_{k}\right]\] \[+\sum_{l=1}^{n-1}I_{l}\partial_{x}^{-1}I_{n-l,t}-\partial_{x}^{-2}I_{n,tt}. \tag{3.35}\]
We have presented recursion formulas for generating an infinite sequence of conservation laws; the first few conserved densities and associated fluxes are explicitly given. The first conservation law is exactly the good Boussinesq equation (1.1).
## 4 \(n\)th-order Wronskian determinant solution
In this section, an \(n\)th-order Wronskian determinant solution will be established for Eq.(1.1) via the Wronskian technique. To use this technique, we adopt the following helpful notation.
\[W\left(\varphi_{1},\cdots,\varphi_{n}\right) =\begin{vmatrix}\varphi_{1}&\varphi_{1}^{(1)}&\cdots&\varphi_{1}^{(n-1)}\\ \varphi_{2}&\varphi_{2}^{(1)}&\cdots&\varphi_{2}^{(n-1)}\\ \cdots&\cdots&\cdots&\cdots\\ \varphi_{n}&\varphi_{n}^{(1)}&\cdots&\varphi_{n}^{(n-1)}\\ \end{vmatrix} \tag{4.1}\] \[=\left|\varphi,\varphi^{(1)},\cdots,\varphi^{(n-1)}\right|\] \[=\left|0,1,\cdots,n-1\right|\] \[=\left|\widehat{n-1}\right|,\]
where \(\varphi_{j}^{(k)}=\frac{\partial^{k}\varphi_{j}}{\partial x^{k}}\), \(\varphi=\left(\varphi_{1},\cdots,\varphi_{n}\right)^{T}\), \(\varphi^{(k)}=\left(\varphi_{1}^{(k)},\cdots,\varphi_{n}^{(k)}\right)^{T}\), \(k=1,2,\cdots,n-1\).
In order to obtain the Wronskian solution, the following lemmas are needed.
**Lemma 4.1**.: \[\left|M,a,b\right|\cdot\left|M,c,d\right|-\left|M,a,c\right|\cdot\left|M,b,d \right|+\left|M,a,d\right|\cdot\left|M,b,c\right|=0,\] (4.2)
where M is an \(n\times(n-2)\) matrix, \(a,b,c\) and \(d\) represent \(n\) column vectors.
**Lemma 4.2**.: \[\sum_{k=1}^{n}\left|A\right|_{kl}=\sum_{k=1}^{n}\left|A\right|^{kl}=\sum_{i,j =1}^{n}A_{ij}\frac{\partial^{l}a_{ij}}{\partial x^{l}},\] (4.3)
where \(A=\left(a_{ij}\right)_{n\times n}\), and \(\left|A\right|_{kl}\), \(\left|A\right|^{kl}\) and \(A_{ij}\) denote the determinant resulting from \(\left|A\right|\) with its \(k\)th row differentiated l times with respect to \(x\), the determinant resulting from \(\left|A\right|\) with its \(k\)th column differentiated l times with respect to \(x\), and the co-factor of \(a_{ij}\), respectively.
In particular, choose
\[\left|A\right|=\left|\widehat{n-1}\right|=\left|\varphi,\varphi^{(1)},\cdots,\varphi^{(n-1)}\right|=\left|0,1,\cdots,n-1\right|=\begin{vmatrix}\varphi_ {1}^{(0)}&\varphi_{1}^{(1)}&\cdots&\varphi_{1}^{(n-1)}\\ \vdots&\vdots&\vdots&\vdots\\ \varphi_{i}^{(0)}&\varphi_{i}^{(1)}&\cdots&\varphi_{i}^{(n-1)}\\ \vdots&\vdots&\vdots&\vdots\\ \varphi_{n}^{(0)}&\varphi_{n}^{(1)}&\cdots&\varphi_{n}^{(n-1)}\\ \end{vmatrix},\]
and use the equality(4.3) with \(l=3\), then we obtain the equality as follows.
\[\left|\widehat{n-4},n-2,n-1,n\right|-\left|\widehat{n-3},n-1,n+1\right|+\left| \widehat{n-2},n+2\right|-\frac{3}{4}\left|\widehat{n-2},n\right|=0. \tag{4.4}\]
Differentiating with respect to x yields
\[\begin{split}\left|\widehat{n-5},n-3,n-2,n-1,n\right|-\left| \widehat{n-3},n,n+1\right|+\left|\widehat{n-2},n+3\right|\\ -\frac{3}{4}\left|\widehat{n-3},n-1,n\right|-\frac{3}{4}\left| \widehat{n-2},n+1\right|=0.\end{split} \tag{4.5}\]
The identities (4.4) and (4.5) are very useful in constructing the Wronskian solution of equation (1.1).
Next, we construct the Wronskian condition of the good Boussinesq equation (1.1) by virtue of the Lax pair (3.20)-(3.21) obtained in Section 3.3. Choosing \(u=0\) and \(\lambda=0\), the Lax pair (3.20)-(3.21) reduces to
\[\psi_{t}=-\psi_{2x},\quad\psi_{3x}=\frac{3}{4}\psi_{x}. \tag{4.6}\]
Therefore, choose \(\psi_{j}\in C^{\infty}\left(\Omega\right)\) satisfying
\[\psi_{j,t}=-\psi_{j,xx},\quad\psi_{j,xxx}=\frac{3}{4}\psi_{j,x},\quad j=1,2, \cdots,n. \tag{4.7}\]
**Theorem 4.3**.: _Assume that a group of functions \(\psi_{j}\left(x,t\right)\), \(j=1,2,\cdots,n\), satisfies_
\[\psi_{j,t}=-\psi_{j,xx},\quad\psi_{j,xxx}=\frac{3}{4}\psi_{j,x},\quad j=1,2, \cdots,n, \tag{4.8}\]
_simultaneously. Then \(F=W\left(\psi_{1},\psi_{2},\cdots,\psi_{n}\right)\) defined by (4.1) solves the bilinear Boussinesq equation(3.6)._
Proof.: It suffices to prove that \(F=W\left(\psi_{1},\psi_{2},\cdots,\psi_{n}\right)\) solves the bilinear equation (3.6). To complete the proof, the bilinear equation (3.6) is reduced to a determinant identity. The bilinear equation (3.6) can be rewritten as
\[\Delta\equiv 2F_{2t}F-2F_{t}^{2}-2FF_{2x}+2F_{x}^{2}+\frac{2}{3}FF_{4x}-\frac {8}{3}F_{x}F_{3x}+2F_{2x}^{2}=0. \tag{4.9}\]
Using the property of Wronskian determinant, calculate the derivatives required by the equation(4.9)
\[F_{x}=\left|\widehat{n-2},n\right|,\] \[F_{2x}=\left|\widehat{n-3},n-1,n\right|+\left|\widehat{n-2},n+1 \right|,\] \[F_{3x}=\left|\widehat{n-4},n-2,n-1,n\right|+2\left|\widehat{n-3},n-1,n+1\right|+\left|\widehat{n-2},n+2\right|,\] \[F_{4x}=\left|\widehat{n-5},n-3,n-2,n-1,n\right|+3\left|\widehat{ n-4},n-2,n-1,n+1\right|+2\left|\widehat{n-3},n,n+1\right|\] \[\qquad\qquad+3\left|\widehat{n-3},n-1,n+2\right|+\left|\widehat{ n-2},n+3\right|.\]
Using the formula (4.6), we can get
\[F_{t}=\left|\widehat{n-3},n-1,n\right|-\left|\widehat{n-2},n+1\right|,\] \[F_{tt}=\left|\widehat{n-5},n-3,n-2,n-1,n\right|-\left|\widehat{n-4},n-2,n-1,n+1\right|+2\left|\widehat{n-3},n,n+1\right|\] \[\qquad-\left|\widehat{n-3},n-1,n+2\right|+\left|\widehat{n-2},n+3\right|.\]
Substituting the above expressions into the left-hand side of (4.9), we get
\[\begin{split}\Delta=& 2F\left(F_{2t}-F_{2x}+\frac{1}{3}F_{4x}\right)-2F_{t}^{2}+2F_{x}\left(F_{x}-\frac{4}{3}F_{3x}\right)+2F_{2x}^{2}\\ =& 2\left|\widehat{n-1}\right|\Big{(}\left|\widehat{n-5},n-3,n-2,n-1,n\right|-\left|\widehat{n-4},n-2,n-1,n+1\right|+2\left|\widehat{n-3},n,n+1\right|\\ &-\left|\widehat{n-3},n-1,n+2\right|+\left|\widehat{n-2},n+3\right|-\left|\widehat{n-3},n-1,n\right|-\left|\widehat{n-2},n+1\right|\\ &+\frac{1}{3}\left|\widehat{n-5},n-3,n-2,n-1,n\right|+\left|\widehat{n-4},n-2,n-1,n+1\right|+\frac{2}{3}\left|\widehat{n-3},n,n+1\right|\\ &+\left|\widehat{n-3},n-1,n+2\right|+\frac{1}{3}\left|\widehat{n-2},n+3\right|\Big{)}-2\big{(}\left|\widehat{n-3},n-1,n\right|-\left|\widehat{n-2},n+1\right|\big{)}^{2}\\ &+2\left|\widehat{n-2},n\right|\Big{(}\left|\widehat{n-2},n\right|-\frac{4}{3}\left|\widehat{n-4},n-2,n-1,n\right|-\frac{8}{3}\left|\widehat{n-3},n-1,n+1\right|\\ &-\frac{4}{3}\left|\widehat{n-2},n+2\right|\Big{)}+2\big{(}\left|\widehat{n-3},n-1,n\right|+\left|\widehat{n-2},n+1\right|\big{)}^{2}\\ =& 2\left|\widehat{n-1}\right|\Big{(}\frac{4}{3}\left|\widehat{n-5},n-3,n-2,n-1,n\right|+\frac{8}{3}\left|\widehat{n-3},n,n+1\right|+\frac{4}{3}\left|\widehat{n-2},n+3\right|\\ &-\left|\widehat{n-3},n-1,n\right|-\left|\widehat{n-2},n+1\right|\Big{)}+8\left|\widehat{n-3},n-1,n\right|\cdot\left|\widehat{n-2},n+1\right|+2\left|\widehat{n-2},n\right|\\ &\Big{(}\left|\widehat{n-2},n\right|-\frac{4}{3}\left|\widehat{n-4},n-2,n-1,n\right|-\frac{8}{3}\left|\widehat{n-3},n-1,n+1\right|-\frac{4}{3}\left|\widehat{n-2},n+2\right|\Big{)}.\end{split} \tag{4.10}\]
Using the identities (4.4) and (4.5), the following Plucker relation can be obtained
\[\begin{split}\Delta=& 8\left|\widehat{n-1}\right|\cdot\left|\widehat{n-3},n,n+1\right|+8\left|\widehat{n-3},n-1,n\right|\cdot\left|\widehat{n-2},n+1\right|\\ &-8\left|\widehat{n-2},n\right|\cdot\left|\widehat{n-3},n-1,n+1\right|\\ =& 4\left|\begin{matrix}\widehat{n-3}&0&n-2&n-1&n&n+1\\ 0&\widehat{n-3}&n-2&n-1&n&n+1\end{matrix}\right|\\ \equiv& 0.\end{split} \tag{4.11}\]
Theorem 4.3 tells us that if a group of functions \(\psi_{j}\left(x,t\right)\)\(\left(j=1,2,\cdots,n\right)\) satisfy conditions in (4.7), then we can get a solution \(F=W\left(\psi_{1},\psi_{2},\cdots,\psi_{n}\right)\) to the bilinear Boussinesq equation(3.6). Thus, the Wronskian determinant solution of the original equation (1.1) is
\[u=2(\log F)_{xx}=2(\log W\left(\psi_{1},\psi_{2},\cdots,\psi_{n}\right))_{xx}. \tag{4.12}\]
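As an illustration (not part of the original text), the SymPy sketch below constructs two functions satisfying the conditions (4.7) and checks that the resulting second-order Wronskian makes the bilinear residual (4.9) vanish; the particular exponentials chosen are one convenient solution of (4.7), not the general one.

```python
import sympy as sp

x, t = sp.symbols('x t')
a = sp.sqrt(3) / 2                                   # a**3 = (3/4) a and a**2 = 3/4
e1 = sp.exp(a * x - sp.Rational(3, 4) * t)           # satisfies psi_t = -psi_xx, psi_xxx = (3/4) psi_x
e2 = sp.exp(-a * x - sp.Rational(3, 4) * t)
psi1, psi2 = 1 + e1, e1 - e2                         # linear combinations also satisfy (4.7)

F = sp.det(sp.Matrix([[psi1, sp.diff(psi1, x)],
                      [psi2, sp.diff(psi2, x)]]))    # W(psi1, psi2)

Fx, F2x, F3x, F4x = [sp.diff(F, x, n) for n in (1, 2, 3, 4)]
Ft, F2t = sp.diff(F, t), sp.diff(F, t, 2)
delta = (2 * F2t * F - 2 * Ft**2 - 2 * F * F2x + 2 * Fx**2
         + sp.Rational(2, 3) * F * F4x - sp.Rational(8, 3) * Fx * F3x + 2 * F2x**2)
print(sp.simplify(delta))                            # expected output: 0
```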
## 5 Concluding remarks
The present investigation has been carried out on the good Boussinesq equation (1.1). With binary Bell polynomials, the corresponding bilinear representation, bilinear Backlund transformation, Lax pair and infinite conservation laws are obtained directly and naturally. Combined with the perturbation expansion method, the \(n\)-soliton solution has also been derived.
Finally, inspired by the construction of the Wronskian condition for the KdV equation, this paper constructs the Wronskian condition for the good Boussinesq equation from the Bell polynomials. We believe that there are still many deep relations between Bell polynomials and the Wronskian technique, which remain open and constitute an issue for future research to explore.
## Acknowledgments
We express our sincere thanks to Prof. E.G. Fan and Prof. Y. Chen for their valuable guidance and advice.
|
2305.17786 | Real-time Object Detection: YOLOv1 Re-Implementation in PyTorch | Real-time object detection is a crucial problem to solve when it comes to
computer vision systems that need to make appropriate decisions based on
detections in a timely manner. I have chosen the YOLO v1 architecture to
implement using the PyTorch framework, with the goal of familiarizing myself with the entire
object detection pipeline. I attempted different techniques to modify the
original architecture to improve the results. Finally, I compare the metrics of
my implementation to the original. | Michael Shenoda | 2023-05-28T18:17:31Z | http://arxiv.org/abs/2305.17786v1 | # Real-time Object Detection:
###### Abstract
Real-time object detection is a crucial problem to solve when it comes to computer vision systems that need to make appropriate decisions based on detections in a timely manner. I have chosen the YOLO v1 architecture and implemented it using the PyTorch framework, with the goal of familiarizing myself with the entire object detection pipeline. I attempted different techniques to modify the original architecture to improve the results. Finally, I compare the metrics of my implementation to the original.
## I Introduction
Surprisingly, humans are really good at understanding the visual content of an image and can instantly provide information about the objects within. Since 2012, Convolutional Neural Networks (CNNs) have become very popular for object detection and classification tasks, yet the challenges of creating reliable models can get overwhelming. The YOLO architecture has taken a leap forward as the architecture of choice for real-time detectors. I have chosen the YOLO v1 architecture to implement, with the primary goal of learning the fundamentals. Note that the original YOLO is implemented in the Darknet C framework. So, the goal is to familiarize myself with the object detection training and inference pipeline and understand the proper way to implement it using the PyTorch framework. I attempted different techniques to modify the original architecture, such as changes to kernel sizes, network depth, and activation layers, to improve the results. I have created a baseline using YOLO Tiny and propose a modified version of the model to attempt to improve the results overall. Finally, I show the metrics compared to the baseline and to the published YOLO model metrics. I also provide a visual validation of the detections to show how the model behaves.
## II Fundamentals of YOLO
### _Dataset_
Starting off with the dataset, YOLO used the PASCAL Visual Object Classes dataset, the 2007 and 2012 versions combined. It's important to note that a pre-processing step is required to convert the labels from VOC format to YOLO format. The author of YOLO has provided a conversion Python script, which is available on GitHub [2].
### _Bounding Box Labels_
The YOLO bounding box label starts with a numerical class id, followed by normalized box coordinates between 0 and 1. It's important to note that the x, y coordinate here is the bounding box center. They have chosen that to allow the bounding box to scale regardless of the image size.
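For example, a single label line might look like `11 0.339 0.606 0.402 0.420`, following the layout `class_id x_center y_center width height` with all box values normalized to [0, 1]; the numbers here are made up purely for illustration.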
### _Tensor Structure_
Before diving into the model architecture, let's look at the tensor structure. Looking at the tensor from a 2D perspective, it's a grid divided by the chosen grid size of 7x7. If we look at the tensor depth, each cell contains two bounding boxes, followed by an object presence flag, then followed by class probabilities. The bounding boxes here are relative to the cell instead of the whole image. So, an encoding step is required to encode the raw label into a tensor before training. Also, a decoding step is required during inference, to decode the tensor back to box coordinates relative to the image and get the class information of the object.
Fig. 1: Conversion from VOC to YOLO
Fig. 2: Bounding Box Labels Per Image
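A minimal sketch of such an encoding step is shown below; the exact channel layout (here two boxes, then the presence flag, then the class probabilities, matching the description above, whereas the original YOLO v1 stores a confidence per box for a depth of 30) and the helper name are assumptions for illustration.

```python
import torch

S, B, C = 7, 2, 20                        # grid size, boxes per cell, VOC classes

def encode_label(boxes):
    """boxes: list of (class_id, cx, cy, w, h), coordinates normalized to [0, 1]."""
    target = torch.zeros(S, S, B * 4 + 1 + C)
    for cls, cx, cy, w, h in boxes:
        col = min(int(cx * S), S - 1)                 # cell containing the box center
        row = min(int(cy * S), S - 1)
        cell_x, cell_y = cx * S - col, cy * S - row   # center relative to that cell
        if target[row, col, B * 4] == 0:              # keep at most one object per cell
            target[row, col, 0:4] = torch.tensor([cell_x, cell_y, w, h])
            target[row, col, 4:8] = torch.tensor([cell_x, cell_y, w, h])
            target[row, col, B * 4] = 1.0             # object presence flag
            target[row, col, B * 4 + 1 + int(cls)] = 1.0   # one-hot class probability
    return target
```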
### _Full Architecture_
YOLO has 24 convolutional layers; alternating 1 x 1 convolutional layers reduce the feature space from the preceding layers, followed by 2 fully connected layers that produce the final tensor shown previously.
### _Tiny Baseline Architecture_
For my baseline, I chose the tiny architecture of YOLO to implement, due to time and resource constraints. The tiny architecture is composed of 9 convolutional layers followed by 2 fully connected layers to produce the final tensor. Note that I reduced the fully connected size from 4096 to 2048 to reduce the memory footprint.
### _MS Architecture_
My modified YOLO model consists of 6 convolutional layers, with some modifications to kernel sizes and activations, and with adaptive average pooling [5] added before the fully connected layers. I also reduced the fully connected size from the original 4096 down to 1920. In the convolutional layers I have made the following changes:
1. Used 5x5 kernel size across the board
2. Replaced first 3 LeakyReLU with ReLU
3. Replaced 4th & 5th LeakyReLU with SiLU [6]
4. Then added AdaptiveAvgPool2d before Fully Connected
Regarding the FC Layers, I have made the following changes:
1. Replaced LeakyReLU with SiLU activation
2. Changed the Dropout after the first FC to 0.25 (a code sketch of the modified blocks follows below)
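Below is a rough PyTorch sketch of the modified blocks listed above (5x5 kernels, ReLU/SiLU activations, adaptive average pooling, and the smaller fully connected head); the channel widths are placeholders, since the exact values live in Fig. 6 rather than the text.

```python
import torch.nn as nn

def conv_block(in_ch, out_ch, act):
    # 5x5 kernels across the board; padding keeps the spatial size before pooling
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, kernel_size=5, padding=2),
                         act, nn.MaxPool2d(2, 2))

class YoloMSHead(nn.Module):
    """AdaptiveAvgPool2d -> FC(1920) -> SiLU -> Dropout(0.25) -> FC to the output tensor."""
    def __init__(self, in_ch, S=7, B=2, C=20):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d((S, S))
        self.fc = nn.Sequential(nn.Flatten(),
                                nn.Linear(in_ch * S * S, 1920), nn.SiLU(),
                                nn.Dropout(0.25),
                                nn.Linear(1920, S * S * (B * 5 + C)))

    def forward(self, x):
        return self.fc(self.pool(x))
```

Early blocks would use `conv_block(..., nn.ReLU())` and the last two `conv_block(..., nn.SiLU())`, mirroring items 2 and 3 of the convolutional changes.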
### _Training_
The training of the original YOLO was done with 135 epochs, with the convolutional layers pre-trained on ImageNet at half resolution and then doubled up for detection. They mention in the paper that the pretraining took them a week to complete, for which I have neither the time nor the resources! Instead, I train everything from scratch for 200 epochs. The baseline used a fixed learning schedule. For my modified YOLO, I ran two different experiments: one with a two-step fixed learning rate, and the other with a OneCycle learning rate with cosine annealing. Everything was trained with the same optimizer with momentum and decay.
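A minimal sketch of this optimizer and scheduler setup is shown below; the learning-rate values are placeholders rather than the exact numbers from Fig. 7.

```python
import torch

def train(model, train_loader, criterion, epochs=200):
    # SGD with momentum and decay, shared across the experiments
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-3,
                                momentum=0.9, weight_decay=5e-4)
    # OneCycle learning rate with cosine annealing, stepped once per batch
    scheduler = torch.optim.lr_scheduler.OneCycleLR(
        optimizer, max_lr=1e-2, epochs=epochs,
        steps_per_epoch=len(train_loader), anneal_strategy='cos')
    for _ in range(epochs):
        for images, targets in train_loader:
            loss = criterion(model(images), targets)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            scheduler.step()
```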
### _YOLO Loss_
Now, let's look at the YOLO loss. As we can see in the following equation, it's not an off-the-shelf loss function. At first, it may look complicated, but once broken into parts it starts to make more sense.
Fig. 4: YOLO Full Model Architecture
Fig. 5: YOLO Tiny Model Baseline Architecture
Fig. 3: YOLO Tensor Structure
Fig. 6: YOLO MS Model Architecture
Fig. 7: Training Parameters Table
Looking back at the tensor structure mentioned earlier, we can see that it maps nicely onto the loss function.
The first part of the equation, highlighted in green, is the coordinate loss, which is responsible for the bounding boxes. The second part, highlighted in blue, is the object presence loss, which is responsible for whether the cell contains an object or not! The third and last part is the object classification loss, which is responsible for the class probabilities. Now we can see clearly how the loss is structured and mapped to the tensor structure.
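To make the structure concrete, here is a simplified PyTorch sketch of those three components; for brevity it scores a single box per cell (the full loss picks the responsible box of the two by IoU) and uses the usual lambda weights, so treat it as an illustration rather than a faithful copy of the equation in Fig. 8.

```python
import torch.nn.functional as F

def yolo_loss(pred, target, lambda_coord=5.0, lambda_noobj=0.5):
    """pred/target: (batch, S, S, 5 + C) laid out as [cx, cy, w, h, conf, classes...]."""
    obj = target[..., 4:5]      # 1 where the cell contains an object, 0 otherwise
    noobj = 1.0 - obj

    # 1) coordinate loss (green part): centers plus square-rooted sizes, object cells only
    coord = F.mse_loss(obj * pred[..., 0:2], obj * target[..., 0:2], reduction='sum')
    size = F.mse_loss(obj * pred[..., 2:4].clamp(min=0).sqrt(),
                      obj * target[..., 2:4].sqrt(), reduction='sum')

    # 2) object presence loss (blue part), down-weighted on empty cells
    conf_obj = F.mse_loss(obj * pred[..., 4:5], obj * target[..., 4:5], reduction='sum')
    conf_noobj = F.mse_loss(noobj * pred[..., 4:5], noobj * target[..., 4:5], reduction='sum')

    # 3) classification loss over the class probabilities, object cells only
    cls = F.mse_loss(obj * pred[..., 5:], obj * target[..., 5:], reduction='sum')

    return lambda_coord * (coord + size) + conf_obj + lambda_noobj * conf_noobj + cls
```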
### _YOLO Augmentation_
The original YOLO uses color jitter, which randomly adjusts brightness, contrast, saturation, and hue. It also applies random up-scaling of the image by up to 20%.
## III PyTorch Implementation
### _Custom Augmentation_
In my implementation, I used color jitter similar to the original, in addition to random blur, random grayscale, random horizontal flip, random vertical flip, random rotation jitter.
\(ColorJitter(0.2,0.5,0.7,0.07),\)
\(RandomBlur([3,3],sigma=[0.1,2],p=0.1),\)
\(RandomGrayscale(p=0.1),\)
\(RandomHorizontalFlip(0.5),\)
\(RandomVerticalFlip(0.05),\)
\(RandomRotationJitter(p=0.5)\)
It's important to note here that _RandomBlur, RandomHorizontalFlip, RandomVerticalFlip, and RandomRotationJitter_ are custom transform modules that I implemented myself. Specifically, the last three transforms change the location of the pixels, so you cannot just use PyTorch's off-the-shelf transforms without taking care of transforming the bounding boxes as well! Yeah! Nothing is free when it comes to object detection!
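As an example, a box-aware horizontal flip can be sketched as below; the convention that boxes arrive as normalized `[class_id, cx, cy, w, h]` rows is an assumption and may differ from the repository's actual interface.

```python
import random
import torchvision.transforms.functional as TF

class RandomHorizontalFlipWithBoxes:
    """Flips the image and mirrors the normalized x-centers of the boxes accordingly."""
    def __init__(self, p=0.5):
        self.p = p

    def __call__(self, image, boxes):
        # boxes: tensor of shape (num_boxes, 5) -> [class_id, cx, cy, w, h], normalized
        if random.random() < self.p:
            image = TF.hflip(image)
            boxes = boxes.clone()
            boxes[:, 1] = 1.0 - boxes[:, 1]   # mirror the x-center; w and h stay the same
        return image, boxes
```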
### _Modules Overview_
Below is an overview of the Python implementation scripts and configuration.
The ones highlighted in green are the user-facing scripts, which you can use for training, evaluation, and detection. I also provide a test_transforms script to easily visualize the effect of the image augmentation transforms and help choose the appropriate augmentation for your own dataset. The modules highlighted in blue are internal and not intended to be called directly.
## IV Results
### _Training & Validation Loss_
Showing below the training and validation loss for the experiments for baseline model and my modified model.
It's important to note that using the OneCycle [4] learning schedule with cosine annealing has helped the model train and converge faster compared to the fixed and multi-step fixed learning rates.
Fig. 11: Training and Validation Loss
Fig. 8: YOLO Loss Function
Fig. 10: Implementation Modules
Fig. 9: YOLO Loss Function with Visual Marking
### _Mean Average Precision_
Here is the comparison of the mean average precision for all the models. The two on the top are my own and the two on the bottom are the published ones.
Clearly my latest model at 62.3 mAP is better than the YOLO Tiny baseline model. It's getting closer to the result of the full YOLO model, but it's super tiny! Extensive testing would still need to be done to fully confirm the performance.
### _Visual Inspection_
By visually inspecting the detections, we can see that the model provides good localization and handles multi-object scenarios, as shown below:
Overall, I think the results are considered good, given the time and resource limitation for training.
## V Limitations and Future Improvements
Obviously, the existing limitations of the original YOLO still remain, and I claim no exceptional improvement over it. This entire work is intended for self-education on YOLO object detection fundamentals and for familiarity with implementing it in PyTorch. Further improvements could be made with regard to the training data, by incorporating a better dataset than VOC. Other things to consider are experimenting with layer fusion as well as model pruning to make it super light on low-powered devices. Performance testing on various GPUs and edge devices should be done to validate the network. Also, I'm not certain about the generalization of the modified model.
## VI Conclusion
This was a comprehensive work that resulted in a deeper understanding of the fundamentals of YOLO and object detection in general. Things that I have learned are the following:
Understanding the YOLO paper alongside an existing implementation, to be able to break it down into smaller parts and digest it step by step. Implementing an object detection training/inference pipeline in PyTorch, which includes modules such as a custom dataset, custom transforms, a custom loss, training and validation loops, visualization using TensorBoard, and a detector module for inference.
Overall, learning the fundamentals of real time object detection is valuable knowledge for my future work with regards to real-time instance segmentation, which was my primary motive behind this work.
|
2308.09544 | Adapt Your Teacher: Improving Knowledge Distillation for Exemplar-free
Continual Learning | In this work, we investigate exemplar-free class incremental learning (CIL)
with knowledge distillation (KD) as a regularization strategy, aiming to
prevent forgetting. KD-based methods are successfully used in CIL, but they
often struggle to regularize the model without access to exemplars of the
training data from previous tasks. Our analysis reveals that this issue
originates from substantial representation shifts in the teacher network when
dealing with out-of-distribution data. This causes large errors in the KD loss
component, leading to performance degradation in CIL models. Inspired by recent
test-time adaptation methods, we introduce Teacher Adaptation (TA), a method
that concurrently updates the teacher and the main models during incremental
training. Our method seamlessly integrates with KD-based CIL approaches and
allows for consistent enhancement of their performance across multiple
exemplar-free CIL benchmarks. The source code for our method is available at
https://github.com/fszatkowski/cl-teacher-adaptation. | Filip Szatkowski, Mateusz Pyla, Marcin Przewięźlikowski, Sebastian Cygert, Bartłomiej Twardowski, Tomasz Trzciński | 2023-08-18T13:22:59Z | http://arxiv.org/abs/2308.09544v3 | # Adapt Your Teacher: Improving Knowledge Distillation for Exemplar-free Continual Learning
###### Abstract
In this work, we investigate exemplar-free class incremental learning (CIL) with knowledge distillation (KD) as a regularization strategy, aiming to prevent forgetting. KD-based methods are successfully used in CIL, but they often struggle to regularize the model without access to exemplars of the training data from previous tasks. Our analysis reveals that this issue originates from substantial representation shifts in the teacher network when dealing with out-of-distribution data. This causes large errors in the KD loss component, leading to performance degradation in CIL models. Inspired by recent test-time adaptation methods, we introduce Teacher Adaptation (TA), a method that concurrently updates the teacher and the main models during incremental training. Our method seamlessly integrates with KD-based CIL approaches and allows for consistent enhancement of their performance across multiple exemplar-free CIL benchmarks. The source code for our method is available at [https://github.com/fszatkowski/cl-teacher-adaptation](https://github.com/fszatkowski/cl-teacher-adaptation).
## 1 Introduction
Continual learning aims to create machine learning models capable of acquiring new knowledge and adapting to evolving data distributions over time. One of the most challenging continual learning scenarios is _class incremental learning_ (CIL) [32, 50], where the model is trained to classify objects incrementally from the sequence of tasks, without forgetting the previously learned ones.
A simple and effective method of reducing forgetting is by leveraging _exemplars_[6, 21, 37, 41] of previously encountered training examples, _e.g._ by replaying them or using them for regularization. However, this approach presents challenges, particularly in terms of high storage needs and privacy concerns. These problems can affect edge devices, due to their limited storage capacity, and medical data, given their sensitive nature.
A common approach for exemplar-free CIL is knowledge distillation (KD), where the current model (student) is trained on the new data with a regularization term that minimizes the output difference with the previous model (teacher), which is kept frozen. This approach was introduced by LwF [29] and has been extended by many other methods. However, most of these methods use exemplars, such as iCaRL [41], EEIL [8], LUCIR [18], PodNET [14], SSIL [2], or rely on external datasets [61, 28].
Exemplar-free CIL still remains challenging [47] for KD methods due to the possibility of significant distribution drift in subsequent tasks. Such drift leads to large errors during training with KD loss, causing more undesired changes in the main model and harming the overall performance of the CIL training. This raises the question: _Can we adjust the teacher model to better transfer knowledge from earlier tasks?_
Motivated by the recent domain adaptation methods [52, 45], we examine the role of batch normalization statistics in CIL training. We conjecture that in standard KD methods, the KD loss between models with different normalization
Figure 1: Enhancement of vanilla Knowledge Distillation approach used in Continual Learning with our method of Teacher Adaptation. When training the student model on the new task, we allow the teacher model to continuously update its batch normalization statistics, which reduces the divergence between the representations in both models. Our method leads to lower knowledge distillation loss and an overall more stable model.
statistics may introduce unwanted model updates due to the data distribution shifts. To avoid this, we propose to continuously adapt them to the new data for the teacher model while training the student.
We show that adapting the teacher's batch normalization statistics to the new task can significantly lower KD loss without affecting the CE loss, which reduces changes in the model's representations (Figure 2). We note that, while the idea of changing the teacher model was explored in the standard KD settings [30, 64], our approach is the first application of this idea to CIL scenario, where the teacher and the model are trained on non-overlapping data. Moreover, our method works differently by exploiting the batch normalization statistics. We apply our method on top of different distillation strategies in CIL and show consistent improvements across various settings.
In summary, we make the following contributions:
1. We revisit the KD-based class-incremental learning (CIL) framework and study the negative impact of regularization using out-of-distribution data. We are the first to highlight the need for adjusting the teacher model in an exemplar-free situation, where it is usually kept frozen.
2. We propose a simple yet highly effective technique called Teacher Adaptation (TA), that enhances KD for exemplar-free CIL.
3. Through extensive experiments, we demonstrate that TA can be seamlessly integrated with various KD approaches, leading to significant improvements over the baselines across a wide range of continual learning scenarios for various datasets. We show that those improvements hold even when using pretrained models or in the presence of substantial distributional shifts between consecutive domains.
## 2 Related works
**Class Incremental Learning (CIL)**[32, 50] is a subfield of continual learning, where the aim is to learn incrementally from a stream of tasks, without the task identifier. There exist several families of approaches to CIL:
Memory-based methods store either exemplars or features from the previous tasks in the buffer [6, 21, 37, 41] and use them during training the new task to consolidate previously learned knowledge. Those methods usually perform well, but their practical applications are limited due to privacy concerns and memory requirements that arise when storing the data. Architectural approaches focus on modifying the structure of the model, often by allocating certain parameters to corresponding tasks [53, 54]. Finally, regularization-based methods aim to preserve the knowledge in the network by imposing constraints on the changes in model weights [23] or activations during learning the new task [29]. Many CIL methods also combine the above approaches [8, 42, 55], for instance using both memory and regularization [29, 41, 2].
**Regularization methods for continual learning** offer a way to prevent forgetting with constant memory usage and no privacy issues. There are two main types of regularization methods: (i) parameter regularization and (ii) functional regularization. The first type of methods regularizes the model weights, for example using the Fisher Information Matrix [23], synaptic saliency [59] or the gradient inspection [3]. On the other hand, functional regularization methods employ knowledge distillation (KD) techniques to regularize model activations. KD was originally proposed by Hinton et al. [17] to transfer the knowledge from a larger model to a smaller one. In CL, KD was first applied in Learning without Forgetting (LwF) [29], where the model is discouraged from drifting too far from the model from previous tasks. We describe KD methods in detail in Section 3.1.
**Functional regularization** has been widely used in
Figure 2: Applying our teacher adaptation (TA) method reduces knowledge distillation (KD) loss and improves stability throughout continual learning. (left) KD loss and cross-entropy (CE) loss of training the model with and without TA. Our method leads to more consistent representation, as visualized by the CKA [24] between the representations of the new data obtained in the teacher and student models while learning the second task (middle). KD with TA leads to better task-agnostic accuracy (right). We conduct the experiments on CIFAR100 split into 10 tasks.
CL since the introduction of LwF [18, 37, 40, 41] and numerous variants of KD have been proposed. Particularly, SSIL [2] uses task-wise knowledge distillation, while PODNET [14] regularizes using spatial-based distillation loss applied throughout the model.
Multi-level knowledge distillation [13] uses the current model to distill the knowledge from pruned snapshots of all previous models, while ANCL [22] distills simultaneously from the previous task model and the model learned specifically for the new task. Moreover, DMC [60] uses knowledge distillation on an auxiliary dataset to consolidate the knowledge from previous tasks.
However, most of those methods use a memory buffer and their performance depends on it heavily, which makes them impractical for exemplar-free settings. Recently, several works explored the idea of modifying the teacher model through meta-learning for better knowledge transfer in standard KD setting [64, 30]. Similarly, works on KD [16] suggest that updating the normalization statistics of the teacher model on the data used to train the student improves the performance. However, to our knowledge, our method is the first one that explores different approaches to teacher adaptation, such as updating the normalization statistics, in the context of CIL.
**Batch Normalization** (BN) [20] is widely used in deep learning models, but can be problematic for settings where the data distribution changes over time. Alternative approaches such as LayerNorm [5] or GroupNorm [57] do not rely on the batch-wise statistics, but directly replacing BN layers with them was shown to often decrease the performance of the models. Several domain adaptation methods achieve domain transfer through the use of normalization statistics [52, 45]. Recent work on efficient finetuning of large language models using only normalization layers [38] also suggests that the normalization layers play a crucial role in training deep neural networks. In CL, it was shown that BN can cause a discrepancy between the training and testing phases of BN, as the testing data is normalized using the statistics biased towards the current task, which results in higher forgetting of older tasks [44]. Several works have attempted to address this issue by CL-specific modifications to BN [9, 36]. However, those approaches are not suited for exemplar-free settings.
## 3 Method
In class-incremental learning setup, the model learns tasks sequentially. Each task contains several classes which are disjoint with the classes in other tasks. During training task \(t\), we only have access to the data \(D^{t}\) from task \(t\) which contains images \(\mathbf{x}_{i}\in X^{t}\) with class labels \(y_{i}\in C^{t}\). Thus an incremental learning problem \(\mathcal{T}\) with \(n\) tasks can be formulated as: \(\mathcal{T}=\left\{D^{1},D^{2},...,D^{t},...,D^{n}\right\}\), where after training \(n\) tasks we evaluate the model on all classes \(C^{1}\cup\ldots\cup C^{n}\), without knowing the task label at inference time (this is different than task-incremental learning, where task id can be used).
Below, we first introduce standard KD-based methods for exemplar-free CIL. Then we outline a problem of diverging batch normalization statistics between the teacher and student model caused by the shifts in training data between subsequent tasks. Finally, we propose to address this issue with a method that we call _Teacher Adaptation_ - a simple, yet effective solution that allows the teacher model to continuously update its normalization statistics alongside the student when training on the new data. The method in comparison to standard LwF is presented in Figure 1.
### Knowledge Distillation in Continual Learning
Knowledge distillation is one of the most popular techniques employed to reduce forgetting between subsequent tasks in incremental learning. Continual learning methods that use knowledge distillation save the model \(\Theta_{t}\) (_teacher_) trained after each task \(t\) and use it during learning the model \(\Theta_{t+1}\) (_student_) on new task \(t+1\). The learning objective for task \(t+1\) then becomes:
\[L=L_{CE}+\lambda L_{KD}, \tag{1}\]
where \(L_{CE}\) is the cross-entropy loss for classification on new data, \(L_{KD}\) is the knowledge distillation loss computed using \(\Theta_{t}\) and \(\Theta_{t+1}\), and \(\lambda\) is the coefficient that controls the trade-off between stability and plasticity. The general formula for knowledge distillation loss can include either output from the final layer of the model [2, 41, 29], or also representations from intermediate model layers [14, 12]. In practice, most exemplar-free methods that use knowledge distillation compute knowledge distillation loss using only the final layer outputs, and various methods that use intermediate representations usually only perform well with exemplars [47].
Multiple variants of knowledge distillation loss were proposed for continual learning. In exemplar-free CIL, KD loss is usually computed with the logits \(y_{o}\), \(\hat{y_{o}}\) returned by the current and previous models respectively. Following Li et al. [29], we denote that the loss uses logits corresponding to previously seen classes with a subscript \(o\). Ahn et al. [2] classify KD methods into general KD (GKD), which aggregates together logits belonging to the classes from all the previous tasks, and task-wise KD (TKD), which treats classes within each task separately.
GKD loss appears in several works [28, 56, 62] and usually uses cross-entropy:
\[\mathcal{L}_{GKD}(\mathbf{y}_{o},\mathbf{\hat{y}}_{o})=-\sum_{i=1}^{|C_{t}|}p _{o}^{(i)}\log\hat{p}_{o}^{(i)}, \tag{2}\]
where \(p_{o}^{(i)}\) is the probability of the \(i\)-th class and \(|C_{t}|\) is the number of classes learned by previous model \(\Theta_{t}\). Probabilities \(p_{o}^{(i)}\), \(\hat{p}_{o}^{(i)}\) are computed with temperature parameter \(T\) as follows:
\[p_{o}^{(i)}=\frac{e^{y_{o}^{(i)}/T}}{\sum_{j}e^{y_{o}^{(j)}/T}},\quad\hat{p}_{o}^{(i)}=\frac{e^{\hat{y}_{o}^{(i)}/T}}{\sum_{j}e^{\hat{y}_{o}^{(j)}/T}} \tag{3}\]
Comparatively, TKD loss, which was also used in several works [2, 8, 29], sums the separately computed losses for each task:
\[\mathcal{L}_{TKD}(\mathbf{y}_{o},\mathbf{\hat{y}}_{o})=\sum_{i=1}^{t}\mathcal{D}_{KL}\left(p_{o}^{(i)}\,\|\,\hat{p}_{o}^{(i)}\right), \tag{4}\]
where \(\mathcal{D}_{KL}\) is Kullback-Leibler divergence and \(p_{o}^{(i)}\), \(\hat{p}_{o}^{(i)}\) are computed task-wise across the classes that belong to task \(i\) as in Equation (3).
Rebuffi et al. [41] proposed another distinct variant of KD for multi-class incremental learning, where the loss is computed element-wise for each class (MKD):
\[\mathcal{L}_{MKD}\left(\mathbf{y}_{o},\mathbf{\hat{y}}_{o}\right)=-\sum_{i=1} ^{|C_{t}|}\sigma(y_{o}^{i})\log\sigma(\hat{y}_{o}^{i}), \tag{5}\]
where \(\sigma\) is a sigmoid function.
Additionally, a more recent KD-based method, Auxiliary Network Continual Learning (ANCL) [22], explores the idea of multi-teacher KD for continual learning. ANCL trains an auxiliary network trained only for the current task and combines standard GKD loss with the KD loss computed between outputs for the current task and outputs of this auxiliary network.
In this work, we investigate the aforementioned KD techniques (GKD, MKD, TKD, ANCL).
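As a reference point for the experiments below, here is a minimal PyTorch sketch of the GKD term of Eq. (2)-(3). The direction of the cross-entropy (softened teacher distribution as the target) and the temperature value follow common KD practice and are assumptions of the sketch rather than the exact FACIL implementation.

```python
import torch
import torch.nn.functional as F

# Minimal sketch of a temperature-softened cross-entropy KD term over the
# logits of previously seen classes. T = 2 is a common default, not
# necessarily the value used in our experiments.
def gkd_loss(student_logits_old, teacher_logits_old, T=2.0):
    log_p_student = F.log_softmax(student_logits_old / T, dim=1)
    p_teacher = F.softmax(teacher_logits_old / T, dim=1)   # frozen teacher target
    return -(p_teacher * log_p_student).sum(dim=1).mean()

student_old = torch.randn(8, 50)     # logits for the 50 classes seen so far
teacher_old = torch.randn(8, 50)
loss_kd = gkd_loss(student_old, teacher_old)
```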
### Teacher Adaptation
Most models used in class incremental learning for vision tasks are convolutional neural networks such as ResNet [15]. Those models typically use batch normalization layers and keep the parameters and statistics of those layers in the teacher model \(\Theta_{t}\) fixed during learning \(\Theta_{t+1}\). However, with the changing distribution of the data in the new task, batch normalization statistics in the student and teacher models quickly diverge, which leads to higher KD loss. Gradient updates in this case not only regularize the model towards the stable performance on previous tasks but also compensate for the changes in normalization statistics, which needlessly overwrites the knowledge stored in the model and harms the learning process.
Inspired by test-time adaptation methods [52], we propose to reduce this negative interference with a simple method that we label _Teacher Adaptation (TA)_. Our method updates batch normalization statistics of both models simultaneously on new data while learning the new task. As shown in Figure 2, it allows for significantly reduced KD loss over learning from sequential tasks in CIL, which improves the overall model stability. We provide additional analysis on why TA improves learning with KD in Section 4.4 and in the Appendix.
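For illustration, the sketch below shows one way such an update can be realized in PyTorch: the teacher is kept frozen with respect to gradients, while its batch normalization layers are switched back to training mode so that their running statistics follow the new-task batches. This is a minimal sketch of the mechanism, not the exact implementation from our repository.

```python
import torch
import torch.nn as nn

# Sketch of Teacher Adaptation: no gradient updates to the teacher, but its
# BatchNorm running statistics keep tracking the new-task batches.
def prepare_adapted_teacher(teacher: nn.Module) -> nn.Module:
    teacher.eval()                               # freeze dropout etc.
    for p in teacher.parameters():
        p.requires_grad_(False)                  # no gradient updates at all
    for m in teacher.modules():
        if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d)):
            m.train()                            # only running stats are updated
    return teacher

teacher = prepare_adapted_teacher(
    nn.Sequential(nn.Conv2d(3, 8, 3), nn.BatchNorm2d(8), nn.ReLU()))
with torch.no_grad():                            # forward pass on new-task data
    _ = teacher(torch.randn(4, 3, 32, 32))       # refreshes running_mean / var
```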
## 4 Experiments
### Experimental setup
We conduct experiments on common continual learning benchmarks, such as CIFAR100 [26], TinyImageNet200 [1] and ImageNet-Subset [11]. We measure models' adaptability to large shifts in data distributions on DomainNet [35] dataset. Additionally, following FACIL [32] we construct fine-grained classification benchmark using Oxford Flowers [34], MIT Indoor Scenes [39], CUB-200-2011 Birds [51], Stanford Cars [25], FGVC Aircraft [31], and Stanford Actions [58]. Finally, we also introduce a corrupted CIFAR100 setting where data in every other task contains noise of varying severity, which allows us to measure the impact of TA under varying and controllable degrees of data shift.
We create CIL scenarios by splitting the classes in each dataset into disjoint tasks. We experiment with two particular types of settings: the first type of setting is built by splitting the classes in the dataset into tasks containing an equal number of classes, while the other simulates pretaining the network and uses half of the classes as a larger first task, with subsequent tasks composed of the evenly split remaining classes.
For all experiments, we use the FACIL framework provided by Masana et al. [32]. For experiments on CIFAR100, we keep the class order from iCaRL [41] and we use ResNet32 [15]. For TinyImageNet, following [22], we rescale images to 32x32 pixels and also use ResNet32. For the other datasets, we use ResNet18 [15]. We always use the same hyperparameters for all variants within a single KD method unless stated otherwise, and we report the results averaged over three runs with different random seeds.
In every setup, we train the network on each new task for 200 epochs with batch size 128. We use SGD optimizer without momentum or weight decay, with a learning rate scheduler proposed by Zhou et al. [63], where the initial learning rate of 0.1 is decreased 10x after 60th, 120th and 160th epoch. For experiments conducted on CIFAR100 and TinyImageNet200 in Table 1 we also employ a warmup phase [27] for the new classification head. In the Appendix, we provide the ablation study of the warmup with different benchmarks, alongside the details of its implementation and discussion on the method. Additionally, we provide the evaluation of our method with different model architectures and batch sizes.
**Evaluation metrics.** The average incremental accuracy at task \(k\) is defined as \(A_{k}=\frac{1}{k}\sum_{j=1}^{k}a_{k,j}\), where \(a_{k,j}\in[0,1]\) is the accuracy of the \(j\)-th task (\(j\leq k\)) after training the network sequentially for \(k\) tasks [4]. Overall average incremental accuracy \(Acc_{Inc}\) is the mean value over all tasks. We also report _average forgetting_ as defined in [10], while \(Forg_{Inc}\) is similarly the mean value over all tasks. We provide results with additional metrics such as final accuracy \(Acc_{Final}\) and final forgetting \(Forg_{Final}\) in the Appendix.
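A toy sketch of how these accuracy-based quantities are computed from a per-task accuracy matrix (the numbers below are made up for illustration):

```python
import numpy as np

# acc[k, j] holds the accuracy on task j after sequentially training up to task k.
acc = np.array([[0.90, 0.00, 0.00],
                [0.70, 0.85, 0.00],
                [0.60, 0.65, 0.80]])
A_k = [acc[k, :k + 1].mean() for k in range(acc.shape[0])]   # A_k per task
acc_inc = float(np.mean(A_k))                                # overall Acc_Inc
```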
### Standard CIL benchmarks
We evaluate knowledge distillation approaches described in Section 3.1 on the standard CIL benchmarks CIFAR100, TinyImageNet200 and ImageNet100, using different class splits. We present the results in Table 1 a), b) and c) respectively. We also provide results for more settings and ablation study of our method on those datasets in the Appendix.
In most settings, we observe that our method improves upon the baseline knowledge distillation. We notice that the improvement from TA is generally more significant in settings with a larger number of tasks and an equal split of the classes. In settings with half the classes presented in the first task, the gains from TA are sometimes not as visible, as in this case the initial model already learns a good feature extractor, and the distribution of its normalization statistics after the first task is a better approximation of the statistics for the whole dataset. TA sometimes underperforms with MKD, which might be caused by the fact that the loss formula of MKD uses the sigmoid function, and the differences between the probabilities for the KD loss are insignificant if the values of the logits are not small and centered around zero, which is not guaranteed without imposing additional learning constraints.
### TA under severe distribution shifts
Motivated by the continual learning settings in which the data distribution changes significantly across the tasks, we conduct a series of experiments to empirically verify the benefits of our method.
#### 4.3.1 Fine-grained classification datasets
We evaluate TA on fine-grained classification tasks using six datasets: Stanford Actions, FGVC Aircraft, Stanford Cars, CUB-200-2011 Birds, MIT Indoor Scenes and Oxford Flowers. We create CIL tasks by randomly sampling a subset of classes from each dataset, in the above-mentioned order. We sample the classes without replacement, and to obtain the settings with 12 or 24 tasks we repeat the procedure. For this set of experiments, we start from ResNet18 checkpoint pretrained on ImageNet.
We conduct experiments using splits of 24 tasks with 5 classes each, 12 tasks with 10 classes each, and 6 tasks with 20 classes each. We show the results in Table 2. Consistently with the results from Section 4.2, our method generally improves upon the base KD, with the improvements being more visible on the longer tasks.
We omit ANCL method from our analysis, as we were unable to obtain sufficiently good results with its official implementation. We provide the results of additional experiments conducted with reverse order of datasets and with the full datasets used as a single task in the Appendix.
#### 4.3.2 Large domain shifts with DomainNet
To verify the effectiveness of teacher adaptation for continual learners under significant data distribution shifts, we use DomainNet [35] as our evaluation dataset. DomainNet consists of images from 6 domains and 345 classes. We select the first 50 classes and create each task from a different domain, resulting in more severe data drift between tasks in CIL. This allows us to measure how well the models can adapt to new data distributions. We use ResNet18 and compare two settings: training from scratch and starting from a model pretrained on ImageNet. Table 3 shows the results of our experiments. Consistently with the results from previous Sections, we find that, aside from MKD, TA generally performs better than the baselines, and the differences are more visible when training on 12 tasks, where the model is exposed to more changes in the data distribution.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline & \multicolumn{2}{c}{6 tasks, 20 classes each} & \multicolumn{2}{c}{12 tasks, 10 classes each} & \multicolumn{2}{c}{24 tasks, 5 classes each} \\ \cline{2-7} & \(Acc_{Inc}\uparrow\) & \(Forg_{Inc}\downarrow\) & \(Acc_{Inc}\uparrow\) & \(Forg_{Inc}\downarrow\) & \(Acc_{Inc}\uparrow\) & \(Forg_{Inc}\downarrow\) \\ \hline GKD & 56.03\(\pm\)3.75 & **30.23\(\pm\)0.72** & 44.03\(\pm\)1.53 & 42.70\(\pm\)2.32 & 30.14\(\pm\)4.00 & 51.95\(\pm\)2.28 \\ _+ours_ & **57.24\(\pm\)1.79** & 33.12\(\pm\)1.76 & **50.80\(\pm\)1.27** & **37.58\(\pm\)3.01** & **42.86\(\pm\)1.82** & **39.01\(\pm\)1.55** \\ \hline MKD & **60.12\(\pm\)1.59** & **26.42\(\pm\)1.60** & 45.19\(\pm\)2.91 & 43.81\(\pm\)1.56 & 32.61\(\pm\)0.55 & 55.91\(\pm\)1.77 \\ _+ours_ & 55.77\(\pm\)2.23 & 31.43\(\pm\)2.28 & **51.16\(\pm\)1.84** & **34.89\(\pm\)2.60** & **45.49\(\pm\)1.82** & **34.48\(\pm\)0.89** \\ \hline TKD & 56.69\(\pm\)3.36 & 33.86\(\pm\)1.36 & 46.28\(\pm\)1.42 & 43.38\(\pm\)0.82 & 32.17\(\pm\)2.71 & 51.72\(\pm\)1.95 \\ _+ours_ & **57.99\(\pm\)1.88** & **33.79\(\pm\)1.80** & **51.66\(\pm\)1.63** & **35.43\(\pm\)2.75** & **43.95\(\pm\)2.60** & **33.28\(\pm\)1.86** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Average task-agnostic accuracy and forgetting for KD-based CL methods on fine-grained classification datasets.
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \hline & \multicolumn{2}{c}{6 tasks} & \multicolumn{2}{c}{6 tasks, pretrained} & \multicolumn{2}{c}{12 tasks} & \multicolumn{2}{c}{12 tasks, pretrained} \\ \cline{2-9} & \(Acc_{Inc}\uparrow\) & \(Forg_{Inc}\downarrow\) & \(Acc_{Inc}\uparrow\) & \(Forg_{Inc}\downarrow\) & \(Acc_{Inc}\uparrow\) & \(Forg_{Inc}\downarrow\) & \(Acc_{Inc}\uparrow\) & \(Forg_{Inc}\downarrow\) \\ \hline GKD & 18.63\(\pm\)0.27 & **23.27\(\pm\)0.26** & 43.27\(\pm\)0.10 & 36.83\(\pm\)0.88 & 14.45\(\pm\)0.25 & **29.04\(\pm\)0.37** & 35.98\(\pm\)0.96 & 43.00\(\pm\)1.66 \\ _+ours_ & **19.55\(\pm\)0.42** & 27.22\(\pm\)0.21 & **43.52\(\pm\)0.17** & **34.98\(\pm\)0.19** & **16.25\(\pm\)0.46** & 33.03\(\pm\)0.19 & **38.89\(\pm\)0.52** & **41.10\(\pm\)0.42** \\ \hline TKD & 19.12\(\pm\)0.26 & **26.66\(\pm\)0.38** & 42.42\(\pm\)0.10 & **40.83\(\pm\)1.11** & 16.31\(\pm\)0.55 & 37.32\(\pm\)0.87 & 38.15\(\pm\)0.22 & 42.28\(\pm\)0.50 \\ _+ours_ & **19.57\(\pm\)0.10** & 29.96\(\pm\)0.05 & **42.75\(\pm\)0.14** & 41.79\(\pm\)0.65 & **16.74\(\pm\)0.55** & **33.32\(\pm\)0.47** & **39.06\(\pm\)0.33** & **40.19\(\pm\)0.85** \\ \hline MKD & **18.74\(\pm\)0.52** & **19.10\(\pm\)0.40** & **45.70\(\pm\)0.30** & **27.48\(\pm\)0.60** & 13.45\(\pm\)0.53 & **27.23\(\pm\)0.98** & **39.14\(\pm\)0.21** & 36.53\(\pm\)1.00 \\ _+ours_ & 18.04\(\pm\)0.16 & 22.70\(\pm\)0.25 & 42.91\(\pm\)0.04 & 29.43\(\pm\)0.04 & **15.30\(\pm\)0.35** & 28.71\(\pm\)0.26 & 37.84\(\pm\)0.35 & **34.09\(\pm\)0.39** \\ \hline ANCL & 19.58\(\pm\)0.46 & **25.63\(\pm\)0.20** & **42.90\(\pm\)0.84** & **37.28\(\pm\)1.86** & 14.82\(\pm\)0.41 & **33.46\(\pm\)0.40** & 33.34\(\pm\)0.55 & 49.05\(\pm\)0.65 \\ _+ours_ & **20.34\(\pm\)0.40** & 30.73\(\pm\)0.44 & 42.67\(\pm\)0.51 & 38.56\(\pm\)1.24 & **17.19\(\pm\)0.06** & 37.40\(\pm\)0.38 & **35.81\(\pm\)0.18** & **43.84\(\pm\)0.23** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Average task-agnostic accuracy and forgetting for KD and TA under significant semantic drift on DomainNet. We test scenarios with 6 tasks of 50 classes and 12 tasks of 25 classes, both when training from scratch and starting from pretrained model. Aside from MKD, TA generally leads to better results.
#### 4.3.3 Varying the strength of the distribution shift
We create CIL settings with controllable levels of data distribution shift between subsequent tasks by corrupting every other task. We split CIFAR100 into 5, 10, and 20 tasks of equal size and add Gaussian noise to every other task, so that in subsequent tasks the data distribution changes from clean to noisy or vice versa. We obtain varying strength of distribution shift by using different levels of noise severity, following the methodology of Michaelis et al. [33].
We show the results of this experiment in Figure 3. We see that as the noise severity increases, the gap between standard KD and TA widens, indicating that our method is better suited to more challenging scenarios of learning under extreme data distribution shifts.
### Detailed analysis
#### 4.4.1 Alternatives to batch normalization
We conduct a series of ablation experiments on CIFAR100 split into 10 tasks to justify the validity of our method over other potential solutions for adaptation of batch normalization layers. The results of those experiments are shown in Table 4. We compare the following settings: 1) standard training with batch normalization statistics from the previous task fixed in the teacher model, but updated in the student model, 2) batch normalization layers removed, 3) batch normalization statistics fixed in both models after learning the first task, 4) batch normalization layers replaced with LayerNorm [5] or 5) GroupNorm [57] layers, and finally 6) our solution of Teacher Adaptation.
Fixing or removing BatchNorms leads to unstable training. This can be partially fixed by setting a high gradient clipping value or lowering the lambda parameter, but both solutions lead to much worse network performance. Different normalization layers enable stable training, but ultimately converge to much worse solutions than the network with BatchNorm. Our solution is the only one that improves over different values of \(\lambda\) and does not require controlling the gradients by clipping the high values.
#### 4.4.2 Alternative methods of teacher adaptation.
We study alternative methods of adapting the teacher model and try pretraining (\(P\)) or continuously training (\(CT\)) the teacher model. For pretraining, we train the teacher on the new data in isolation for a few epochs before the training of the main model. During continuous training, we update the teacher alongside the main model using the same batches
\begin{table}
\begin{tabular}{l c c c c} \hline \hline \multirow{2}{*}{\(clip=100\)} & \multicolumn{2}{c}{\(\lambda=5\)} & \multicolumn{2}{c}{\(\lambda=10\)} \\ \hline & \(Acc_{Final}\uparrow\) & \(Acc_{Inc}\uparrow\) & \(Acc_{Final}\uparrow\) & \(Acc_{Inc}\uparrow\) \\ \cline{2-5}
1) **GKD** & 25.47\(\pm\)0.57 & 41.59\(\pm\)0.32 & 27.96\(\pm\)0.34 & 42.28\(\pm\)0.67 \\
2) **-BN** & 0.33\(\pm\)1.15 & 2.01\(\pm\)2.67 & 0.33\(\pm\)1.15 & 2.85\(\pm\)3.81 \\
3) **fix BN** & - & - & - & - \\
4) **-BN +LN** & 21.94\(\pm\)0.95 & 34.7\(\pm\)0.48 & 22.76\(\pm\)1.05 & 34.48\(\pm\)0.15 \\
5) **-BN +GN** & 21.92\(\pm\)0.46 & 32.15\(\pm\)0.16 & 22.01\(\pm\)0.82 & 31.71\(\pm\)0.35 \\
6) **+TA** & **31.39\(\pm\)0.17** & **44.98\(\pm\)0.38** & **31.85\(\pm\)0.10** & **44.06\(\pm\)0.69** \\ \hline \hline \end{tabular}
\end{table}
Table 4: Results for different solutions to the problem of diverging batch normalization layers when using knowledge distillation in continual learning. We use GKD with different \(\lambda\) and gradient clipping values. We compare the baseline with variants without batch normalization layers, with batch normalization statistics fixed after the first task and with batch normalization layers replaced with LayerNorm or GroupNorm. ”-” indicates that training crashes due to instability. TA is the only solution that improves upon the baseline.
Figure 3: Average incremental accuracy for standard KD and our method of TA under varying strength of data shift on splits of CIFAR100. As the noise strengthens, the gap between TA and standard KD widens, indicating that our method leads to more robust learning in case of data shifts. We obtain data shifts by adding noise of varying strength to every other task, using the Gaussian noise and noise severity scale proposed by Michaelis et al. [33].
of new data. With both approaches, we set a lower learning rate for the teacher. We conduct those experiments by training either the full teacher model (\(FM\)) or only its batch normalization layers (\(BN\)). Finally, to isolate the impact of changing batch normalization statistics and training model parameters, we repeat all the experiments with fixed batch normalization statistics (\(fix\ BN\)).
We conduct our experiment on CIFAR100 split into 10 tasks and present the results in Table 5. Alternative solutions perform within the standard deviation of TA with tuned hyperparameters, but the values of the hyperparameters for those models (described in the Appendix) are also very small, indicating that the teacher model doesn't change much. Upon closer study, we find that it's mostly batch normalization statistics that change throughout the training. Therefore, knowing that other successful methods from test-time adaptation [52] use a similar approach, we continue with Teacher Adaptation based on batch normalization layers, as it does not require any hyperparameter tuning or additional pretraining epochs.
## 5 Conclusions
We propose Teacher Adaptation, a simple yet effective method to improve the performance of knowledge distillation-based methods in exemplar-free class-incremental learning. Our method continuously updates the teacher network by adjusting batch normalization statistics during learning a new task both for the currently learning model and the teacher model saved after learning the previous tasks. This mitigates the changes in the model caused by knowledge distillation loss that arise as the current learner is continuously trying to compensate for the modified normalization statistics. We further improve the stability of the model by introducing a warm-up phase at the beginning of the task, where a new classification head is trained in isolation before finetuning the whole model. The warm-up phase ensures that the initialization of the weights is not random in the initial phases of training, and reduces the gradient updates to the whole model. We conduct experiments with Teacher Adaptation on several class-incremental benchmarks and show that it consistently improves the results for different knowledge distillation-based methods in an exemplar-free setting. Moreover, our method can be easily added to the existing class-incremental learning solutions and induces only a slight computational overhead.
DiscussionSince the introduction of Learning without Forgetting, KD-based methods have emerged as effective solutions to mitigate forgetting in CIL models. Several approaches, such as iCaRL, EEIL, BiC, LUCIR, and SSIL, have integrated KD with exemplars, which helps maintain a balanced discrepancy between the teacher and student models. In scenarios where a sufficient number of exemplars are available, teacher adaptation may not be required, as their presence in the training data mitigates the divergence between the normalization statistics of the subsequent tasks. Nevertheless, our research is dedicated exclusively to the exemplar-free setting, in which we investigate techniques that do not rely on storing exemplars. To the best of our knowledge, we are the first to propose the adaptation of the teacher model within the context of KD-based exemplar-free CIL.
ImpactOur method focuses on exemplar-free scenarios, and therefore we alleviate the issues with storing potentially confidential, private, or sensitive data. However, we recognize that machine learning algorithms can be harmful if applied carelessly, and we encourage practitioners to carefully check training data and the models to ensure that the results of their work do not perpetuate biases or discriminate against any minority.
All our work was conducted using publicly available datasets and open-source code. To allow other researchers to build on our work and validate the results, we will share the code for the experiments in this paper on GitHub upon acceptance.
## Acknowledgements
Filip Szatkowski and Tomasz Trzcinski are supported by National Centre of Science (NCP, Poland) Grant No. 2022/45/B/ST6/02817. Tomasz Trzcinski is also supported by NCP Grant No. 2020/39/B/ST6/01511.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline Method & \(Acc_{Final}\uparrow\) & \(Acc_{Inc}\uparrow\) & \(Forg_{Final}\downarrow\) & \(Forg_{Inc}\downarrow\) \\ \hline Base & 27.53\(\pm\)0.15 & 42.22\(\pm\)0.38 & 31.28\(\pm\)1.64 & 23.11\(\pm\)1.58 \\ \hline P-FM & 31.54\(\pm\)0.67 & 43.46\(\pm\)0.72 & 24.18\(\pm\)1.17 & 20.80\(\pm\)1.51 \\ _+fix BN_ & 28.02\(\pm\)0.60 & 42.33\(\pm\)0.53 & 29.91\(\pm\)1.27 & 22.66\(\pm\)0.95 \\ \hline P-BN & 31.16\(\pm\)0.54 & 43.64\(\pm\)0.77 & 24.44\(\pm\)0.96 & 20.13\(\pm\)0.75 \\ _+fix BN_ & 27.62\(\pm\)0.48 & 42.12\(\pm\)0.38 & 29.95\(\pm\)1.64 & 22.50\(\pm\)0.95 \\ \hline CT-FM & 31.37\(\pm\)0.94 & 43.38\(\pm\)0.77 & 24.34\(\pm\)1.37 & 20.93\(\pm\)1.58 \\ _+fix BN_ & 28.17\(\pm\)0.49 & 42.29\(\pm\)0.42 & 29.79\(\pm\)1.02 & 22.55\(\pm\)0.67 \\ \hline CT-BN & 31.35\(\pm\)0.63 & 43.69\(\pm\)0.76 & 24.29\(\pm\)0.61 & 20.23\(\pm\)0.59 \\ _+fix BN_ & 27.33\(\pm\)0.50 & 42.09\(\pm\)0.45 & 30.20\(\pm\)1.73 & 22.50\(\pm\)0.85 \\ \hline TA & **32.15\(\pm\)0.12** & **44.31\(\pm\)0.26** & **23.55\(\pm\)0.51** & **19.85\(\pm\)0.93** \\ \hline \hline \end{tabular}
\end{table}
Table 5: Ablation study of different ways to adapt the teacher model. Our method achieves the best results while requiring no additional hyperparameters. We try teacher adaptation during pretraining (P) and continuous training (CT). We train either full model (FM) or only batch normalization layers (BN). _fix BN_ indicates fixed BN statistics. |
2307.10848 | Central Limit Theorem for traces of the resolvents of half-heavy tailed
Sample Covariance matrices | We consider the spectrum of the Sample Covariance matrix $\mathbf{A}_N:=
\frac{\mathbf{X}_N \mathbf{X}_N^*}{N}, $ where $\mathbf{X}_N$ is the $P\times
N$ matrix with i.i.d. half-heavy tailed entries and $\frac{P}{N}\to y>0$ (the
entries of the matrix have variance, but do not have the fourth moment). We
derive the Central Limit Theorem for the Stieltjes transform of the matrix
$\mathbf{A}_N$ and compute the covariance kernel. Apart from that, we derive
the Central Limit Theorem for the Stieltjes transform of overlapping Sample
Covariance matrices. | Svetlana Malysheva | 2023-07-20T13:11:44Z | http://arxiv.org/abs/2307.10848v1 | Central Limit Theorem for traces of the resolvents of "half-heavy tailed" Sample Covariance matrices
###### Abstract
We consider the spectrum of the Sample Covariance matrix \(\mathbf{A}_{N}:=\frac{\mathbf{X}_{N}\mathbf{X}_{N}^{*}}{N}\), where \(\mathbf{X}_{N}\) is the \(P\times N\) matrix with i.i.d. half-heavy tailed entries and \(\frac{P}{N}\to y>0\) (the entries of the matrix have variance, but do not have the fourth moment). We derive the Central Limit Theorem for the Stieltjes transform of the matrix \(\mathbf{A}_{N}\) and compute the covariance kernel. Apart from that, we derive the Central Limit Theorem for the Stieltjes transform of overlapping Sample Covariance matrices.
## 1 Introduction
In this paper, we will study the spectrum of Sample Covariance random matrices, i.e. the spectrum \(\lambda_{1},\ldots,\lambda_{P}\) of matrices
\[\mathbf{A}_{N}:=\frac{\mathbf{X}_{N}\mathbf{X}_{N}^{*}}{N}, \tag{1}\]
where
\[\mathbf{X}_{N}:=(\mathbf{x}_{1},\mathbf{x}_{2},\ldots,\mathbf{x}_{N}):=(x_{ij})_{\begin{subarray}{c}1\leq i\leq P\\ 1\leq j\leq N\end{subarray}} \tag{2}\]
is a random \(P\times N\) matrix with i.i.d. entries. The dimensions of the matrix \(\mathbf{X}_{N}\) will be assumed to be growing at the same speed, i.e. we will take the parameter \(D\rightarrow+\infty\) such that
\[\frac{P(D)}{N(D)}\to y\in\left(0,+\infty\right). \tag{3}\]
When \(\mathbb{E}\left|x_{i,j}\right|^{4}<+\infty\) we will refer to this case as the "light-tailed" case, possibly without specifying all necessary conditions on \(x_{i,j}\).
In the case of our main interest \(x_{i,j}\) will be taken to be regularly varying with parameter \(\alpha\) such that \(2<\alpha<4.\) This case will be called "half-heavy tailed". Particularly, in this case \(\mathbb{E}\left|x_{i,j}\right|^{2}<+\infty\) and \(\mathbb{E}\left|x_{i,j}\right|^{4}=+\infty.\)
For any function \(\varphi\) defined on the spectrum of \(\mathbf{A}_{N}\), the value
\[\theta_{\varphi}^{\mathbf{A}_{N}}:=\frac{1}{P}\sum_{i=1}^{P}\varphi\left(\lambda _{i}\right) \tag{4}\]
is called Linear Spectral Statistics (shortened as LSS), and \(\varphi\) is called a test function. Central Limit Theorems for Linear Spectral Statistics usually refer to the pointwise by \(\varphi\) convergence of re-normalised LSS to the Gaussian process with certain mean and covariance, where \(\varphi\) belongs to some certain class of functions \(\mathcal{D}\).
Central Limit Theorems for linear spectral statistics can be used in hypothesis testing for high-dimensional data sets. If the number of observations of a certain high-dimensional random variable is proportional to the number of dimensions, CLT can be used to test if the covariance matrix of the random variable is the identity matrix. For example, when \(\varphi(x):=x-\log x-1\), \(P\theta_{\phi}^{\mathbf{A}_{N}}\) is called log-likelihood ratio statistics. For the "light-tailed case" it was shown in [1] that when \(P\sim N\), the test considering CLT (Corrected Likelihood Ratio Criterion) performs better than the traditional Likelihood Ratio Criterion (based on the convergence of \(P\theta_{\phi}^{\mathbf{A}_{N}}\) to \(\chi_{\frac{1}{2}P(P+1)}^{2}\) when \(P\) is fixed and \(N\to+\infty\)). Also, unlike previously existing tests ([13], [14]), it does not assume the normality of the data. More applications of CLTs to hypothesis testing can also be found in [10, Chapter 9]. Similarly, Central Limit Theorems for "half-heavy tailed" Sample Covariance matrices may have potential applications in testing high-dimensional data sets, whose entries are regularly varying random variables with exponent between 2 and 4. Such data sets include stock returns in short periods [11] and household incomes [15].
For \(\varphi(x):=\frac{1}{z-x}\), where \(z\in\mathbb{C}\backslash\mathbb{R}\),
\[\theta_{\varphi}^{\mathbf{A}_{N}}:=\frac{1}{P}\sum_{i=1}^{P}\frac{1}{z-\lambda _{i}}=\frac{1}{P}\operatorname{Tr}\left(z-\mathbf{A}_{N}\right)^{-1}, \tag{5}\]
and this way the trace of resolvent can also be considered as LSS. We will derive CLT for "half-heavy tailed" Sample Covariance matrices on the domain \(\mathcal{D}\), that is a span of resolvent traces. In the "light-tailed case", it is possible to extend CLT (as it was done in [1]) from the resolvent traces to analytic functions on the open interval, containing the limiting spectrum. It was done by integration via contour, containing all the eigenvalues with probability converging to 1. The results [10], [11], [12], [13] yield that it is possible to find such a contour. Nevertheless, in the "half-heavy tailed" case similar extension cannot be done the same way because the largest eigenvalues of \(\mathbf{A}_{N}\) tend to \(+\infty\) ([12], [1]).
Historically, the main directions of improvement of existing Central Limit Theorems for Linear Spectral Statistics included enlarging the class \(\mathcal{D}\) of test functions and weakening the restrictions on the entries of the matrix \(\mathbf{X}_{N}\). For the application of the CLT from [13], the entries of \(\mathbf{X}_{N}\) should have the first
four moments matching the first four moments of the standard normal random variable, and the class \(\mathcal{D}\) should consist of test functions that are analytic on the domain that contains the support of Marchenko-Pastur law. In [10] the 4th cumulant is not necessarily 0 anymore, and the space of test functions is extended to those that have more than 5 bounded derivatives. The term proportional to the 4th cumulant was added. In the CLT provided in [11], it is sufficient for the entries \(\mathbf{X}_{N}\) to have at least \(4+\epsilon\) moments, and the class of test functions is enlarged to functions \(\varphi\) such that
\[\int(1+2|k|)^{3+\epsilon}|\widehat{\varphi}(k)|^{2}dk<\infty. \tag{6}\]
On the other hand, the CLT for LSS holds also for heavy-tailed random matrices i.e. those with \(\alpha<2\). In [1], the authors prove CLT for symmetric Levy random matrices with infinite variance.
Our methodology will be based on the methods used in [1], [13] and [14]. In [1] CLT for resolvent traces of "half-heavy tailed" Wigner matrices were obtained and the covariance kernel was written in an integral form. The main steps of the proof include truncation, application of martingale CLT, removal of non-diagonal terms in the resolvents in the resulting formula and the computation of the covariance kernel using the approximation of \(\mathbb{E}\left[\exp\left(-\mathbf{i}\left|\frac{\hat{x}_{i,j}}{\sqrt{N}} \right|^{2}\lambda\right)\right]\) when \(\Im\lambda<0.\) In [13], the integral in the expression obtained in [1] was evaluated explicitly by separation of variables. Afterwards, the integral kernel associated with this covariance was extracted. Interestingly, the effect of the eigenvalues outside the limiting spectrum can be spotted in this kernel.
As a first step, in this paper, we will prove the CLT for centred resolvent traces for "half-heavy tailed" Sample Covariance matrices, following [1]. Afterwards, similarly to [13] we will simplify the expression obtained in the first step. Also, we will sketch the proof of CLT for the resolvent traces for overlapping "half-heavy tailed" Sample Covariance matrices. In the "light-tailed" case overlapping CLT was introduced in [1] for Sample Covariance matrices, and in [1] for Wigner matrices; it got further development in [12]. With the methods in the first step, our overlapping CLT will not require the introduction of any new formulas or estimations, just extra attentiveness. Potential applications of the overlapping CLT include testing large-dimensional data sets with some missing data.
The paper is organised as follows. In Section 3 we will obtain CLT for the resolvent traces with limiting covariance in the form of an integral. In Subsection 3.1, we will justify the truncation of the entries of \(\mathbf{X}_{N}\) with the argument \(N^{\frac{1}{4}+\frac{1}{\alpha}+\epsilon}\) for any \(\epsilon>0.\) In Subsection 3.2, we will calculate the moments of truncated elements and approximate \(\mathbb{E}\left[\exp\left(-\mathbf{i}\left|\frac{\hat{x}_{i,j}}{\sqrt{N}} \right|^{2}\lambda\right)\right]\). In Subsection 3.3, we will rewrite \(\hat{\theta}_{N}(z)\) as a sum of martingale differences. It will then remain to prove the convergence of the expression, depending on the entries of matrix \(\mathbf{X}_{N}\) and resolvents of \(\mathbf{A}_{N,k}\) (which are Sample Covariance matrices of \(\mathbf{X}_{N}\) with one
column removed). In Subsection 3.4, we will remove non-diagonal entries of the resolvents from this expression using moments calculated in Section 3.2. Here, we will use that \(\alpha<4.\) Also, in this step we will need \(\epsilon\) in the truncation argument to be small enough, such that inequality (69) holds. Finally, in Subsection 3.5, we will use the approximation, computed in Subsection 3.2, to rewrite the expression from Section 3.4 as an integral of \(t\) and \(s.\) We will upper-bound the absolute value of the function under the integral so as to justify our use of Dominated Convergence Theorem, and we compute its pointwise limit. In Section 4 we will calculate the integral expression, obtained for the limiting covariance in Section 3. In Section 5 we will prove the CLT for overlapping matrices.
## 2 Matrix Model, Notations and Main Results
The random variables we will study in this paper have the following distributional properties.
**Definition 2.1** (Regularly varying random variables).: _A real or complex random variable \(Z\) is said to be heavy-tailed with parameter \(\alpha\) if there exists a constant \(c>0\) and a slowly-varying function \(\ell:\mathbb{R}^{+}\to\mathbb{R}^{+}\) for which_
\[\mathbb{P}\Big{(}\big{\{}|Z|>x\big{\}}\Big{)}=\frac{-c\ell(x)}{\Gamma\left(1- \frac{\alpha}{2}\right)x^{\alpha}},\qquad x>0.\]
_Recall that a slowly-varying \(\ell(x)\) satisfies_
\[\lim_{t\to\infty}\frac{\ell(tx)}{\ell(t)}=1,\]
_for all \(x\in\mathbb{R}^{+}\)._
_Remark_.: To clarify, it is important to note that \(\Gamma\left(1-\frac{\alpha}{2}\right)<0.\) We will focus solely on the function \(\ell(\cdot),\) ensuring that its limit as \(t\) approaches positive infinity is \(1\). We will select a suitable constant \(c\) based on this criterion. The unconventional choice of \(c\) stems from the intentional cancellation of \(-\Gamma\left(1-\frac{\alpha}{2}\right)\) in Part 5 of Lemma 3.2, which subsequently manifests in the Main Results.
We say "half-heavy" in this paper primarily because we will restrict our study to the parameter ranges \(2<\alpha<4\); the random matrix literature often refers to the heavy-tail case as the parameter range \(0<\alpha<2\). We now construct the matrix model of interest in this paper.
**Definition 2.2** (Half-Heavy tailed Sample Covariance Matrices).: _Define a \(P\times N\) matrix \(\mathbf{X}_{N}\) whose entries are i.i.d random variables which are centred, have variance one and satisfy the tail decay of Definition 2.1 with \(2<\alpha<4\). Let_
\[\mathbf{A}_{N}:=\frac{\mathbf{X}_{N}\mathbf{X}_{N}^{*}}{N},\]
_be the Sample Covariance random matrix obtained from \(\mathbf{X}_{N}\)._
**Definition 2.3** (Empirical spectral measure).: _For the \(P\times P\) Hermitian matrix \(\mathbf{A}\) with eigenvalues \(\lambda_{1},\lambda_{2},\ldots,\lambda_{P}\) we define the empirical spectral measure as the measure \(\mu_{\mathbf{A}}\) with support on the real line such that_
\[\operatorname{d}\mu_{\mathbf{A}}(x):=\frac{1}{P}\sum_{i=1}^{P}\delta(x-\lambda_ {i})\operatorname{d}x,\]
_where \(\delta(x)\) denotes Dirac delta function._
For matrices satisfying the properties of Definition 2.2, the Marchenko-Pastur Law has been established, which we recall in the following Proposition (appearing as [1, Theorem 3.6-7])
**Proposition 2.1**.: _Let \(\mathbf{A}_{N}\) be as in Definition 2.2, and suppose that \(P\) and \(N\) depend on an underlying parameter \(D\) and tend to infinity in such a way that \(\lim_{D\to\infty}P(D)/N(D)\to y\in(0,\infty)\). Let_
\[a :=(1-\sqrt{y})^{2},\] \[b :=(1+\sqrt{y})^{2},\]
_then the empirical spectral measure of \(\mathbf{A}_{N}\), converges weakly almost surely to the Marchenko Pastur law,_
\[\operatorname{d}\mu_{\mathrm{MP},y}(x)=p_{y}(x)\operatorname{d}x+\mathbf{1}_{ y>1}\bigg{(}1-\frac{1}{y}\bigg{)}\delta(x)\operatorname{d}x\]
_where_
\[p_{y}(x)=\frac{\sqrt{(b-x)(x-a)}}{2\pi xy}\mathbf{1}_{[a,b]}(x).\]
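The following short numerical sketch (not part of the proof, and with Student-t entries chosen purely as one convenient example of a distribution satisfying the tail condition of Definition 2.1 with \(\alpha=3\in(2,4)\)) illustrates Proposition 2.1:

```python
import numpy as np

# Numerical illustration of the Marchenko-Pastur limit for half-heavy tails.
# Student-t(3) entries, rescaled to unit variance, have tail exponent 3.
rng = np.random.default_rng(0)
P, N = 500, 1000
y = P / N
X = rng.standard_t(df=3, size=(P, N)) / np.sqrt(3.0)   # Var of t_3 is 3
eigs = np.linalg.eigvalsh(X @ X.T / N)

a, b = (1 - np.sqrt(y)) ** 2, (1 + np.sqrt(y)) ** 2
x = np.linspace(a + 1e-6, b - 1e-6, 400)
mp_density = np.sqrt((b - x) * (x - a)) / (2 * np.pi * x * y)
hist, edges = np.histogram(eigs, bins=40, range=(a, b), density=True)
# hist (at the bin centres) should be close to mp_density for large P, N,
# apart from a few large eigenvalues escaping [a, b] due to the heavy tails.
```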
For a probability measure \(\mu(\cdot)\) with the support on the real line we define its Stieltjes transform in the point \(z\in\mathbb{C}\backslash\mathbb{R}\) as
\[s_{\mu}(z):=\int_{\mathbb{R}}\frac{1}{z-\lambda}\operatorname{d}\mu(\lambda). \tag{7}\]
The resolvent matrix of \(P\times P\) Hermitian matrix \(\mathbf{A}_{N}\) is defined as
\[\mathbf{G}_{\mathbf{A}_{N}}(z):=(z\mathrm{Id}_{P}-\mathbf{A}_{N})^{-1},\qquad z \in\mathbb{C}\backslash\mathbb{R}. \tag{8}\]
Noticeably, the renormalised trace of the resolvent of the matrix \(\mathbf{A}_{N}\) is equal to the Stieltjes transform of the empirical spectral distribution of the matrix \(\mathbf{A}_{N}:\)
\[s_{\mathbf{A}_{N}}(z):=\int_{\mathbb{R}}\frac{1}{z-\lambda}\operatorname{d} \mu_{\mathbf{A}_{N}}(\lambda)=\sum_{i=1}^{P}\frac{1}{z-\lambda_{i}}=\frac{1}{ P}\operatorname{Tr}\mathbf{G}_{\mathbf{A}_{N}}(z). \tag{9}\]
By [1, Theorem B.9] the convergence of the probability measures on the real line is equivalent to the convergence of their Stieltjes transforms. Proposition 2.1 is thus equivalent to the following statement.
For random matrix \(\mathbf{A}_{N}\) satisfying Definition 2.2 for each \(z\in\mathbb{C}\backslash\mathbb{R}\)
\[\lim_{D\to\infty}\frac{1}{P}\operatorname{Tr}\mathbf{G}_{\mathbf{A}_{N}}(z)=m_{ y}(z)\qquad\text{almost surely},\]
where \(m_{y}(z)\) is the Stieltjes transform of \(\mu_{\operatorname{MP},y}\).
The statement of [1, Lemma 3.11] yields that
\[m_{y}(z)=\int_{\mathbb{R}}\frac{\mu_{\operatorname{MP},y}(\operatorname{d}x)}{z -x}=-\frac{z-(1-y)-\sqrt{(z-1-y)^{2}-4y}}{2yz}, \tag{10}\]
where the branch cut of the square root is taken on the positive real line. It is easy to see that the Stieltjes transform of the Marchenko-Pastur law is a root of the following quadratic equation:
\[zym_{y}^{2}(z)+(1-z-y)m_{y}(z)+1=0. \tag{11}\]
Our aim is to prove the following result regarding the fluctuation of the Stieltjes transform of \(\mathbf{A}_{N}\).
**Theorem 2.1** (Central Limit Theorem).: _Under the same assumptions of Proposition 2.1, the linear spectral statistics_
\[\theta_{N}(z)=\frac{1}{N^{1-\alpha/4}}(\operatorname{Tr}\mathbf{G}_{\mathbf{A }_{N}}(z)-\operatorname{\mathbb{E}}\operatorname{Tr}\mathbf{G}_{\mathbf{A}_{N }}(z)). \tag{12}\]
_converges in distribution to a Gaussian process on \(\mathbb{C}\backslash\mathbb{R}\) with covariance kernel_
\[C(z,w)=\int_{t,s>0}\frac{\partial}{\partial z}\frac{\partial}{\partial w} \mathcal{L}(z,t,w,s)\operatorname{d}t\operatorname{d}s, \tag{13}\]
_where_
\[\mathcal{L}(z,t,w,s):=y\exp\left(\mathbf{i}\operatorname{sgn}_{\Im z}tz+\mathbf{i}\operatorname{sgn}_{\Im w}sw\right)\times\ell(z,t,w,s)\times r(z,t,w,s) \tag{14}\] \[\ell(z,t,w,s):=c\frac{\left(K(z,t)+K(w,s)\right)^{\alpha/2}-K(z,t)^{\alpha/2}-K(w,s)^{\alpha/2}}{ts} \tag{15}\] \[r(z,t,w,s):=\exp\left(-yK(z,t)-yK(w,s)\right) \tag{16}\]
_where \(\operatorname{sgn}_{z}\) is the sign of the imaginary part of \(z\), \(K(z,t)=it\operatorname{sgn}_{z}zm_{y}(z)\), and \(m_{y}(z)\) is Stieltjes transform of Marchenko-Pastur law, equation (10)._
Similarly to the work [13], we will compute rewrite \(\ell(z,t,w,s)\) in an integral form, that will enable us to separate \((z,t)\) and \((w,s)\). It will allow us to calculate \(\int_{t,s>0}\frac{\partial}{\partial z}\frac{\partial}{\partial w}\mathcal{L} (z,t,w,s)\operatorname{d}t\operatorname{d}s\) explicitly.
**Theorem 2.2**.: _For all \(z,w\in\mathbb{C}\backslash\mathbb{R}\) holds_
\[C(z,w)=-yc\Gamma\left(1+\frac{\alpha}{2}\right)\frac{\partial}{ \partial z}\left(zm_{y}(z)\right)\frac{\partial}{\partial w}\left(wm_{y}(w) \right)\times\\ \frac{\left(-1+zm_{y}(z)\right)^{\alpha/2-1}-\left(-1+wm_{y}(w) \right)^{\alpha/2-1}}{zm_{y}(z)-wm_{y}(w)}.\]
Minor adjustments to the proof of Theorem 2.1 allow us to calculate the limiting eigenvalue statistics of overlapping half-heavy-tailed sample covariance matrices. We sketch the proof and carry out the necessary computations to prove the following theorem.
**Theorem 2.3**.: _Suppose that the matrix \(\mathbf{X}_{N}\) is as in Theorem 2.1. Denote the set of row indices of the matrix \(\mathbf{X}_{N}\) by \(\mathcal{P}\) and the set of column indices by \(\mathcal{Q},\) so that \(\mathbf{X}_{N}=(x_{i,j})_{i\in\mathcal{P},\,j\in\mathcal{Q}}.\) Choose \(\mathcal{P}_{i}\subseteq\mathcal{P}\) and \(\mathcal{Q}_{i}\subseteq\mathcal{Q}\) for \(i=1,\ldots,d,\) where \(d\) does not change with \(N.\) Take the submatrices \(\mathbf{X}_{N}^{[i]}:=(x_{m,n})_{m\in\mathcal{P}_{i},\,n\in\mathcal{Q}_{i}}.\) Let \(\mathbf{A}_{N}^{[i]}:=\frac{\mathbf{X}_{N}^{[i]}\mathbf{X}_{N}^{[i]*}}{N}\) and_
\[\theta_{N}^{[i]}(z):=\frac{1}{N^{1-\alpha/4}}\left(\operatorname{Tr}\mathbf{G }_{\mathbf{A}_{N}^{[i]}}(z)-\operatorname{\mathbb{E}}\operatorname{Tr}\mathbf{ G}_{\mathbf{A}_{N}^{[i]}}(z)\right). \tag{17}\]
_Assume also the following asymptotic:_
\[\frac{|\mathcal{P}_{i}|}{N}\xrightarrow[N\to\infty]{}p_{i}>0,\qquad\frac{|\mathcal{Q}_{i}|}{N}\xrightarrow[N\to\infty]{}q_{i}>0,\qquad\frac{|\mathcal{P}_{i}\cap\mathcal{P}_{j}||\mathcal{Q}_{i}\cap\mathcal{Q}_{j}|}{N^{2}}\xrightarrow[N\to\infty]{}\gamma_{ij}>0. \tag{18}\]
_Then for any \(z_{1},z_{2},\ldots,z_{d}\in\mathbb{C}\backslash\mathbb{R}\) the vector \(\left\langle\theta_{N}^{[1]}(z_{1}),\theta_{N}^{[2]}(z_{2}),\ldots,\theta_{N}^{[d]}(z_{d})\right\rangle\) converges in distribution to a complex Gaussian vector as \(N\to\infty\) and_
\[\operatorname{Cov}\left[\theta_{N}^{[i]}(z),\theta_{N}^{[j]}(w)\right]\xrightarrow[N\to\infty]{}C_{i,j}(z,w), \tag{19}\]
_where_
\[C_{i,j}(z,w)=\int_{t,s>0}\frac{\partial}{\partial z}\frac{\partial}{\partial w }\mathcal{L}^{[i,j]}(z,t,w,s)\,\mathrm{d}\,t\,\mathrm{d}\,s, \tag{20}\]
_and_
\[\mathcal{L}^{[i,j]}(z,t,w,s):=\gamma_{i,j}\exp\left(\mathbf{i}\operatorname{sgn}_{\Im z}tz+\mathbf{i}\operatorname{sgn}_{\Im w}sw\right)\times\ell^{[i,j]}(z,t,w,s)\times r^{[i,j]}(z,t,w,s), \tag{21}\]
\[\ell^{[i,j]}(z,t,w,s):=c\frac{\left(K^{[i]}(z,t)+K^{[j]}(w,s)\right)^{\alpha/2 }-K^{[i]}(z,t)^{\alpha/2}-K^{[j]}(w,s)^{\alpha/2}}{ts}, \tag{22}\]
\[r^{[i,j]}(z,t,w,s):=\exp\left(-p_{i}K^{[i]}(z,t)-p_{j}K^{[j]}(w,s)\right), \tag{23}\]
_with the notation \(K^{[i]}(z,t)=t\operatorname{sgn}_{z}\mathbf{i}z\frac{1}{q_{i}}m_{\frac{p_{i}} {q_{i}}}\left(\frac{z}{q_{i}}\right).\)_
## 3 Proof of Theorem 2.1
We follow the same approach as in [3, 16]. First, we replace the matrix \(\mathbf{X}_{N}\) with a centred and truncated version matrix \(\hat{\mathbf{X}}_{N}\) and show that the linear spectral statistic (LSS) \(\hat{\theta}_{N}(z)\) with \(\hat{\mathbf{X}}_{N}\) replacing \(\mathbf{X}_{N}\) has the same distributional limit as \(\theta_{N}(z)\). We then apply the Martingale Central Limit Theorem to \(\hat{\theta}_{N}(z)\), using estimates we develop for the entries of \(\hat{\mathbf{X}}_{N}\).
### Truncation
In this section, we truncate the entries of \(\mathbf{X}_{N}\) by setting them to zero when they exceed \(N^{\beta}\) in absolute value and then centring them by their new mean \(\mu_{N}\). The truncation is justified when \(\beta>0\) is bounded from below by a suitable function of \(\alpha\).
**Lemma 3.1** (Truncated and Centered Matrix).: _Let \(\epsilon>0\) and consider \(\beta=\frac{1}{4}+\frac{1}{\alpha}+\epsilon\). With \(\mathbf{X}_{N}\) and \(\mathbf{A}_{N}\) as in Definition 2.2, and \(P\) and \(N\) are growing with respect to a parameter \(D\) as in Proposition 2.1, define the matrix \(\hat{\mathbf{X}}_{N}\) whose entries are_
\[\hat{x}_{i,j} :=x_{i,j}\mathbf{1}_{|x_{i,j}|<N^{\beta}}-\mu_{N},\] \[\mu_{N} :=\mathbb{E}\big{[}x_{i,j}\mathbf{1}_{|x_{i,j}|<N^{\beta}}\big{]}.\]
_If we define \(\hat{\mathbf{A}}_{N}:=\frac{\hat{\mathbf{X}}_{N}\hat{\mathbf{X}}_{N}^{*}}{N}\) and_
\[\hat{\theta}_{N}(z):=\frac{1}{N^{1-\frac{\alpha}{4}}}\big{(}\operatorname{Tr }\mathbf{G}_{\hat{\mathbf{A}}_{N}}(z)-\mathbb{E}[\operatorname{Tr}\mathbf{G} _{\hat{\mathbf{A}}_{N}}(z)]\big{)},\]
_then_
\[|\hat{\theta}_{N}(z)-\theta_{N}(z)|\to 0\]
_in probability as \(D\to\infty.\)_
Proof.: From the matrix \(\mathbf{X}_{N}\) we build the matrix \(\tilde{\mathbf{X}}_{N}\) whose entries are equal to
\[\tilde{x}_{i,j}:=x_{i,j}\mathbf{1}_{|x_{i,j}|<N^{\beta}}. \tag{24}\]
For each \(1\leq i\leq P\), set the random variable \(\xi_{i}\) to equal \(1\) if the \(i\)-th row of \(\mathbf{X}_{N}\) differs from the \(i\)-th row of \(\tilde{\mathbf{X}}_{N}\), and equal to \(0\) otherwise, i.e.
\[\xi_{i}:=\mathbf{1}_{\bigcup_{j=1}^{N}\{x_{i,j}\neq\tilde{x}_{i,j}\}}=\mathbf{ 1}_{\bigcup_{j=1}^{N}\big{\{}|x_{i,j}|\geq N^{\beta}\big{\}}}.\]
Since \(R:=\sum_{i=1}^{P}\xi_{i}\) counts the total number of non-zero rows of \(\mathbf{X}_{N}-\tilde{\mathbf{X}}_{N}\), we have the following bound
\[\operatorname{rank}(\mathbf{X}_{N}-\tilde{\mathbf{X}}_{N})\leq R.\]
The rank of \(\mathbb{E}[\tilde{\mathbf{X}}_{N}]\) is at most \(1\), which means \(\operatorname{rank}(\tilde{\mathbf{X}}_{N}-\hat{\mathbf{X}}_{N})\leq 1\). Using Corollary A.1 we have the bound
\[\frac{1}{P^{1-\frac{\alpha}{4}}}|\operatorname{Tr}\mathbf{G}_{\hat{\mathbf{A }}_{N}}(z)-\operatorname{Tr}\mathbf{G}_{\mathbf{A}_{N}}(z)|\leq\frac{\pi\left( R+1\right)}{|\Im z|P^{1-\frac{\alpha}{4}}}. \tag{25}\]
To prove the Lemma we combine bound (25) with the proof of convergence of \(\frac{R}{P^{1-\frac{\alpha}{4}}}\) to \(0\) in probability. Firstly, we estimate \(\mathbb{E}R\) and \(\operatorname{Var}R;\) afterwards, we apply Chebyshev's inequality.
Variables \(\xi_{i}\) are i.i.d since the rows of \(\mathbf{X}_{N}\) are i.i.d., which yields \(\mathbb{E}R=P\ \mathbb{E}\xi_{1}\) and \(\operatorname{Var}R=P\ \operatorname{Var}\xi_{1}.\) By the distributional assumption of Definition 2.1, there
exists a constant \(C>0\) depending only on \(\alpha\), \(\beta\) and \(\ell\) such that for sufficiently large \(N\) holds
\[\mathbb{P}\Big{(}\big{\{}|x_{1,1}|\geq N^{\beta}\big{\}}\Big{)}\leq\frac{C}{N^{ \alpha\beta}}.\]
Therefore for sufficiently large \(N\), the following bound holds
\[\mathbb{E}[\xi_{i}] =1-\bigg{(}1-\mathbb{P}\Big{(}\big{\{}|x_{1,1}|\geq N^{\beta}\big{\}} \Big{)}\bigg{)}^{N}\] \[\leq 1-\exp\bigg{(}N\log\Big{[}1-\frac{C}{N^{\alpha\beta}}\Big{]} \bigg{)}\leq 1-\exp\big{(}-2CN^{1-\alpha\beta}\big{)},\] \[\leq 2CN^{1-\alpha\beta}.\]
In the second inequality we used that \(CN^{-\alpha\beta}<\frac{1}{2}.\) In the third inequality we used that \(\beta>\frac{1}{\alpha}\), so that \(CN^{1-\alpha\beta}<\frac{1}{2}.\) Since \(\xi_{i}\) only takes the values \(0\) and \(1\), \(\operatorname{Var}\xi_{i}\leq\mathbb{E}[\xi_{i}^{2}]=\mathbb{E}[\xi_{i}].\) Thus, the following bound also holds
\[\operatorname{Var}\xi_{i}\leq 2CN^{1-\alpha\beta}.\]
The bounds on \(\xi_{i}\) and the fact that \(P/N\to y\in(0,\infty)\) yield the existence of a deterministic constant \(\tilde{C}>0\) for which, when \(P\) is sufficiently large,
\[\mathbb{E}R\leq\tilde{C}P^{2-\alpha\beta}\quad\text{and}\quad\operatorname{ Var}R\leq\tilde{C}P^{2-\alpha\beta}.\]
We rewrite the upper bound of equation (25) as
\[\frac{R}{P^{1-\frac{\alpha}{4}}}=\frac{R-\mathbb{E}R}{P^{1-\frac{\alpha}{4}}} +\frac{\mathbb{E}R}{P^{1-\frac{\alpha}{4}}}. \tag{26}\]
Assume further that \(P\) is sufficiently large. The second term in equation (26)
\[\frac{\mathbb{E}R}{P^{1-\frac{\alpha}{4}}}\leq\tilde{C}P^{1-\alpha\beta+\frac {\alpha}{4}}\to 0\]
since \(\beta>\frac{1}{\alpha}+\frac{1}{4}\). By Chebyshev's inequality, for any \(t>0\)
\[\mathbb{P}\Big{(}\big{\{}|R-\mathbb{E}R|\geq tP^{1-\frac{\alpha}{4}}\big{\}} \Big{)}\leq\frac{\operatorname{Var}R}{t^{2}P^{2-\frac{\alpha}{2}}}\leq\frac{ \tilde{C}P^{\alpha(\frac{1}{2}-\beta)}}{t^{2}}\to 0,\]
(since \(\beta>\frac{1}{\alpha}+\frac{1}{4}\) and \(2<\alpha<4\), it follows that \(\Big{(}\frac{1}{2}-\beta\Big{)}\alpha<0\)). Thus, the first term in equation (26) converges in probability to \(0\). The sum of both terms converges to \(0\) in probability, which in combination with inequality (25) yields the statement of the Lemma.
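The truncation and centring step of Lemma 3.1 can be mimicked numerically as follows. This is a minimal sketch only: the population mean \(\mu_{N}\) is replaced by an empirical mean, and the heavy-tailed entry distribution, the exponents \(\alpha,\epsilon\) and the matrix sizes are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(2)
alpha, eps = 3.0, 0.05
beta = 0.25 + 1.0 / alpha + eps                 # the exponent used in Lemma 3.1
N, P = 500, 250
scale = np.sqrt(alpha / (alpha - 2.0))          # unit-variance symmetric Pareto entries
u = rng.random((P, N))
X = np.sign(rng.random((P, N)) - 0.5) * u ** (-1.0 / alpha) / scale

mask = np.abs(X) < N ** beta
truncated = X * mask                            # x_{ij} * 1_{|x_{ij}| < N^beta}
mu_N = truncated.mean()                         # empirical stand-in for mu_N = E[x 1_{|x|<N^beta}]
X_hat = truncated - mu_N                        # the centred, truncated entries of hat X_N
print("fraction of truncated entries:", 1.0 - mask.mean())   # should be O(N^{-alpha*beta})
print("mu_N:", mu_N, " variance of truncated entries:", X_hat.var())
```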
### Properties of truncated elements
We will make use of the following properties of the entries of \(\hat{\mathbf{X}}_{N}\), analogous to those proven in [1, Lemma 3.1].
**Lemma 3.2** (Truncation Bounds).: _With the same notation as in Lemma 3.1, the following bounds hold for some \(\kappa>0\) depending only on \(\beta\), \(\alpha\) and \(\ell\)._
1. \(|\mu_{N}|\leq\kappa N^{\beta(1-\alpha)}\)_._
2. _Defining_ \(\sigma_{N}^{2}:=\mathbb{E}|\hat{x}_{i,j}|^{2}\)_, then we have_ \(|\sigma_{N}^{2}-1|\leq\kappa N^{2\beta(1-\alpha)}\)_._
3. \(\mathbb{E}|\hat{x}_{i,j}|^{3}\leq\kappa N^{\beta(3-\alpha)_{+}}\)__
4. \(\mathbb{E}|\hat{x}_{i,j}|^{4}\leq\kappa N^{\beta(4-\alpha)}\)__
5. _For any_ \(\lambda\in\mathbb{C}\) _such that_ \(\Im\lambda\leq 0\)_,_ \[\phi_{N}(\lambda):=\mathbb{E}\left[\exp\left(-\mathbf{i}\left|\frac{\hat{x}_{i,j}}{\sqrt{N}}\right|^{2}\lambda\right)\right]=1-\frac{\mathbf{i}\lambda}{N}+c\left(\frac{\mathbf{i}\lambda}{N}\right)^{\alpha/2}+\left|\frac{\lambda}{N}\right|^{\alpha/2}\mathfrak{E}_{N}\left(\frac{\lambda}{N}\right),\] (27) _where the function_ \(\mathfrak{E}_{N}(z)\) _is analytic in_ \(z\) _on_ \(\Re(z)>0,\) _bounded uniformly in_ \(N\) _on any compact subset there, and tends to zero uniformly in_ \(N\) _as_ \(z\to 0.\)_
Proof.: We use Definition 2.1 to obtain the bounds on the integrals and prove parts 1--4 of the Lemma. Since \(x_{i,j}\) is centred,
\[|\mu_{N}|=\Big{|}\mathbb{E}\big{[}x_{i,j}\mathbf{1}_{|x_{i,j}|\geq N^{\beta}} \big{]}\Big{|}\leq\int_{N^{\beta}}^{\infty}\mathbb{P}\big{(}\big{\{}|x_{i,j}|> t\big{\}}\big{)}\,\mathrm{d}\,t\leq\kappa N^{\beta(1-\alpha)},\]
which proves the first part of the Lemma. Since the variance of \(x_{i,j}\) is \(1\),
\[\big{|}\sigma_{N}^{2}-1\big{|}=\mathbb{E}\big{[}x_{i,j}^{2}\mathbf{1}_{|x_{i,j }|\geq N^{\beta}}\big{]}=\int_{N^{\beta}}t\mathbb{P}\big{(}\big{\{}|x_{i,j}|> t\big{\}}\big{)}\,\mathrm{d}\,t\leq\kappa N^{2\beta(1-\alpha)},\]
which concludes the second part. We obtain analogously the third and the fourth part.
It is left to prove the part 5. Let \(x\) have the same distribution as \(x_{i,j}\). We define
\[\phi_{N}^{(1)}(\lambda):=\mathbb{E}\bigg{[}\exp\bigg{(}-\frac{\mathbf{i} \lambda|x-\mu_{N}|^{2}}{N}\bigg{)}\bigg{]}.\]
Next, we estimate the difference
\[\Delta_{N}^{(1)}(\lambda) :=\phi_{N}^{(1)}(\lambda)-\phi_{N}(\lambda) \tag{28}\]
Using that \(|\exp(-\mathbf{i}\lambda t)|\leq 1\) for any \(t\geq 0\) (since \(\Im(\lambda)\leq 0\)), we obtain the bound
\[\sup_{\lambda\in\mathbb{C}^{-}}|\Delta_{N}^{(1)}(\lambda)|\leq 2\mathbb{P}\big{(} \{|x|>N^{\beta}\}\big{)}\leq\kappa N^{-\alpha\beta}. \tag{29}\]
Next, we consider the function
\[\phi_{N}^{(2)}(\lambda):=\mathbb{E}\bigg{[}\exp\bigg{(}-\frac{\mathbf{i}\lambda|x |^{2}}{N}\bigg{)}\bigg{]},\]
and estimate the difference
\[\Delta_{N}^{(2)}(\lambda):=\phi_{N}^{(1)}(\lambda)-\phi_{N}^{(2)}( \lambda)\\ =\mathbb{E}\bigg{[}\bigg{\{}\exp\bigg{(}-\frac{\mathbf{i}\lambda|x -\mu_{N}|^{2}}{N}\bigg{)}-\exp\bigg{(}-\frac{\mathbf{i}\lambda|x|^{2}}{N} \bigg{)}\bigg{\}}\bigg{]}. \tag{30}\]
Using the representation
\[-\mathbf{i}\lambda\int_{p_{1}}^{p_{2}}\exp(-\mathbf{i}\lambda s)\ \mathrm{d}\,s=\exp(- \mathbf{i}\lambda p_{1})-\exp(-\mathbf{i}\lambda p_{2}),\qquad p_{1},p_{2}\in \mathbb{R}^{+},\]
we have the bound
\[\bigg{|}\exp\bigg{(}-\frac{\mathbf{i}\lambda|x-\mu_{N}|^{2}}{N}\bigg{)}-\exp \bigg{(}-\frac{\mathbf{i}\lambda|x|^{2}}{N}\bigg{)}\bigg{|}\leq\min\bigg{(}2, \frac{|\lambda|\big{|}|x-\mu_{N}|^{2}-|x|^{2}}{N}\bigg{)}.\]
Hence,
\[\big{|}\Delta_{N}^{(2)}(\lambda)\big{|}\leq 2 \mathbb{P}\Big{(}\big{\{}|\lambda|\big{|}|x-\mu_{N}|^{2}-|x|^{2}\big{|}>2N \big{\}}\Big{)}+\frac{|\lambda|}{N}\mathbb{E}\Big{[}\big{|}|x-\mu_{N}|^{2}-|x |^{2}\big{|}\Big{]}\\ \leq\frac{3|\lambda|}{N}\mathbb{E}\Big{[}\big{|}|x-\mu_{N}|^{2}- |x|^{2}\big{|}\Big{]}=\frac{3|\lambda|}{N}o(1). \tag{31}\]
Now we expand \(\phi_{N}^{(2)}(\lambda)\) following the proof of [1, Theorem 8.1.6] (where \(\lambda\in\mathbb{R}\)), adapted to complex \(\lambda\). We do the expansion for \(\Im z\leq 0\) and \(|z|\to 0\) for the function
\[\psi(z)=\mathbb{E}\big{[}\exp(-iz|x|^{2})\big{]}\]
and will apply this expansion to \(z=\frac{\lambda}{N}\) which makes \(\psi(z)=\phi_{N}^{(2)}(\lambda)\).
Let \(\tilde{F}\) be the distribution function of \(|x|^{2}\) and let \(G=1-\tilde{F}.\) By Definition 2.1 we have
\[G(u)=\mathbb{P}\big{(}\big{\{}|x|^{2}>u\big{\}}\big{)}\sim\frac{c\ell(\sqrt{u })u^{-\frac{\alpha}{2}}}{-\Gamma\left(1-\frac{\alpha}{2}\right)}\qquad u\to\infty.\]
We may write
\[\psi(z)=\int_{0}^{\infty}\exp\big{[}-\mathbf{i}zu\big{]}\,\mathrm{d}\,\tilde{F }(u)=-\int_{0}^{\infty}\exp\big{[}-\mathbf{i}zu\big{]}\,\mathrm{d}\,G(u),\]
so that
\[1-\psi(z)=\int_{0}^{\infty}\big{(}\exp[-\mathbf{i}zu]-1\big{)}\,\mathrm{d}\,G(u).\]
Integrating this representation by parts, we obtain
\[1-\psi(z)={\bf i}z\int_{0}^{\infty}\exp[-{\bf i}zu]G(u)\ {\rm d}\,u.\]
Recall that
\[\int_{0}^{\infty}G(u)\ {\rm d}\,u=\mathbb{E}\big{[}|x|^{2}\big{]}=1.\]
Combining the equations above, we get
\[1-{\bf i}z-\psi(z)={\bf i}z\int_{0}^{\infty}\big{(}\exp[-{\bf i}zu]-1\big{)}G(u )\ {\rm d}\,u.\]
Denote \(\omega:=\frac{z}{|z|}=\exp[{\bf i}\arg(z)]\). We change variables \(u\mapsto t/|z|\), so that
\[1-{\bf i}z-\psi(z)={\bf i}\omega\int_{0}^{\infty}\big{(}\exp[-{\bf i}\omega t] -1\big{)}G\bigg{(}\frac{t}{|z|}\bigg{)}\ {\rm d}\,t,\]
which implies
\[\frac{1-{\bf i}z-\psi(z)}{G\big{(}\frac{1}{|z|}\big{)}}={\bf i}\omega\int_{0}^ {\infty}\big{(}\exp[-{\bf i}\omega t]-1\big{)}\frac{G\big{(}\frac{t}{|z|}\big{)} }{G\big{(}\frac{1}{|z|}\big{)}}\ {\rm d}\,t.\]
For \(|z|\to 0\), the integral on the right-hand side converges uniformly in \(\omega\) to
\[{\bf i}\omega\int_{0}^{\infty}\frac{\big{(}\exp[-{\bf i}\omega t]-1\big{)}}{t ^{\frac{\alpha}{2}}}\ {\rm d}\,t. \tag{32}\]
We evaluate this integral using the same method as in [13, Lemma 11]:
\[{\bf i}\omega\int_{0}^{\infty}\frac{\big{(}\exp[-{\bf i}\omega t]-1\big{)}}{t^{\frac{\alpha}{2}}}\ {\rm d}\,t={\bf i}\omega\int_{0}^{\infty}t^{-\frac{\alpha}{2}}\int_{0}^{1}\frac{{\rm d}}{{\rm d}\,\nu}\exp(-{\bf i}\omega\nu t)\ {\rm d}\,\nu\ {\rm d}\,t\\ =-({\bf i}\omega)^{2}\int_{0}^{1}\int_{0}^{\infty}t^{1-\frac{\alpha}{2}}\exp(-{\bf i}\omega\nu t)\ {\rm d}\,t\ {\rm d}\,\nu\\ =-({\bf i}\omega)^{2}\int_{0}^{1}\nu^{\frac{\alpha}{2}-2}\ {\rm d}\,\nu\int_{0}^{\infty}s^{1-\frac{\alpha}{2}}\exp(-{\bf i}\omega s)\ {\rm d}\,s\\ =-({\bf i}\omega)^{2}\frac{\Gamma\big{(}\frac{\alpha}{2}-1\big{)}}{\Gamma\big{(}\frac{\alpha}{2}\big{)}}\,\Gamma\Big{(}2-\frac{\alpha}{2}\Big{)}({\bf i}\omega)^{\frac{\alpha}{2}-2}\\ =-\big{(}{\bf i}\omega\big{)}^{\frac{\alpha}{2}}\frac{\Gamma\big{(}\frac{\alpha}{2}-1\big{)}\Gamma\big{(}2-\frac{\alpha}{2}\big{)}}{\Gamma\big{(}\frac{\alpha}{2}\big{)}}=({\bf i}\omega)^{\frac{\alpha}{2}}\Gamma\bigg{(}1-\frac{\alpha}{2}\bigg{)}. \tag{33}\]
Recalling, that
\[G\left(\frac{1}{|z|}\right)\sim-\frac{c}{\Gamma\left(1-\frac{\alpha}{2}\right)} |z|^{\frac{\alpha}{2}}\]
we conclude, that
\[\psi(z)=1-\mathbf{i}z+c\left(\mathbf{i}z\right)^{\alpha/2}+|z|^{\alpha/2}o(1).\]
In the next sections, to shorten the notations, we will write \(\mathbf{X}_{N}\) instead of \(\hat{\mathbf{X}}_{N}\) and \(x_{i,j}\) instead of \(\hat{x}_{i,j}\).
### Martingale decomposition
To prove Theorem 2.1 we need to show that for any \(z_{1},z_{2},\ldots,z_{k}\in\mathbb{C}\backslash\mathbb{R}\) the vector \(\langle\theta_{N}(z_{1}),\theta_{N}(z_{2}),\ldots,\theta_{N}(z_{k})\rangle\) converges in distribution to a complex Gaussian vector with the proper covariance matrix. Using that \(\theta_{N}(\overline{z})=\overline{\theta_{N}(z)}\), this is equivalent to the following: for any \(k\), any \(z_{1},z_{2},\ldots z_{k}\in\mathbb{C}\backslash\mathbb{R}\) and any \(\alpha_{1}^{\Re},\alpha_{2}^{\Re},\ldots\alpha_{k}^{\Re},\alpha_{1}^{\Im},\alpha_{2}^{\Im},\ldots\alpha_{k}^{\Im}\in\mathbb{R}\) the linear combination
\[\alpha_{1}^{\Re}\cdot\left(\theta_{N}(z_{1})+\theta_{N}( \overline{z}_{1})\right)+\cdots+\alpha_{k}^{\Re}\cdot\left(\theta_{N}(z_{k}) +\theta_{N}(\overline{z}_{k})\right)\\ +\mathbf{i}\left(\alpha_{1}^{\Im}\cdot\left(\theta_{N}(z_{1})- \theta_{N}(\overline{z}_{1})\right)+\cdots+\alpha_{k}^{\Im}\cdot\left(\theta_ {N}(z_{k})-\theta_{N}(\overline{z}_{k})\right)\right) \tag{34}\]
converges in distribution to a Gaussian random variable whose variance agrees with the covariance kernel from Theorem 2.1. Define the filtration \(\mathcal{F}_{N,k}\), where \(0\leq k\leq N\), to be the \(\sigma\)-algebra generated by the first \(k\) columns of the matrix \(\mathbf{X}_{N}\). Using this filtration, we will apply the Martingale Central Limit Theorem (Lemma A.9) to the decomposition of the random variable (34) into a martingale difference sequence.
To shorten the notations, we denote \(\mathbb{E}_{k}:=\mathbb{E}\left[\cdot|\mathcal{F}_{N,k}\right].\) Consider the following array:
\[Y_{k}(z):=\frac{1}{N^{1-\alpha/4}}\left(\mathbb{E}_{k}-\mathbb{E}_{k-1}\right) \left[\operatorname{Tr}\frac{1}{z-\mathbf{A}_{N}}\right]. \tag{35}\]
The independence of \(\mathbf{x}_{k}\) and \(\mathbf{A}_{N,k}:=\mathbf{A}_{N}-\frac{1}{N}\mathbf{x}_{k}\mathbf{x}_{k}^{*}\) yields
\[\left(\mathbb{E}_{k}-\mathbb{E}_{k-1}\right)\left[\operatorname{Tr}\frac{1}{ z-\mathbf{A}_{N,k}}\right]=0.\]
This way, we rewrite
\[Y_{k}(z)=\frac{1}{N^{1-\alpha/4}}\left(\mathbb{E}_{k}-\mathbb{E} _{k-1}\right)\left(\operatorname{Tr}\frac{1}{z-\mathbf{A}_{N}}\right)=\\ \frac{1}{N^{1-\alpha/4}}\left(\mathbb{E}_{k}-\mathbb{E}_{k-1} \right)\left(\operatorname{Tr}\left[\frac{1}{z-\mathbf{A}_{N}}-\frac{1}{z- \mathbf{A}_{N,k}}\right]\right). \tag{36}\]
Corollary A.1 yields that
\[|Y_{k}(z)|\leq\frac{\pi}{N^{1-\alpha/4}|\Im z|}. \tag{37}\]
Since the expression (34) is a linear combination of \(Y_{k}(z)\) for different \(z\), the inequality above provides us with the first condition of Lemma A.9. We conclude that Theorem 2.1 follows from the lemma below.
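Before turning to that lemma, the deterministic rank-one bound behind (37) can be checked numerically: replacing \(\mathbf{A}_{N}\) by \(\mathbf{A}_{N,k}\) changes the trace of the resolvent by at most \(\pi/|\Im z|\) (Corollary A.1 with a rank-one difference). The sketch below uses Gaussian entries and illustrative sizes, which are assumptions of the example only; the bound itself does not depend on the entry distribution.

```python
import numpy as np

rng = np.random.default_rng(3)
N, P, k = 300, 150, 17                      # illustrative sizes and column index
z = 1.5 + 0.2j
X = rng.standard_normal((P, N))
A = X @ X.T / N                             # A_N = X X^* / N
xk = X[:, k]
A_k = A - np.outer(xk, xk) / N              # A_{N,k} = A_N - x_k x_k^* / N
tr   = np.sum(1.0 / (z - np.linalg.eigvalsh(A)))     # Tr G_{A_N}(z)
tr_k = np.sum(1.0 / (z - np.linalg.eigvalsh(A_k)))   # Tr G_{A_{N,k}}(z)
print(abs(tr - tr_k), "<=", np.pi / abs(z.imag))      # the rank-one bound behind (37)
```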
**Lemma 3.3**.: _For all pairs \(z,w\in\mathbb{C}\backslash\mathbb{R}\)_
\[\sum_{k=1}^{N}\mathbb{E}_{k-1}\left[Y_{k}(z)Y_{k}(w)\right]\overset{\mathbb{P}} {\rightarrow}C(z,w). \tag{38}\]
Denote \(\mathbf{G}_{N,k}(z):=\left(z-\mathbf{A}_{N,k}\right)^{-1}.\) We rewrite the right-hand side of equation (36) as in [10, p.54]:
\[Y_{k}(z)=\frac{1}{N^{1-\alpha/4}}\left(\mathbb{E}_{k}-\mathbb{E}_{k-1}\right) \frac{\frac{1}{N}\mathbf{x}_{k}^{*}\mathbf{G}_{N,k}(z)^{2}\mathbf{x}_{k}}{1- \frac{1}{N}\mathbf{x}_{k}^{*}\mathbf{G}_{N,k}(z)\mathbf{x}_{k}}. \tag{39}\]
From the equation above,
\[Y_{k}(z)=\frac{1}{N^{1-\alpha/4}}\left(\mathbb{E}_{k}-\mathbb{E} _{k-1}\right)\left[\frac{\frac{1}{N}\mathbf{x}_{k}^{*}\mathbf{G}_{N,k}(z)^{2 }\mathbf{x}_{k}}{1-\frac{1}{N}\mathbf{x}_{k}^{*}\mathbf{G}_{N,k}(z)\mathbf{x} _{k}}+\frac{1}{z}\right]\\ =\frac{1}{N^{1-\alpha/4}}\left(\mathbb{E}_{k}-\mathbb{E}_{k-1} \right)\frac{\frac{\partial}{\partial z}\left(z-\frac{1}{N}\mathbf{x}_{k}^{*}z \mathbf{G}_{N,k}(z)\mathbf{x}_{k}\right)}{z-\frac{1}{N}\mathbf{x}_{k}^{*}z \mathbf{G}_{N,k}(z)\mathbf{x}_{k}}. \tag{40}\]
For brevity, denote \(g_{N,k}(z):=z-\frac{1}{N}\mathbf{x}_{k}^{*}z\mathbf{G}_{N,k}(z)\mathbf{x}_{k}.\) Then, using this notation, we further rewrite (40) as
\[Y_{k}(z)=\frac{1}{N^{1-\alpha/4}}\left(\mathbb{E}_{k}-\mathbb{E} _{k-1}\right)\frac{\frac{\partial}{\partial z}g_{N,k}(z)}{g_{N,k}(z)}\\ =\frac{1}{N^{1-\alpha/4}}\left(\mathbb{E}_{k}-\mathbb{E}_{k-1} \right)\frac{\partial}{\partial z}\log\left|g_{N,k}(z)\right|^{2}. \tag{41}\]
We swap the derivative and the expectation in the equation above to get
\[Y_{k}(z)=\frac{\partial}{\partial z}\frac{1}{N^{1-\alpha/4}}\left(\mathbb{E} _{k}-\mathbb{E}_{k-1}\right)\log\left|g_{N,k}(z)\right|^{2}, \tag{42}\]
and justify the swap as follows. Application of Lemma A.5 and Lemma A.4 to the definition of \(g_{N,k}(z)\) yields
\[\left|\Im z\right|\leq\left|g_{N,k}(z)\right|\leq\left|z\right|+\frac{\left|z \right|}{\left|\Im z\right|}\frac{1}{N}\|\mathbf{x}_{k}\|_{2}^{2}. \tag{43}\]
Thus,
\[2\log\left|\Im z\right|\leq\log\left|g_{N,k}(z)\right|^{2}\leq 2\log\left( \left|z\right|+\frac{\left|z\right|}{\left|\Im z\right|}\frac{1}{N}\|\mathbf{x }_{k}\|_{2}^{2}\right). \tag{44}\]
The function \(\log\left|g_{N,k}(z)\right|^{2}\) is harmonic, as it is twice the real part of a holomorphic function. Thus, a combination of Lemma A.8 and Lemma A.7 justifies the swap.
### Diagonalization
In what follows, \(\operatorname{diag}\left[\mathbf{G}_{N,k}\right]\) denotes the \(P\times P\) matrix whose diagonal entries match those of \(\mathbf{G}_{N,k}\) and whose off-diagonal entries are equal to \(0\).
Let us denote
\[\tilde{g}_{N,k}(z):=z-\frac{1}{N}\mathbf{x}_{k}^{*}z\operatorname{diag}\left[ \mathbf{G}_{N,k}(z)\right]\mathbf{x}_{k}. \tag{45}\]
Define
\[\tilde{Y}_{k}(z):=\frac{1}{N^{1-\alpha/4}}\left(\mathbb{E}_{k}-\mathbb{E}_{k-1 }\right)\frac{\frac{\partial}{\partial z}\tilde{g}_{N,k}(z)}{\tilde{g}_{N,k}(z )}. \tag{46}\]
Similarly to equation (42), equation (46) can be rewritten as
\[\tilde{Y}_{k}(z)=\frac{\partial}{\partial z}\frac{1}{N^{1-\alpha/4}}\left( \mathbb{E}_{k}-\mathbb{E}_{k-1}\right)\log\left|\tilde{g}_{N,k}(z)\right|^{2}. \tag{47}\]
The main purpose of the further computations in this section will be to justify the replacement of \(Y\) with \(\tilde{Y}\) in Lemma 3.3.
**Lemma 3.4**.: _For \(0<t<y\) and \(\tilde{Y}_{k}\) defined as above, there exist \(C,\delta>0\) such that:_
\[\left|\sum_{k=1}^{N}\mathbb{E}_{k-1}\left[Y_{k}(z)Y_{k}(w)-\tilde{Y}_{k}(z) \tilde{Y}_{k}(w)\right]\right|\leq CN^{-\delta}\frac{|z||w|}{|\Im z|^{3}|\Im w |^{3}}.\]
Define the operator \(\operatorname{Cov}_{k-1}[\cdot,\cdot]\) by
\[\operatorname{Cov}_{k-1}[a,b]:=\mathbb{E}_{k-1}\Big{[}(\mathbb{E} _{k}[a]-\mathbb{E}_{k-1}[a])\left(\mathbb{E}_{k}[b]-\mathbb{E}_{k-1}[b]\right) \Big{]}\\ =\mathbb{E}_{k-1}\big{[}\mathbb{E}_{k}\left[a\right]\mathbb{E}_{ k}\left[b\right]\big{]}-\mathbb{E}_{k-1}\big{[}a\big{]}\mathbb{E}_{k-1}\big{[}b \big{]}. \tag{48}\]
This notation will simplify some computations because for random variable \(\xi\), that is independent of \(\mathbf{x}_{k}\)
\[\operatorname{Cov}_{k-1}[a+\xi,b]=\operatorname{Cov}_{k-1}[a,b]. \tag{49}\]
Denote \(F_{k}(z):=\log|g_{N,k}(z)|^{2}\) and \(\tilde{F}_{k}(z):=\log|\tilde{g}_{N,k}(z)|^{2}.\) As was done previously to justify (42), we justify the swap of derivative and expectation once again to get
\[\mathbb{E}_{k-1}\left[Y_{k}(z)Y_{k}(w)\right]=N^{\alpha/2-2}\frac{\partial}{ \partial z}\frac{\partial}{\partial w}\operatorname{Cov}_{k-1}\left[F_{k}(z), F_{k}(w)\right] \tag{50}\]
and
\[\mathbb{E}_{k-1}\left[\tilde{Y}_{k}(z)\tilde{Y}_{k}(w)\right]=N^{\alpha/2-2} \frac{\partial}{\partial z}\frac{\partial}{\partial w}\operatorname{Cov}_{k- 1}\left[\tilde{F}_{k}(z),\tilde{F}_{k}(w)\right]. \tag{51}\]
Using Corollary A.2 it is easy to see that the statement of Lemma 3.4 would follow from the existence of \(C,\delta>0\), independent of \(k\) and \(N\), such that
\[N^{\alpha/2-2}\bigg{|}\operatorname{Cov}_{k-1}\Bigl{[}F_{k}(z),F_{k}(w )\Bigr{]}-\operatorname{Cov}_{k-1}\Bigl{[}\tilde{F}_{k}(z),\tilde{F}_{k}(w) \Bigr{]}\bigg{|}\\ \leq\frac{CN^{-\delta}}{N}\frac{|z||w|}{|\Im z|^{2}|\Im w|^{2}}. \tag{52}\]
Define also the functions \(\hat{g}_{N,k}(z):=z-\frac{1}{N}z\operatorname{Tr}\mathbf{G}_{N,k}(z)\) and \(\hat{F}_{k}(z):=\log|\hat{g}_{N,k}(z)|^{2}\). Notice that they do not depend on the \(k\)-th column. Denote \(\epsilon_{k}(z):=F_{k}(z)-\tilde{F}_{k}(z)\) and \(\psi_{k}(z):=\tilde{F}_{k}(z)-\hat{F}_{k}(z).\) Using that \(\hat{F}_{k}(z)\) is independent of \(\mathbf{x}_{k}\), so that property (49) applies, we see that
\[\operatorname{Cov}_{k-1}\Bigl{[}F_{k}(z),F_{k}(w)\Bigr{]}= \operatorname{Cov}_{k-1}\Bigl{[}\hat{F}(z)+\psi_{k}(z)+\epsilon_{k}(z),\hat{ F}(w)+\psi_{k}(w)+\epsilon_{k}(w)\Bigr{]}\\ =\operatorname{Cov}_{k-1}\Bigl{[}\psi_{k}(z)+\epsilon_{k}(z), \psi_{k}(w)+\epsilon_{k}(w)\Bigr{]} \tag{53}\]
and
\[\operatorname{Cov}_{k-1}\left[\tilde{F}_{k}(z),\tilde{F}_{k}(w) \right]=\operatorname{Cov}_{k-1}\left[\hat{F}(z)+\psi_{k}(z),\hat{F}(w)+\psi_ {k}(w)\right]\\ =\operatorname{Cov}_{k-1}\Bigl{[}\psi_{k}(z),\psi_{k}(w)\Bigr{]}. \tag{54}\]
Thus
\[\operatorname{Cov}_{k-1}\left[F_{k}(z),F_{k}(w)\right]- \operatorname{Cov}_{k-1}\left[\tilde{F}_{k}(z),\tilde{F}_{k}(w)\right]\\ =\operatorname{Cov}_{k-1}\left[\psi_{k}(z),\epsilon_{k}(w)\right] +\operatorname{Cov}_{k-1}\left[\epsilon_{k}(z),\psi_{k}(w)\right]+ \operatorname{Cov}_{k-1}\left[\epsilon_{k}(z),\epsilon_{k}(w)\right]\\ =:T_{1}+T_{2}+T_{3}. \tag{55}\]
This way, Lemma 3.4 follows from the lemma below:
**Lemma 3.5**.: _Consider \(T_{1},T_{2},T_{3}\) defined above. There exist \(C,\delta>0\), independent of \(k,N\), such that_
\[N^{\alpha/2-1}|T_{i}|<CN^{-\delta}\frac{|z||w|}{|\Im z|^{2}|\Im w|^{2}}\]
_for all \(z,w\in\mathbb{C}\backslash\mathbb{R}\) and all \(i\in\left\{1,2,3\right\}.\)_
Proof.: Denote:
\[\eta_{k}(z):=g_{N,k}(z)-\tilde{g}_{N,k}(z)=\frac{1}{N}z\sum_{i\neq j}\overline {x_{ik}}x_{jk}\left(\mathbf{G}_{N,k}(z)\right)_{ij} \tag{56}\]
and
\[E_{k}(z):=\tilde{g}_{N,k}(z)-\hat{g}_{N,k}(z)=\frac{1}{N}z\sum_{i=1}^{P}\left( \mathbf{G}_{N,k}(z)\right)_{ii}\left(|x_{ik}|^{2}-1\right). \tag{57}\]
\[\epsilon_{k}(z)=\log\left|\frac{g_{N,k}(z)}{\tilde{g}_{N,k}(z)}\right|^{2}=2\log \left|1+\frac{\eta_{k}(z)}{\tilde{g}_{N,k}(z)}\right|=-2\log\left|1-\frac{\eta_ {k}(z)}{g_{N,k}(z)}\right| \tag{58}\]
Using that \(|1+z|\leq 1+|z|\) for \(z\in\mathbb{C}\), that \(\log(1+x)\leq x\) for \(x>0\), and that \(|\tilde{g}_{N,k}(z)|\geq|\Im z|\), we obtain
\[\log\left|1+\frac{\eta_{k}(z)}{\tilde{g}_{N,k}(z)}\right|\leq\log\left[1+ \left|\frac{\eta_{k}(z)}{\tilde{g}_{N,k}(z)}\right|\right]\leq\frac{|\eta_{k} (z)|}{|\Im z|}, \tag{59}\]
and similarly, using \(|g_{N,k}(z)|\geq|\Im z|\),
\[\log\left|1-\frac{\eta_{k}(z)}{g_{N,k}(z)}\right|\leq\frac{|\eta_{k}(z)|}{| \Im z|}. \tag{60}\]
Equation (58) and inequalities (59), (60) allow us to conclude, that
\[-2\frac{|\eta_{k}(z)|}{|\Im z|}\leq\epsilon_{k}(z)\leq 2\frac{|\eta_{k}(z)|}{| \Im z|}. \tag{61}\]
Using the same method, one also can prove that
\[-2\frac{|E_{k}(z)|}{|\Im z|}\leq\psi_{k}(z)\leq 2\frac{|E_{k}(z)|}{|\Im z|}. \tag{62}\]
Combining (61) and (62) with Cauchy-Schwarz inequality, we get
\[|T_{1}| \leq 8\frac{\sqrt{\mathbb{E}_{k-1}|E_{k}(z)|^{2}\mathbb{E}_{k-1}| \eta_{k}(w)|^{2}}}{|\Im z||\Im w|} \tag{63}\] \[|T_{2}| \leq 8\frac{\sqrt{\mathbb{E}_{k-1}|\eta_{k}(z)|^{2}\mathbb{E}_{k-1 }|E_{k}(w)|^{2}}}{|\Im z||\Im w|}\] (64) \[|T_{3}| \leq 8\frac{\sqrt{\mathbb{E}_{k-1}|\eta_{k}(z)|^{2}\mathbb{E}_{k-1 }|\eta_{k}(w)|^{2}}}{|\Im z||\Im w|}. \tag{65}\]
Now, we need to estimate \(\mathbb{E}_{k-1}|\eta_{k}|^{2}\) and \(\mathbb{E}_{k-1}|E_{k}|^{2}\).
**Lemma 3.6**.: _For \(\eta_{k}(z)\) defined above the following holds:_
1. \(\mathbb{E}_{k-1}|\eta_{k}(z)|^{2}\leq O(1)N^{-1}\frac{|z|^{2}}{|\Im z|^{2}}.\)__
2. \(\mathbb{E}_{k-1}|E_{k}(z)|^{2}\leq O(1)\frac{|z|^{2}}{|\Im z|^{2}}N^{4/\alpha -\alpha/4-1+\epsilon_{0}},\) _where_ \(\epsilon_{0}:=\epsilon(4-\alpha)\)__
Proof.: The first part of the lemma can be proven by combining the Ward identity (Lemma A.6) and Lemma A.5:
\[\mathbb{E}_{k-1}|\eta_{k}(z)|^{2}=\frac{1}{N^{2}}|z|^{2}\mathbb{E}_{ \mathbf{x}_{k}}|\sum_{i\neq j}\overline{x_{ik}}x_{jk}(\mathbf{G}_{N,k}(z))_{i,j }|^{2}\\ \leq 4|z|^{2}\sigma_{N}^{4}\frac{1}{N^{2}}\sum_{i,j}|(\mathbf{G}_{ N,k}(z))_{i,j}|^{2}=4\sigma_{N}^{4}N^{-2}|z|^{2}\frac{1}{|\Im z|}\sum_{i=1}^{P}| \Im\left(\mathbf{G}_{N,k}(z)\right)_{ii}|\\ \leq 4\sigma_{N}^{4}N^{-2}|z|^{2}\frac{1}{|\Im z|}\sum_{i=1}^{P}| \left(\mathbf{G}_{N,k}(z)\right)_{ii}|\leq O(1)N^{-1}\frac{|z|^{2}}{|\Im z|^{ 2}}. \tag{66}\]
The second part of the lemma follows from Lemma A.5 and part 4 of Lemma 3.2.
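As a side check, the Ward identity invoked in (66) can be verified numerically on a small example. The sketch below is illustrative only; the matrix, its size and the point \(z\) are assumptions made for the experiment.

```python
# Numerical check of the Ward identity used in (66): for a Hermitian matrix A and
# G(z) = (z - A)^{-1} with z off the real axis,
#   sum_{i,j} |G_{ij}(z)|^2 = (1/|Im z|) * sum_i |Im G_{ii}(z)|.
import numpy as np

rng = np.random.default_rng(4)
P = 50
X = rng.standard_normal((P, 2 * P))
A = X @ X.T / (2 * P)                       # a real symmetric (hence Hermitian) test matrix
z = 0.7 + 0.3j
G = np.linalg.inv(z * np.eye(P) - A)
lhs = np.sum(np.abs(G) ** 2)
rhs = np.sum(np.abs(np.imag(np.diag(G)))) / abs(z.imag)
print(lhs, rhs)                             # the two numbers agree up to rounding error
```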
Combining Lemma 3.6 with equations (63), (64) and (65) we can see, that
\[|T_{1}|,|T_{2}|<O(1)\frac{N^{-1/2+(2/\alpha-\alpha/8-1/2)+\epsilon_{0}/2}|z|| w|}{|\Im z|^{2}|\Im w|^{2}} \tag{67}\]
and
\[|T_{3}|<O(1)\frac{N^{-1}|z||w|}{|\Im z|^{2}|\Im w|^{2}}. \tag{68}\]
For any \(\alpha\in(2,4)\) there exists a small enough \(\epsilon>0\) such that
\[-1/2+(2/\alpha-\alpha/8-1/2)+\epsilon_{0}/2<1-\alpha/2. \tag{69}\]
Also, when \(\alpha\in(2,4)\)
\[-1<1-\alpha/2,\]
which completes the proof of Lemma 3.5.
### Computation of the limit
Recalling Lemma 3.4, to prove Lemma 3.3 we should compute the limit of
\[\sum_{k=1}^{N}\mathbb{E}_{k-1}\left[\tilde{Y}_{k}(z)\tilde{Y}_{k}(w)\right].\]
Namely, we should prove that for fixed \(z\) and \(w\)
\[\frac{1}{N^{2-\alpha/2}}\sum_{k=1}^{N}\mathrm{Cov}_{k-1}\left[\frac{\frac{ \partial}{\partial z}\tilde{g}_{N,k}(z)}{\tilde{g}_{N,k}(z)},\frac{\frac{ \partial}{\partial w}\tilde{g}_{N,k}(w)}{\tilde{g}_{N,k}(w)}\right]\stackrel{{ \mathbb{P}}}{{\to}}C(z,w). \tag{70}\]
**Definition 3.1** (Uniform convergence in probability).: _We say that the array of random variables \(X_{N}^{(k)}\) converges in probability to the constant \(C\) uniformly in \(k\) as \(N\to\infty\) if for all \(\epsilon>0\)_
\[\max_{k}\mathbb{P}\left(|X_{N}^{(k)}-C|>\epsilon\right)\underset{N\to\infty}{ \to}0. \tag{71}\]
By Lemma A.10, the convergence in equation (70) will follow from the Lemma below.
**Lemma 3.7**.: _Fix \(z,w.\)_
\[N^{\alpha/2-1}\operatorname{Cov}_{k-1}\left[\frac{\frac{\partial}{\partial z} \tilde{g}_{N,k}(z)}{\tilde{g}_{N,k}(z)},\frac{\frac{\partial}{\partial w}\tilde {g}_{N,k}(w)}{\tilde{g}_{N,k}(w)}\right]\underset{N\to\infty}{\to}C(z,w)\]
_uniformly on \(k\) in probability. Also, there exists a constant \(D(z,w)>0\) such that for all \(k,N\)_
\[\left|N^{\alpha/2-1}\operatorname{Cov}_{k-1}\left[\frac{\frac{\partial}{ \partial z}\tilde{g}_{N,k}(z)}{\tilde{g}_{N,k}(z)},\frac{\frac{\partial}{ \partial w}\tilde{g}_{N,k}(w)}{\tilde{g}_{N,k}(w)}\right]\right|<D(z,w).\]
Next, we do some preparatory work to rewrite the expressions in Lemma 3.7.
**Definition 3.2** (\(k\)-independent copy).: _Fix \(k.\) Denote \(\underline{\mathbf{X}}_{N}\) a \(P\times N\) matrix, with the same distribution as \(\mathbf{X}_{N},\) whose first \(k\) columns match those of matrix \(\mathbf{X}_{N},\) and others are independent of \(\mathbf{X}_{N}.\) For any random variable \(a:=F(\mathbf{X}_{N})\) for some non-random function \(F\) on the space of \(P\times N\) matrices we will denote \(\underline{a}:=F(\underline{\mathbf{X}}_{N}).\)_
Let \(\mathcal{F}_{\mathbf{x}_{k}}\) be the \(\sigma\)-algebra, generated by all the columns of matrix \(\mathbf{X}_{N},\) apart from the column \(\mathbf{x}_{k}.\) We will use the notations \(\mathbb{E}_{\mathbf{x}_{k}}[\cdot]\) and \(\operatorname{Cov}_{\mathbf{x}_{k}}[\cdot,\cdot]\) that act on any random variables \(a,b\) the following way:
\[\mathbb{E}_{\mathbf{x}_{k}}[a]:=\mathbb{E}\left[a\mid\mathcal{F}_{\mathbf{x}_ {k}}\right] \tag{72}\]
and
\[\operatorname{Cov}_{\mathbf{x}_{k}}[a,b]:=\mathbb{E}_{\mathbf{x}_{k}}[ab]- \mathbb{E}_{\mathbf{x}_{k}}[a]\mathbb{E}_{\mathbf{x}_{k}}[b]. \tag{73}\]
We generalize [1, Lemma 3.4] to sample covariance matrices.
**Lemma 3.8**.: _Suppose that \(a=F_{1}(\mathbf{X}_{N})\) and \(b=F_{2}(\mathbf{X}_{N}),\) for some non-random functions \(F_{1},F_{2}\) on the space of \(P\times N\) matrices, and \(\operatorname{Var}a<+\infty,\)\(\operatorname{Var}b<+\infty.\) Then_
\[\operatorname{Cov}_{k-1}\left[a,b\right]=\mathbb{E}_{k}\left(\operatorname{ Cov}_{\mathbf{x}_{k}}\left[a,\underline{b}\right]\right). \tag{74}\]
Proof.: \(a\) and \(\underline{b}\) are independent when the first \(k\) columns of the matrix \(\mathbf{X}_{N}\) are fixed. Thus,
\[\mathbb{E}_{k}(\underline{a}\underline{b})=\mathbb{E}_{k}(a)\mathbb{E}_{k}( \underline{b}). \tag{75}\]
Notice, that \(\mathbb{E}_{k}b=\mathbb{E}_{k}\underline{b},\) which further yields
\[\mathbb{E}_{k-1}\left(\mathbb{E}_{k}(a)\mathbb{E}_{k}(b)\right)=\mathbb{E}_{k- 1}\left(\mathbb{E}_{k}(a\underline{b})\right)=\mathbb{E}_{k-1}\left(a \underline{b}\right)=\mathbb{E}_{k}\mathbb{E}_{\mathbf{x}_{k}}\left(a \underline{b}\right). \tag{76}\]
Notice, that \(\underline{\mathbb{E}_{\mathbf{x}_{k}}b}=\mathbb{E}_{\mathbf{x}_{k}}\underline{b}.\) Similarly to equation (75)
\[\mathbb{E}_{k}\left(\mathbb{E}_{\mathbf{x}_{k}}a\mathbb{E}_{\mathbf{x}_{k}} \underline{b}\right)=\mathbb{E}_{k-1}a\mathbb{E}_{k-1}\underline{b}. \tag{77}\]
Combining (76) and (77) we conclude the statement of the Lemma.
Using Lemma 3.8, we conclude that
\[\mathrm{Cov}_{k-1}\left[\frac{\frac{\partial}{\partial z}\tilde{g}_{N,k}(z)}{ \tilde{g}_{N,k}(z)},\frac{\frac{\partial}{\partial w}\tilde{g}_{N,k}(w)}{\tilde {g}_{N,k}(w)}\right]=\mathbb{E}_{k}\left[\mathrm{Cov}_{\mathbf{x}_{k}}\left[ \frac{\frac{\partial}{\partial z}\tilde{g}_{N,k}(z)}{\tilde{g}_{N,k}(z)}, \frac{\frac{\partial}{\partial w}\tilde{g}_{N,k}(w)}{\tilde{g}_{N,k}(w)}\right] \right]. \tag{78}\]
By Lemma A.4, \(\Re\left(\mathrm{sgn}\,\Im z\mathrm{i}\tilde{g}_{N,k}(z)\right)\leq-|\Im z|\). Thus, we can rewrite
\[\frac{\frac{\partial}{\partial z}\tilde{g}_{N,k}(z)}{\tilde{g}_{N,k}(z)}=-\mathbf{i}\,\mathrm{sgn}\,\Im z\frac{\partial}{\partial z}\tilde{g}_ {N,k}(z)\times\int_{0}^{\infty}\exp\left(\mathrm{sgn}\,\Im z\mathbf{i}t\tilde{ g}_{N,k}(z)\right)dt\\ =-\mathbf{i}\,\mathrm{sgn}\,\Im z\int_{0}^{\infty}\frac{ \partial}{\partial z}\frac{1}{\mathbf{i}t\,\mathrm{sgn}\,\Im z}\exp\left( \mathrm{sgn}\,\Im z\mathbf{i}t\tilde{g}_{N,k}(z)\right)dt. \tag{79}\]
Substituting (79) into (78) gives:
\[N^{\alpha/2-1}\,\mathrm{Cov}_{\mathbf{x}_{k}}\left[\frac{ \frac{\partial}{\partial z}\tilde{g}_{N,k}(z)}{\tilde{g}_{N,k}(z)},\frac{ \frac{\partial}{\partial w}\tilde{g}_{N,k}(w)}{\tilde{g}_{N,k}(w)}\right]=\\ =\int_{0}^{\infty}\int_{0}^{\infty}\frac{\partial}{\partial z} \frac{\partial}{\partial w}N^{\alpha/2-1}\frac{\mathrm{Cov}_{\mathbf{x}_{k}} \left[\exp\left(\mathrm{sgn}\,\Im z\mathbf{i}t\tilde{g}_{N,k}(z)\right),\exp \left(\mathrm{sgn}\,\Im w\mathbf{i}s\tilde{g}_{N,k}(w)\right)\right]}{ts}\, \mathrm{d}\,t\,\mathrm{d}\,s. \tag{80}\]
Denote
\[\mathcal{L}_{N,k}(z,t,w,s):=N^{\alpha/2-1}\frac{\mathrm{Cov}_{\mathbf{x}_{k}} \left[\exp\left(\mathrm{sgn}\,\Im z\mathbf{i}t\tilde{g}_{N,k}(z)\right),\exp \left(\mathrm{sgn}\,\Im w\mathbf{i}s\tilde{g}_{N,k}(w)\right)\right]}{ts}.\]
Equation (80) allows to rewrite the first condition of the Lemma 3.7 as
\[\mathbb{E}_{k}\int_{0}^{\infty}\int_{0}^{\infty}\frac{\partial}{\partial z} \frac{\partial}{\partial w}\mathcal{L}_{N,k}(z,t,w,s)\,\mathrm{d}\,t\, \mathrm{d}\,s\underset{N\rightarrow\infty}{\rightarrow}\int_{0}^{\infty}\int_ {0}^{\infty}\frac{\partial}{\partial z}\frac{\partial}{\partial w}\mathcal{L}(z,t,w,s)\,\mathrm{d}\,t\,\mathrm{d}\,s \tag{81}\]
uniformly on \(k\) in probability, and the second condition as
\[\left|\mathbb{E}_{k}\left[\int_{0}^{\infty}\int_{0}^{\infty}\frac{\partial}{ \partial z}\frac{\partial}{\partial w}\mathcal{L}_{N,k}(z,t,w,s)\,\mathrm{d} \,t\,\mathrm{d}\,s\right]\right|<D(z,w). \tag{82}\]
Firstly, we will prove the bound (82). The intermediate lemma from this proof will allow us to cut the domain of integration in (81).
#### 3.5.1 Proof of the bound (82) and simplification of (81).
**Lemma 3.9**.: \[\left|\frac{\partial}{\partial z}\frac{\partial}{\partial w}\mathcal{L}_{N,k}(z,t,w,s)\right|\leq\mathcal{S}(z,t,w,s),\] (83)
_where_
\[\mathcal{S}(z,t,w,s)=32N^{\alpha/2-1}\frac{\exp(\frac{-t|\Im z|-s|\Im w|}{2})}{| \Im z||\Im w|}\times\min\left(\frac{1}{t}\frac{|w|}{|\Im w|},\frac{1}{s}\frac{| z|}{|\Im z|}\right).\]
Proof.: By definition of \(\tilde{g}\)
\[\mathcal{L}_{N,k}(z,t,w,s)=N^{\alpha/2-1}\exp(\mathbf{i}\,\mathrm{ sgn}_{\Im z}\,tz+\mathbf{i}\,\mathrm{sgn}_{\Im w}\,sw)\times\\ \mathrm{Cov}_{\mathbf{x}_{k}}\left[\frac{\exp\left(\mathrm{sgn}_{ \Im z}\,\mathbf{i}tf_{N,k}(z)\right)}{t},\frac{\exp\left(\mathrm{sgn}_{\Im w} \,\mathbf{i}sf_{N,k}(w)\right)}{s}\right],\]
where \(f_{N,k}(z):=-\frac{1}{N}\mathbf{x}_{k}^{*}z\,\mathrm{diag}\left[\mathbf{G}_{N,k}(z)\right]\mathbf{x}_{k}\) and \(\underline{f_{N,k}}(z):=-\frac{1}{N}\mathbf{x}_{k}^{*}z\,\mathrm{diag}\left[ \underline{\mathbf{G}_{N,k}}(z)\right]\mathbf{x}_{k}\). By the properties of the Cov operator, the following holds:
\[\mathrm{Cov}_{\mathbf{x}_{k}}\left[\frac{\exp\left(\mathrm{sgn}_{ \Im z}\,\mathbf{i}tf_{N,k}(z)\right)}{t},\frac{\exp\left(\mathrm{sgn}_{\Im w }\,\mathbf{i}sf_{N,k}(w)\right)}{s}\right]=\\ \mathrm{Cov}_{\mathbf{x}_{k}}\left[\frac{\exp\left(\mathrm{sgn}_ {\Im z}\,\mathbf{i}tf_{N,k}(z)\right)}{t},\frac{\exp\left(\mathrm{sgn}_{\Im w }\,\mathbf{i}sf_{N,k}(w)\right)-1}{s}\right]=\\ \mathrm{Cov}_{\mathbf{x}_{k}}\left[\frac{\exp\left(\mathrm{sgn}_ {\Im z}\,\mathbf{i}tf_{N,k}(z)\right)-1}{t},\frac{\exp\left(\mathrm{sgn}_{\Im w }\,\mathbf{i}sf_{N,k}(w)\right)}{s}\right]. \tag{84}\]
One can bound
\[\left|\frac{\exp\left(\mathrm{sgn}_{\Im z}\,\mathbf{i}tf_{N,k}( z)\right)-1}{t}\right|=\\ \left|\frac{1}{t}\int_{0}^{t}f_{N,k}(z)\exp\left(\mathrm{sgn}_{ \Im z}\,\mathbf{i}uf_{N,k}(z)\right)\mathrm{d}\,u\right|\leq|f_{N,k}(z)|, \tag{85}\]
and analogously
\[\left|\frac{\exp\left(\mathrm{sgn}_{\Im w}\,\mathbf{i}sf_{N,k}(w)\right)-1}{s }\right|\leq|\underline{f_{N,k}}(w)|.\]
The second part of Lemma A.4 yields that \(|\exp\left(\mathrm{sgn}_{\Im z}\,\mathbf{i}tf_{N,k}(z)\right)|\leq 1\) and that \(|\exp\left(\mathrm{sgn}_{\Im w}\,\mathbf{i}sf_{N,k}(w)\right)|\leq 1\). Using Lemma A.5 we also estimate \(\mathbb{E}_{\mathbf{x}_{k}}|f_{N,k}(z)|\leq\frac{|z|}{|\Im z|}\).
Thus,
\[\left|\mathrm{Cov}_{\mathbf{x}_{k}}\left[\frac{\exp\left(\mathrm{sgn}_{\Im z} \,\mathbf{i}tf_{N,k}(z)\right)}{t},\frac{\exp\left(\mathrm{sgn}_{\Im w}\, \mathbf{i}sf_{N,k}(w)\right)}{s}\right]\right|\leq 2\min\left(\frac{1}{t}\frac{|w|}{|\Im w|}, \frac{1}{s}\frac{|z|}{|\Im z|}\right).\]
Using that \(\Re(\operatorname{sgn}_{\Im z}itz)\leq-|\Im z|\) and \(\Re(\operatorname{sgn}_{\Im w}\mathbf{i}sw)\leq-|\Im w|\), we can bound \(|\exp(\mathbf{i}\operatorname{sgn}_{\Im z}tz+\mathbf{i}\operatorname{sgn}_{ \Im w}sw)|\) so that the inequality above leads to
\[|\mathcal{L}_{N,k}(z,t,w,s)|\leq 2N^{\alpha/2-1}\exp(-t|\Im z|-s|\Im w|)\times \min\left(\frac{1}{t}\frac{|w|}{|\Im w|},\frac{1}{s}\frac{|z|}{|\Im z|}\right). \tag{86}\]
The function \(\mathcal{L}_{N,k}(z,t,w,s)\) is analytic in \(z\) and \(w\). Applying the Cauchy inequality (Lemma A.11), we get that
\[\left|\frac{\partial}{\partial z}\frac{\partial}{\partial w}\mathcal{L}_{N,k} (z,t,w,s)\right|\leq 32N^{\alpha/2-1}\frac{\exp(\frac{-t|\Im z|-s|\Im w|}{2})}{| \Im z||\Im w|}\times\min\left(\frac{1}{t}\frac{|w|}{|\Im w|},\frac{1}{s}\frac{ |z|}{|\Im z|}\right). \tag{87}\]
**Corollary 3.1**.: _Suppose \(0<\epsilon<2-\frac{\alpha}{2}\). Denote_
\[\mathcal{L}_{N,k}^{0}(z,t,w,s):=\mathcal{L}_{N,k}(z,t,w,s)\times\mathbf{1}_{ \frac{t|z|}{|\Im z|}<N^{1-\epsilon}}\times\mathbf{1}_{\frac{s|w|}{|\Im w|}<N^ {1-\epsilon}}. \tag{88}\]
_Then_
\[\int_{0}^{\infty}\int_{0}^{\infty}\left|\frac{\partial}{\partial z }\frac{\partial}{\partial w}\mathcal{L}_{N,k}(z,t,w,s)-\frac{\partial}{ \partial z}\frac{\partial}{\partial w}\mathcal{L}_{N,k}^{0}(z,t,w,s)\right| \mathrm{d}\,t\,\mathrm{d}\,s\\ \leq O(1)N^{\alpha/2-2+\epsilon}\frac{|z||w|}{|\Im z|^{3}|\Im w|^ {3}}. \tag{89}\]
Proof.: Suppose that \(t_{0}=\frac{N^{1-\epsilon}|\Im z|}{|z|}\) and \(s_{0}=\frac{N^{1-\epsilon}|\Im w|}{|w|}\). Then \(\min\left(\frac{1}{t}\frac{|w|}{|\Im w|},\frac{1}{s}\frac{|z|}{|\Im z|} \right)=N^{-1+\epsilon}\frac{|z||w|}{|\Im z||\Im w|}\min(\frac{t_{0}}{t},\frac {s_{0}}{s})\).
\[\int_{0}^{\infty}\int_{0}^{\infty}\left|\frac{\partial}{\partial z }\frac{\partial}{\partial w}\mathcal{L}_{N,k}(z,t,w,s)-\frac{\partial}{ \partial z}\frac{\partial}{\partial w}\mathcal{L}_{N,k}^{0}(z,t,w,s)\right| \mathrm{d}\,t\,\mathrm{d}\,s\\ \leq O(1)N^{\alpha/2-2+\epsilon}\frac{|z||w|}{|\Im z|^{2}|\Im w|^ {2}}\left(\int_{t_{0}}^{+\infty}\int_{0}^{t\frac{s_{0}}{t_{0}}}\frac{t_{0}}{t} \exp\left(\frac{-t|\Im z|-s|\Im w|}{2}\right)\mathrm{d}\,s\,\mathrm{d}\,t\\ +\int_{s_{0}}^{+\infty}\int_{0}^{s\frac{t_{0}}{s_{0}}}\frac{s_{0} }{s}\exp\left(\frac{-t|\Im z|-s|\Im w|}{2}\right)\mathrm{d}\,t\,\mathrm{d}\,s \right)\\ \leq O(1)N^{\alpha/2-2+\epsilon}\frac{|z||w|}{|\Im z|^{2}|\Im w|^ {2}}\left(\frac{1}{|\Im w|}\int_{t_{0}}^{+\infty}\frac{t_{0}}{t}\exp\left( \frac{-t|\Im z|}{2}\right)\mathrm{d}\,t\right.\\ \left.+\frac{1}{|\Im z|}\int_{s_{0}}^{+\infty}\frac{s_{0}}{s} \exp\left(\frac{-s|\Im w|}{2}\right)\mathrm{d}\,s\right)\\ \leq O(1)N^{\alpha/2-2+\epsilon}\frac{|z||w|}{|\Im z|^{3}|\Im w|^ {3}}. \tag{90}\]
Suppose that there exists a function \(\mathcal{F}(z,t,w,s)>0\) such that \(\left|\frac{\partial}{\partial z}\frac{\partial}{\partial w}\mathcal{L}^{0}_{N,k}(z,t,w,s)\right|<\mathcal{F}(z,t,w,s)\) everywhere with probability \(1\) for all \(N,k\), and
\[\int_{0}^{\infty}\int_{0}^{\infty}\mathcal{F}(z,t,w,s)\,\mathrm{d}\,t\, \mathrm{d}\,s<D(z,w). \tag{91}\]
Then condition (82) would automatically hold and convergence in (81) will follow from
\[\int_{s_{1}}^{s_{2}}\int_{t_{1}}^{t_{2}}\frac{\partial}{\partial z }\frac{\partial}{\partial w}\mathcal{L}_{N,k}(z,t,w,s)\,\mathrm{d}\,t\, \mathrm{d}\,s=\frac{\partial}{\partial z}\frac{\partial}{\partial w}\int_{s_{ 1}}^{s_{2}}\int_{t_{1}}^{t_{2}}\mathcal{L}_{N,k}(z,t,w,s)\,\mathrm{d}\,t\, \mathrm{d}\,s\\ \stackrel{{\mathbb{P}}}{{\rightarrow}}\frac{\partial}{ \partial z}\frac{\partial}{\partial w}\int_{s_{1}}^{s_{2}}\int_{t_{1}}^{t_{2} }\mathcal{L}(z,t,w,s)\,\mathrm{d}\,t\,\mathrm{d}\,s=\int_{s_{1}}^{s_{2}}\int_ {t_{1}}^{t_{2}}\frac{\partial}{\partial z}\frac{\partial}{\partial w}\mathcal{ L}(z,t,w,s)\,\mathrm{d}\,t\,\mathrm{d}\,s \tag{92}\]
uniformly in \(k\) for all fixed \(0<t_{1}<t_{2}\) and \(0<s_{1}<s_{2}\).
**Lemma 3.10**.: _For all \(N,k\) and for \(\mathcal{L}^{0}_{N,k}(z,t,w,s)\) defined as above,_
\[\left|\frac{\partial}{\partial z}\frac{\partial}{\partial w}\mathcal{L}^{0}_{ N,k}(z,t,s,w)\right|\leq\mathcal{F}(z,t,w,s),\]
_where_
\[\mathcal{F}(z,t,w,s):=\\ O(1)\exp\left(\frac{-t|\Im z|-s|\Im w|}{2}\right)\left(t^{\alpha/4-1}|z |^{\alpha/4}|\Im z|^{\alpha/4-1}\times s^{\alpha/4-1}|w|^{\alpha/4}|\Im w|^{ \alpha/4-1}\right).\]
Proof.: \[\mathrm{Cov}_{\mathbf{x}_{k}}\left[\frac{\exp\left(\mathrm{sgn}_ {\Im z}\,\mathbf{i}tf_{N,k}(z)\right)}{t},\frac{\exp\left(\mathrm{sgn}_{\Im w }\,\mathbf{i}sf_{N,k}(w)\right)}{s}\right]=\\ \frac{\prod_{1}^{P}\phi_{N}(tu_{i}+sv_{i})-\prod_{1}^{P}\phi_{N} (tu_{i})\phi_{N}(sv_{i})}{ts},\] (93)
where \(u_{i}:=\mathrm{sgn}_{\Im z}\,z\mathbf{G}_{N,k}(z)_{ii}\) and \(v_{i}:=\mathrm{sgn}_{\Im w}\,w\mathbf{G}_{N,k}(w)_{ii}.\) Denote
\[\ell_{N,k}^{(i)}(z,t,w,s):=N^{\alpha/2}\frac{\phi_{N}(tu_{i}+sv_{i})-\phi_{N} (tu_{i})\phi_{N}(sv_{i})}{ts} \tag{94}\]
and
\[r_{N,k}^{(i)}(z,t,w,s):=\prod_{j=1}^{i-1}\phi_{N}(tu_{j}+sv_{j})\prod_{j=i+1}^ {P}\phi_{N}(tu_{j})\phi_{N}(sv_{j}). \tag{95}\]
We can rewrite
\[\mathcal{L}_{N,k}(z,t,w,s)=\frac{\exp\left(\mathbf{i}\,\mathrm{sgn}_{\Im z}\, tz+\mathbf{i}\,\mathrm{sgn}_{\Im w}\,sw\right)\sum_{i=1}^{P}\left(\ell_{N,k}^{(i)}(z,t,w,s)\times r_{N,k}^{(i)}(z,t,w,s)\right)}{N}. \tag{96}\]
By Lemma A.4\(\Im u_{i}<0\) and \(\Im v_{i}<0\), therefore
\[\left|r^{(i)}_{N,k}(z,t,w,s)\right|\leq 1. \tag{97}\]
Next, we will prove that
\[\left|\ell^{(i)}_{N,k}(z,t,w,s)\right|\leq O(1)\ t^{\alpha/4-1}|z|^{\alpha/4}| \Im z|^{-\alpha/4}\times s^{\alpha/4-1}|w|^{\alpha/4}|\Im w|^{-\alpha/4}. \tag{98}\]
By Cauchy-Schwartz inequality for complex random variables \(X\) and \(Y\)
\[|\operatorname{Cov}_{\mathbf{x}_{k}}(X,Y)|=|\mathbb{E}_{\mathbf{ x}_{k}}(X-\mathbb{E}_{\mathbf{x}_{k}}X,Y-\mathbb{E}_{x_{k}}Y)|\leq\\ \sqrt{\left(\mathbb{E}_{\mathbf{x}_{k}}X\overline{X}-\mathbb{E}_ {\mathbf{x}_{k}}X\overline{\mathbb{E}_{\mathbf{x}_{k}}X}\right)\left(\mathbb{ E}_{\mathbf{x}_{k}}Y\overline{Y}-\mathbb{E}_{\mathbf{x}_{k}}Y\overline{\mathbb{E}_{ \mathbf{x}_{k}}Y}\right)}. \tag{99}\]
Thus
\[|\phi_{N}(tu_{i}+sv_{i})-\phi_{N}(tu_{i})\phi_{N}(sv_{i})|\leq\\ \sqrt{\left(\phi_{N}(2t\Im u_{i})-\phi_{N}(tu_{i})\overline{\phi _{N}(tu_{i})}\right)\left(\phi_{N}(2s\Im v_{i})-\phi_{N}(sv_{i})\overline{\phi _{N}(sv_{i})}\right)}. \tag{100}\]
The 5-th part of Lemma 3.2 allows the following estimate for \(t\leq t_{0}\):
\[\phi_{N}(2t\Im u_{i})-\phi_{N}(tu_{i})\overline{\phi_{N}(tu_{i})}=O\left( \frac{|tz|^{\alpha/2}}{|\Im z|^{\alpha/2}N^{\alpha/2}}\right),\]
which leads to
\[|\phi_{N}(tu_{i}+sv_{i})-\phi_{N}(tu_{i})\phi_{N}(sv_{i})|\leq O\left(\frac{|tz |^{\alpha/4}|sw|^{\alpha/4}}{|\Im z|^{\alpha/4}|\Im w|^{\alpha/4}N^{\alpha/2}} \right). \tag{101}\]
Thus, using the bounds on the support of \(\mathcal{L}^{0}_{N,k}(z,t,w,s)\) we can conclude, that
\[\left|\mathcal{L}^{0}_{N,k}(z,t,s,w)\right|\leq\\ O(1)\exp\left(-t|\Im z|-s|\Im w|\right)(t^{\alpha/4-1}|\Im z|^{- \alpha/4-1}\times s^{\alpha/4-1}|\Im w|^{-\alpha/4-1}). \tag{102}\]
Applying Cauchy inequality (Lemma A.11), we bound
\[\left|\frac{\partial}{\partial z}\frac{\partial}{\partial w} \mathcal{L}^{0}_{N,k}(z,t,s,w)\right|\leq\\ O(1)\exp\left(\frac{-t|\Im z|-s|\Im w|}{2}\right)(t^{\alpha/4-1}|z| ^{\alpha/4}|\Im z|^{-\alpha/4-1}\times s^{\alpha/4-1}|w|^{\alpha/4}|\Im w|^{- \alpha/4-1}). \tag{103}\]
Notice, that
\[\int_{0}^{\infty}\int_{0}^{\infty}\exp\left(\frac{-t|\Im z|-s|\Im w |}{2}\right)(t^{\alpha/4-1}|\Im z|^{\alpha/4-1}\times s^{\alpha/4-1}|\Im w|^{ \alpha/4-1})\,\mathrm{d}\,t\,\mathrm{d}\,s\leq\\ 8\frac{1}{|\Im z||\Im w|}\Gamma(\alpha/4)^{2}, \tag{104}\]
which means that the contribution of \(\mathcal{L}_{N,k}^{0}\) to \(D(z,w)\) does not exceed \(O(1)\frac{|z|^{\alpha/4}|w|^{\alpha/4}}{|\Im z|^{\alpha/2+1}|\Im w|^{\alpha/2+1}}\). This finishes the proof of the bound (82).
#### 3.5.2 Proof of convergence (92)
Recalling the proof of Lemma 3.10, the limit (92) will follow from the Lemma below.
**Lemma 3.11**.: _Fix \(0<t_{1}<t_{2}\) and \(0<s_{1}<s_{2}\) and \(z,w\in\mathbb{C}\backslash\mathbb{R}\)_
\[\max_{\begin{subarray}{c}t\in[t_{1},t_{2}]\\ s\in[s_{1},s_{2}]\end{subarray}}|\mathcal{L}_{N,k}(z,t,w,s)-\mathcal{L}(z,t,w,s)|\underset{N\to\infty}{\to}0. \tag{105}\]
_uniformly on \(k\) in probability._
Proof.: Using expansion (93), by Lemma A.10 it is enough to prove, that
\[\max_{\begin{subarray}{c}t\in[t_{1},t_{2}]\\ s\in[s_{1},s_{2}]\end{subarray}}\left[\ell_{N,k}^{(i)}(z,t,w,s)-\ell(z,t,w,s) \right]\underset{N\to\infty}{\to}0, \tag{106}\]
and
\[\max_{\begin{subarray}{c}t\in[t_{1},t_{2}]\\ s\in[s_{1},s_{2}]\end{subarray}}\left[r_{N,k}^{(i)}(z,t,w,s)-r(z,t,w,s) \right]\underset{N\to\infty}{\to}0. \tag{107}\]
in probability uniformly over \(i,k.\) Firstly, we prove the limit (106). Using part 5 of Lemma 3.2 we get the asymptotic decomposition below:
\[\frac{ts}{N^{\alpha/2}}\times\ell_{N,k}^{(i)}(z,t,w,s)=\phi_{N}( tu_{i}+sv_{i})-\phi_{N}(tu_{i})\phi_{N}(sv_{i})\\ =1-\frac{(tu_{i}+sv_{i})}{N}+c\frac{(tu_{i}+sv_{i})^{\alpha/2}}{N^ {\alpha/2}}+\frac{|tu_{i}+sv_{i}|^{\alpha/2}}{N^{\alpha/2}}\mathfrak{E}_{N} \left(\frac{tu_{i}+sv_{i}}{N}\right)\\ -\left(1-\frac{tu_{i}}{N}+c\frac{(tu_{i})^{\alpha/2}}{N^{\alpha/2 }}+\frac{|tu_{i}|^{\alpha/2}}{N^{\alpha/2}}\mathfrak{E}_{N}\left(\frac{tu_{i}} {N}\right)\right)\\ \times\left(1-\frac{sv_{i}}{N}+c\frac{(sv_{i})^{\alpha/2}}{N^{ \alpha/2}}+\frac{|sv_{i}|^{\alpha/2}}{N^{\alpha/2}}\mathfrak{E}_{N}\left( \frac{sv_{i}}{N}\right)\right)\\ =\frac{1}{N^{\alpha/2}}c\left((tu_{i}+sv_{i})^{\alpha/2}-tu_{i}^{ \alpha/2}-sv_{i}^{\alpha/2}\right)+o(1)\frac{1}{N^{\alpha/2}}, \tag{108}\]
where the \(o(1)\) term is bounded by \(c_{N}(z,t_{1},t_{2},w,s_{1},s_{2})\underset{N\to\infty}{\to}0\).
_Remark_.: For fixed \(z,t_{1},t_{2},w,s_{1},s_{2}\), the rate of convergence of \(c_{N}\) can be estimated from above by the maximum of \(\mathfrak{E}_{N}(\cdot)\) on the half-ball centred at \(0\) with radius \(\frac{t_{2}}{N\left\lvert\Im z\right\rvert}+\frac{s_{2}}{N\left\lvert\Im w\right\rvert}\).
We can make further expansion:
\[\left(tu_{i}+sv_{i}\right)^{\alpha/2}-\left(tu_{i}\right)^{\alpha/ 2}-\left(sv_{i}\right)^{\alpha/2}=\\ \left(K(t,z)+K(s,w)\right)^{\alpha/2}-K(t,z)^{\alpha/2}-K(s,w)^{ \alpha/2}+\Delta_{i,k}(z,t,w,s),\]
where
\[\left\lvert\Delta_{i,k}(z,t,w,s)\right\rvert\leq 4(t_{2}+s_{2})^{\alpha/2} \bigg{(}\Big{|}z(\mathbf{G}_{N,k})_{ii}(z)-zm_{y}(z)\Big{|}+\Big{|}w(\underline {\mathbf{G}_{N,k}})_{ii}(w)-wm_{y}(w)\Big{|}\bigg{)}.\]
The random variable \(\left(\mathbf{G}_{N,k}\right)_{ii}(z)\) converges in probability to \(m_{y}(z)\) uniformly in \((i,k)\).
Next, we prove convergence (107).
\[r_{N,k}^{(i)}(z,t,w,s)=\prod_{j=1}^{i-1}\phi_{N}(u_{j}+v_{j}) \prod_{j=i+1}^{P}\phi_{N}(u_{j})\phi_{N}(v_{j})=\\ \exp\left(\sum_{j=1}^{i-1}\ln\phi_{N}(u_{j}+v_{j})+\sum_{j=i+1}^{ P}\left[\ln\phi_{N}(u_{j})+\ln\phi_{N}(v_{j})\right]\right). \tag{109}\]
When \(|z|<\frac{1}{10}\), we have \(|\ln(1+z)-z|<10|z|^{2}\). For bounded \(u\)
\[\ln\phi_{N}(u)=-\frac{\mathbf{i}u}{N}+O\left(\frac{1}{N^{\alpha/2}}\right),\]
which leads to
\[\prod_{j=1}^{i-1}\phi_{N}(u_{j}+v_{j})\prod_{j=i+1}^{P}\phi_{N}(u_{j})\phi_{N} (v_{j})=\exp\left(-\mathbf{i}\frac{\sum_{j=1,j\neq i}^{P}(u_{j}+v_{j})}{N} \right)(1+o(1)).\]
Recalling the definitions of \(u_{i}\) and \(v_{i}\) we get that
\[-\mathbf{i}\frac{\sum_{j=1,j\neq i}^{P}(u_{j}+v_{j})}{N}=- \mathbf{i}t\operatorname{sgn}_{\Im z}\frac{1}{N}z\operatorname{Tr}\mathbf{G} _{N,k}(z)-\mathbf{i}s\operatorname{sgn}_{\Im w}\frac{1}{N}w\operatorname{Tr} \underline{\mathbf{G}_{N,k}}(w)\\ =-yK(z,t)-yK(s,w)-\mathbf{i}tz\operatorname{sgn}_{\Im z}M_{N}(z)- \mathbf{i}sw\operatorname{sgn}_{\Im w}\underline{M_{N}}(w)\\ +O\left(\frac{t|z|}{N|\Im z|}\right)+O\left(\frac{s|w|}{N|\Im w |}\right),\]
where \(M_{N}(z):=\frac{1}{N}\operatorname{Tr}\mathbf{G}_{N}(z)-ym_{y}(z).\) The Marchenko-Pastur law yields that \(M_{N}(z)\to 0\) in probability, and similarly \(\underline{M_{N}}(w)\to 0.\) Thus, (107) holds, and the convergence is obviously uniform.
## 4 Proof of Theorem 2.2
To calculate the integral from Theorem 2.1 we will follow [13].
Using Lemma A.13, it is possible to rewrite the following expression as an integral:
\[(K(z,t)+K\left(w,s\right))^{\alpha/2}-K(z,t)^{\alpha/2}-K(w,s)^{ \alpha/2}\\ =\frac{1}{\Gamma(-\alpha/2)}\int_{0}^{\infty}\frac{(\exp\left(-rK (z,t)\right)-1)\left(\exp\left(-rK(w,s)\right)-1\right)}{r^{\frac{\alpha}{2}+1 }}\mathrm{d}r. \tag{110}\]
Thus, denoting
\[\mathfrak{k}(z,t,r):=\frac{(\exp\left(-rK(z,t)\right)-1)\exp(\mathrm{sgn}_{z} \,\mathfrak{k}tz-yK(z,t))}{t}, \tag{111}\]
it is possible to rewrite
\[\mathcal{L}(z,t,w,s)=yc\frac{1}{\Gamma(-\alpha/2)}\int_{0}^{\infty}\frac{1}{r ^{\frac{\alpha}{2}+1}}\mathfrak{k}(z,t,r)\mathfrak{k}(w,s,r)\,\mathrm{d}\,r. \tag{112}\]
and
\[C(z,w)=\int_{0}^{\infty}\int_{0}^{\infty}\frac{\partial}{\partial z }\frac{\partial}{\partial w}\mathcal{L}(z,t,w,s)\,\mathrm{d}\,s\,\mathrm{d}\, t=\\ yc\frac{1}{\Gamma(-\alpha/2)}\int_{0}^{\infty}\int_{0}^{ \infty}\int_{0}^{\infty}\frac{1}{r^{\frac{\alpha}{2}+1}}\frac{\partial}{ \partial z}\mathfrak{k}(z,t,r)\frac{\partial}{\partial w}\mathfrak{k}(w,s,r) \,\mathrm{d}\,r\,\mathrm{d}\,s\,\mathrm{d}\,t=\\ yc\frac{1}{\Gamma(-\alpha/2)}\int_{0}^{\infty}\frac{1}{r^{ \frac{\alpha}{2}+1}}\frac{\partial}{\partial z}\left(\int_{0}^{\infty} \mathfrak{k}(z,t,r)\,\mathrm{d}\,t\right)\frac{\partial}{\partial w}\left( \int_{0}^{\infty}\mathfrak{k}(w,s,r)\,\mathrm{d}\,s\right)\mathrm{d}\,r. \tag{113}\]
Lemma A.14 allows us to conclude, that
\[\int_{0}^{\infty}\mathfrak{k}(z,t,r)\,\mathrm{d}\,t=\,-\log\left(1-\frac{rm_ {y}(z)}{1-ym_{y}(z)}\right). \tag{114}\]
Recalling equation (10),
\[\frac{1}{1-ym_{y}(z)}=(1-y)+zym_{y}(z). \tag{115}\]
Substituting it into (114) we get, that
\[\int_{0}^{\infty}\mathfrak{k}(z,t,r)\,\mathrm{d}\,t=\,-\log\left(1-rm_{y}(z) \left((1-y)+zym_{y}(z)\right)\right). \tag{116}\]
Similarly, by (10) we have \(m_{y}(z)\left((1-y)+zym_{y}(z)\right)=zm_{y}(z)-1\), so we can simplify
\[\int_{0}^{\infty}\mathfrak{k}(z,t,r)\,\mathrm{d}\,t=\,-\log\left(1-r(zm_{y}(z) -1)\right), \tag{117}\]
from where we can deduce
\[\frac{\partial}{\partial z}\int_{0}^{\infty}\mathfrak{k}(z,t,r)\,\mathrm{d}\,t=r \frac{\frac{\partial}{\partial z}zm_{y}(z)}{1-r\left(zm_{y}(z)-1\right)}. \tag{118}\]
Substituting this into (113) we get that
\[C(z,w)=\frac{yc}{\Gamma(-\alpha/2)}\frac{\partial}{\partial z}zm_{y}(z)\frac{ \partial}{\partial w}wm_{y}(w)\int_{0}^{\infty}\frac{r^{1-\alpha/2}}{(1-r \left(zm_{y}(z)-1\right))(1-r\left(wm_{y}(w)-1\right))}\,\mathrm{d}\,r. \tag{119}\]
The Lemma A.15 allows us to calculate
\[\int_{0}^{\infty}\frac{r^{1-\alpha/2}}{(1-r\left(zm_{y}(z)-1 \right))(1-r\left(wm_{y}(w)-1\right))}\,\mathrm{d}\,r\\ =\frac{\pi}{\sin\left(\pi\frac{\alpha}{2}\right)}\frac{(-1+zm_{y} (z))^{\alpha/2-1}-\left(-1+wm_{y}(w)\right)^{\alpha/2-1}}{zm_{y}(z)-wm_{y}(w )}. \tag{120}\]
Combining it with (119) and using \(\frac{\pi}{\sin\left(\pi\frac{\alpha}{2}\right)}=-\Gamma\left(-\frac{\alpha}{2}\right)\Gamma\left(1+\frac{\alpha}{2}\right),\) we conclude that
\[C(z,w)=-yc\Gamma\left(1+\frac{\alpha}{2}\right)\frac{\partial}{ \partial z}\left(zm_{y}(z)\right)\frac{\partial}{\partial w}\left(wm_{y}(w) \right)\\ \times\frac{\left(-1+zm_{y}(z)\right)^{\alpha/2-1}-\left(-1+wm_{ y}(w)\right)^{\alpha/2-1}}{zm_{y}(z)-wm_{y}(w)}. \tag{121}\]
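The closed-form kernel (121) is straightforward to evaluate numerically. The sketch below is illustrative only: the parameters \(y,c,\alpha,z,w\) are assumptions, the derivative of \(zm_{y}(z)\) is approximated by a central finite difference, and complex powers are taken on the principal branch; it requires numpy and scipy.

```python
import numpy as np
from scipy.special import gamma

def m_y(z, y):
    # Root of the quadratic (11) with the Stieltjes sign convention.
    roots = np.roots([y * z, 1.0 - z - y, 1.0])
    return roots[np.argmin(np.imag(roots) * np.sign(z.imag))]

def zm(z, y):
    return z * m_y(z, y)

def d_zm(z, y, h=1e-6):
    # Finite-difference approximation of d/dz (z m_y(z)); h is an assumption.
    return (zm(z + h, y) - zm(z - h, y)) / (2 * h)

def C(z, w, y, c, alpha):
    # The covariance kernel (121) of Theorem 2.2.
    a, b = zm(z, y), zm(w, y)
    frac = ((a - 1) ** (alpha / 2 - 1) - (b - 1) ** (alpha / 2 - 1)) / (a - b)
    return -y * c * gamma(1 + alpha / 2) * d_zm(z, y) * d_zm(w, y) * frac

print(C(2.0 + 1.0j, 1.5 + 0.8j, y=0.5, c=1.0, alpha=3.0))
```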
## 5 Proof of Theorem 2.3
In this section, we show how to adapt the proof of Theorem 2.1 so that it works for overlapping half-heavy-tailed random matrices, where the number of overlapping rows and columns is proportional to \(N.\) The truncation and diagonalization stages of the proof do not differ from those in the previous sections. Nevertheless, in the computation of the limit we need to be careful with the number of non-zero terms in the martingale sum and with the number of resolvent diagonal elements involved.
Denote
\[Y_{k}^{[i]}(z):=\frac{1}{N^{1-\alpha/4}}\left(\mathbb{E}_{k}-\mathbb{E}_{k-1} \right)\mathrm{Tr}\,\mathbf{G}_{\mathbf{A}_{N}^{[i]}}(z). \tag{122}\]
Similarly to Section 3.3 it is enough to prove that
\[\sum_{k\in\mathcal{Q}_{i}\cap\mathcal{Q}_{j}}\mathbb{E}_{k-1}\left[Y_{k}^{[i] }(z)Y_{k}^{[j]}(w)\right]\to C_{i,j}(z,w). \tag{123}\]
Denote by \((\mathbf{x}_{k}\mid_{\mathcal{P}_{i}})\) the projection of the vector \(\mathbf{x}_{k}\) onto the coordinates from the set \(\mathcal{P}_{i}\). Then the "diagonalized" term \(\tilde{Y}_{k}^{[i]}(z)\) can be defined in the following way:
\[\tilde{Y}_{k}^{[i]}(z):=\begin{cases}0,&k\notin\mathcal{Q}_{i}\\ \frac{\partial}{\partial z}\frac{1}{N^{1-\alpha/4}}\left(\mathbb{E}_{k}- \mathbb{E}_{k-1}\right)\log\left|\tilde{g}_{N,k}^{[i]}(z)\right|^{2},&\text{ otherwise},\end{cases} \tag{124}\]
where \(\tilde{g}_{N,k}^{[i]}(z):=z-\frac{1}{N}\left(\mathbf{x}_{k}\mid_{\mathcal{P}_ {i}}\right)^{*}z\operatorname{diag}\left[\mathbf{G}_{N,k}^{[i]}(z)\right]\left( \mathbf{x}_{k}\mid_{\mathcal{P}_{i}}\right).\) Further, for \(k\in\mathcal{Q}_{i}\cap\mathcal{Q}_{j}\) we will denote
\[\mathcal{L}_{N,k}^{[i,j]}(z,t,w,s)\\ :=\frac{|\mathcal{Q}_{i}\cap\mathcal{Q}_{j}|}{N}\cdot N^{\alpha/ 2-1}\frac{\operatorname{Cov}_{\mathbf{x}_{k}}\left[\exp\left(\operatorname{ sgn}\Im\mathrm{z}\mathrm{i}t\tilde{g}_{N,k}^{[i]}(z)\right),\exp\left( \operatorname{sgn}\Im\mathrm{w}\mathbf{i}s\tilde{g}_{N,k}^{[j]}(w)\right) \right]}{ts}\\ =\frac{|\mathcal{Q}_{i}\cap\mathcal{Q}_{j}|}{N^{2}}\exp\left( \mathbf{i}\operatorname{sgn}_{\Im\mathrm{z}}tz+\mathbf{i}\operatorname{sgn}_ {\Im\mathrm{w}}sw\right)\\ \times\sum_{m\in\mathcal{P}_{i}\cap\mathcal{P}_{j}}\left[\ell_{N, k}^{(m),[i,j]}(z,t,w,s)\times r_{N,k}^{(m)[i,j]}(z,t,w,s)\right], \tag{125}\]
where
\[u_{m}^{[i]}:=-\operatorname{sgn}_{\Im\mathrm{z}}\mathbf{i}z\left(\mathbf{G}_{ N,k}^{[i]}(z)\right)_{m,m} \tag{126}\]
\[v_{m}^{[j]}:=-\operatorname{sgn}_{\Im\mathrm{z}}\mathbf{i}w\left(\mathbf{G}_{ N,k}^{[j]}(w)\right)_{m,m} \tag{127}\]
\[\ell_{N,k}^{(m)[i,j]}(z,t,w,s):=N^{\alpha/2}\frac{\phi_{N}(tu_{m}^{[i]}+sv_{m} ^{[j]})-\phi_{N}(tu_{m}^{[i]})\phi_{N}(sv_{m}^{[j]})}{ts}, \tag{128}\]
and
\[r_{N,k}^{(m)[i,j]}(z,t,w,s):=\prod_{\begin{subarray}{c}n<m\\ n\in\mathcal{P}_{i}\cap\mathcal{P}_{j}\end{subarray}}\phi_{N}(tu_{n}^{[i]}+sv_{n }^{[j]})\prod_{\begin{subarray}{c}n>m\\ n\in\mathcal{P}_{i}\cap\mathcal{P}_{j}\end{subarray}}\phi_{N}(tu_{n}^{[i]}) \phi_{N}(sv_{n}^{[j]})\\ \times\prod_{n\in\mathcal{P}_{i}\setminus\mathcal{P}_{j}}\phi_{N}(tu _{n}^{[i]})\prod_{n\in\mathcal{P}_{j}\setminus\mathcal{P}_{i}}\phi_{N}(sv_{n}^{ [j]}). \tag{129}\]
As in Subsection 3.5, we see that
\[\ell_{N,k}^{(m)[i,j]}(z,t,w,s)\to c\frac{\left(K^{[i]}(z,t)+K^{[j]}(w,s) \right)^{\alpha/2}-K^{[i]}(z,t)^{\alpha/2}-K^{[j]}(w,s)^{\alpha/2}}{ts} \tag{130}\]
and
\[r_{N,k}^{(m)[i,j]}(z,t,w,s)\to\exp\left(-p_{i}K^{[i]}(z,t)-p_{j}K^{[j]}(w,s) \right), \tag{131}\]
where \(K^{[i]}(z,t)=t\operatorname{sgn}_{\Im z}\mathbf{i}zs^{[i]}(z),\) and \(s^{[i]}(z)\) denotes the limit of the diagonal elements of the resolvent of \(\mathbf{A}_{N}^{[i]}.\) Notice that
\[z-\mathbf{A}_{N}^{[i]}=z-\frac{\mathbf{X}_{N}^{[i]}\mathbf{X}_{N}^{[i]}{}^{*}} {N}=z-\frac{|\mathcal{Q}_{i}|}{N}\cdot\frac{\mathbf{X}_{N}^{[i]}\mathbf{X}_{N}^ {[i]}{}^{*}}{|\mathcal{Q}_{i}|}=\frac{|\mathcal{Q}_{i}|}{N}\left(z\frac{N}{| \mathcal{Q}_{i}|}-\frac{\mathbf{X}_{N}^{[i]}{}\mathbf{X}_{N}^{[i]}{}^{*}}{| \mathcal{Q}_{i}|}\right).\]
Thus,
\[s^{[i]}(z)=\frac{1}{q_{i}}m_{\frac{p_{i}}{q_{i}}}\left(\frac{z}{q_{i}}\right).\]
## Appendix A Appendix
We use the following rank inequalities adapted from [1, Theorem A.44], where proofs may be found.
**Lemma A.1**.: _[_1_, Theorem A.44]_ _Let \(\mathbf{X}_{N}\) and \(\hat{\mathbf{X}}_{N}\) be two \(P\times N\) complex matrices and let \(F(\cdot)\) and \(\hat{F}(\cdot)\) be the cumulative distribution functions of the empirical spectral measures of \(\mathbf{X}_{N}\mathbf{X}_{N}^{*}\) and \(\hat{\mathbf{X}}_{N}\hat{\mathbf{X}}_{N}^{*}\) respectively:_
\[F(x):=\frac{\#\big{\{}j:\lambda_{j}\left(\frac{\mathbf{X}_{N} \mathbf{X}_{N}^{*}}{N}\right)\leq x\big{\}}}{P},\] \[\hat{F}(x):=\frac{\#\big{\{}j:\lambda_{j}\left(\frac{\hat{ \mathbf{X}}_{N}\hat{\mathbf{X}}_{N}^{*}}{N}\right)\leq x\big{\}}}{P}\]
_Then the following inequality holds_
\[\sup_{x\in\mathbb{R}}\left|F(x)-\hat{F}(x)\right|\leq\frac{1}{P}\operatorname {rank}(\mathbf{X}_{N}-\hat{\mathbf{X}}_{N}).\]
The following Corollary of the above Lemma is used to truncate and centre the original matrix \(\mathbf{X}_{N}\).
**Corollary A.1**.: _Under the same assumptions as Lemma A.1, we have the following bound on the resolvent for all \(z\in\mathbb{C}\backslash\mathbb{R}\),_
\[\left|\operatorname{Tr}\left[\left(z-\frac{\mathbf{X}\mathbf{X}^{*}}{N}\right) ^{-1}\right]-\operatorname{Tr}\left[\left(z-\frac{\hat{\mathbf{X}}\hat{ \mathbf{X}}^{*}}{N}\right)^{-1}\right]\right|\leq\frac{\pi}{|\Im z|} \operatorname{rank}(\mathbf{X}-\hat{\mathbf{X}}).\]
Proof.: Using equation (5) and integration by parts, we can get
\[\frac{1}{P}\left|\operatorname{Tr}\left[\left(z-\frac{\mathbf{X }\mathbf{X}^{*}}{N}\right)^{-1}\right]-\operatorname{Tr}\left[\left(z-\frac{ \hat{\mathbf{X}}\hat{\mathbf{X}}^{*}}{N}\right)^{-1}\right]\right|\] \[= \left|\int_{-\infty}^{+\infty}\frac{1}{z-\lambda}\operatorname{d }\left(F(\lambda)-\hat{F}(\lambda)\right)\right|=\left|\int_{-\infty}^{+ \infty}\frac{F(\lambda)-\hat{F}(\lambda)}{(z-\lambda)^{2}}\operatorname{d} \lambda\right|\] \[\leq\int_{-\infty}^{+\infty}\frac{1}{\left|z-\lambda\right|^{2}} \operatorname{d}\lambda\times\sup_{x\in\mathbb{R}}\left|F(x)-\hat{F}(x)\right|.\]
Note that for any \(z\in\mathbb{C}\backslash\mathbb{R}\)
\[\int_{-\infty}^{+\infty}\frac{1}{\left|z-\lambda\right|^{2}}\operatorname{d}\lambda=\int_{-\infty}^{+\infty}\frac{1}{\left(\Re z-\lambda\right)^{2}+\Im z^{2}}\operatorname{d}\lambda=\int_{-\infty}^{+\infty}\frac{1}{x^{2}+\Im z^{2}}\operatorname{d}x=\frac{\pi}{|\Im z|}.\]
Combining the equations above with Lemma A.1, we get the statement of the Corollary.
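As a numerical illustration of Corollary A.1, the sketch below (NumPy, with arbitrary matrix sizes and an arbitrary rank-\(r\) perturbation chosen purely for illustration) compares the change in the resolvent trace against \(\frac{\pi}{|\Im z|}\operatorname{rank}(\mathbf{X}-\hat{\mathbf{X}})\).

```python
import numpy as np

# Sanity check of Corollary A.1: a low-rank change of X moves the resolvent
# trace of XX*/N by at most (pi / |Im z|) * rank(X - Xhat).
rng = np.random.default_rng(0)
P, N, r = 60, 120, 3                      # illustrative sizes; r = rank of the perturbation
X = rng.standard_normal((P, N))
Xhat = X + rng.standard_normal((P, r)) @ rng.standard_normal((r, N))

z = 1.5 + 0.2j
def resolvent_trace(A):
    return np.trace(np.linalg.inv(z * np.eye(P) - A @ A.conj().T / N))

lhs = abs(resolvent_trace(X) - resolvent_trace(Xhat))
rhs = np.pi / abs(z.imag) * np.linalg.matrix_rank(X - Xhat)
print(f"|Tr G - Tr G_hat| = {lhs:.3f} <= {rhs:.3f}")
```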
**Lemma A.2**.: _Let \(\mathbf{X}\) be a \(T\times S\) matrix and let \(\mathbf{x}_{i}\) be its \(i\)-th column. Then_
\[\left(\frac{1}{z-\mathbf{X}^{*}\mathbf{X}}\right)_{ii}=\frac{1}{z-\mathbf{x}_{i}^{*}\frac{z}{z-\left(\mathbf{X}\mathbf{X}^{*}-\mathbf{x}_{i}\mathbf{x}_{i}^{*}\right)}\mathbf{x}_{i}} \tag{132}\]
**Lemma A.3** (Marchenko-Pastur law application).: _For the random matrix \(\mathbf{X}_{N}\) as in Definition 2.2,_
\[\left(\frac{1}{z-\frac{\mathbf{X}_{N}\mathbf{X}_{N}^{*}}{N}}\right)_{ii} \underset{D\rightarrow\infty}{\rightarrow}\frac{1}{z-\frac{z}{y}m_{1/y} \left(\frac{z}{y}\right)}. \tag{133}\]
_uniformly on \(i\) in probability._
_Remark_.: Notice that equation (11) leads to
\[\frac{1}{z-\frac{z}{y}m_{1/y}\left(\frac{z}{y}\right)}=\frac{1}{z-\left((1-y) -zym_{y}(z)\right)}=m_{y}(z).\]
Thus,
\[\left(\frac{1}{z-\frac{\mathbf{X}_{N}\mathbf{X}_{N}^{*}}{N}}\right)_{ii} \underset{D\rightarrow\infty}{\rightarrow}m_{y}(z). \tag{134}\]
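The concentration stated in Lemma A.3 is easy to observe numerically. The sketch below (NumPy, with Gaussian entries and sizes chosen only for illustration) compares the diagonal resolvent entries with the normalized trace, which converges to the same limit \(m_{y}(z)\).

```python
import numpy as np

# Illustration of Lemma A.3: the diagonal entries of (z - XX*/N)^{-1}
# concentrate around a single deterministic value; here we compare them to the
# normalized trace (the empirical Stieltjes transform), which has the same limit.
rng = np.random.default_rng(1)
P, N = 400, 800                        # y = P/N = 0.5
X = rng.standard_normal((P, N))
z = 2.0 + 0.5j

G = np.linalg.inv(z * np.eye(P) - X @ X.T / N)
diag, m_emp = np.diag(G), np.trace(G) / P
print("normalized trace      :", m_emp)
print("max |G_ii - Tr G / P| :", np.abs(diag - m_emp).max())   # shrinks as P, N grow
```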
**Lemma A.4**.: _For \(\mathbf{A}=\mathbf{X}\mathbf{X}^{*},\) where \(\mathbf{X}\) is any matrix with at least \(1\) non-zero element_
1. \(\operatorname{sgn}\Im\left(z\operatorname{Tr}\mathbf{G}_{\mathbf{A}}(z) \right)=-\operatorname{sgn}\Im z.\)__
2. \(\operatorname{sgn}\Im\left(z\left(\mathbf{G}_{\mathbf{A}}(z)\right)_{ii} \right)=-\operatorname{sgn}\Im z\)__
Proof.: The first part can be seen from the positivity of the eigenvalues of \(\mathbf{A}.\) The second part follows from the eigenvalue positivity of the sample covariance matrix and Lemma A.2.
**Lemma A.5**.: _For any Hermitian matrix \(\mathbf{A}\), the following holds:_
\[\left|\mathbf{G}_{\mathbf{A}}(z)_{ii}\right|\leq\frac{1}{\left|\Im z\right|}.\]
**Lemma A.6** (Lemma 8.3 in [1], Ward identity).: _For any Hermitian matrix \(\mathbf{A}\) of size \(N\times N\), the following holds:_
\[\sum_{j=1}^{N}\left|\mathbf{G}_{\mathbf{A}}(z)_{ij}\right|^{2}=-\frac{1}{\Im z }\Im\left(\mathbf{G}_{\mathbf{A}}(z)_{ii}\right)\]
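Lemma A.6 is a purely algebraic identity and can be verified directly; the snippet below (NumPy, with an arbitrary random Hermitian matrix) checks it for one row of the resolvent.

```python
import numpy as np

# Check of the Ward identity for G_A(z) = (z - A)^{-1} with A Hermitian.
rng = np.random.default_rng(2)
n = 50
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = (B + B.conj().T) / 2
z = 0.3 + 0.7j

G = np.linalg.inv(z * np.eye(n) - A)
i = 0
lhs = np.sum(np.abs(G[i, :]) ** 2)
rhs = -G[i, i].imag / z.imag
print(lhs, rhs)                          # the two sides coincide up to rounding
```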
**Lemma A.7**.: _Suppose that \(X(z)\) is a real continuously differentiable random process for \(z\in\Omega,\) where \(\Omega\) is some open domain, and there exists \(C\) such that with probability \(1\) for all \(a,b\in\mathbb{R}\) such that \(a+\mathbf{i}b=z\in\Omega\)_
\[\begin{cases}|\nabla_{a,b}X\left(a+\mathbf{i}b\right)|\leq C,\\ X(z)\leq C\end{cases} \tag{135}\]
_Then for any \(\sigma\)-algebra \(\mathcal{F}\)_
\[\frac{\partial}{\partial z}\mathbb{E}\left(X(z)\mid\mathcal{F}\right)= \mathbb{E}\left(\frac{\partial}{\partial z}X(z)\mid\mathcal{F}\right). \tag{136}\]
**Lemma A.8**.: _For each harmonic function \(u\) on \(\mathcal{B}(0,1)\), the following holds:_
\[|\nabla u(0,0)|\leq 2\max_{\theta}|u(\cos\theta,\sin\theta)|\]
Proof.: By the Poisson integral formula,
\[u(re^{\mathbf{i}\theta})=\frac{1}{2\pi}\int_{-\pi}^{\pi}P_{r}(t-\theta)u(e^{\mathbf{i}t})dt\text{ for }0\leq r<1,\]
where \(P_{r}(\theta)=\sum_{n=-\infty}^{\infty}r^{|n|}e^{\mathbf{i}n\theta}\) is the Poisson kernel. Moreover,
\[|\nabla u(0,0)|=\max_{\theta}\frac{d}{dr}u(re^{\mathbf{i}\theta})\mid_{r=0},\]
and it is easy to see that \(\left|\frac{d}{dr}P_{r}(\theta)\mid_{r=0}\right|\leq 2.\) Thus,
\[\left|\frac{d}{dr}\frac{1}{2\pi}\int_{-\pi}^{\pi}P_{r}(t-\theta)u(e^{\mathbf{i}t})dt\mid_{r=0}\right|\leq 2\max_{t}|u(e^{\mathbf{i}t})|,\]
which gives the statement of the Lemma.
**Corollary A.2**.: _Suppose that a holomorphic function \(g(z)\) is defined on \(\mathbb{C}\backslash\mathbb{R}\) and satisfies_
\[|\Re g(z)|\leq C\frac{|z|}{|\Im z|}\]
_for all \(z\in\mathbb{C}\backslash\mathbb{R},\) where \(C\) is any constant not depending on \(z.\) Then,_
\[\left|\frac{\partial}{\partial z}\Re g(z)\right|\leq 8C\frac{|z|}{|\Im z|^{2}}\]
_for all \(z\in\mathbb{C}\backslash\mathbb{R}.\)_
Proof.: Notice that \(\Re g(z)\) is a harmonic function. Applying Lemma A.8 to the ball with centre \(z\) and radius \(|\Im z|/2\), we can check that
\[\left|\frac{\partial}{\partial z}\Re g(z)\right|\leq|\nabla_{x,y}\Re g(x+ \mathbf{i}y)|\leq\frac{1}{|\Im z|/2}\max_{w\in\mathcal{B}\left(z,\frac{|\Im z |}{2}\right)}|\Re g(w)|\leq 8C\frac{|z|}{|\Im z|^{2}}\]
**Lemma A.9** (CLT for martingales,Th. 35.12 in Billingsley (1995)).: _Suppose for each \(n\)\(Y_{n1},Y_{n2},\ldots Y_{nr_{n}}\) is a real martingale difference sequence with respect to the increasing \(\sigma\)-field \(\{\mathcal{F}_{nj}\}\) having second moments. If for each \(\varepsilon>0\),_
\[\sum_{j=1}^{r_{n}}\mathbb{E}\left(Y_{nj}^{2}I_{(|Y_{nj}|\geq \varepsilon)}\right)\to 0\]
\[\sum_{j=1}^{r_{n}}\mathbb{E}\left(Y_{nj}^{2}\mid\mathcal{F}_{n,j-1}\right) \stackrel{{ i.p.}}{{\longrightarrow}}\sigma^{2},\]
_as \(n\to\infty\), where \(\sigma^{2}\) is a positive constant, then_
\[\sum_{j=1}^{r_{n}}Y_{nj}\stackrel{{ D}}{{\to}}N\left(0,\sigma^{2}\right)\]
**Lemma A.10**.:
1. _Suppose that the array of random variables_ \(X_{N}^{(1)},X_{N}^{(2)},\ldots X_{N}^{(N)}\) _is such that_ \[X_{N}^{(k)}\underset{N\to\infty}{\to}0,\] _uniformly on_ \(k\) _in probability, and there exists constant_ \(C\) _such that_ \(|X_{N}^{(k)}|<C\) _for all_ \(k,N.\) _Then_ \[\frac{X_{N}^{(1)}+X_{N}^{(2)}+\ldots X_{N}^{(N)}}{N}\stackrel{{ \mathbb{P}}}{{\to}}0.\]
2. _Suppose that the array of random variables_ \(\left(X_{N}^{(k,i)}\right)\) _is such that_ \[X_{N}^{(k,i)}\underset{N\to\infty}{\to}0,\] _uniformly on_ \((k,i)\) _in probability, and there exists constant_ \(C\) _such that_ \(|X_{N}^{(k,i)}|<C\) _for all_ \(k,N.\) _Then_ \[Y_{N}^{(k)}=\frac{X_{N}^{(k,1)}+X_{N}^{(k,2)}+\ldots X_{N}^{(k,N)}}{N} \underset{N\to\infty}{\to}0.\] _in probability uniformly on_ \(k.\)__
3. _Suppose that the array of random variables_ \(X_{N}^{(1)},X_{N}^{(2)},\ldots X_{N}^{(N)}\) _is such that_ \[\frac{\sum_{k=1}^{N}X_{N}^{(k)}}{N}\underset{N\to\infty}{\to}0,\] _uniformly in probability, and there exists constant_ \(C\) _such that_ \(|X_{N}^{(k)}|<C\) _for all_ \(k,N.\)__
Proof.: For any \(\epsilon>0\) there exist \(N_{0}\) such that for all \(N>N_{0}\)
\[\max_{1\leq k\leq N}\mathbb{P}\left\{|X_{N}^{(k)}|\geq\epsilon\right\}<\epsilon,\]
which leads to \(\mathbb{E}\left|X_{N}^{(k)}\right|\leq C\epsilon+\epsilon\), thus
\[\mathbb{E}\left[\left|\frac{X_{N}^{(1)}+X_{N}^{(2)}+\cdots+X_{N}^{(N)}}{N} \right|\right]\leq\epsilon C+\epsilon.\]
and
\[\mathbb{P}\left[\left|\frac{X_{N}^{(1)}+X_{N}^{(2)}+\cdots+X_{N}^{(N)}}{N} \right|>\sqrt{\epsilon}\right]\leq(C+1)\sqrt{\epsilon},\]
which proves the first part of the Lemma. Also, the second part of the Lemma can be derived from the computations above.
**Lemma A.11** (Cauchy inequality).: _Suppose that a function \(f(z,w)\) is analytic in both \(z\) and \(w\) for \(z,w\in\mathbb{C}\backslash\mathbb{R}.\) Then_
\[\left|\frac{\partial}{\partial z}\frac{\partial}{\partial w}f(z,w)\right|\leq 4 \frac{1}{|\Im z||\Im w|}\max_{\begin{subarray}{c}|\tilde{z}-z|=0.5|\Im z|\\ |\tilde{w}-w|=0.5|\Im w|\end{subarray}}\left|f\left(\tilde{z},\tilde{w}\right) \right|. \tag{137}\]
Proof.: By the Cauchy integral formula,
\[\frac{\partial}{\partial z}\frac{\partial}{\partial w}f(z,w)=\frac{1}{-4\pi^{2}}\iint_{\begin{subarray}{c}|\tilde{z}-z|=0.5|\Im z|\\ |\tilde{w}-w|=0.5|\Im w|\end{subarray}}\frac{f(\tilde{z},\tilde{w})}{(\tilde{z}-z)^{2}(\tilde{w}-w)^{2}}\,d\tilde{z}\,d\tilde{w}.\]
Thus,
\[\left|\frac{\partial}{\partial z}\frac{\partial}{\partial w}f(z,w)\right|\leq\\ \frac{1}{4\pi^{2}}\times\pi\left|\Im z\right|\times\pi\left|\Im w\right|\times\frac{\max_{\begin{subarray}{c}|\tilde{z}-z|=0.5|\Im z|\\ |\tilde{w}-w|=0.5|\Im w|\end{subarray}}\left|f\left(\tilde{z},\tilde{w}\right)\right|}{0.25|\Im z|^{2}\times 0.25|\Im w|^{2}}=\\ 4\frac{1}{|\Im z||\Im w|}\max_{\begin{subarray}{c}|\tilde{z}-z|=0.5|\Im z|\\ |\tilde{w}-w|=0.5|\Im w|\end{subarray}}\left|f\left(\tilde{z},\tilde{w}\right)\right|.\]
**Lemma A.12**.: _Suppose that \(f_{N}^{(k)}(z,w)\) is a sequence of random functions, analytic in \(z,w\in\mathbb{C}\backslash\mathbb{R}\), such that for all fixed \(z,w\in\mathbb{C}\backslash\mathbb{R}\)_
\[f_{N}^{(k)}(z,w)\underset{N\rightarrow\infty}{\rightarrow}0\]
_uniformly on \(k\) and for all \(k,N\) with probability \(1\) for all \(z,w\in\mathbb{C}\backslash\mathbb{R}\)_
\[\left|f_{N}^{(k)}(z,w)\right|\leq D(z,w)\]
_where \(D(z,w)\) is a continuous function on \(z,w\in\mathbb{C}\backslash\mathbb{R}.\) Then for all \(z,w\in\mathbb{C}\backslash\mathbb{R}\)_
\[\frac{\partial}{\partial z}\frac{\partial}{\partial w}f_{N}^{(k)}(z,w)\underset {N\rightarrow\infty}{\rightarrow}0\]
_uniformly on k in probability._
Proof.: Since \(D(z,w)\) is continuous, there exists a constant \(M\) such that \(|f_{N}^{(k)}(\tilde{z},\tilde{w})|\leq M\) for \(\tilde{z}:|\tilde{z}-z|\leq 0.8|\Im z|\) and \(\tilde{w}:|\tilde{w}-w|\leq 0.8|\Im w|.\) Thus, there exists \(C\) such that for all \(z_{1},z_{2}:|z_{i}-z|\leq 0.75|\Im z|\) and \(w_{1},w_{2}:|w_{i}-w|\leq 0.75|\Im w|\)
\[|f(z_{1},w_{1})-f(z_{2},w_{1})|\leq C\left|z_{1}-z_{2}\right| \tag{138}\]
and
\[|f(z_{1},w_{1})-f(z_{1},w_{2})|\leq C\left|w_{1}-w_{2}\right|. \tag{139}\]
For every \(\epsilon>0\) choose a finite collection of \(\tilde{z}_{1},\tilde{z}_{2}\ldots\tilde{z}_{p}\) and \(\tilde{w}_{1},\tilde{w}_{2}\ldots\tilde{w}_{q}\), such that
\[|\tilde{z}-z|=0.5|\Im z|\text{ and }|\tilde{w}-w|=0.5|\Im w| \Rightarrow\\ \exists i<p,j<q:|\tilde{z}-\tilde{z}_{i}|<\epsilon\text{ and }| \tilde{w}-\tilde{w}_{j}|<\epsilon \tag{140}\]
If \(|\tilde{z}-\tilde{z}_{i}|<\epsilon\) and \(|\tilde{w}-\tilde{w}_{j}|<\epsilon\), then, using (138) and (139), we get that
\[|f_{N}^{(k)}(\tilde{z},\tilde{w})-f_{N}^{(k)}(\tilde{z}_{i},\tilde{w}_{j})|\leq\\ |f_{N}^{(k)}(\tilde{z},\tilde{w})-f_{N}^{(k)}(\tilde{z},\tilde{w}_{j})|+|f_{N}^{(k)}(\tilde{z},\tilde{w}_{j})-f_{N}^{(k)}(\tilde{z}_{i},\tilde{w}_{j})|\leq 2C\epsilon. \tag{141}\]
There exists \(N_{0}\) such that \(\forall N>N_{0}\)\(\max_{k}\mathbb{P}\left[\max_{i,j}|f_{N}^{(k)}(\tilde{z}_{i},\tilde{w}_{j})|> \epsilon\right]<\frac{\epsilon}{pq}\), thus for all \(N>N_{0}\), for all \(k\)
\[\max_{k}\mathbb{P}\left[\max_{\begin{subarray}{c}|\tilde{z}-z|=0.5|\Im z|\\ |\tilde{w}-w|=0.5|\Im w|\end{subarray}}\left|f_{N}^{(k)}(\tilde{z},\tilde{w}) \right|>2C\epsilon+\varepsilon\right]<\epsilon. \tag{142}\]
In this way, using the Cauchy inequality (Lemma A.11), we get the statement of the Lemma.
**Lemma A.13** ([12]).: _If \(\Re(\sigma)<0,\)_
\[\int_{0}^{\infty}\frac{\exp(r\sigma)-r\sigma-1}{r^{\frac{\alpha}{2}+1}}\text { }\mathrm{d}r=(-\sigma)^{\alpha/2}\times\Gamma(-\alpha/2) \tag{143}\]
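Lemma A.13 can also be checked by direct quadrature. The sketch below (SciPy) uses a real \(\sigma<0\) and a value of \(\alpha\) strictly between \(2\) and \(4\), for which the integral converges at both endpoints; both choices are made purely for illustration.

```python
import numpy as np
from math import gamma
from scipy.integrate import quad

# Numerical check of Lemma A.13 for sigma = -1.3 and alpha = 3.
alpha, sigma = 3.0, -1.3
integrand = lambda r: (np.exp(r * sigma) - r * sigma - 1) / r ** (alpha / 2 + 1)

lhs = quad(integrand, 0, 1)[0] + quad(integrand, 1, np.inf)[0]
rhs = (-sigma) ** (alpha / 2) * gamma(-alpha / 2)
print(lhs, rhs)                          # both approximately 3.50
```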
**Lemma A.14**.: _If \(\Re\sigma_{1},\ \Re\sigma_{2}>0\)_
\[\int_{0}^{\infty}\frac{e^{-\sigma_{1}t}-e^{-\sigma_{2}t}}{t}\,\mathrm{d}\,t=- \log\sigma_{1}+\log\sigma_{2}=-\log\frac{\sigma_{1}}{\sigma_{2}} \tag{144}\]
**Lemma A.15** ([13]).: _For \(\sigma_{1}\) and \(\sigma_{2}\in\mathbb{C}\backslash\mathbb{R}\)_
\[\int_{0}^{\infty}\frac{r^{\frac{\alpha}{2}-1}}{(r-\sigma_{1})(r-\sigma_{2})}\,\mathrm{d}r=\frac{\pi\left((-\sigma_{1})^{\alpha/2}\sigma_{2}-(-\sigma_{2})^{\alpha/2}\sigma_{1}\right)}{\sin\left(\frac{\pi\alpha}{2}\right)\sigma_{2}\sigma_{1}(\sigma_{2}-\sigma_{1})}. \tag{145}\]
## Acknowledgements
I am very grateful to Asad Lodhia for his invaluable assistance with my paper. His expertise and guidance significantly improved the rigorousness of the proofs provided, and I cannot thank him enough for his kindness and support. Additionally, I would like to express my heartfelt gratitude to my supervisor, Anna Maltsev, for her continuous encouragement and feedback throughout the entire process.
|
2302.13547 | Nonlinearity effect on Joule-Thomson expansion of
Einstein-power-Yang-Mills AdS black hole | Considering the nonlinearity of the Yang Mills charge, we investigate the
Joule-Thomson expansion for the Einstein-Power-Yang-Mills AdS black holes in
the context of the gauge-gravity duality. Under this framework, we calculate
the Joule-Thomson coefficient, describe all relevant inversion and isenthalpic
curves in the temperature-pressure plane that determining in this manner the
corresponding cooling and heating regions. Finally, we analyze the effect of
the charge nonlinearity on the Joule-Thomson expansion. | Yun-Zhi Du, Xiao-Yang Liu, Yang Zhang, Li Zhao, Qiang Gu | 2023-02-27T06:59:32Z | http://arxiv.org/abs/2302.13547v1 | # Nonlinearity effect on Joule-Thomson expansion of Einstein-power-Yang-Mills AdS black hole
###### Abstract
Considering the nonlinearity of the Yang-Mills charge, we investigate the Joule-Thomson expansion for Einstein-Power-Yang-Mills AdS black holes in the context of the gauge-gravity duality. Under this framework, we calculate the Joule-Thomson coefficient and describe all relevant inversion and isenthalpic curves in the temperature-pressure plane, determining in this manner the corresponding cooling and heating regions. Finally, we analyze the effect of the charge nonlinearity on the Joule-Thomson expansion.
## I Introduction
In recent decades, it has been confirmed that black holes are thermodynamic systems [1; 2; 3], whose area is related to entropy and whose surface gravity is connected with temperature [4; 5]. The next important step is to establish a theory of quantum gravity. A negative cosmological constant in an anti-de Sitter (AdS) spacetime with a black hole leads to black hole phase transitions [6; 7], and in the extended phase space the quantity conjugate to the pressure is the thermodynamic volume [8]. The physical implication is related to holography, where a black hole is regarded as a system dual to a conformal field theory [9]. This allows the thermodynamics of AdS black holes to parallel that of ordinary systems and makes it more complete. In particular, there exist several different types of phase transition: the Van der Waals (VdW)-like phase transition [10; 11; 12], reentrant phase transitions [13; 14], the polymer-like phase transition [15], and triple points [16; 17], along with the novel dual relation of the HP phase transition [18]. Meanwhile, the inclusion of the pressure-volume term in the thermodynamic first law promotes other model parameters to novel thermodynamic quantities [11] and makes it possible to regard AdS black holes as heat engines [19; 20]. All of these developments belong to the subdiscipline of black hole chemistry [21].
In classical thermodynamics, there is a well-known process named the Joule-Thomson (JT) expansion, in which a gas moves from a region of high pressure to a region of low pressure at fixed enthalpy. Based on this, the JT effect of the charged AdS black hole was first investigated in ref. [22]. Subsequently, the JT expansion became an active topic, attracted more attention, and was extended to the study of other black holes [23; 24; 25; 26; 27; 28]. Additionally, at the linear level, charged black holes in an AdS spacetime near the critical point exhibit the scaling symmetries \(S\sim q^{2},\ P\sim q^{-2},\ T\sim q^{-1}\)[29; 30]. It is natural to ask whether the same scaling symmetry still holds for non-linear charged AdS black holes. There are many generalizations of the linear charged AdS black hole solution: the Einstein-Maxwell-Yang-Mills AdS black hole [31], the Einstein-Power-Yang-Mills AdS black hole [32], the Einstein-Maxwell-Power-Yang-Mills AdS black hole [33], the Einstein-Yang-Mills-Gauss-Bonnet black hole [34], the Einstein-power-Maxwell-power-Yang-Mills-dilaton black hole [35], and so on. An interesting non-linear generalization of charged black holes involves a Yang-Mills field coupled to Einstein gravity through a power of its invariant (i.e., the Einstein-Power-Yang-Mills gravity theory), because it possesses conformal invariance and makes it easy to construct analogues of the four-dimensional Reissner-Nordstrom black hole solutions in higher dimensions. Additionally, several thermodynamic features of the EPYM AdS black hole in the extended phase space have been exhibited [33; 36; 37]. Here we focus on the JT expansion for the non-linear charged AdS black hole in this theory.
In this paper we study and discuss the Joule-Thomson expansion for black holes in a model of nonlinear electrodynamics (NED) coupled to gravity in AdS spacetime. The interest in the NED model [38] considered here is due to its simplicity: the metric function is expressed via simple elementary functions. This model was explored to study the supermassive black hole M87* [38] and to construct a non-singular model of a magnetized black hole [39]. In Sec. II, we briefly review the EPYM AdS black hole solution and its Hawking temperature. In Sec. III, we investigate the Joule-Thomson expansion for the EPYM AdS black hole. A brief summary is given in Sec. IV.
## II EPYM AdS black hole and Hawking temperature
The action for four-dimensional Einstein-power-Yang-Mills (EPYM) gravity with a cosmological constant \(\Lambda\) was given by [32; 40; 41; 42]
\[I=\frac{1}{2}\int d^{4}x\sqrt{g}\left(R-2\Lambda-\mathcal{F}^{ \gamma}\right) \tag{1}\]
with the Yang-Mills (YM) invariant \(\mathcal{F}\) and the YM field \(F^{(a)}_{\mu\nu}\)
\[\mathcal{F} = \mathrm{Tr}(F^{(a)}_{\mu\nu}F^{(a)\mu\nu}), \tag{2}\] \[F^{(a)}_{\mu\nu} = \partial_{\mu}A^{(a)}_{\nu}-\partial_{\nu}A^{(a)}_{\mu}+\frac{1}{ 2\xi}C^{(a)}_{(b)(c)}A^{(b)}_{\mu}A^{(c)}_{\nu}. \tag{3}\]
Here, \(\mathrm{Tr}(F^{(a)}_{\mu\nu}F^{(a)\mu\nu})=\sum_{a=1}^{3}F^{(a)}_{\mu\nu}F^{(a )\mu\nu}\), \(R\) and \(\gamma\) are the scalar curvature and a positive real parameter, respectively; \(C^{(a)}_{(b)(c)}\) represents the structure constants of three-parameter Lie group \(G\); \(\xi\) is the coupling constant; and \(A^{(a)}_{\mu}\) represents the \(SO(3)\) gauge group Yang-Mills (YM) potentials defining by the Wu-Yang (WY) ansatz [43]. Variation of the action with respect to the spacetime metric \(g_{\mu\nu}\) yields the field equations
\[G^{\mu}{}_{\nu}+\Lambda\delta^{\mu}{}_{\nu}=T^{\mu}{}_{\nu}, \tag{4}\] \[T^{\mu}{}_{\nu}=-\frac{1}{2}\left(\delta^{\mu}{}_{\nu}\mathcal{F}^{\gamma}-4\gamma\,\mathrm{Tr}\left(F^{(a)}_{\nu\lambda}F^{(a)\mu\lambda}\right)\mathcal{F}^{\gamma-1}\right). \tag{5}\]
Variation with respect to the 1-form YM gauge potentials \(A^{(a)}_{\mu}\), implementing the traceless condition, yields the 2-form YM equations
\[\mathbf{d}\left({}^{\star}\mathbf{F}^{(a)}\mathcal{F}^{\gamma-1 }\right)+\frac{1}{\xi}C^{(a)}_{(b)(c)}\mathcal{F}^{\gamma-1}\mathbf{A}^{(b)} \wedge{}^{\star}\mathbf{F}^{(c)}=0, \tag{6}\]
where \(\mathbf{F}^{(a)}=\frac{1}{2}F^{(a)}_{\mu\nu}dx^{\mu}\wedge dx^{\nu},\ \mathbf{A}^{(b)}=A^{(b)}_{\mu}dx^{\mu}\), and \({}^{\star}\) stands for the Hodge duality. It is obvious that for the case of \(\gamma=1\) the EPYM theory reduces to the standard Einstein-Yang-Mills (EYM) theory [34]. In this work we focus on the role of the non-linear YM charge parameter \(\gamma\).
Here we should point out that the non-Abelian property of the YM gauge field is expressed with its YM potentials
\[\mathbf{A}^{(b)}=\frac{q}{r^{2}}C^{(a)}_{(i)(j)}x^{i}dx^{j},\ r^{2}=\sum_{j=1 }^{3}x_{j}^{2}, \tag{7}\]
and \(q\) is the YM charge, the indices \((a,\ i,\ j)\) run the following ranges: \(1\leq a,\ i,\ j\leq 3\). The coordinates \(x_{i}\) take the following forms: \(x_{1}=r\cos\phi\sin\theta,\ x_{2}=r\sin\phi\sin\theta,\ x_{3}=r\cos\theta.\) Since we have utilized the WY ansatz for the YM field, the invariant for this field takes the form [44; 45]
\[\mathrm{Tr}(F^{(a)}_{\mu\nu}F^{(a)\mu\nu})=\frac{q^{2}}{r^{4}}. \tag{8}\]
This form leads to the disappearance of the structure constants, which describe the non-Abelian property of the YM gauge field. Therefore, under the condition of the WY ansatz we may focus on the role of the non-linear YM charge parameter, instead of the non-Abelian character parameter.
The metric for the four-dimensional EPYM AdS black hole is given as follows [46],
\[ds^{2}=-f(r)dt^{2}+f^{-1}dr^{2}+r^{2}d\Omega_{2}^{2}, \tag{9}\]
where
\[f(r)=1-\frac{2\bar{M}}{r}+\frac{r^{2}}{l^{2}}+\frac{\left(2q^{2} \right)^{\gamma}}{2(4\gamma-3)r^{4\gamma-2}}. \tag{10}\]
Here \(d\Omega_{2}^{2}\) is the metric on unit 2-sphere with volume \(4\pi\) and \(q\) is the YM charge, \(l\) is related to the cosmological constant: \(l^{2}=-\frac{3}{\Lambda}\), \(\gamma\) is the non-linear YM charge parameter and satisfies \(\gamma>0\)[41]. The event horizon of the black
hole is obtained from the relation \(f(r_{+})=0\). The mass parameter of the black hole can be expressed in terms of the horizon radius as
\[\bar{M}=\frac{r_{+}}{2}\left(1+\frac{r_{+}^{2}}{l^{2}}+\frac{2^{\gamma-1}q^{2 \gamma}}{(4\gamma-3)r_{+}^{4\gamma-2}}\right). \tag{11}\]
We can also obtain the Hawking temperature of the black hole from eq. (10) as follows
\[T=\frac{1}{4\pi r_{+}}\left(1+8\pi\bar{P}r_{+}^{2}-\frac{\left(2q^{2}\right)^{ \gamma}}{2r_{+}^{(4\gamma-2)}}\right). \tag{12}\]
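Both expressions follow directly from the metric function (10); the short SymPy sketch below reproduces them. Here the symbol \(P\) stands for the pressure \(\bar{P}=3/(8\pi l^{2})\), and the printed results are algebraically equivalent to eqs. (11) and (12) (the output is not guaranteed to appear in exactly the same typeset form).

```python
import sympy as sp

# Recover the mass parameter (11) and the Hawking temperature (12) from f(r).
r, M, l, q, gam, P = sp.symbols('r M l q gamma P', positive=True)
f = 1 - 2*M/r + r**2/l**2 + (2*q**2)**gam / (2*(4*gam - 3)*r**(4*gam - 2))

M_on_shell = sp.solve(sp.Eq(f, 0), M)[0]            # mass parameter from f(r_+) = 0
print(sp.simplify(M_on_shell))

T = sp.diff(f, r).subs(M, M_on_shell) / (4*sp.pi)   # Hawking temperature T = f'(r_+)/(4 pi)
T = sp.simplify(T.subs(l, sp.sqrt(3/(8*sp.pi*P))))  # trade 1/l^2 for the pressure
print(T)
```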
From eqs. (11) and (12) we will calculate the critical values of the thermodynamical quantities, which are presented in Sec. VI. Next we will give the modified first law of the four-dimensional EPYM AdS black hole thermodynamics in natural units (\(\hbar=c=1\)), i.e., the restricted phase space formalism.
## III Joule-Thomson expansion
More recently, the authors of [22] have investigated the Joule-Thomson (JT) expansion for AdS charged black holes with the aim to confront the resulting features with those of Van der Waals fluids. The extension to the charged black hole solution in the presence of the quintessence field [47] and to the rotating AdS black hole [23] has also been considered. The JT expansion [48] is a convenient isenthalpic process that a thermal system exhibits during expansion. It is worth noting that when a thermal system at temperature \(T\) expands, the pressure always decreases, yielding a negative sign for \(\partial P\). In this section, we will investigate the Joule-Thomson expansion of the EPYM AdS black hole. In the JT expansion for the Van der Waals system as well as for AdS black holes, the gas/black hole passes from the high-pressure section through a porous plug or a small valve into the low-pressure section of an adiabatic tube, and the enthalpy remains constant during the expansion. The expansion is characterized by the change in temperature relative to pressure. The JT coefficient, which describes the expansion process, reads as
\[a=\left(\frac{\partial T}{\partial P}\right)_{H}, \tag{13}\]
where the enthalpy is related to the internal energy via
\[H=U+PV. \tag{14}\]
We can judge whether the system is in a cooling process or in a heating process by the sign of the JT coefficient. Namely, if the temperature of the system increases as the pressure decreases, the JT coefficient is negative and the system is in the heating process, while if the temperature decreases as the pressure decreases, the JT coefficient is positive and the system is in the cooling process.
In this part, we will investigate the JT expansion of the EPYM AdS black hole. As we know, when the system undergoes the JT expansion, the enthalpy of the system is fixed. In the extended phase space, the mass parameter of the AdS black hole corresponds to the enthalpy, and it remains constant when the system is in a JT process. The Joule-Thomson coefficient can therefore be expressed as
\[H=\bar{M},\hskip 14.226378pta=\left(\frac{\partial T}{\partial\bar{P}} \right)_{H}=\left(\frac{\partial T}{\partial\bar{P}}\right)_{\bar{M},q}= \left(\frac{\partial T}{\partial r_{+}}\right)_{\bar{M},q}\bigg{/}\left( \frac{\partial\bar{P}}{\partial r_{+}}\right)_{\bar{M},q} \tag{15}\]
In order to study the JT expansion of the system more easily, we rewrite the temperature and the pressure as
\[T = \frac{1}{2\pi r_{+}}\left(-1+\frac{3\bar{M}}{r_{+}}-\frac{\gamma 2^{\gamma}q^{2\gamma}}{(4\gamma-3)r_{+}^{4\gamma-2}}\right), \tag{16}\] \[\bar{P} = \frac{3}{8\pi r_{+}^{2}}\left(-1+\frac{2\bar{M}}{r_{+}}-\frac{2^{\gamma-1}q^{2\gamma}}{(4\gamma-3)r_{+}^{4\gamma-2}}\right). \tag{17}\]
From above equations, the JT coefficient becomes
\[a = \frac{2r_{+}}{3}\frac{1-\frac{6\bar{M}}{r_{+}}+\frac{\gamma(4\gamma-1)2^{\gamma}q^{2\gamma}}{(4\gamma-3)r_{+}^{4\gamma-2}}}{1-\frac{3\bar{M}}{r_{+}}+\frac{\gamma 2^{\gamma}q^{2\gamma}}{(4\gamma-3)r_{+}^{4\gamma-2}}} \tag{18}\] \[= \frac{4r_{+}}{3}\frac{2+8\pi\bar{P}r_{+}^{2}-\frac{\left[2\gamma(4\gamma-1)-3\right]2^{\gamma}q^{2\gamma}}{2(4\gamma-3)r_{+}^{4\gamma-2}}}{1+8\pi\bar{P}r_{+}^{2}-\frac{2^{\gamma}q^{2\gamma}}{2r_{+}^{4\gamma-2}}}.\]
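The first equality in eq. (18) can be reproduced by differentiating eqs. (16) and (17) at fixed \(\bar{M}\) and \(q\); the SymPy sketch below does this and compares the result with the closed form at an arbitrary numerical point (the chosen values of \(r_{+}\), \(\bar{M}\), \(q\) and \(\gamma\) are purely illustrative).

```python
import sympy as sp

# JT coefficient a = (dT/dr_+) / (dP/dr_+) at fixed mass parameter and YM charge.
r, M, q, gam = sp.symbols('r M q gamma', positive=True)
Q = 2**gam * q**(2*gam) / ((4*gam - 3) * r**(4*gam - 2))     # shorthand for the YM term

T = (-1 + 3*M/r - gam*Q) / (2*sp.pi*r)                       # eq. (16)
P = 3*(-1 + 2*M/r - Q/2) / (8*sp.pi*r**2)                    # eq. (17)

a = sp.diff(T, r) / sp.diff(P, r)
a_closed = (2*r/3) * (1 - 6*M/r + gam*(4*gam - 1)*Q) / (1 - 3*M/r + gam*Q)

vals = {r: 1.7, M: 2.3, q: 0.9, gam: 1.2}
print(float(a.subs(vals)), float(a_closed.subs(vals)))       # the two values agree
```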
The JT coefficient is divergent at the point \(r_{+m}\), which satisfies the following equation
\[1+8\pi\bar{P}r_{+m}^{2}-\frac{\left(2q^{2}\right)^{\gamma}}{2r_{+m}^{4\gamma-2}}=0. \tag{19}\]
It is very interesting that at the point \(r_{+m}\) the Hawking temperature in eq. (12) is exactly zero, which indicates that the divergent point of the JT coefficient reveals certain information about the extreme EPYM AdS black hole.
In the following, we will focus on the minimum inverse temperature, the minimum inverse mass parameter, and the isoenthalpic and inverse curves of this system. When the JT coefficient and the pressure are both zero in eq. (18), the horizon radius satisfies
\[r_{i}^{4\gamma-2}=\frac{\left[2\gamma(4\gamma-1)-3\right]2^{\gamma}q^{2\gamma}}{4(4\gamma-3)}, \tag{20}\]
then substituting the above expression into eq. (16), the minimum inverse temperature reads
\[T_{i}^{min}=\frac{8\gamma^{2}-10\gamma+3}{4\pi(8\gamma^{2}-2\gamma-3)}\left(\frac{\left[8\gamma^{2}-2\gamma-3\right]2^{\gamma}q^{2\gamma}}{4(4\gamma-3)}\right)^{-1/(4\gamma-2)}. \tag{21}\]
The ratio between the minimum inverse temperature and the critical one becomes
\[\frac{T_{i}^{min}}{T_{c}}=\frac{\left(8\gamma^{2}-10\gamma+3\right)(4\gamma- 1)}{4\left(8\gamma^{2}-2\gamma-3\right)(2\gamma-1)}\left(\frac{8\gamma^{2}-2 \gamma-3}{4\gamma(4\gamma-3)(4\gamma-1)}\right)^{-1/(4\gamma-2)}. \tag{22}\]
It is obvious that the ratio is independent of the YM charge, and its behavior is exhibited in Fig. 1. Note that as \(\gamma\rightarrow\infty\), the above ratio approaches \(1/2\). For the non-linear YM field (i.e., \(\gamma\neq 1\)), this ratio is not equal to
Figure 1: The behavior of the ratio between the minimum inverse temperature and the critical one with the non-linear YM parameter.
\(1/2\), which is different from the case of the linear YM field in this theory as well as from the Einstein-Maxwell theory [23; 49]. This difference is induced by the non-linear YM field, or perhaps by the modification of the thermodynamical volume. In particular, for \(1<\gamma\) this ratio is bigger than \(1/2\), while it is less than \(1/2\) for \(1/2<\gamma<1\). In addition, when the pressure is zero and the temperature equals the minimum inverse temperature, we can obtain the minimum inverse mass by substituting eq. (20) into eq. (11) as
\[\bar{M}_{min}=\frac{8\gamma^{2}-2\gamma-1}{2\left(8\gamma^{2}-2\gamma-3\right)}\left(\frac{\left(8\gamma^{2}-2\gamma-3\right)2^{\gamma}q^{2\gamma}}{4(4\gamma-3)}\right)^{1/(4\gamma-2)}. \tag{23}\]
Since the black hole mass parameter \(\bar{M}\) is unchanged in the JT process of this system, we can check whether the system can undergo a JT process through the minimum inverse mass parameter. That means a JT process of the system can survive only when \(\bar{M}\geq\bar{M}_{min}\). Note that when \(\gamma\rightarrow\infty\) the limit of the minimum inverse mass approaches \(\frac{1}{2^{3/4}}\) and is independent of the YM charge. Furthermore, for \(3/4<\gamma\), \(\bar{M}_{min}>0\) and it is decreasing with the non-linear YM charge parameter \(\gamma\).
When the JT coefficient is zero, we can obtain the inverse pressure and temperature from eqs. (12) and (18) as
\[\bar{P}_{i} = \frac{1}{8\pi r_{i}^{2}}\left(-2+\frac{\left[2\gamma(4\gamma-1)-3\right]2^{\gamma}q^{2\gamma}}{2(4\gamma-3)r_{i}^{4\gamma-2}}\right), \tag{24}\] \[T_{i} = \frac{1}{4\pi r_{i}}\left(-1+\frac{\gamma 2^{\gamma}q^{2\gamma}}{r_{i}^{4\gamma-2}}\right), \tag{25}\]
where the lower index "\(i\)" denotes the inversion point. Therefore, from the above equations we can exhibit the inverse curve in the \(\bar{P}-T\) plane with different values of the YM charge \(q\) and the non-linear charge parameter \(\gamma\) in Fig. 2. From Fig. 2 we can see that there exists an inverse curve of the EPYM AdS black hole with the given parameters, and it is not a closed one. The inverse temperature is increasing with the increase of \(q\) and \(\gamma\). On the other hand, to better understand the Joule-Thomson expansion from eqs. (16) and (17), we depict the isoenthalpic curves, the inverse curves, and the effects of \(q\), \(\gamma\) on them in Fig. 3. The result shows that the inverse curve divides each isoenthalpic curve into two parts: one is the cooling region with a positive slope of the \(\bar{P}-T\) curve, the other is the heating region with a negative slope of the \(\bar{P}-T\) curve. Both the inverse temperature and pressure are increasing with the increase of the non-linear YM charge parameter, while they are decreasing with the YM charge.
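To illustrate how curves such as those in Fig. 2 can be generated, the sketch below sweeps the radius \(r_{i}\) and evaluates eqs. (24) and (25); the values of \(q\) and \(\gamma\) are illustrative and need not coincide with those used in the figures. At the point where \(\bar{P}_{i}=0\) the parametric curve reproduces the minimum inverse temperature of eq. (21).

```python
import numpy as np

# Trace the inverse (inversion) curve parametrically via eqs. (24)-(25).
q, gam = 1.0, 0.9                                   # illustrative parameter choices

r_i = np.linspace(0.05, 2.0, 2000)
Q = 2**gam * q**(2*gam) / r_i**(4*gam - 2)
P_i = (-2 + (2*gam*(4*gam - 1) - 3) * Q / (2*(4*gam - 3))) / (8*np.pi*r_i**2)
T_i = (-1 + gam*Q) / (4*np.pi*r_i)

# Endpoint P_i = 0 versus the closed form (21) for the minimum inverse temperature.
print("T_i at the last point with P_i >= 0:", T_i[P_i >= 0][-1])
T_min = (8*gam**2 - 10*gam + 3) / (4*np.pi*(8*gam**2 - 2*gam - 3)) * \
        ((8*gam**2 - 2*gam - 3) * 2**gam * q**(2*gam) / (4*(4*gam - 3)))**(-1/(4*gam - 2))
print("closed-form minimum inverse temperature:", T_min)
```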
Figure 2: The inverse curves \(T_{i}-\bar{P}_{i}\) with different values of \(\gamma\) (see the left) and of \(q\) (see the right). In the left the non-linear charge parameter \(\gamma\) is set to \(0.85\) (the black dashed line), \(0.9\) (the red thick line), and \(1\) (the blue thick line). In the right the YM charge is set to \(0.85\) (the black dashed line), \(1\) (the red thick line), and \(1.2\) (the blue thick line).
Figure 3: The isoenthalpic and inverse curves with different values of the mass parameter.
## IV Discussions and conclusions
In this manuscript we have analyzed the Joule-Thomson expansion of the EPYM AdS black hole in the extended phase space. Considering the analogous process in which a gas expands from a higher-pressure section to a lower-pressure one while keeping the enthalpy fixed, we applied it to the EPYM AdS black hole, where the mass parameter is identified with the enthalpy. Through the analysis of the Joule-Thomson coefficient we calculated the minimum inverse temperature and mass parameter, which determine when a Joule-Thomson process of the system can survive. We also presented the inverse curves in the \(\bar{P}-T\) plane and the corresponding isenthalpic curves. Above the inverse curve we obtained the cooling region, while the region below the inverse curve corresponds to the heating one. In particular, the effect of the Yang-Mills charge nonlinearity on the Joule-Thomson expansion was also investigated. The corresponding results can be summarized as follows:
* The ratio between the minimum inverse temperature and the critical temperature is independent of the YM charge and approaches \(1/2\) as \(\gamma\rightarrow\infty\). When \(\gamma=1\) it equals \(1/2\); it is bigger than \(1/2\) for \(1<\gamma\), while it is less than \(1/2\) for \(1/2<\gamma<1\).
* The minimum inverse mass parameter is independent of the YM charge as \(\gamma\rightarrow\infty\); it is positive in the range \(3/4<\gamma\) and decreases with increasing non-linear YM charge parameter.
* Both the inverse temperature and pressure increase with the increase of the non-linear YM charge parameter, while they decrease with the YM charge.
## Acknowledgements
We would like to thank Prof. Ren Zhao for their indispensable discussions and comments. This work was supported by the National Natural Science Foundation of China (Grant No. 12075143), the science foundation of Shanxi datong university(2022Q1) and the teaching reform project of Shanxi datong university ( XJG2022234).
|
2301.12309 | On the Lipschitz Constant of Deep Networks and Double Descent | Existing bounds on the generalization error of deep networks assume some form
of smooth or bounded dependence on the input variable, falling short of
investigating the mechanisms controlling such factors in practice. In this
work, we present an extensive experimental study of the empirical Lipschitz
constant of deep networks undergoing double descent, and highlight
non-monotonic trends strongly correlating with the test error. Building a
connection between parameter-space and input-space gradients for SGD around a
critical point, we isolate two important factors -- namely loss landscape
curvature and distance of parameters from initialization -- respectively
controlling optimization dynamics around a critical point and bounding model
function complexity, even beyond the training data. Our study presents novels
insights on implicit regularization via overparameterization, and effective
model complexity for networks trained in practice. | Matteo Gamba, Hossein Azizpour, Mårten Björkman | 2023-01-28T23:22:49Z | http://arxiv.org/abs/2301.12309v4 | # On the Lipschitz Constant of Deep Networks and Double Descent
###### Abstract
Existing bounds on the generalization error of deep networks assume some form of smooth or bounded dependence on the input variable, falling short of investigating the mechanisms controlling such factors in practice. In this work, we present an extensive experimental study of the empirical Lipschitz constant of deep networks undergoing double descent, and highlight non-monotonic trends strongly correlating with the test error. Building a connection between parameter-space and input-space gradients for SGD around a critical point, we isolate two important factors - namely loss landscape curvature and distance of parameters from initialization - respectively controlling optimization dynamics around a critical point and bounding model function complexity, even beyond the training data. Our study presents novel insights on implicit regularization via overparameterization, and effective model complexity for networks trained in practice.
## 1 Introduction
A longstanding question towards understanding the remarkable generalization ability of deep networks is characterizing the hypothesis class of models _trained in practice_(Hanin & Rolnick, 2019; Novak et al., 2018). Indeed, finding a parameterization that accurately describes the class of generalizing trained networks could shed light on the mechanisms controlling model complexity (Neyshabur et al., 2015a, b).
At present, for a target network topology, constraints on its hypothesis class have been derived from postulated assumptions on the training data (Kawaguchi et al., 2022; Wei & Ma, 2019), loss margins (Bartlett et al., 2017), model architecture (Hanin & Rolnick, 2019, 2019) as well as global optima (Ma & Ying, 2021). Particularly, many existing bounds on the generalization error of deep networks assume bounded dependence of the model function on the input variable, chiefly via uniformly bounded Lipschitz constant (Kawaguchi et al., 2022; Ma & Ying, 2021; Wei & Ma, 2019; Nagarajan & Kolter, 2018; Bartlett et al., 2017).
While such assumption may seem appealing for representing a well-behaved model function for fixed architectures, this view is at odds with the double descent phenomenon (Belkin et al., 2019; Geiger et al., 2019) - whereupon the test error depends non-monotonically on model size - which has been connected to smooth interpolation of training data (Gamba et al., 2022; Bartlett et al., 2020; Belkin et al., 2018).
Hence, a natural question is _whether uniform upper bounds on the Lipschitz constant provide a faithful representation of the hypothesis class of networks trained in practice_.
**Contributions** In this work, (1) we present an empirical investigation of input-space smoothness of deep networks through their Lipschitz constant estimated on the training data, as model size varies; (2) we observe non-monotonic trends for the Lipschitz constant, showing strong correlation with double descent; (3) we establish a theoretical connection between the observed trends and parameter-space dynamics of SGD in terms of fundamental operators and quantities; (4) we present several correlates of double descent, providing insights on the hypothesis class of networks trained in practice and their effective complexity.
**Outline of the Paper** Section 2 describes our experimental setup. Section 3 presents our main results, and connects the Lipschitz constant with parameter-space curvature of the loss landscape and model function. Section 4 discusses broader implications of our results. Finally, section 5 discusses our findings in connection to related works.
## 2 Experimental Details
We study the empirical Lipschitz constant of trained networks under double descent, when model size is controlled by network width. We reproduce the double descent curves of the test error (Belkin et al., 2019) by training a family of ConvNets and ResNet18s (He et al., 2015) on the CIFAR datasets (Krizhevsky et al., 2009) with up to \(20\%\) training labels randomly perturbed. Following Nakkiran et al. (2019), we control model size by increasing the number of learned feature maps \(\omega\) of each convolutional stage in both model families, following the progression \([\omega,2\omega,4\omega,8\omega]\), for \(\omega=1,\ldots 64\). To isolate the role of overparameterization, we remove potential confounders from the optimization process by training all networks with crossentropy loss and SGD with momentum and fixed learning rate, without any explicit regularization (full details in appendix B).
Figure 1 (top) shows the double descent curve for the test error for our experimental setting, with the test error showing the classic U-shaped curve for small models, and a second descent as the degree of parameterization increases further. By disabling explicit regularizers, we ensure that improvement in test error for large models is promoted by overparameterization rather than explicit regularization. Hereafter, we denote with _interpolation threshold_ the smallest model width that perfectly classifies the training data.
## 3 Input-Smoothness Follows Double Descent
We begin by studying the empirical Lipschitz constant of piece-wise linear networks in section 3.1. Section 3.2 presents our main empirical finding, which we theoretically connect to parameter-space gradients in section 3.3. Finally, section 3.4 presents novel bounds on the empirical Lipschitz constant, that capture double descent in practice.
Figure 1: (Top) **Train error** (dashed) and **test error** (solid) for our experimental setting, with the test error undergoing double descent as model size increases. (Left to right) ConvNets trained on CIFAR-10 (left) and CIFAR-100 (middle), and ResNets trained on CIFAR-10 (right). (Bottom) **Empirical Lipschitz constant** for the same models. For each setting, the Lipschitz depends non-monotonically on model size, strongly correlating with double descent. This finding questions the utility and validity of uniformly bounded Lipschitz assumptions in representing the hypothesis class of trained networks.
### Preliminaries
We consider feed-forward networks \(\mathbf{f}(\mathbf{x},\mathbf{\theta}):\mathbb{R}^{d}\times\mathbb{R}^{p}\to \mathbb{R}^{K}\), composing \(L\) affine layers with the continuous piece-wise linear activation ReLU \(\phi(x)=\max\{0,x\}\), interpreted as functions \(\mathbf{f}(\mathbf{x},\mathbf{\theta})=\mathbf{\theta}^{L}\phi(\mathbf{\theta}^{L-1}\phi( \cdots\phi(\mathbf{\theta}^{1}\mathbf{x}+\mathbf{b}^{1}))+\mathbf{b}^{L-1})+ \mathbf{b}^{L}\), where \(\mathbf{\theta}=(\mathrm{vec}(\mathbf{\theta}^{1}),\ldots,\mathrm{vec}(\mathbf{\theta}^ {L}))\in\mathbb{R}^{p}\) represents the vectorized model parameter and \(\mathbf{x}\in\mathbb{R}^{d}\) the input to the network.
For each fixed value of \(\mathbf{\theta}\), \(\mathbf{f}_{\mathbf{\theta}}:\mathbb{R}^{d}\to\mathbb{R}^{K}\) corresponds to a fixed hypothesis in the space \(\mathcal{H}\) of all functions expressible by the network architecture. Each model function \(\mathbf{f}_{\mathbf{\theta}}\) is itself continuous piece-wise linear, and partitions its input space into convex polytopes known as activation regions (Raghu et al., 2017; Montufar et al., 2014), on each of which a linear function is computed. By piece-wise linearity, one can write
\[\mathbf{f}_{\mathbf{\theta}}(\mathbf{x})=\sum_{\epsilon}\mathbb{1}_{ \epsilon}(\mathbf{x})\mathbf{\theta}_{\epsilon}\mathbf{x}+\mathbf{b}_{\epsilon} \tag{1}\]
where the indicator function selects the activation region according to \(\mathbf{x}\), and \(\mathbf{\theta}_{\epsilon}\) represents conditioning the factorization \(\mathbf{\theta}_{\epsilon}:=\prod_{\ell=1}^{L}\mathrm{diag}(\phi_{\mathbf{x}}^{\ell})\mathbf{\theta}^{\ell}\) by the binary activation pattern \(\phi_{\mathbf{x}}^{\ell}\) of each ReLU according to the preactivation of the corresponding layer \(\ell\), dependent on the input \(\mathbf{x}\) to the network 2. Particularly, for an input \(\overline{\mathbf{x}}\in\mathbb{R}^{d}\), evaluating the Jacobian \(\nabla_{\mathbf{x}}\mathbf{f}_{\mathbf{\theta}}\) at \(\overline{\mathbf{x}}\) yields \(\mathbf{\theta}_{\epsilon}\), i.e. the linear function computed by \(\mathbf{f}_{\mathbf{\theta}}\) on the activation region \(\epsilon\) containing \(\overline{\mathbf{x}}\). Hence, given a dataset \(\mathcal{D}=\{(\mathbf{x}_{n},y_{n})\}_{n=1}^{N}\), the empirical Lipschitz of \(\mathbf{f}_{\mathbf{\theta}}\) on \(\mathcal{D}\) can be estimated by computing the expected operator norm 3
Footnote 2: A similar conditioning is applied to compute the bias term \(\mathbf{b}_{\epsilon}\).
Footnote 3: The dependency of \(\mathbf{\theta}_{\epsilon}\) from each sample \(\mathbf{x}_{n}\) is denoted by the activation region index \(\epsilon_{n}\).
\[(\mathbb{E}_{\mathcal{D}}\|\nabla_{\mathbf{x}}\mathbf{f}_{\mathbf{ \theta}}\|_{2}^{2})^{\frac{1}{2}}:=\Big{(}\frac{1}{N}\sum_{n=1}^{N}\sup_{ \mathbf{x}:\|\mathbf{x}\|\neq 0}\frac{\|\mathbf{\theta}_{\epsilon_{n}}\mathbf{x}\|_{2}^{2 }}{\|\mathbf{x}\|_{2}^{2}}\Big{)}^{\frac{1}{2}} \tag{2}\]
representing the expected largest change propagated by the function on activation regions covering \(\mathcal{D}\), and can be thought of as a measure of scale of \(\mathbf{f}_{\mathbf{\theta}}\). Appendix D outlines a procedure for estimating the operator norm in practice.
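As a concrete illustration of eq. (2), the sketch below computes the estimator for a toy two-layer ReLU network on synthetic inputs; the architecture, data, and initialization are arbitrary stand-ins, and the procedure of appendix D used for the ConvNets and ResNets in our experiments is more involved.

```python
import numpy as np

# Empirical Lipschitz of a toy ReLU network: on each activation region the
# Jacobian is the masked product of weight matrices, eq. (1); eq. (2) averages
# its squared spectral norm over the data.
rng = np.random.default_rng(0)
d, h, K, N = 10, 32, 3, 256
W1 = rng.standard_normal((h, d)) / np.sqrt(d)
b1 = np.zeros(h)
W2 = rng.standard_normal((K, h)) / np.sqrt(h)
X = rng.standard_normal((N, d))                     # stand-in for training inputs

def jacobian(x):
    mask = (W1 @ x + b1 > 0).astype(float)          # binary activation pattern at x
    return W2 @ (mask[:, None] * W1)                # theta_eps = W2 diag(mask) W1

lipschitz = np.sqrt(np.mean([np.linalg.norm(jacobian(x), 2) ** 2 for x in X]))
print("empirical Lipschitz estimate:", lipschitz)
```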
### Input Smoothness of Piece-wise Linear Networks
Next, we compute Equation 2 for deep networks trained in practice. Figure 1 (bottom) presents our main result:
_the empirical Lipschitz constant of deep networks is non-monotonic in model size, increasing until the interpolation threshold, and then decreasing afterward, strongly correlating with the test error_.
The empirical Lipschitz also correlates with hardness of the learning task, with interpolation threshold shifting towards larger models as label noise increases. The observed trends are consistent across all considered architectures, datasets, and noise settings.
This finding sheds light on the effective complexity of trained networks in relation to model size, contradicting the view of uniformly upper bounded Lipschitz constant adopted in many theoretical works (Kawaguchi et al., 2022; Ma and Ying, 2021; Wei and Ma, 2019; Nagarajan and Kolter, 2018; Bartlett et al., 2017), which miss the observed non-monotonicity, by taking the Lipschitz as a uniform constant.
The observed trends highlight a strong correlation between increased relative smoothness of the model function and its generalization ability, as well as dependency of the phenomenon on model size. It is important to note that the measure is computed exclusively on _training data_, and exhibits non-monotonic behaviour despite the training loss being monotonically decreasing with model size.
With the main message of the paper established, in the following sections we draw formal connections between the empirical Lipschitz and parameter-space regularity, and finally present and discuss further observations that offer deeper insights on model complexity and double descent.
### Connection to Parameter-Space Dynamics
In this section, we connect input-space and parameter-space dynamics of SGD. We defer all proofs to appendix F. Let \(\mathbf{x}^{\ell}=\phi(\mathbf{\theta}^{\ell}\mathbf{x}^{\ell-1}+\mathbf{b}^{\ell})\) denote the output of the \(\ell\)-th layer, for \(\ell=1,\ldots,L\), with \(\mathbf{x}^{0}=\mathbf{x}\in\mathbb{R}^{d}\). We begin by noting that linear layers enjoy a duality between their input and parameters, for which \(\frac{\partial\mathbf{x}^{\ell}}{\partial\mathbf{x}^{\ell-1}}\mathbf{x}^{\ell -1}{}^{T}=\mathbf{\theta}^{\ell}\frac{\partial\mathbf{x}^{\ell}}{\partial\mathbf{ \theta}^{\ell}}\), which implies the following statement.
**Theorem 1**.: _Let \(\mathbf{f}\) denote a neural network with at least one hidden layer, with \(\|\mathbf{\theta}^{1}\|>0\) and arbitrary weights \(\mathbf{\theta}^{2},\ldots,\mathbf{\theta}^{L}\). Let \(x_{\min}:=\min\limits_{\mathbf{x}_{n}\in\mathcal{D}}\|\mathbf{x}_{n}\|_{2}\). Then, parameter-space gradients bound input-space gradients of \(\mathbf{f}\) from above:_
\[\frac{x_{\min}^{2}}{\|\mathbf{\theta}^{1}\|_{2}^{2}}\mathbb{E}_{\mathcal{D}}\| \nabla_{\mathbf{x}}\mathbf{f}\|_{2}^{2}\leq\mathbb{E}_{\mathcal{D}}\|\nabla_{ \mathbf{\theta}}\mathbf{f}\|_{2}^{2}\,. \tag{3}\]
Crucially, the bound highlights the implicit regularization effect of parameter-space gradients on input-space gradients, by regularizing the empirical Lipschitz of \(\mathbf{f}_{\mathbf{\theta}}\) on the training data. Section 4 discusses implications beyond training data.
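The inequality can also be checked numerically on toy models. The sketch below does so for a scalar-output two-layer ReLU network; the norm conventions (spectral norm for \(\mathbf{\theta}^{1}\), Euclidean norm for the flattened parameter gradient) and the synthetic data are assumptions made only for this illustration.

```python
import numpy as np

# Toy check of eq. (3) for f(x) = w2 . relu(W1 x + b1) + b2.
rng = np.random.default_rng(1)
d, h, N = 8, 16, 128
W1, b1 = rng.standard_normal((h, d)), rng.standard_normal(h)
w2, b2 = rng.standard_normal(h), rng.standard_normal()
X = rng.standard_normal((N, d))

def sq_grad_norms(x):
    pre = W1 @ x + b1
    v = w2 * (pre > 0)                                 # w2 masked by the ReLU pattern
    g_x = W1.T @ v                                     # input-space gradient
    g_theta = np.concatenate([np.outer(v, x).ravel(),  # dW1
                              v,                       # db1
                              np.maximum(pre, 0),      # dw2
                              [1.0]])                  # db2
    return np.sum(g_x ** 2), np.sum(g_theta ** 2)

gx2, gth2 = np.mean([sq_grad_norms(x) for x in X], axis=0)
x_min = min(np.linalg.norm(x) for x in X)
print(x_min ** 2 / np.linalg.norm(W1, 2) ** 2 * gx2, "<=", gth2)
```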
We note that, while an analogous bound was first observed by Ma and Ying (2021) (Theorem 3), the authors propose a uniform bound \(\mathbb{E}_{\mathcal{D}}\|\nabla_{\mathbf{\theta}}\mathbf{f}\|\leq\alpha p\) that linearly increases with the number of model parameters \(p\), with constant \(\alpha\) depending on learning rate and batch size.
In contrast, we study the bound in connection to double descent, as \(p\) varies with network width. Specifically, in section 3.4 we provide an upper bound to Theorem 1 that captures double descent in practical settings.
**Implications** Interestingly, by recalling that \(\nabla_{\mathbf{x}}\mathbf{f}_{\mathbf{\theta}}=\prod\limits_{l=1}^{L}\mathrm{diag}(\phi_{\mathbf{x}}^{\ell})\mathbf{\theta}^{\ell}\), the bound in Theorem 1 controls the expected growth of all layers save for \(\mathbf{\theta}^{1}\). This interpretation is well aligned with recent evidence showing that the scale of initialization of the first layer affects the magnitude of the parameter-space gradients, and may prevent an interpolating network from generalizing to test data (Mehta et al., 2020). Hence, the bound highlights the importance of carefully designed weight initialization schemes, as typically employed in practice (Arpit et al., 2019; He et al., 2015; Glorot and Bengio, 2010). We refer the reader to Mehta et al. (2020) for a detailed discussion on the problem.
Finally, for exponential losses \(\mathcal{L}(\mathbf{\theta},\mathbf{x},\mathbf{y})\) like crossentropy and mean squared error, an immediate corollary follows.
**Corollary 1**.: _Consider the composition of a loss function \(\mathcal{L}\) with a network having at least one hidden layer, with \(\|\mathbf{\theta}^{1}\|>0\) and arbitrary weights \(\mathbf{\theta}^{2},\ldots,\mathbf{\theta}^{L}\). Then,_
\[\frac{x_{\min}^{2}}{\|\mathbf{\theta}^{1}\|_{2}^{2}}\mathbb{E}_{ \mathcal{D}}\|\nabla_{\mathbf{x}}\mathcal{L}\|_{2}^{2}\leq\mathbb{E}_{ \mathcal{D}}\|\nabla_{\mathbf{\theta}}\mathcal{L}\|_{2}^{2}\,. \tag{4}\]
In the following, building on Corollary 1, we draw an explicit connection between \(\|\nabla_{\mathbf{x}}\mathcal{L}\|_{2}\) and the parameter-space geometry of loss landscape.
### Connection to Parameter-Space Curvature
In order to better elucidate the implicit regularization effect in Equations 3 and 4, we consider the dynamics of SGD in proximity of a minimum \(\mathbf{\theta}^{*}\in\mathbb{R}^{p}\) of the loss \(\mathcal{L}\). We adopt a linear stability perspective (Hosoe and Hagiwara, 2022; Wu and Ma, 2018), and approximate the loss in a neighbourhood of \(\mathbf{\theta}^{*}\) via a second-order Taylor expansion
\[\mathbb{E}_{\mathcal{D}}\mathcal{L}(\mathbf{\theta},\mathbf{x},y)= \frac{1}{2}(\mathbf{\theta}-\mathbf{\theta}^{*})^{T}H(\mathbf{\theta}-\mathbf{\theta}^{*})+o( \|\mathbf{\theta}-\mathbf{\theta}^{*}\|^{3}) \tag{5}\]
where the first order term vanishes at the critical point \(\mathbf{\theta}^{*}\), as does the zeroth order term for interpolating models, and \(H\) represents the expected Hessian of the training loss.
In general, SGD with discrete learning rate is known to fluctuate around critical points due to stochastic noise (Mori et al., 2022; Liu et al., 2021). To study Theorem 1 in relationship to the dynamics of SGD around \(\mathbf{\theta}^{*}\), we need to account for such phenomenon. In the next result, we adopt the noise model recently proposed by Ziyin et al. (2022), to derive upper bounds on the empirical Lipschitz, focusing on the mean squared error \(\mathcal{L}=\frac{1}{2N}\sum\limits_{n=1}^{N}(f_{\mathbf{\theta}}(\mathbf{x}_{n}) -y_{n})^{2}\).
**Theorem 2**.: _Let \(\mathbf{\theta}^{*}\) be a critical point for the loss \(\mathcal{L}(\mathbf{\theta},\mathbf{x},y)\) on \(\mathcal{D}\). Let \(\mathbf{f}_{\mathbf{\theta}}\) denote a neural network with at least one hidden layer, with \(\|\mathbf{\theta}^{1}\|>0\). Then,_
\[\frac{x_{\min}^{2}}{\|\mathbf{\theta}^{1}\|_{2}^{2}}\mathbb{E}_{ \mathcal{D}}\|\nabla_{\mathbf{x}}\mathcal{L}\|_{2}^{2}\leq 2\mathcal{L}_{ \max}(\mathbf{\theta})\,\Delta(\mathcal{L}(\mathbf{\theta}))+o(\mathcal{L}(\mathbf{\theta})) \tag{6}\]
_with \(\Delta(\mathcal{L}(\mathbf{\theta})):=\mathrm{tr}\left(H\right)\) denoting the Laplace operator, \(H:=\mathbb{E}_{\mathcal{D}}[\frac{\partial^{2}\mathcal{L}}{\partial\mathbf{ \theta}\partial\mathbf{\theta}^{T}}]\) denoting the expected parameter-space Hessian of \(\mathcal{L}\), and \(\mathcal{L}_{\max}(\mathbf{\theta}):=\max\limits_{(\mathbf{x}_{n},y_{n})\in \mathcal{D}}\mathcal{L}(\mathbf{\theta},\mathbf{x}_{n},y_{n})\)._
Theorem 2 links network function regularity to the landscape of parameter space, via mean curvature \(\Delta(H)\) of the loss in a neighbourhood of \(\mathbf{\theta}^{*}\).
Figure 2 shows mean curvature of the loss landscape (top), as well as the largest loss Hessian eigenvalue \(\lambda_{\max}(H)\) (bottom) estimated in parameter space for our experimental setup (see appendix D for algorithmic details). Both mean and maximum curvature closely match the model functions' Lipschitz constant as model width increases, peaking near the interpolation threshold and decreasing afterward. This substantiates our bound in Equation 6, and provides a characterization of the empirical Lipschitz constant in terms of fundamental quantities in parameter space (Hessian trace and \(\lambda_{\max}\)), which themselves capture double descent.
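For completeness, we note that both curvature quantities can be estimated with only Hessian-vector products; the sketch below shows one standard recipe (Hutchinson's estimator for the trace and power iteration for \(\lambda_{\max}\)), demonstrated on an explicit positive semi-definite matrix as a stand-in for the loss Hessian. The exact procedure used in our experiments is the one described in appendix D and may differ in details.

```python
import numpy as np

# Hutchinson trace estimator and power iteration, using only products H @ v.
rng = np.random.default_rng(0)
p = 200
A = rng.standard_normal((p, p))
H = A @ A.T / p                                      # PSD stand-in for a loss Hessian
hvp = lambda v: H @ v                                # in practice: an autodiff Hessian-vector product

def hutchinson_trace(hvp, p, n_samples=200):
    total = 0.0
    for _ in range(n_samples):
        v = rng.choice([-1.0, 1.0], size=p)          # Rademacher probe vector
        total += v @ hvp(v)
    return total / n_samples

def top_eigenvalue(hvp, p, n_iter=500):
    v = rng.standard_normal(p)
    for _ in range(n_iter):
        v = hvp(v)
        v /= np.linalg.norm(v)
    return v @ hvp(v)

print(hutchinson_trace(hvp, p), np.trace(H))                 # approximately equal
print(top_eigenvalue(hvp, p), np.linalg.eigvalsh(H)[-1])     # approximately equal
```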
**Fluctuation due to Stochastic Noise** In proximity of a critical point \(\mathbf{\theta}^{*}\), it is possible to derive linear stability conditions of Equation 6. We start by noting that, at each iteration \(t\), the update rule of SGD with momentum \(\mu\), batch size \(B\), and learning rate \(\eta\), is given by
\[\begin{cases}\mathbf{\theta}_{t+1}&=\mathbf{\theta}_{t}-\eta\mathbf{g}_{t}\\ \mathbf{g}_{t}&=\mu\mathbf{g}_{t-1}+\frac{1}{B}\sum\limits_{b=1}^{B}\nabla_{ \mathbf{\theta}}\mathcal{L}(\mathbf{\theta}_{t-1},\mathbf{x}_{\xi_{b}},y_{\xi_{b}}) \end{cases} \tag{7}\]
with the random variables \(\mathbf{\xi}=(\xi_{1},\ldots,\xi_{B})\) representing sampling of mini-batches. At step \(t\), the stochastic noise \(\mathbf{\epsilon}_{t}\) of SGD is given by
\[\mathbf{\epsilon}_{t}=\frac{1}{B}\sum\limits_{b=1}^{B}\nabla_{\mathbf{\theta}}\mathcal{ L}(\mathbf{\theta}_{t},\mathbf{x}_{\xi_{b}},y_{\xi_{b}})-\mathbb{E}_{\mathbf{\xi}} \nabla_{\mathbf{\theta}}\mathcal{L}(\mathbf{\theta}_{t}) \tag{8}\]
dependent both on the current parameter \(\mathbf{\theta}_{t}\) and \(\mathbf{\xi}_{t}\)(Ziyin et al., 2022; Mori et al., 2022). Importantly, the noise covariance \(C=\mathbb{E}_{\mathbf{\xi}}[\mathbf{\epsilon}_{t}\mathbf{\epsilon}_{t}^{T}]\) allows us to account for fluctuations of the bound in Theorem 2 due to stochastic noise.
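In practice, the dominant principal component of \(C\) reported in Figure 3 can be estimated from a collection of minibatch gradients evaluated at the current parameter; the sketch below illustrates the computation on synthetic gradients, using the Gram-matrix trick to keep the eigenvalue computation small even when \(p\) is large.

```python
import numpy as np

# Top eigenvalue of the gradient noise covariance from per-minibatch gradients.
rng = np.random.default_rng(0)
n_batches, p = 64, 500
G = rng.standard_normal((n_batches, p)) * 0.1 + rng.standard_normal(p)  # synthetic batch gradients

noise = G - G.mean(axis=0)             # epsilon_t: batch gradient minus the mean gradient
gram = noise @ noise.T / n_batches     # shares its nonzero spectrum with C = noise^T noise / n
print("largest principal component of C:", np.linalg.eigvalsh(gram)[-1])
```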
**Corollary 2**.: _Let \(\mathbf{\theta}^{*}\) be a critical point for the loss \(\mathcal{L}(\mathbf{\theta},\mathbf{x},y)\) on \(\mathcal{D}\). Let \(\mathbf{f}_{\mathbf{\theta}}\) denote a neural network with at least one hidden layer, with \(\|\mathbf{\theta}^{1}\|>0\). Then,_
\[\frac{x_{\min}^{2}}{\|\mathbf{\theta}^{1}\|_{2}^{2}}\mathbb{E}_{\mathcal{D}}\| \nabla_{\mathbf{x}}\mathcal{L}\|_{2}^{2}\leq\operatorname{tr}\left(S\right)+o (\mathcal{L}(\mathbf{\theta})) \tag{9}\]
_with \(S=C+\frac{1}{B}\mathbb{E}_{\mathcal{D}}[\nabla_{\mathbf{\theta}}\mathcal{L}(\bm {\theta})]\mathbb{E}_{\mathcal{D}}[\nabla_{\mathbf{\theta}}\mathcal{L}(\mathbf{\theta })]^{T}\) denoting the gradient uncentered covariance._
Figure 2: (Top) **Mean loss curvature** (Hessian trace) in parameter space. (Bottom) **Maximum curvature** for the loss in parameter space. From left to right: ConvNets trained on CIFAR-10 (left), CIFAR-100 (middle) and ResNets trained on CIFAR-10 (right). In all settings, mean and maximum parameter-space curvature strongly correlate with double descent, peaking at the interpolation threshold, and highlighting a nonlinear dependence on network width. All values are reported in \(\log\)-\(y\) scale to better separate models in the interpolating regime.
Figure 3 shows the largest principal component of the gradient noise covariance \(C\), as model size increases. Similarly to the mean curvature, stochastic noise strongly correlates with the Lipschitz constant, decreasing considerably in the interpolation regime, thus presenting an additional fundamental quantity strongly correlating with double descent.
Lastly, we conclude by discussing stability of Theorem 2 with respect to training hyperparameters.
**Stability of the bound** The dependency of Theorem 2 on SGD hyperparameters can be studied via the approximation error \(\mathcal{L}(\mathbf{\theta})-\mathcal{L}(\mathbf{\theta}^{*})\)(Liu et al., 2021; Thomas et al., 2020).
**Corollary 3**.: _Let \(\mathbf{\theta}^{*}\) be a critical point for the loss \(\mathcal{L}(\mathbf{\theta},\mathbf{x},y)\) on \(\mathcal{D}\). Let \(\mathbf{f_{\theta}}\) denote a neural network with at least one hidden layer, with \(\|\mathbf{\theta}^{1}\|>0\). Then,_
\[\frac{x_{\min}^{2}}{\|\mathbf{\theta}^{1}\|_{2}^{2}}\mathbb{E}_{ \mathcal{D}}\|\nabla_{\mathbf{x}}\mathcal{L}\|_{2}^{2}\leq o(\mathcal{L}(\mathbf{ \theta}))\quad+ \tag{10}\] \[\frac{\eta}{2(1-\mu)}\lambda_{\max}\Big{[}\big{(}I_{p}-\frac{ \eta}{2(1+\mu)}H\big{)}^{-1}C\Big{]}\operatorname{tr}\left(H\right)\]
Specifically, the bound implies a well-known stability condition of \(\mathbf{\theta}^{*}\): \(\eta<\frac{4(\mu+1)}{\lambda_{\max}(H)}\) (Liu et al., 2021; Thomas et al., 2020).
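As a small worked example, this condition translates directly into a maximal admissible learning rate once \(\lambda_{\max}(H)\) and the momentum coefficient are known (the numbers below are purely illustrative):

```python
def max_stable_lr(lambda_max_H, momentum=0.9):
    """Largest learning rate allowed by the condition eta < 4*(mu + 1) / lambda_max(H)."""
    return 4.0 * (momentum + 1.0) / lambda_max_H

# Illustrative values only: lambda_max(H) = 500 and mu = 0.9 give eta < 0.0152.
print(max_stable_lr(500.0, momentum=0.9))
```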
**Summary** Our study of the Lipschitz constant reveals non-monotonic trends that strongly correlate with double descent for the test error, a phenomenon not captured by uniform bounds. By connecting the empirical Lipschitz to fundamental properties of the loss landscape, we present several correlates of double descent in parameter space, which in proximity of a critical point \(\mathbf{\theta}^{*}\) control complexity of the model function \(\mathbf{f_{\theta}}\) on the training data \(\mathcal{D}\).
Figure 4 summarizes our main findings, showing a strong correlation between the empirical Lipschitz constant and the maximum parameter-space curvature of the loss landscape, the mean parameter-space curvature, as well as the first principal component of gradient noise, with networks with a large Lipschitz constant incurring high test error.
Importantly, our analysis shows that for networks at convergence, the gradients \(\nabla_{\mathbf{\theta}}\mathcal{L}\) are bounded by the Hessian trace. Crucially, this implies that for large models, the loss \(\mathcal{L}(\mathbf{\theta},\mathbf{x},y)\) is also Lipschitz in \(\mathbf{\theta}\). Essentially, our Theorem 2 is a special case of the Poincaré inequality, bounding a function through its gradient, which we apply to \(\nabla_{\mathbf{\theta}}\mathbf{f}\) via the chain rule. Intuitively, an analogous bound could be expected to hold for other families of compositional models.
Our analysis and experiments open many exciting questions on extending our observations beyond (i) piece-wise linear model functions, (ii) networks at convergence, as well as (iii) estimation of the Lipschitz on the training data. In section 4 we empirically explore these questions.
## 4 Implications for Implicit Regularization
We conclude our study by exploring broader implications of the trends observed in section 3. Section 4.1 extends our main finding to transformer architectures trained on machine translation tasks. Section 4.2 studies the development
Figure 3: **Dominant noise-covariance eigenvalue.** (Top) From left to right: ConvNets trained on CIFAR-10 (left), CIFAR-100 (middle) and ResNets trained on CIFAR-10 (right). In all settings, the magnitude of stochastic noise strongly correlates with double descent, peaking at the interpolation threshold, and highlighting a nonlinear dependence on network width. All values are reported in \(\log\)-\(y\) scale to better separate models in the interpolating regime.
of the Lipschitz constant throughout epochs, while section 4.3 studies implications of our findings for understanding effective complexity of trained networks. Finally, section 4.4 extends our experimental findings beyond training data.
### Beyond Piece-wise Linear Networks
We consider transformer architectures (Vaswani et al., 2017), whose model functions are not endowed with piecewise linear geometry due to softmax-based attention. Following (Nakkiran et al., 2019), we train \(8\)-layer multi-head attention transformers (Vaswani et al., 2017) on machine translation tasks, and control model size by scaling the embedding dimension \(h\), as well as the width of hidden fully connected layers to \(4h\). We compute Equation 2 on \(\nabla_{\mathbf{x}}\mathcal{L}\), where \(\mathcal{L}\) is the per-token perplexity. We note that Equation 2 can still be applied to the Jacobian \(\nabla_{\mathbf{x}}\mathcal{L}\) - which linearly approximates \(\mathcal{L}\) at each point \(\mathbf{x}\) - and the expected operator norm should be interpreted as the Sobolev seminorm \(\|\mathcal{L}\|_{\mathcal{D},1,2}\) of \(\mathcal{L}\) on \(\mathcal{D}\). Figure 5 extends our main finding, showing that the Sobolev seminorm of \(\mathcal{L}\) depends non-monotonically on model size, peaking near the interpolation threshold, which carries our main result beyond vision architectures.
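A minimal sketch of the underlying measurement, the expected squared norm of the loss gradient with respect to the inputs, is given below; it is written in PyTorch with illustrative names, and the exact normalization of Equation 2 is not reproduced here.

```python
import torch

def expected_input_grad_sq_norm(model, per_example_loss_fn, loader):
    """Estimate E_D ||grad_x L||_2^2 by differentiating per-example losses w.r.t. inputs.
    `per_example_loss_fn` must return one loss value per example (reduction='none')."""
    model.eval()
    total, count = 0.0, 0
    for xb, yb in loader:
        xb = xb.clone().requires_grad_(True)
        losses = per_example_loss_fn(model(xb), yb)          # shape (B,)
        # Example i's loss depends only on x_i, so differentiating the sum
        # stacks the per-example input gradients.
        (grad_x,) = torch.autograd.grad(losses.sum(), xb)
        sq_norms = grad_x.reshape(grad_x.shape[0], -1).pow(2).sum(dim=1)
        total += sq_norms.sum().item()
        count += xb.shape[0]
    return total / count
```

The same routine can be pointed at training data, held-out data, or random inputs, in the spirit of the probes reported in section 4.4.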
Figure 4: **Correlation between Lipschitz constant and parameter-space curvature.** From left to right: maximum curvature (left), dominant noise-covariance eigenvalue (middle) and mean curvature (right), respectively for ConvNets trained on CIFAR-10 (top), and ResNets trained on CIFAR-10 (bottom). In all settings, mean and maximum parameter-space curvature strongly correlate with the empirical Lipschitz constant in the interpolating regime. Furthermore, models with higher empirical Lipschitz present higher mean and maximum curvatures, and incur higher test error. All values are reported in \(\log\)-\(y\) scale to better separate models.
Figure 5: (Left) **Double descent of the test error for transformers trained on translation tasks, as the embedding dimension and model width vary. (Right) Loss Sobolev norm.**
### Overparameterization Accelerates Interpolation
Our experimental setup so far only focused on networks at convergence. Here, we track the development of the empirical Lipschitz throughout training, in relationship to the training error (0/1 loss), providing deeper insights on effective model complexity. Figure 6 (top) shows the Lipschitz constant of ConvNets (left) and ResNet18s (right) trained on CIFAR-10 with \(20\%\) noisy training labels, for representative model widths, together with the respective training error (bottom). Heatmaps for all model widths are presented in appendix C.2, connecting to the test error in Figure 10. We recall that the model-wise interpolation threshold (Belkin et al., 2019) denotes the smallest model width \(\omega_{0}\) that perfectly classifies the training set, in our experiment corresponding to \(\omega_{0}=14\) for ConvNets, and \(\omega_{0}=5\) for ResNets. We train ConvNets for \(500\) epochs, and ResNets for \(4\)k epochs, to ensure a fair training budget. During training, we observe three distinct behaviours.
Small models (\(\omega\ll\omega_{0}\)) are unable to interpolate the entire training set, and their training error as well as empirical Lipschitz quickly plateau, remaining stable therefrom. Increasing size among small models reduces their training error, and correspondingly increases the Lipschitz.
At the same time, models near the interpolation threshold \(\omega_{0}\) - peaking in test error and Lipschitz (cf. Figure 1) - are able to achieve interpolation _only when given a considerable training budget_. Correspondingly, the Lipschitz monotonically increases over training as the training error is reduced, resulting in models achieving the worst Lipschitz and the worst test error. In contrast, consistent with the double descent trends reported in section 3, large models (\(\omega\gg\omega_{0}\)) are able to quickly interpolate the training set, with the largest models requiring fewer epochs to achieve interpolation.
**Implications** The seemingly unbounded Lipschitz constant of models near the threshold \(\omega_{0}\) suggests that the observations reported in Hardt et al. (2016) - for which prolonged training budgets may hurt generalization performance - pertain only to models near the threshold. In fact, larger models can be trained considerably longer without a comparable increase in complexity. Furthermore, the notion of acceleration via overparameterization was formally studied by Arora et al. (2018), showing that convergence of linear networks is accelerated by overparameterization via increasing network depth, _irrespective of model width_. In contrast, our findings show that for non-linear networks, model width considerably affects the speed of convergence.
### Overparameterization Constrains Complexity
Referring again to Figure 6, we now consider implications for effective complexity of trained networks. First, since model weights are typically initialized to small values around zero (He et al., 2015; Glorot and Bengio, 2010), the Lipschitz constant of all models is close to zero at the beginning of training. This corresponds to each model expressing a very simple function (low empirical Lipschitz), albeit with low generalization performance (typically close to random chance). Second, during training, fitting the dataset requires all models' Lipschitz constant to
Figure 6: (Top) **Lipschitz constant over epochs** and (Bottom) **Train error** for ConvNets (left) and ResNets (right) trained on CIFAR-10 with \(20\%\) noisy labels.
grow, with a corresponding increase in model complexity (as measured by Equation 3). When zero error is reached (\(\omega\geq\omega_{0}\)), the Lipschitz constant approximately plateaus, thereafter only slowly increasing over epochs. Recalling that large models are observed to interpolate faster, this finding suggests that large models may achieve interpolation via the least (but meaningful) deviation from initialization, realizing an overall smooth function.
To assess our hypothesis, we study distance from initialization \(\|\mathbf{\theta}_{T}^{\ell}-\mathbf{\theta}_{0}^{\ell}\|_{F}\) of each layer \(\ell\), with \(\mathbf{\theta}_{0}^{\ell}\) and \(\mathbf{\theta}_{T}^{\ell}\) respectively denoting the layers' weights at initialization and convergence. Figure 7 presents distance from initialization (colour) as model width increases (\(y\)-axis), for each layer (\(x\)-axis), for ConvNets (left) and ResNets (right) trained on CIFAR-10 with \(20\%\) label noise, and ConvNets on CIFAR-100 with no label noise (middle). For each heatmap independently, the distance from initialization of each layer is normalized by the largest distance observed for that layer.
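The per-layer statistic shown in Figure 7 reduces to a Frobenius norm between two parameter snapshots; a minimal sketch, assuming a deep copy of the network is stored at initialization, is:

```python
import copy
import torch

def distance_from_init(model_init, model_final):
    """Frobenius distance ||theta_T^l - theta_0^l||_F for each named parameter tensor."""
    init = dict(model_init.named_parameters())
    return {name: (p.detach() - init[name].detach()).norm().item()
            for name, p in model_final.named_parameters()}

# Typical usage: snapshot the freshly initialized network before training.
# model_0 = copy.deepcopy(model)
# ... train `model` ...
# per_layer_distances = distance_from_init(model_0, model)
```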
Intriguingly, for almost all layers, the quantity follows double descent as model width increases, peaking near the interpolation threshold, and matching the epoch-wise trend reported in section 4.2. Importantly, the largest distance from initialization is observed for the later layers, which contain the largest number of parameters and whose role has recently been connected to memorization (Stephenson et al., 2021).
**Implications** This exciting finding supports our interpretation that faster interpolation, as promoted by overparameterization, results in model functions which are overall low-complexity, due to least (but meaningful) deviation from initialization. Our proposed interpretation corroborates the empirical observations of Gamba et al. (2022a) and Somepalli et al. (2022), who respectively report that large models express low curvature model functions in input space, and consistent decision boundaries. Finally, our findings extend Neyshabur et al. (2018), who initially reported that distance from initialization decreases for overparameterized models. Importantly, we show that the statistic is non-monotonic in model size, and that it strongly correlates with double descent for the test error, and the overall trends observed in this work. Together with the observed low curvature of large models shown in section 3.4, this finding shares potential connection to the linear mode connectivity phenomenon (Garipov et al., 2018), by which low-loss paths that connect solutions obtained by optimization of the same model and task have been found in practice.
### Bounded Complexity Beyond Training Data
To conclude, in Figure 8 we estimate the empirical Lipschitz of ConvNets (left) and ResNets (right) trained on CIFAR-10, probing the networks by computing Equation 2 on unseen test data or with random noise (experimental details in appendix E). We report that, surprisingly, even for data lying far from the support of the data distribution, the empirical Lipschitz constant remains bounded, and the model-wise trend follows double descent, peaking at the interpolation threshold. This finding further strengthens the view that reduced distance from initialization via acceleration may essentially control complexity of the _whole_ model function. This suggests it may be possible to cast implicit regularization as reduced distance from initialization (akin to weight decay). We leave this exciting direction to future work.
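A hedged usage sketch of this probe, reusing the `expected_input_grad_sq_norm` helper sketched in section 4.1 and assuming a trained `model` with a per-example loss, is shown below; input shapes and label ranges are chosen for CIFAR-sized images purely for illustration.

```python
import torch

# Random probe points far from the data manifold (CIFAR-sized shapes, illustrative only).
noise_x = torch.rand(512, 3, 32, 32)
noise_y = torch.randint(0, 10, (512,))
noise_loader = [(noise_x[i:i + 128], noise_y[i:i + 128]) for i in range(0, 512, 128)]

# `model` and `per_example_loss_fn` are assumed to be defined as in the earlier sketch.
lipschitz_on_noise = expected_input_grad_sq_norm(model, per_example_loss_fn, noise_loader)
```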
Figure 7: **Distance from initialization for each layer of ConvNets trained on CIFAR-10 with \(20\%\) noisy labels (left), CIFAR-100 (middle), and ResNet18s trained on CIFAR-10 with \(20\%\) noisy labels (right). For each ConvNet and most ResNet layers, distance of the converged layer’s parameters from initialization follows double descent, peaking at the interpolation threshold (dashed), suggesting global boundedness of the model function beyond training data for large models.**
## 5 Related Work and Discussion
While deep neural networks are able to express a rich family of functions as their model size increases (Zhang et al., 2018; Telgarsky, 2016; Cybenko, 1989), the effective complexity of generalizing models appears to be constrained in practice (Neyshabur et al., 2018; Zhang et al., 2019; Neyshabur et al., 2015). Developing a formal characterization of the mechanisms driving such a phenomenon is still a challenging open problem. Theoretical studies hinge upon finding a parameterization of the hypothesis class of trained networks that meaningfully constrains their expressivity. Importantly, several works rely on uniform upper bounds on the Lipschitz to constrain model function variation (Kawaguchi et al., 2022; Ma and Ying, 2021; Wei and Ma, 2019; Nagarajan and Kolter, 2018; Bartlett et al., 2017). Furthermore, in practical settings, achieving low Lipschitz has been connected to improved generalization performance (Gouk et al., 2021; Moosavi-Dezfooli et al., 2019; Novak et al., 2018).
Recently, the study of the Lipschitz constant has received renewed attention, with Bubeck and Sellke (2021) prescribing overparameterization as a necessary condition for smooth interpolation of training data, for a generic class of learners. Our work corroborates their findings, by also presenting an upper bound on the Lipschitz in terms of neural networks' fundamental components, in relation to optimization and the loss landscape in parameter space. Our results highlight the importance of incorporating the optimizer into the hypothesis class of neural networks.
While tightly estimating the Lipschitz constant is NP-hard for deep networks (Jordan and Dimakis, 2020; Virmaux and Scaman, 2018), we focus on the function applied by ReLU networks on the training data, and present relative trends as model size increases. Crucially, we extend a uniform bound on the empirical Lipschitz (Ma and Ying, 2021) by proposing a novel one that experimentally captures double descent, in light of recently proposed models of stochastic noise in proximity of minima of the loss (Ziyin et al., 2022; Mori et al., 2022; Li et al., 2021; Thomas et al., 2020).
Interestingly, a concurrent work by Dherin et al. (2022) uses Sobolev seminorms of ReLU networks on the training set to propose a model complexity measure. In line with our findings, their proposed measure captures the test error. Our works differ in that they focus on studying explicit and implicit regularization of the metric, whereas we build a theoretical connection to several fundamental quantities capturing double descent in connection to the geometry of the loss landscape. Importantly, we extend our findings beyond training data, with evidence of global boundedness of the model function through distance from initialization, which we are the first to study in relation to double descent.
**Limitations** The analysis presented in section 3.4 holds only in proximity of a critical point, and does not account for the high-dimensional trajectories taken by SGD in parameter space far from a solution \(\mathbf{\theta}^{*}\), nor for how such solution is found in the loss landscape. Furthermore, the stability analysis presented in corollaries 2 and 3 accounts only for covariance of stochastic noise \(\mathbf{\epsilon}_{t}\) with respect to mini-batch sampling \(\mathbf{\xi}_{t}\), and does not account for dependence on \(\mathbf{\theta}_{t}\), which is controlled by a power law (Mori et al., 2022). In this work, we focus on empirically establishing a non-monotonic dependency of \(C\) and \(H\) from model size, highlighting a strong correlation with the Lipschitz constant, as well as double descent for the test error in practical settings. A more precise analysis of our
Figure 8: **Lipschitz constant estimated on random validation data** for ConvNets (left) and ResNets (right) trained on CIFAR-10 with \(20\%\) noisy labels. Confirming our interpretation, even far from the support of the data distribution, the models’ Lipschitz follows double descent, supporting the notion of globally bounded model function due to reduced distance from initialization for overparameterized models.
bounds would require accounting for the parameter covariance \(\mathbb{E}_{\mathbf{\theta}}[\mathbf{\theta}_{t}\mathbf{\theta}_{t}^{T}]\) at each iteration \(t\), which we leave for future work.
## 6 Conclusions
We present an extensive study of the empirical Lipschitz of deep networks undergoing double descent, questioning the informativeness of uniform upper bounds on the constant, which may hide effective model complexity. By building a theoretical connection with the geometry of the loss landscape, we present several correlates of double descent in terms of fundamental notions, which we hope will inspire further theoretical studies. Our work isolates two important quantities - namely loss landscape flatness and distance of parameters from initialization - respectively controlling optimization dynamics around a critical point and bounding model function complexity beyond the training data. We believe understanding the structure and singularity of the overparameterized mapping from parameters to model functions is a fundamental open problem, which might reveal a causal structure between the correlates of double descent reported in this work.
|
2302.08601 | Adaptive Safety-Critical Control for a Class of Nonlinear Systems with
Parametric Uncertainties: A Control Barrier Function Approach | This paper presents a novel approach for the safe control design of systems
with parametric uncertainties in both drift terms and control-input matrices.
The method combines control barrier functions and adaptive laws to generate a
safe controller through a nonlinear program with an explicitly given
closed-form solution. The proposed approach verifies the non-emptiness of the
admissible control set independently of online parameter estimations, which can
ensure the safe controller is singularity-free. A data-driven algorithm is also
developed to improve the performance of the proposed controller by tightening
the bounds of the unknown parameters. The effectiveness of the control scheme
is demonstrated through numerical simulations. | Yujie Wang, Xiangru Xu | 2023-02-16T21:57:40Z | http://arxiv.org/abs/2302.08601v2 | [
###### Abstract
This paper presents a novel approach for the safe control design of systems with parametric uncertainties in both drift terms and control-input matrices. The method combines control barrier functions and adaptive laws to generate a safe controller through a nonlinear program with an explicitly given closed-form solution. The proposed approach verifies the non-emptiness of the admissible control set independently of online parameter estimations, which can ensure the safe controller is singularity-free. A data-driven algorithm is also developed to improve the performance of the proposed controller by tightening the bounds of the unknown parameters. The effectiveness of the control scheme is demonstrated through numerical simulations.
Adaptive Safety-Critical Control for a Class of Nonlinear Systems with Parametric Uncertainties: A Control Barrier Function Approach
Yujie Wang\({}^{a}\), Xiangru Xu\({}^{a,**}\)
[email protected]
## 1 Introduction
Control barrier functions (CBFs) have been recently proposed as a systematic approach to ensure the forward invariance of control-affine systems [1, 2]. By including the CBF condition into a convex quadratic program (QP), a CBF-QP-based controller can act as a safety filter that modifies potentially unsafe control inputs in a minimally invasive fashion. However, most existing CBF works require precise model information, which is often challenging to obtain. Robust CBF control methods have been proposed to address this issue, ensuring safety in the presence of bounded model uncertainties [3, 4, 5, 6, 7]. However, the design of a robust CBF controller relies on the bounds of the uncertainties or the Lipschitzness of the unknown dynamics, making it difficult to handle _parametric uncertainties_ that are generally unbounded.
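For concreteness, when the model is known exactly and a single barrier constraint is imposed, the CBF-QP safety filter described above admits a simple closed-form projection; the sketch below illustrates only this nominal baseline, with illustrative names and a linear class-\(\mathcal{K}\) term, and does not reproduce the adaptive, singularity-free construction proposed in this paper. It also makes explicit the singular configuration (vanishing \(L_{g}h\)) discussed later in the introduction.

```python
import numpy as np

def cbf_qp_filter(u_nom, Lf_h, Lg_h, h, alpha=1.0):
    """Minimally invasive safety filter for a known control-affine system:
        min_u ||u - u_nom||^2   s.t.   Lf_h + Lg_h @ u + alpha * h >= 0.
    With a single affine constraint, the QP reduces to the projection below."""
    u_nom = np.asarray(u_nom, dtype=float)
    a = np.asarray(Lg_h, dtype=float)          # row vector L_g h(x)
    b = -(Lf_h + alpha * h)                    # constraint rewritten as a @ u >= b
    if a @ u_nom - b >= 0.0:                   # nominal input is already safe
        return u_nom
    if np.allclose(a, 0.0):                    # L_g h vanished: constraint not actionable
        raise ValueError("Singular configuration: the CBF condition cannot be enforced.")
    return u_nom + (b - a @ u_nom) / (a @ a) * a
```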
Adaptive CBFs (aCBFs) have been proposed to ensure safety for control-affine systems in the presence of parametric uncertainties [8, 9, 10, 11, 12, 13, 14, 15, 16]. In contrast to the robust CBF-based methods that consider the worst-case of uncertainties, the aCBF-based approach estimates the unknown parameters online to generate a safe controller through solving a QP. However, the aforementioned aCBF results only take into account parametric uncertainties in the drift terms. There is limited research that considers parametric uncertainties in the control-input matrix. For example, [17] uses a filtering-based concurrent learning algorithm in the CBF framework to design safe controllers for single-input-single-output systems with unknown control coefficients; the estimated parameter converges to the true value exponentially, but system safety is not guaranteed before the convergence of the parameter adaptations. In [18], a zeroing CBF-based adaptive control algorithm is proposed to solve the funnel control problem for systems with parametrically uncertain control-input matrices, which can achieve tracking of a reference trajectory within a pre-defined funnel; however, this method may fail in singular configurations, as discussed in Remark 1 of that paper. Despite these early contributions, the aCBF-based control design for systems with parametric uncertainties in the control-input matrix terms is still an open field and merits further investigation.
Consider a control-affine system \(\dot{x}=f(x)+g(x)u\) where \(f\), \(g\) include parametric uncertainties (e.g., \(f\) and \(g\) are identified by universal approximators such as neural networks). The main challenge of stabilizing such a system using adaptive controllers arises from the so-called "loss of controllability" problem; that is, although the system is controllable, the identification model may lose its controllability at some points in time, owing to parameter adaptations [19, 20]. The same issue could happen in the aCBF-based control design, which will result in the emptiness of the admissible safe control set and therefore, the infeasibility of the QP. To the best of our knowledge, a _singularity-free_ aCBF-based safe controller has not yet been developed in the literature, though relevant stabilizing adaptive control schemes have been proposed in [19, 20, 21, 22, 23]. To bridge this gap, this paper proposes a singularity-free aCBF-based control design method for systems with parametric uncertainties in both \(f\) and \(g\). The safety constraint (i.e., the CBF condition) of the proposed method does not rely on the parameter estimations and thus, the non-emptiness of the admissible safe control set can be verified independent of the online parameter estimation process.
Figure 1: Main results of this paper. |
2301.11600 | Creative beyond TikToks: Investigating Adolescents' Social Privacy
Management on TikTok | TikTok has been criticized for its low privacy standards, but little is known
about how its adolescent users protect their privacy. Based on interviews with
54 adolescents in Switzerland, this study provides a comprehensive
understanding of young TikTok users' privacy management practices related to
the creation of videos. The data were explored using the COM-B model, an
established behavioral analysis framework adapted for sociotechnical privacy
research. Our overall findings are in line with previous research on other
social networks: adolescents are aware of privacy related to their online
social connections (social privacy) and perform conscious privacy management.
However, we also identified new patterns related to the central role of
algorithmic recommendations potentially relevant for other social networks.
Adolescents are aware that TikTok's special algorithm, combined with the app's
high prevalence among their peers, could easily put them in the spotlight. Some
adolescents also reduce TikTok, which was originally conceived as a social
network, to its extensive audio-visual capabilities and share TikToks via more
private channels (e.g., Snapchat) to manage audiences and avoid identification
by peers. Young users also find other creative ways to protect their privacy
such as identifying stalkers or maintaining multiple user accounts with
different privacy settings to establish granular audience management. Based on
our findings, we propose various concrete measures to develop interventions
that protect the privacy of adolescents on TikTok. | Nico Ebert, Tim Geppert, Joanna Strycharz, Melanie Knieps, Michael Hönig, Elke Brucker-Kley | 2023-01-27T08:57:50Z | http://arxiv.org/abs/2301.11600v1 | # Creative beyond TikToks: Investigating Adolescents' Social Privacy Management on TikTok
###### Abstract.
TikTok has been criticized for its low privacy standards, but little is known about how its adolescent users protect their privacy. Based on interviews with 54 adolescents in Switzerland, this study provides a comprehensive understanding of young TikTok users' privacy management practices related to the creation of videos. The data were explored using the COM-B model, an established behavioral analysis framework adapted for sociotechnical privacy research. Our overall findings are in line with previous research on other social networks: adolescents are aware of privacy related to their online social connections (social privacy) and perform conscious privacy management. However, we also identified new patterns related to the central role of algorithmic recommendations potentially relevant for other social networks. Adolescents are aware that TikTok's special algorithm, combined with the app's high prevalence among their peers, could easily put them in the spotlight. Some adolescents also reduce TikTok, which was originally conceived as a social network, to its extensive audio-visual capabilities and share TikToks via more private channels (e.g., Snapchat) to manage audiences and avoid identification by peers. Young users also find other creative ways to protect their privacy such as identifying stalkers or maintaining multiple user accounts with different privacy settings to establish granular audience management. Based on our findings, we propose various concrete measures to develop interventions that protect the privacy of adolescents on TikTok.
TikTok, adolescent, video, privacy management, social privacy, COM-B, Behavior Change Wheel
This study is the first to examine how adolescents between the ages of 12 and 18 manage their privacy on TikTok when it comes to personal videos. Our findings are based on original data from personal interviews and offer unique insights into how privacy concerns influence young people's online behavior. The qualitative nature of our study helped us to understand the components that shape sharing behavior on TikTok. Ultimately, this allowed us to make concrete suggestions on how to effectively promote privacy-protective behavior among adolescents on TikTok (e.g., specific training, improved app features, and policy enforcement).
## 2. Related Work
### TiTok and Privacy Issues
As with many social media platforms, TikTok has come under scrutiny for its handling of personal data. TikTok is a video-focused social network originally started as U.S.-based musical.ly but later bought by Beijing ByteDance Technology Ltd. The TikTok app (available for Android and iOS) allows users to create short videos (which may only be a few seconds long) and live streams (TikTok, 2019). Like YouTube, TikTok is a manifestation of user-generated media where content is not primarily created by a limited number of producers but by a myriad of users (TikTok, 2019). Compared to other social networks such as Facebook or Instagram, users on TikTok do not need to communicate with each other to find a community. They can simply visit the "For You" default page to find like-minded users (Krause et al., 2019; TikTok, 2019; TikTok, 2020). Via the primary button at the center of the home screen, users can easily record and edit short videos, apply various effects and sounds, and reuse content produced by other users. Videos can be saved as drafts or published immediately to be viewed by different audiences (myself, followers, everybody) (TikTok, 2020). As of September 2021, 1 billion monthly active users were reported (TikTok, 2020), and 740 million first-time installs were estimated in 2021 (TikTok, 2021). Cloudflare, a provider of content delivery networks, ranked TiTok as the most popular website of 2021, before Google (Google, 2021). TikTok is currently also gaining in popularity among users below the official age limit of 13 years (TikTok, 2020). In Switzerland, three-quarters of all adolescents had a TikTok account in 2020 (behind Instagram and Snapchat with both over 90%) (TikTok, 2020). Younger adolescents (12-15 years) were even more likely to have a TikTok account than older adolescents (16-19 years). Slightly more girls (78%) used it than boys (68%). 51% of all adolescents stated to use it at least multiple times per week, and 38% daily (TikTok, 2020). However, little is known about how young users think about the data they share on the platform.
The app has raised numerous severe security and privacy concerns (e.g., (TikTok, 2019; TikTok, 2019; TikTok, 2020)) and caught the attention of the international authorities in the U.S. and EU (TikTok, 2020; TikTok, 2020; TikTok, 2020). For example, an analysis of the app revealed extensive aggressive user tracking (e.g., including techniques such as fingerprinting) and data sharing with other websites (e.g., sharing searches with Facebook) (TikTok, 2020). The app could also potentially collect other personal data from the user's smartphone (e.g., data from the clipboard (TikTok, 2019)). Since young people have always been an important user group of TikTok, concerns have been raised about ByteDance's handling of their personal data. For example, in February 2019 ByteDance was fined USD 5.7 million by the U.S. Federal Trade Commission (FTC) because musical.ly had collected information from minors under the age of 13 in violation of the Children's Online Privacy Protection Act (TikTok, 2020). Due to the death of a 10-year-old TikTok user, the Italian data protection authority has banned TikTok from processing the data of users whose age could not be determined with full certainty (TikTok, 2020). Also, the transfer of minors' data to China after the acquisition of U.S.-based musical.ly had caused a serious backlash in the US and EU (TikTok, 2020; TikTok, 2020). As recent as June 2022, evidence surfaced that ByteDance has repeatedly accessed U.S. user data from China - a practice that they had denied three years earlier when similar criticism was raised (TikTok, 2020).
TikTok has reacted to public criticism with several privacy-related updates to the original app. As part of its settlement with the FTC, the platform introduced an age-verification process for its users based on self-declaration, meaning users can provide a false age (TikTok, 2020). Further changes included extended parental control features (TikTok, 2020) and privacy settings contingent on the app users' age statement (TikTok, 2020). While children below 13 cannot use the app, adolescents between the ages of 13 and 15 are automatically switched to a "private account" as a default option, limiting those who can view their videos to approved followers. When 16- and 17-year-old users imitate an existing video in the form of a "duet" (split-screen video) or "stitch" (video incorporating a short clip of someone else's content), these are automatically restricted to "friends only". Only users who are 18 and older can buy and send virtual gifts. However, it is unclear if and how TikTok's efforts have affected users' privacy management.
### Adolescents' Privacy Management on Social Media
From the moment adolescents started to use online social networking sites, "online privacy" has been a major topic of discussion (TikTok, 2020). Informational privacy can be defined as "the claim of individuals, groups or institutions to determine for themselves when, how, and to what extent information about them is communicated to others" (TikTok, 2020). Research on online privacy and adolescents can be divided into two categories: "institutional privacy" and "social privacy" (TikTok, 2020). Institutional privacy refers to the data collection practices by organizations (e.g., for commercial purposes) (TikTok, 2020; TikTok, 2020). The focus of this paper is social privacy, i.e., issues related to sharing personal information with others (e.g., friends and family). According to the theory of "networked privacy," individuals do not have complete control over the sharing of their personal information within social connections (e.g., on social media) because privacy is not managed by individuals alone, but by networks of individuals collectively (TikTok, 2020).
Young people are often seen as particularly vulnerable social media users with limited capacities to protect their privacy (Krause et al., 2019; TikTok, 2020). At the same time, they are also portrayed as individuals who put themselves and others at risk with their naive and reckless social media behavior (Krause et al., 2019). Following this logic, numerous guides for parents emphasize the importance of modifying privacy settings and monitoring their children's behavior (e.g., (TikTok, 2020)). However, there has also been a pushback to this alarmist perspective by scholars who suggest that adolescents' online privacy should be addressed based on empirical research rather than paternal instinct (TikTok, 2020).
Empirical evidence from social networks other than TikTok (e.g., Facebook) suggests that adolescents are aware of their social privacy and actively manage their privacy on social media. As described by body (Bradley, 2017), adolescents want to avoid surveillance from parents, teachers, friends and other meaningful persons in their lives (that is what "online privacy" means to them). Adolescents' social media use seems to generally prompt increased disclosure of personal information (Krishnan, 2017). However, frequent sharing of content does not imply that adolescents share indiscriminately, nor that the content is intended for a wider audience (Krishnan, 2017). Indeed, adolescents are concerned about their privacy and capable of protecting it (Bradley, 2017; Bradley, 2017; Krishnan, 2017; Krishnan, 2017). Contrary to conventional wisdom, young people are, in fact, more likely to protect their privacy on social media than older people (Bradley, 2017). Madden et. al found several strategies adolescents use on social media to manage their identity and protect sensitive information (Krishnan, 2017). These strategies include deleting friends, faking names, deleting content, withholding/faking information, and changing privacy settings (Krishnan, 2017; Krishnan, 2017; Krishnan, 2017). They also employed different "zones of privacy" by using different channels for disclosing personal information to maintain intimacy with friends while protecting their privacy from their parents and strangers (Krishnan, 2017). Privacy management can also mean modifying social media content to shield it from audiences (Krishnan, 2017; Krishnan, 2017). This practice is referred to as "social steganography" or encoding a message for a defined audience (Krishnan, 2017). Adolescents' privacy management is influenced by various factors such as their social environment (e.g., friends, parents), prior (negative) experiences as well as the saliency of privacy settings (Krishnan, 2017; Krishnan, 2017).
Despite the existing evidence on adolescents' social media use on other social networks, researchers argue that existing findings might not be directly applicable to TikTok (Krishnan, 2017). Compared to other networks such as Facebook or Instagram, TikTok mainly thrives on content exploration and (re)-creation (Krishnan, 2017). The focus is not on the interaction between users and their social network but on the interaction with users' videos proposed by an algorithm (Bradley, 2017). The main feature, the "For You" page, presents an endless stream of personalized, publicly available videos. Seeing them will motivate users to react and create similar content (e.g., through features such as "duet" or "stitch"). TikTok might therefore pose a particular threat to adolescents' privacy because a space previously conceptualized as private and safe can easily become a space of public visibility, surveillance, and judgment (such as in the case of a teenager being seen to perform a dance routine in their bedroom) (Krishnan, 2017).
Only a few studies have investigated adolescents' privacy management on TikTok. There is some evidence that privacy management on TikTok is considered crucial by adolescents (Krishnan, 2017) and becomes more stringent at higher perceived risks (Krishnan, 2017). However, it is unclear how and why adolescents manage their privacy on TikTok.
### COM-B Model
As we were interested in the components that shape privacy behavior, we chose the COM-B model, which has been used in exploratory studies (e.g., (Krishnan, 2017)) and a series of contexts to change behavior (e.g., (Bradley, 2017)), as the conceptual framework for our analysis. Many behavioral theories have been developed, often with overlapping but differently named constructs (Krishnan, 2017) and limited guidance on choosing an appropriate theory for a particular, real-world context (Krishnan, 2017). As a consequence, theories are often under-used to understand real-world contexts and to design real-world solutions, which makes replication, implementation, evaluation, and improvements difficult (Krishnan, 2017; Krishnan, 2017). Researchers have argued that a comprehensive meta-model or "supra-theory" model of behavior - like the COM-B model - is needed that is applicable across contexts (Krishnan, 2017; Krishnan, 2017). As a meta-model of behavior, the COM-B model does not come with a pre-determined set of context-specific predictions that are common for many behavioral theories. COM-B is based on several existing social cognition models and has a broader understanding of behavior, having "also (...) automatic processing at its heart (like emotions and habits), broadening the understanding of behaviour beyond the more reflective, systematic cognitive processes that have been the focus of much behavioural research (...) (for example, social cognition models such as the Theory of Planned Behaviour) (Krishnan, 2017). Its comprehensive nature and flexibility made it a good fit for the exploratory nature of our study that was not constrained by the conceptual boundaries of a single theoretical framework. Furthermore, the model comes with hands-on actionable advice on appropriate interventions in a given context in form of a holistic behavior change framework ("Behavior Change Wheel") (see (Krishnan, 2017)).
As illustrated in Figure 1, the COM-B model is based on three components - capability (C), opportunity (O), and motivation (M) - that shape a person's behavior (B) (Krishnan, 2017). Firstly, capability is a subject's psychological ability (including necessary comprehension, knowledge, and skills) as well as the physical ability (e.g., control of the body) to engage in a behavior. Secondly, motivation can be defined as the subject's mental processes that energize and direct behavior. It includes the reflective motivation that involves conscious processes (e.g., goals, plans, and evaluations) as well as automatic processes (i.e., habitual, instinctive, drive-related, and affective processes). Finally, opportunity is defined as an attribute of the environmental system (unlike capability and motivation)
Figure 1. The COM-B model (Krishnan, 2017). The three components capability (C), opportunity (O) and motivation (M) must be present for a behavior (B) to occur. They interact over time and form a dynamic system with positive and negative feedback loops (Krishnan, 2017).
that enables or facilitates a behavior. Opportunities can be physical (e.g., technical features of an app, material, financial, and time) and social (e.g., norms and culture). In this study, we analyzed the participants' capabilities, opportunities, and motivation to engage in privacy behaviors.
In our exploratory study, we did not focus on identifying interactions between COM-B components that explain a specific target behavior. Rather, our scope was first to learn about the full range of behaviors and explanatory factors associated with adolescents' privacy management.
## 3. Methodology
### Research Ethics
This paper is based on semi-structured, one-to-one interviews with adolescents in the Canton of Zurich, Switzerland, conducted in November 2021. In total, we visited two secondary schools (one in the city of Zurich and one in the greater Zurich area) and three youth centers (all in the city of Zurich). All interviews were audio-recorded and transcribed verbatim. Ethical approval was obtained from our university's institutional review board. Study participants provided written informed consent. For subjects below the age of 16, additional consent was sought from the parents. The interviews were voluntary and conducted at the institutions from which the subjects had been recruited. Digital, personalized shopping vouchers with a value of CHF 20 (-USD 19) were offered to study participants as compensation. The amount and type of the vouchers was determined beforehand together with the adolescents' supervisors (i.e., teachers, social workers) in order to not create an inappropriate but still sufficient incentive. After the interviews, in agreement with the participants, WhatsApp was used to deliver the individualized vouchers to the participants and to allow them to review their personal interview transcripts.
Several steps were taken to protect the participants' identity without compromising the transparency of our research process. To begin with, all personally identifiable information was removed (e.g., references to persons, locations) and participants' names were replaced with pseudonyms. Furthermore, the study data was stored in line with our university's storage policy and only the involved researchers had access to the files. Finally, the original audio files were deleted from all devices half a year after recording together with other remaining personal data (e.g., phone numbers, WhatsApp chats, digital vouchers).
### Sample and Procedure
Due to the lack of research on this topic, we chose a highly exploratory approach. To identify information-rich cases and make optimal use of available resources, we drew a purposive sample (Srivastava et al., 2017). We used social media and search engines to find institutions in the Canton of Zurich (e.g., secondary schools, youth work, youth associations, museums) with contact with adolescents between 12 and 18 years of age. Afterward, principals of participating institutions recruited interested teachers and social workers. They, in turn, contacted interested TikTok users in the required age group. To extend the participant base, we applied snowballing among the interested TikTok users. Based on our primary aim (i.e., to explore how adolescent TikTok users manage their privacy), we chose to sample based on study participants' age and gender (equally distributed). We decided to ignore other demographic information such as ethnic identity. Following a pragmatic definition of theoretical saturation (Srivastava et al., 2017), no new information emerged after approximately 40 interviews, and we ended data collection after the 54th interview.
We chose to employ semi-structured interviews for our study because it encourages two-way communication and provides the interviewer with the opportunity to learn the reasons behind an answer. Some of the questions were part of the interviewer's guide (see Appendix), others were addressed at the moment. The interview guide was developed based on a previous study that applied the COM-B model in a qualitative setting (Krause et al., 2019) and adopted to the context of the current study. After asking for demographic information, we first explored general TikTok usage and motivation. The other questions followed the COM-B structure and were related to privacy-related behaviors as well as the explanatory components related to the target behavior "video creation". We finished the interviews with questions about commercial privacy aspects (i.e., targeted advertising and user tracking). After the interview process was completed, study participants received a copy of their interview transcript via WhatsApp and were invited to add information or make amendments. Minimal revisions were made by one participant.
To analyze the content of the interviews, we used a two-step procedure that first divided each statement into one of the four COM-B components (behavior, capability, opportunity, motivation) before further subdividing them into privacy-specific content. For phase one, we used a directed content analysis approach (Srivastava et al., 2017) to analyze the statements. To counter the subjectivity inherent to qualitative data analysis, three researchers read and coded all statements into the four COM-B domains (behavior, capability, opportunity, motivation). On the grounds of economy in both cost and effort, we decided against using "intercoder reliability" (ICR). As full replication of results was deemed unnecessary due to the exploratory and qualitative nature of data collection and analysis, we instead followed guidelines suggesting the use of "multiple coding" which allows independent researchers to cross check their coding strategies and interpretation of data (Brandrands, 2017). The authors engaged in researcher triangulation (Srivastava et al., 2017) by discussing the emerging codes during the open coding process of the first three interviews and developed coding guidelines. Disagreements were discussed and resolved. Using the MAXQDA 2020 software, all responses were coded consistent
with six COM-B labels1 (behavior, psychological capability, automatic/reflective motivation, social/physical opportunity). To ensure continued adherence to the agreed coding guidelines, the three researchers regularly communicated to ensure coding consistency.
Footnote 1: We did not need to code “physical capability” as the participants did not have physical impairments.
In phase two, all statements - previously labeled as one of the COM-B components - were further analyzed for their privacy-specific content. Therefore, an inductive thematic analysis (Krishnan, 2017) to identify themes within similarly coded statements was conducted (see Appendix for coding scheme). One researcher identified themes across identically coded statements and discussed them with the other researchers. A theme reflects a collection of similar responses from at least two different study participants. For example, responses that were coded under the COM-B label "reflective motivation" such as "I would be afraid of stupid remarks.", "I have no desire to be bullied.", and "I can do without being ridiculed in my class's WhatsApp group." were allocated to the privacy-specific theme "negative reaction avoidance". This step resulted in a list of themes within each of the six COM-B labels. Ultimately, the researchers reviewed and discussed the emerging themes, merged similar themes, and re-labeled others. By playing the "devil's advocate" - a common way to scrutinize identified themes (Bendley, 2017) - we sought to exploit the full potential of multiple coding to furnish alternative interpretations of our findings. The anonymized, coded interview transcripts are publicly available at osf.io/28d3w.
## 4. Results
A total of 54 adolescents aged between 12-18 years (15 \(\pm\) 1.82 years) were interviewed, of which half (27) were female (see Table 1). Interviews ranged from 5 to 21 minutes in length, with a mean of 12.6 min per interview (SD = 3.91). Most users attended secondary school, and 80% had used the app for more than one year. Half of the study participants admitted using TikTok between one and three hours per day.
Building on the conceptual framework of the COM-B model, we identified 13 themes from the data analysis that described how and why adolescents protect their privacy on TikTok (see Table 2). These are described in more detail in the following. No weighting was associated with the themes in terms of their overall contribution.
### Behavior
#### 4.1.1. Proactive privacy
The participants in our study mentioned various ways to control the content of their TikTok2 and their audience. Publishing content to audiences was described as reflective and non-automatic (as opposed to a habitual, non-reflective publication of TikToks). This behavior is also referred to as the "approach" privacy strategy (TikTok, 2017). For example, regarding the content, study participants described what they consider to be too sensitive for publication on TikTok and would not publish (e.g., TikToks that reveal too much about them). Lima (F, 14) creates public videos and has 50 different accounts. She has clear privacy boundaries regarding the video content: "I would not post TikToks where you can see a lot of myself. I wouldn't post videos in which I'm drunk." Another form of restriction is to define who can see which type of content on the platform. This includes TikTok users making drafts only visible to themselves or blocking selected users from watching videos. Barbel (F, 13) actively tries to keep her parents from seeing her videos: "To prevent my parents from seeing my videos, I can simply block them."
Footnote 2: The term “TikTok” is used synonymously with videos.
We identified two subthemes within the proactive privacy theme: private creators (19 persons, 35% of the sample) and public creators (11 persons, 20%). Private creators create videos only for themselves or close friends but do not publish them for a broad audience. A few users described the practice of posting videos that are just visible to themselves, only to be able to then repost them on "more private" social media such as Snapchat or WhatsApp for a selected group of people: "I don't post my videos. I download them, save them under photos, then send them on WhatsApp, for example. I only use TikTok for editing." (Amy, F, 17).
Public creators regularly create videos for their followers or the general public. An extreme case is Joy (F, 13), who has used TikTok since she was nine years old (when the app was still musical.ly). She maintains 50 thematic user accounts with different age settings and distinct followings (e.g., some accounts for gaming-related videos and others for YouTube reposts). In addition to managing multiple accounts, public creator Lima (F, 14) also uses the live feature. It is available to users with at least 1,000 followers and allows them to create personal live streams and interact with users in real time. Lima had to set her age to 16 years to enable the live feature.
#### 4.1.2. Avoidance
Some study participants reported that they do not publish videos on TikTok at all to protect their privacy. In the literature, this is referred to as the avoidance privacy strategy (TikTok, 2017). Peter (M, 14), one of 24 study participants (44%) we classified as a pure consumer, stated: "I've never created a TikTok. I don't even know how to do it.". Tim (M, 12) published once but decided to only watch TikToks afterward: "To try it out, I uploaded something
\begin{table}
\begin{tabular}{l c} \hline \hline Variables & \% (_n_) \\ \hline Gender (\% of females) & 50\% (27) \\ Age & 15 \(\pm\) 1.82 \\ Educational level & \\ Primary level & 2\% (1) \\ Lower secondary level & 54\% (29) \\ Upper secondary level & 44\% (24) \\ User since & 20\% (11) \\ Between one and two years & 33\% (18) \\ More than two years & 46\% (25) \\ Current app usage & \\ Daily \textgreater{}= 3h & 17\% (9) \\ Daily \textgreater{}= 1 and \textless{}3h & 50\% (27) \\ Daily \textless{} 1h & 28\% (15) \\ Less than daily & 6\% (3) \\ \hline \hline \end{tabular}
\end{table}
Table 1. Characteristics of one-to-one interview participants (\(n\) = 54)
once, but nothing from me. I thought that was funny. But I prefer to watch videos."
### Capability (Psychological)
#### 4.2.1. Past privacy incidents
This theme refers to a specific form of privacy-related knowledge (cp. [(60)]) gained after experiencing potential or actual privacy incidents. Potential privacy incidents are perceived as minor threats but may lead to increased privacy awareness. "I posted my very first video by accident. It was only seen by three people," reported Yasmina (F, 15). Lima (F, 14), a public creator, remembered: "I was half asleep and accidentally posted a TikTok. The next morning, I saw that someone had commented on the video. But I thought it was funny and not bad at all." When TikTok updated its app and increased the size of the "publish" button to lower the threshold for publication, Lima decided to block app updates.
Users have also realized that some of TikTok's privacy features can be easily bypassed. Their awareness of the platform's weaknesses has contributed to a greater privacy awareness. An example is a feature that allows blocking certain users from viewing videos, which can be easily bypassed: "If I block people but they still want to see my TikToks, they immediately make an extra fake account and continue seeing them." Roswitha (F, 15). However, she found a way to manage her privacy: "Since these users have too few followers, I simply block them again or ignore them depending on the video.".
A more serious subtheme are actual privacy incidents. Barbel (F, 13) had to realize that she was not anonymizing herself sufficiently: "I wore a mask on my face in the video, anonymously, so to speak. But the people who deal with me every day recognized me by my outfit, my room, and my hairstyle and posted the video in the class WhatsApp chat.". Anna (F, 14) reported losing her account and not being able to reclaim it through TikTok's customer support. At the same time, other users were still able to watch her videos: "I made videos of myself when I was 9 and then lost the account. Now the videos are still public, but I can no longer access them.".
#### 4.2.2. Privacy literacy
Privacy literacy can be defined as a combination of factual or declarative ("knowing that') and procedural ("knowing how') knowledge about online privacy [(76)]. Concerning the publication of videos on the platform, adolescents need to have the knowledge and skills to assess and manage audiences and content as needed.
Respondents mentioned, for example, that the algorithm might present a video on TikTok's center stage: "It depends on how popular a video is and only then does it appear on the For You Page." (Barbel (F, 13)) or that public videos can also be watched without
\begin{table}
\begin{tabular}{l l r} \hline \hline Theme & Description & Frequency \\ \hline _Behavior_ & & \\ Proactive privacy & Publishing videos with control over the content and the audience & 30 \\ Avoidance & Publishing no videos on the platform & 24 \\ _Capability (Psychological)_ & & \\ Past privacy incidents & Previous negative experiences related to privacy on the platform (e.g., lost account, accidental publication) & 15 \\ Privacy literacy & Knowledge and skills related to privacy management in the app (e.g., audience understanding and configuration) & 53 \\ _Opportunity (Social)_ & & \\ Negative feedback & Negative behavior of others affects privacy management (e.g., observation of cyber-bullying) & 16 \\ Linkability experience & Observing that online personas can be linked to the personal sphere affects privacy management (e.g., my teacher is on the platform) & 39 \\ Restrictive influence & Restrictive behavior of others affects privacy management (e.g., restrictive parental mediation) & 34 \\ _Opportunity (Physical)_ & & \\ Platform features & Privacy-related features of the platform (e.g., audience settings, sharing via other social networks) & 46 \\ Device features & Privacy-related features of the device (e.g., screen time limits, deleting videos on the smartphone) & 17 \\ _Motivation (Automatic)_ & & \\ Negative emotion avoidance & Avoidance of negative emotions expected as a result of publication (e.g., shame, fear) & 15 \\ _Motivation (Reflective)_ & & \\ Negative reaction avoidance & Goal to avoid expected negative consequences of publication & 10 \\ Privacy identity & Privacy as a general value (e.g., also on other platforms) & 5 \\ Publicity avoidance & Goal to avoid expected publicity of publication & 29 \\ \hline \hline \end{tabular}
\end{table}
Table 2. Identified themes for adolescents’ video privacy management on TikTok based on the COM-B model. Frequency is calculated across 54 interviews.
having a TikTok account: "From Google or Safari you can type in TikTok and view the videos." (Aron (M, 13)). They also described how to find out which of their peers used TikTok: "When you post a video, it spreads immediately and then you know who has TikTok and who does not. Because so many people have TikTok now, it has become weird for me to post TikToks." (Elsa (F, 14)). Respondents also described their audience and content management skills. The private creator Bea (F,14) only publishes for a strictly curated list of followers and therefore has established an approval process that allows her to maintain the desired level of privacy: "I get to know new classmates first and only then give them my TikTok account. Afterward, they tell me they sent a request and I accept them as followers in the app." (Bea (F, 14)). Furthermore, the adolescents interviewed were also able to assess different levels of sensitivity of content in terms of their privacy and select an adequate audience accordingly: "My buddy and I made 10 TikToks in which we share our weekend activities with people. Some have 60,000 views. But we think carefully what to make public." (Alex (M, 18)).
The adolescents also talked about various app settings needed to manage the audience, such as the activation of the private account "Switching to the private account takes only two minutes. This is not difficult." (Alexandra (F, 12)) or knowing the publication status of a video: "A draft is rendered greyish and blurry. When published, it is bright and jumps right out at you." (Alexander (M, 15)). Some adolescents also perform "digital housekeeping" activities by removing content related to a specific event or as a habit: "As I became older, I started to delete old videos." (Ariane (F, 15)).
### Opportunity (Social)
#### 4.3.1. Negative feedback
Negative feedback refers to expected or observed negative feedback from others (such as harsh comments on videos). Study participants reported negative reactions on the platform (e.g., from strangers or people from the same school) as an explanation for their privacy protection behavior. Alexander (M, 15) mentioned a general culture of mutual criticism: "Many of the famous TikTokers sometimes make mistakes. Afterwards, everyone makes fun of them in videos." Other respondents mentioned negative reactions from their peers that had influenced their behavior: "A friend went viral with a video. Then she got yelled at on the street. It would annoy me." (Katja (F, 17)).
#### 4.3.2. Linkability experience
Similar to the perception of negative feedback, the realization of how easily online personas can be linked to the personal sphere can also lead to more restrictive publication behavior. Study participants perceived the platform as a public space shared by acquaintances and strangers. However, by recognizing people from their school on their "For You" page, study participants realized that they, too, could be easily recognized. As Georg (M, 15) put it: "There are maybe ten or twenty people in the school building who do [public] TikToks regularly. You suddenly realize: I know that guy from TikTok. That's the reason why I don't publish." In addition to peers, respondents also described experiences that made them understand that acquainted adults in authority positions would be able to see their TikTok as well. Sibylle (F, 15) realized this: "My music teacher was on TikTok singing a song." Therefore, Sibylle also does not publish so as not to be recognized by everyone on the platform.
#### 4.3.3. Restrictive influence
Restrictive influence refers to others (e.g., close friends or parents) perceived to be restrictive or restricting study participants' video creation behavior. Some interviewees reported that their friends did not publish on TikTok, which partly explained why they did not publish either. In mentioning his peers, Felix (M, 12) stated: "Most of the people I know don't upload anything of themselves where they show their face." Another example is restrictive mediation by parents or relatives: "My eight-year-old cousin accidentally posted a video with my smartphone. His uncle saw it on his For You page, so I deleted it." (Sibylle (F, 15)).
### Opportunity (Physical)
#### 4.4.1. Platform features
Age verification is a key platform feature intended to protect the privacy of young users (not limited to creating videos) and the subject of much public discussion. In the semi-structured interviews, 29 of the interviewed participants were also asked what age they provided. Two-thirds admitted that they had given a false age when they registered (indicating, e.g., the age of their parents). The main motivation for this behavior was to be able to use TikTok in general (for those below the age of 13) or all its features. Some study participants, like Martin (M, 14), also had misconceptions about possible age restrictions: "Because otherwise, TikTok won't let me watch videos."
However, study participants also described how they used TikTok's features for privacy purposes in general. This includes using a nickname instead of their real name, limiting the use of personal information on their profile page, and not linking their TikTok account with other social media accounts (e.g., Instagram). While some interviewees do not use a name at all: "Why should people know my name? I have replaced my name and individual letters with an X" (Ali (M, 12)), others actively involve their parents to make use of the in-app parental controls that restrict their app access.
Study participants also reported using various features related to audience configuration, such as creating personal drafts, activating a private account, deleting videos, or blocking users. Public creators sometimes create multiple "privacy-tailored" user accounts with specific follower groups for content of special sensitivity. Where the features offered by the platform are perceived as too limited or ineffective, the adolescents used creative workarounds not originally anticipated by the platform provider. For example, it is not easily possible to download and share drafts of videos that are not yet published. Amy (F, 17), however, described a popular workaround: "I post videos on TikTok, but only for me. Afterward, I'm able to download them to share them with my friends on WhatsApp."
#### 4.4.2. Device features
As part of the greater sociotechnical system, some devices (e.g., smartphones) offer features that affect user privacy. For example, study participants make use of the "digital well-being" functionality of their smartphone to limit their screen time: "I used TikTok three hours a day because I didn't know anything better to do with myself. Now I'm trying to get a handle on this with a screen time limit," stated Matthias (M, 17). Sandra (F, 14) was one of the study participants who used smartphone features to share videos more selectively: "You can take a screenshot of drafts with an iPhone and then send them via WhatsApp or Snapchat." As mentioned earlier, Lima (F, 14) noticed that the size of the red "publish" button grew with each new app update compared to the
grey "save as draft" button. Fearing accidental publication, she bypassed this potentially manipulative design pattern ("dark pattern") by using an old version of the app, which her operating system allowed her to do: "Therefore, I have blocked the updates for TikTok on my cell phone."
### Motivation (Automatic)
#### 4.5.1. Negative emotion avoidance
The interviewees described various negative emotions they would feel if they appeared in a video on TikTok. For example, they mentioned feelings of discomfort, shame, awkwardness, and annoyance. Milo (M, 12), who does not publish any videos, said: "I would be embarrassed to be seen in a video." Elsa (F, 14) reported that her desire to avoid negative emotions had evolved. While she had posted videos on musical.ly, she didn't publish on TikTok anymore: "Posting TikToks has become weird for me."
### Motivation (Reflective)
#### 4.6.1. Negative reaction avoidance
Another reason for not publishing personal content was the expectation of negative reactions by others to their videos, such as being bullied in class (e.g., in the WhatsApp class chat). Alexander (M, 15), who does not publish any videos, commented: "You make a mistake, people from school see it, it gets sent on, and you get bullied." Avoidance can also relate to the negative long-term consequences of sharing personal content. As they get older, adolescents who are getting ready to join the job market realize that their activity on TikTok could harm their career prospects. "The Internet never forgets and if I eventually look for an apprenticeship, it may be that my future employer sees that. That's very bad for my reputation." (Lima (F, 14)).
#### 4.6.2. Privacy identity
With privacy identity, we refer to a coherent set of privacy-related behaviors and personal qualities of an individual in a social setting (Shen et al., 2017). Some teenagers consider privacy a value in itself and part of their identity. For example, for Yara (F, 14), the publication of videos on TikTok is no different from any social network activity: "It's just not my thing. I don't post in general either, not even on Instagram or anything." Lena (F, 17) explicitly stated that she considers privacy a significant personal value: "Privacy is important to me. I keep everything private that can be kept private."
#### 4.6.3. Publicity avoidance
Another motivation for restricting the publication of personal videos on the platform is closely related to the linkability experience theme: the desire to not attract public attention. Study participants explained that publishing on TikTok means being in the public eye: "It's a big platform, and I don't want people around me to see that I make videos." (Anna (F, 14)). While on musical.ly, the public was described as a community of people with similar interests and ages, on TikTok, it is perceived as a heterogeneous, superficial place with different people of all ages (including strangers, peers from the same school, teachers, extended family members, and parents). Lina (F, 17) described how the change in the audience had an impact on her behavior: "At musically, there were also strangers, but more my age. But TikTok is now worldwide and there are adults everywhere. I don't have to post anything there." Her comment shows that the platform is now perceived as completely public, whereas it used to be a more private community.
## 5. Discussion
Our general observation of adolescents' privacy management on TikTok is in line with previous research on other social networks (Bahdan et al., 2017; Bahdan et al., 2017; Bahdan et al., 2017; Bahdan et al., 2017): Contrary to public perception, which portrays the publication of TikToks by young people as automatic and unreflective, the adolescents in our sample actively engaged in privacy management. They demonstrated a strong awareness of the need to manage their online identity and social privacy on the platform. However, the interview participants were more concerned with protecting their privacy from their immediate social environment than with institutional or commercial privacy issues. That is, while they were generally aware that TikTok used algorithms to tailor video content to their particular online behavior, they were more worried about the tangible aspects of the algorithm: that a published video could immediately appear on a classmate's account.
Next, we will discuss the results in more detail following the structure of the COM-B model. While many of our findings are consistent with themes found in previous research on other social media platforms (e.g., Facebook), a few themes and aspects are indeed unique and - to the best of our knowledge - have not yet been studied by researchers on TikTok or other platforms. The qualitative nature of our data informs the design of very concrete interventions on TikTok (Section 5.7).
### Behavior
In addition to previous research on other social networks (Shen et al., 2017), we were able to identify two very different types of proactive privacy behavior: public and private creation. While public creators perform privacy management to share videos directly on TikTok, private creators merely use the platform to create and edit videos to share them on other social networks that they see as more appropriate for such content (e.g., Snapchat, WhatsApp). This indicates that adolescents have different "imagined audiences" (the mental conceptualization of the people with whom the user is communicating; Shen et al., 2017) on each social network and curate who sees what by switching between networks. A unique finding of our study is that private creators essentially reduce TikTok, which was originally conceived as a social network, to its extensive audio-visual capabilities and share their personal content where social connections already exist and a higher degree of perceived control and intimacy exists (e.g., WhatsApp). It is possible that such a practice might also be found elsewhere (e.g., Instagram, YouTube). At a time when adolescents increasingly use multiple social media platforms at once, privacy perceptions of and management between different platforms have to be addressed more comprehensively. That is, privacy management can no longer be seen as a single-platform phenomenon - an observation with important research implications. Rather than focusing on isolated social networks with their own privacy standards, researchers should expand their analysis to include a cross-network view of privacy management.
### Psychological Capabilities
Similar to previous studies on other social media platforms (Bogor et al., 2016; Kavlioglu et al., 2017; Kavlioglu et al., 2018), we found that adolescents possess knowledge and skills on how to manage their privacy on TikTok (see "privacy literacy" theme). That is, adolescents were not only able to assess the audience of videos but also to actively manage the audience and content of their TikToks. As previously noted (Kavlioglu et al., 2018), privacy management can be very creative. This finding also holds true for TikTok: some of our respondents reported using various accounts for different audiences, blocking app updates to avoid receiving less privacy-friendly versions of the app, and making an effort to detect fake users trying to follow them. An interesting observation that can potentially inform other research on social privacy management in social networks is that adolescents on TikTok do not only use the technical features provided by the social network itself. Instead, some are also capable of using physical opportunities provided by the device (e.g., blocking app updates, screen time management). This example illustrates how generic physical opportunities provided by the operating system can expand the privacy management capabilities of young TikTok users and offer additional ways to protect their privacy.
In line with previous research we found that negative past experiences affect future privacy management behaviors (Kavlioglu et al., 2018). Incidents can even serve as a learning opportunity (Kavlioglu et al., 2018). In our sample, participants experienced near or actual privacy incidents (e.g., accidentally publishing videos, loss of account with personal videos) that led them to adapt their privacy management (e.g., immediately deleting accidentally published videos, paying more attention to a publication in the future). While our data support the hypothesis that incidents serve as learning opportunities, it must be said that certain very extreme violations of privacy (e.g., persistent bullying or stalking) have not been reported in our study. It is unclear how such experiences affect privacy behavior in the long run. Nonetheless, our findings inform future research by showing that even minor privacy incidents without severe consequences can lead to improved capabilities.
### Physical Opportunities
Adolescents in our sample used various features of TikTok and the operating system to manage their privacy (themes platform features and device features). At the same time, they were aware of TikTok's privacy management limitations (e.g., the ineffectiveness of blocking users). Some of the measures TikTok has taken to protect the privacy of younger users in response to public criticism may not be very effective. Out of 29 study participants with whom we discussed the topic, two-thirds used a false age. Many teenagers we interviewed have been publishing on TikTok long before the legally allowed age of 13. Regardless of the normative standpoint, this calls into question TikTok's fine-grained, age-based privacy features. Despite legislative measures such as the Children's Online Privacy Protection Act of 1998, this problem has been described on other social networks in the past (Kavlioglu et al., 2018; Kavlioglu et al., 2018). Sometimes parents also help their underage children to access social networks (Bogor et al., 2016). Reasons for using social networks below the specified minimum age are diverse (e.g., wanting to stay in touch with classmates, wanting unrestricted access to TikTok's features) (Bogor et al., 2016). Consequently, technical measures to protect children such as non-public accounts or content restrictions are failing (Kavlioglu et al., 2018). boyd et al. (Bogor et al., 2016) called for abandoning ineffective age-based mechanisms. Instead, she advocates for an honest discussion about children's use of social media and a rethinking of the industry to better incorporate the needs of children and parents when developing apps.
Another issue on social networking sites is account loss (Kavlioglu et al., 2018). This issue was also highlighted by several of our respondents who reported that they were unable to reclaim a video they had posted after losing an account. As a consequence, they were unable to revoke their consent from publishing a childhood experiment that would now remain online forever. This is particularly problematic against the background of increasingly better algorithms for recognizing people in images and videos and the resulting linkability risk (e.g., Clearview AI (Kavlioglu et al., 2018)). To exercise the "right to be forgotten" as embodied in the EU GDPR, for example, the ability to reclaim accounts and delete old videos is essential. It is unclear whether account loss among adolescents is a broader phenomenon or whether other social networks are affected as well.
### Social Opportunities
Our findings on TikTok support previous research demonstrating that the social environment of teenagers shapes their privacy behaviors (Kavlioglu et al., 2018). Other social network users as well as parents are major agents of socialization (Kavlioglu et al., 2018). Social norms, which emerge as a response to observed behavior or expected attitudes of friends and parents, influence children's intention to share personal information (Kavlioglu et al., 2018). If friends and parents disapprove of such behavior, children tend to share less. A recent study on TikTok described that restrictive mediation by parents can also lead to more restrictive disclosure behavior in children (Kavlioglu et al., 2018).
In our study, we identified similar social influences on TikTok. Observing strangers being publicly criticized for videos (themes negative feedback) resulted in restrictive publication behavior by the adolescents we interviewed. In line with previous research (Kavlioglu et al., 2018), the restrictive norms and behavior of relatives, parents, and friends were also found to have the potential to affect behavior on TikTok (e.g., not publishing or blocking parents from videos).
What makes TikTok stand out from other social networks is its specific content algorithm based on a granular observation of user preferences (Kavlioglu et al., 2018). The results of our study indicate that prevalent TikTok usage among peers in combination with the platform's specific algorithm that immediately displays the published content to cohorts with similar attributes - i.e., peers - may increase the social influence of others on adolescents' privacy behavior ("linkability experience"). Unlike posting a video under a nickname on YouTube that may never be discovered by peers, adolescents were aware that posting on TikTok was potentially more privacy-invasive. They recognized that their videos could become visible to their personal environment (e.g., in the schoolyard). This experience led to restricted publication behavior.
### Automatic and Reflective Motivations
Adolescents' motivations for protecting their privacy on TikTok were based on either wanting to avoid publicity, to avoid negative reactions/emotions, or to actively achieve privacy (themes
negative reaction avoidance, publicity avoidance). The adolescents interviewed reported wanting to evade the public eye and feared negative feedback (e.g., public criticism). These are themes previously described on other social networks (Steiner, 2017). To avoid a negative emotional outcome (e.g., shame), they refrained from having an overly public profile (theme negative emotion avoidance) (see (Bhatt, 2017) for a similar finding).
For some adolescents, privacy was a personal matter beyond TikTok (theme privacy identity). That is, these teenagers were intrinsically motivated to keep their information private - a finding that stands in contrast with previous research on other social networks. Research suggests that, on average, adolescents have fewer privacy concerns than young adults (Bhatt, 2017; Bhatt, 2017). However, our findings indicate that these concerns can vary greatly across adolescents, and some may place great value on their privacy on social media. Even though the theme was mentioned by only a few participants, it underscores that adolescents are not a homogeneous group when it comes to motives for managing privacy on social media. For some participants, being private is a personal value and their goal is to achieve a coherent privacy behavior on TikTok and beyond.
### Methodological Consideration
For our study, the COM-B model helped to holistically understand adolescents' privacy management on TikTok related to the creation of videos. It has a solid theoretical foundation and - according to its authors - can be applied across various contexts. However, much of the research to date has applied the COM-B model to health-related behaviors such as smoking cessation and lowering cardiovascular disease risk (Krishnam, 2017). Our study, which showed that the COM-B model is also a suitable analytical framework for studying privacy behavior, provides yet another use case. By demonstrating its relevance to the privacy management of adolescents, we strengthen the model's extrinsic validity.
### Possible Approaches for Privacy Interventions
Several of the themes we identified can be used as starting points for the development of privacy interventions. The COM-B model is part of a theory-driven intervention development framework called behavior change wheel (BCW), a synthesis of behavior change frameworks (Steiner, 2017). In the logic of the BCW, interventions are directed at desired "target behaviors" (e.g., enabling privacy settings). Building on the interview findings and our observations, Figure 2 shows different parties and ideas for potential target behaviors affecting adolescents' video privacy management. It focuses on _which_ behaviors to address and does not answer the question of _how_ to design interventions that address these behaviors (e.g., adequate behavior change techniques (Steiner, 2017)).
Any intervention schemes to improve the privacy of adolescent TikTok users should focus on the _behavior of the adolescents themselves_. The interviews provide concrete suggestions for behaviors that adolescents already report as improving their privacy protection. This includes encouraging young users to remove inappropriate videos from the platform and to use alternative social media apps (e.g., WhatsApp) to share content (theme: proactive privacy). Some of our participants reported regularly checking whether a video with the status "published" should be set to private. They also removed their old TikToks from the app and their smartphone. Our private creators seldom published on TikTok but used alternative apps such as Snapchat or WhatsApp with a perceived higher level of privacy and the ability to automatically delete shared TikToks after being watched by their friends. Another possible target behavior derived from our observations is "backing up user credentials" (theme: privacy literacy). Some adolescents in our sample who had already created accounts in musical.ly could not delete published videos because they had forgotten their credentials, and were not able to prove their identity to TikTok support to retrieve their account. An intervention could mitigate account loss, especially in cases where children have multiple accounts. Finally, teenagers should be made aware of the privacy settings (e.g., the private account) and the potential risks of not correctly setting these (theme: reflective motivation). For example, in our interviews, participants accidentally published a TikTok upon their first usage of the app because they were not aware others would immediately see it.
_The platform_ must also play an important role in safeguarding the privacy of children and adolescents. Improving features directly related to privacy, such as age verification, more effective blocking of users, and access to lost user accounts, is a promising approach (theme: platform features). As described earlier, many adolescents in our sample did not use their real age for various reasons. For example, they were often unaware that the private account would have been activated by default if they had provided their real age. As a result of providing false information, the privacy settings were much more lenient and TikTok videos would not only be published to followers but to everybody. Following boyd et al.'s (2017) philosophy, one possibility would be to abandon TikTok's age-based mechanism and incorporate the needs of children and parents when developing the app. For TikTok, this could mean taking a certain level of responsibility for its content and giving kids and parents ways to control what videos are shown (e.g., via a content configuration or a separate app similar to YouTube Kids). Even if the app adhered to the age-based privacy concept, describing the consequences of providing their real age (e.g., better privacy protection) might encourage some youth to provide their real age. Another approach was recently launched by the twin app Douyin (Douyin, 2017). Douyin introduced an age verification that is not based on self-declaration only but requires - unlike its international counterpart TikTok - user authentication and imposes restrictions on the permitted daily use for users under 14.
Some participants also criticized that they could not effectively block users whom they wanted to prevent from seeing their videos. The problem persists because blocked users can immediately "respawn" under a different username. TikTok could prevent this issue with a feature that blocks all accounts of the same user (similar to Instagram (Douyin, 2017)). Some study participants also reported feeling "nudged" by the user interface design towards publishing TikTok videos for a broad audience. Others described publishing personal TikToks accidentally. While nudging teenagers towards better privacy behavior is also controversial (Douyin, 2017), presenting them with simple alternatives (such as publishing a TikTok vs saving a local draft) could provide a welcome middle ground. Furthermore, TikTok might also do more to educate its users on how to protect their privacy. This suggestion
is based on our observation that capabilities varied between adolescents and that TikTok users had begun to create privacy tutorials themselves. The latter indicates a demand for more support (e.g., via privacy tutorials provided by TikTok).
_Family, friends, schools, and youth workers_ can also positively influence the privacy management of adolescents (social opportunity themes). In addition to supporting adolescents' privacy efforts, their social network could use TikTok themselves to better understand specific privacy issues. In our sample, an uncle of an eight-year-old boy used TikTok himself and warned him about the possibility on TikTok of publishing a video by accident. The social environment can also advise about long-term privacy risks to the children and adolescents of which they might not yet be aware. Among a group of adolescents of the same class, we repeatedly heard the narrative of a classmate being recognized on TikTok despite her wearing a mask. Due to this "risk narrative" the whole class was aware of the potential risks of insufficient anonymization on TikTok. A collection of such tales could be used by teachers in the classroom to illustrate the privacy risk associated with the platform.
As users do not only interact with each other when they share videos but also with the platform and its owner company, teenagers should also be made aware of commercial privacy issues. Our data confirmed that adolescents' primary privacy focus was indeed social. To this end, adolescents would need to understand TikTok's business model, which heavily relies on their personal data, and the organization behind TikTok.
_Policymakers and privacy advocates_ are also relevant actors. Not only do they seek to create privacy laws to protect users but also to enforce these laws through, for example, insisting on effective age verification (theme: platform features). Ideally, these actions are guided by evidence in collaboration with researchers, adolescent users, and parents. For example, our findings indicate that adolescents did not know that TikTok had taken additional measures to protect them in 2021 [75]. While privacy legislation demands transparency for data subjects - especially for children - this example shows that there is room for improvement in terms of the implementation of laws.
It should also be mentioned that _other TikTok users_ can influence an adolescent's privacy behavior (social opportunity themes). Older and more experienced teenagers may have capabilities (e.g., based on their negative experiences) that can benefit younger and less experienced users. One of our participants reported having learned about privacy settings from a video on TikTok. Indeed, some more experienced users have already begun to act as mentors. This includes the user @seansrv with 1.1 million followers, who stated in his biography "I Read ToS [Terms of Service] So That You Don't Have To" and regularly posts TikTok videos related to privacy topics [71].
Finally, our interviews showed that _OS vendors and the vendors of other apps_ contribute to teenagers' privacy on TikTok (theme: device features). OS vendors have implemented more and more privacy control mechanisms for their end-users (e.g., granular rights management, location sharing notifications). These methods all work on low-level personal data (e.g., IP address, location, and email address). However, videos shared by adolescents on TikTok that possibly contain more sensitive personal data with higher risks involved are not yet covered by these mechanisms. At times when a user publishes a video accidentally, the OS could warn them in the same way that they are warned when sharing their location with the app. In our sample, participants reported also manually cleaning
Figure 2. Different parties and their potential target behaviors relevant for adolescents’ video privacy management on TikTok
up their TikToks in the app and on their phones. OS vendors could provide housekeeping functionalities that would simplify removing personal content across different social networks and on the phone.
### Limitations and Future Research
As with most qualitative research, our sample is small and was not drawn randomly. Therefore, we cannot claim that the results are representative of all young people in the region under consideration, and certainly not of Switzerland as a whole. Further validation with different samples is needed to strengthen the findings (e.g., including subjects' socioeconomic status).
Choosing interviews as our data collection methodology was useful to learn more about the perspectives of adolescents in Switzerland. Nevertheless, we are aware of the limitations associated with this method. Primarily, we relied on self-reporting rather than behavioral observations. Self-reports can be biased due to various influences, such as subjects' desire to portray themselves in a positive light. Future studies might want to gather data from a wider range of sources, such as direct observations of privacy management behavior (e.g., through TikTok data donations).
Based on our findings, future research could develop and systematically test privacy interventions based on the BCW methodology. A necessary first step would be to identify appropriate target behaviors with the greatest potential to improve privacy management among adolescents. Our research could be a starting point for selecting a "promising" target behavior reported by the adolescents (e.g., activating the private account) to address in a target population (e.g., pupils of a local school). To identify a baseline for each of the potential behaviors and to select a target behavior among them, further research would be necessary (e.g., in the form of a survey among pupils). Furthermore, additional research is required to select appropriate behavior change techniques (e.g., increasing awareness for privacy settings) and evaluate their effectiveness (e.g., with an experiment). Importantly, such research could also control for factors, such as socioeconomic status, that might also be relevant in explaining privacy-related behaviors on TikTok (Tik et al., 2019). Given that teenagers may have very heterogeneous privacy management capabilities, motivations, and opportunities, depending on their age and experience regarding the platform, interventions need to be tailored to the specific target group. Large-scale intervention studies using the BCW can help to identify effective and evidence-based policies to improve privacy management among young people on social media platforms like TikTok.
Our interviews focused on social aspects of adolescents' privacy management. That is, our interviewees were more concerned with protecting their privacy from their social environment than from the corporations dealing with their data for commercial purposes; see (TikTok, 2022). Yet, TikTok videos are not only shared with other users but also with ByteDance. Even the users we identified as pure consumers who only view but do not create content may have privacy issues. As the video and ad algorithms are known for their high level of customization, they make the platform heavily reliant on personal data including detailed user behavior (TikTok, 2022). Both users' active and passive behavior on the app has consequences: The TikTok pixel allows companies to engage in detailed web tracking of TikTok users on websites (e.g., a user who sees the ad on TikTok might buy the product in the online shop) (Kang et al., 2022). Further research could investigate whether adolescent users are aware of these commercial privacy aspects and how they manage them.
###### Acknowledgements.
The research reported in this article was funded by the Digital Future Fund (DFF), which is part of the Digitalization Initiative of the Zurich Higher Education Institutions (DIZH), Switzerland. We would like to thank all adolescents, teachers, and social workers we contacted in conducting our study. We also thank Frank Wieber, Katja Kurz and Manuel Gunther for their helpful comments.
|
2303.08528 | Translating predictive distributions into informative priors | When complex Bayesian models exhibit implausible behaviour, one solution is
to assemble available information into an informative prior. Challenges arise
as prior information is often only available for the observable quantity, or
some model-derived marginal quantity, rather than directly pertaining to the
natural parameters in our model. We propose a method for translating available
prior information, in the form of an elicited distribution for the observable
or model-derived marginal quantity, into an informative joint prior. Our
approach proceeds given a parametric class of prior distributions with as yet
undetermined hyperparameters, and minimises the difference between the supplied
elicited distribution and corresponding prior predictive distribution. We
employ a global, multi-stage Bayesian optimisation procedure to locate optimal
values for the hyperparameters. Three examples illustrate our approach: a
cure-fraction survival model, where censoring implies that the observable
quantity is a priori a mixed discrete/continuous quantity; a setting in which
prior information pertains to $R^{2}$ -- a model-derived quantity; and a
nonlinear regression model. | Andrew A. Manderson, Robert J. B. Goudie | 2023-03-15T11:19:50Z | http://arxiv.org/abs/2303.08528v3 | # Translating predictive distributions into informative priors
###### Abstract
When complex Bayesian models exhibit implausible behaviour, one solution is to assemble available information into an informative prior. Challenges arise as prior information is often only available for the observable quantity, or some model-derived marginal quantity, rather than directly pertaining to the natural parameters in our model. We propose a method for translating available prior information, in the form of an elicited distribution for the observable or model-derived marginal quantity, into an informative joint prior. Our approach proceeds given a parametric class of prior distributions with as yet undetermined hyperparameters, and minimises the difference between the supplied elicited distribution and corresponding prior predictive distribution. We employ a global, multi-stage Bayesian optimisation procedure to locate optimal values for the hyperparameters. Three examples illustrate our approach: a nonlinear regression model; a setting in which prior information pertains to \(R^{2}\) - a model-derived quantity; and a cure-fraction survival model, where censoring implies that the observable quantity is _a priori_ a mixed discrete/continuous quantity.
## 1 Introduction
A key asset to the Bayesian paradigm is the conceptual ease with which prior information is incorporated into models. For complex, nonlinear, overparameterised, or otherwise partially identified (Gustafson, 2015) models, including such prior information is essential to exclude model behaviours that conflict with reality, and/or known qualities of the phenomena being modelled. Prior information can also improve computation of estimates of the posterior, making otherwise unusable models suitable for inference. However, it is for precisely the models for which prior information is so important that setting appropriate priors for parameters is hardest.
We consider in this paper the task of forming such appropriate informative priors. We distinguish two tasks: predictive elicitation; and the subsequent translation into a prior for a given model. Predictive elicitation is an approach in which elicitation (O'Hagan _et al._, 2006; Falconer _et al._, 2021; Low Choy, 2012) proceeds via predictive distributions, often for observable quantities, and is thought to be the most reliable and available form of information (Kadane and Wolfson, 1998). Predictive elicitation is also model-agnostic, meaning the complex, time-consuming process of elicitation does not need to be repeated for each variant of a model. A recent, comprehensive review of elicitation is undertaken by Mikkola _et al._ (2021).
Translation is the process of using information from predictive elicitation to set the corresponding informative prior for a model. Translation, as a distinct step in the prior specification process, has received less attention than elicitation. A simple predictive approach is to directly model the elicited, observable quantity. This direct approach requires no translation. For example, the Bayesian quantile-parameterised likelihood (Haldock, 2017; Keelin and Powley, 2011) approach of Perepolkin _et al._ (2021) involves updating directly elicited
information in light of observations. Such direct approaches are currently only feasible for simple models with no latent structure. For models with simple latent structure, it is sometimes possible to elicit information about an invertible function of the parameters (e.g. Chaloner _et al._, 1993). In these cases it is possible to analytically translate the elicited information into a prior for the parameters. Translation is also clear for conjugate models (Percy, 2002), as if we specify the target prior predictive distribution using the conjugate distribution, then the prior predictive distribution determines the values for the hyperparameters of the prior (related to the idea of "reverse Bayes" in Held _et al._, 2022). Translation, however, is unclear in general for nonconjugate models (Gribok _et al._, 2004), although techniques for specific models with specific latent structure are numerous, and include linear regression (Ibrahim, 1997), logistic regression (Bedrick _et al._, 1997; Chen _et al._, 1999), Cox models (Chaloner _et al._, 1993), contingency table analyses (Good, 1967), hierarchical models (Hem, 2021, noting that the space of possible hierarchical models is vast), and autoregressive time-series models (Jarocinski and Marcet, 2019). Nevertheless, a model-agnostic approach with a corresponding generic implementation would be preferable, as noted by Gelman _et al._ (2017) and Mikkola _et al._ (2021).
Our approach to translation builds on the idea of predictive checks (Gabry _et al._, 2019; Gelman _et al._, 2017; Box, 1980; the "hypothetical future samples" of Winkler, 1967; van Zundert _et al._, 2022), which are an important, and often recommended (Gelman _et al._, 2020; van Zundert _et al._, 2022), tool in assessing the concordance of the prior predictive distribution and the elicited predictive information or elicited data distribution. If concordance between these two distributions is low for a certain prior distribution, then the Bayesian workflow (Gelman _et al._, 2020) proceeds by adjusting the prior to better match the prior predictive distribution to the elicited information. However, for many classes of models, manually adjusting the prior in this manner is infeasible; the complexity required to describe the phenomena of interest muddies the relationship between the prior and the data distribution, and so a more automated method is required. One instance of this is the history matching idea of Wang _et al._ (2018), in which specific regions of observable space are labelled as (im)plausible, and the prior is deemed acceptable if it places a sufficiently (small) large amount of the prior predictive density in the (im)plausible region. Modern elicitation techniques can also simultaneously specify priors and incorporate information from multiple experts. For example, Thomas _et al._ (2020) develop a "human in the loop" method for elicitation; data simulated from a given model are judged as plausible or implausible by experts, and these judgements drive a hyperparameter optimisation process that maximises the plausibility of generated data. Albert _et al._ (2012) propose a hybrid, hierarchical elicitation/specification method intended for multiple experts, which is capable of representing uncertainty in the elicited quantities; and, when analytically possible, adopts a predictive approach. Another approach, and the closest in motivation and methodology to ours, is that of Hartmann _et al._ (2020, which is partly inspired by da Silva _et al._, 2019), which uses elicited predictive quantiles and a novel estimation method to acquire a suitable prior distribution for a given model. Hartmann _et al._ (2020) employ a stochastic algorithm using implicit reparameterisation gradients (Figurnov _et al._, 2018) to expedite the process. Doing so limits the applicability of their method to those models whose gradients we can compute using the reparameterisation trick, and to implementations with access to automatic differentiation.
In this paper we develop a method, and software package, for constructing an informative prior distribution for model parameters that results in a desired prior predictive distribution. Our method begins from elicited predictive information about the data, which we call the _target_ prior predictive distribution. We then define a suitable loss function between the prior predictive distribution, given specific values for the hyperparameters of the prior, and this target distribution. The loss function is intentionally generic and permits data that are discrete, continuous, or a mixture thereof. We minimise this loss via a generic, simulation-based, global optimisation process to locate optimal hyperparameters. Solutions to this optimisation problem are rarely unique, so to regularise the problem we adopt a multiple objective approach. The global optimisation approach is also selected with generality in mind, rendering our method applicable to models where
derivative information is unavailable. We make our method available in an R package pbbo\({}^{1}\) (R Core Team, 2022). Our method is illustrated in three challenging, moderate-dimension problems: a nonlinear regression model; a model using predictive information on a model-derived quantity - a situation not explicitly covered by prior predictive elicitation; and a cure fraction survival model.
Footnote 1: The release used in this paper is available at [https://doi.org/10.5281/zenodo.7736707](https://doi.org/10.5281/zenodo.7736707).
## 2 Translating elicited prior predictive distributions
In this section we introduce the desired properties for any translation method, and our mathematical framework and optimisation strategy that together constitute our prior specification methodology. We aim to minimise the difference between the prior predictive distribution and our elicited target distribution, whilst also promoting the marginal variance of the model's parameters. Satisfying the first of these goals produces priors faithful to the supplied information; the second promotes uniqueness and replicability - we elaborate on the precise meaning of these properties momentarily. We adopt a multi-objective optimisation approach to locate priors satisfying these requirements.
### Desiderata
We now postulate three key properties that we would like our method to satisfy: faithfulness, uniqueness, and replicability.
**Faithfulness.** We consider a prior faithful if it accurately encodes the target data distribution provided by the elicitation subject. Faithfulness is a property of both the model, as it must be possible for the model to represent the information, and the procedure employed to obtain the prior. Especially with simple models and prior structures, not all target prior predictive distributions can be encoded.
**Uniqueness.** In a complex model there may be many prior distributions that imply the same prior predictive distribution. These prior distributions will all be equally faithful. Should uniqueness be desired - and it seems a reasonable enough desideratum most of the time - we must distinguish between priors based on other properties. In Section 2.4 we propose to distinguish priors based on their marginal standard deviations, but other properties are easily incorporated into our method.
**Replicability.** We call a procedure/method replicable if it obtains the same, or very similar, prior across independent replications, given the same target. This property is particularly important to assess for methods, like ours, that make use of simulation-based or otherwise stochastic estimates. Global, gradient-free optimisers, when applied to noisy loss functions, offer no guarantee of finding the global minimum in finite time (Liberti, 2008; Mullen, 2014). We assess replicability empirically in all our examples.
These properties are partly inspired by other works in the prior elicitation and specification literature. Faithfulness is closely related to Johnson _et al._ (2010a)'s definition of _validity_ (see also Johnson _et al._, 2010b) and O'Hagan _et al._ (2006)'s use of _faithful_ in Chapter 8 (and throughout the book). However, their concerns are specific to the elicitation process - do the quantities elicited represent what the experts believe? - and not to the subsequent translational step. Our conception of uniqueness and the need for regularisation is noted by da Silva _et al._ (2019) and Stefan _et al._ (2022), and is similar to the notion of model _sloppiness_ of Gutenkunst _et al._ (2007).
We have introduced our desiderata in what we believe to be their order of importance. Firstly, without faithfulness the procedure has not achieved the main aim of translating our knowledge into the prior distribution. Subsequently, given a suite of faithful priors, regularising the problem until it is unique allows us to select one in a clear and replicable way. Such uniqueness inducing regularisation schemes often improve a procedure's replicability and, given we often only elicit information on the scale or other broad
properties of a phenomena, it is unsurprising that such information is associable with many prior distributions. Replicability ultimately also relies on the empirical behaviour of the procedure when applied to the model of interest. We note that the desiderata are not binary, and at times we may wish to sacrifice some amount of the latter properties for improved faithfulness. We also envisage settings where sacrificing some model flexibility, and thus faithfulness, for a marked increase in replicability increases the usefulness or persuasiveness of a model.
### The target predictive distribution \(\text{T}(Y)\)
Our methodology assumes a target predictive distribution (a cumulative distribution function, CDF) for the observable quantity, \(\text{T}(Y)\), has been chosen using predictive elicitation (Kadane and Wolfson, 1998). In brief, such an elicitation proceeds by querying experts about the observable quantity at a small number of quantiles, then fitting an appropriate parametric distribution to the elicited values (see Chapter 6 of O'Hagan _et al._, 2006). We assume a (mixture of) standard distributions can describe the target predictive distribution function \(\text{T}(Y)\), and that we can draw samples from this distribution.
We often wish to elicit information about the observable quantity \(Y\) conditional on some known values of a covariate. For example, when using the linear model \(Y=X\beta+\varepsilon\) we may elicit information about \(Y\) at a fixed set of values for \(X\). Further suppose \(X\) is an experimental design specified before collecting observations of \(Y\), or comprises observational covariates whose values are known prior to model construction. In such settings we can elicit \(r=1,\ldots,R\) conditional target distributions \(\text{T}(Y\mid X_{r})\).
We elect to describe our methodology in this covariate-specific setting, as it readily reduces to the covariate-independent case.
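To make the elicitation assumption concrete, the following is a minimal sketch (in Python, using NumPy and SciPy) of fitting a parametric target \(\text{T}(Y)\) to a few elicited quantiles and then drawing samples from it. The log-normal family, the elicited probability levels, and the quantile values are illustrative assumptions of this sketch, not quantities taken from the paper.

```python
import numpy as np
from scipy import optimize, stats

# Hypothetical elicited judgements: the expert believes Y falls below
# 2.0, 5.0 and 12.0 with probability 0.1, 0.5 and 0.9 respectively.
probs = np.array([0.1, 0.5, 0.9])
elicited_quantiles = np.array([2.0, 5.0, 12.0])

def quantile_loss(params):
    # Squared distance between the quantiles implied by a log-normal T(Y)
    # with the given (log) parameters and the elicited quantiles.
    log_scale, log_sigma = params
    implied = stats.lognorm.ppf(probs, s=np.exp(log_sigma), scale=np.exp(log_scale))
    return np.sum((implied - elicited_quantiles) ** 2)

fit = optimize.minimize(quantile_loss, x0=np.array([np.log(5.0), 0.0]))
target = stats.lognorm(s=np.exp(fit.x[1]), scale=np.exp(fit.x[0]))

# T(Y) can now be evaluated (target.cdf) and sampled from (target.rvs),
# which is all the subsequent discrepancy calculations require.
target_draws = target.rvs(size=1000, random_state=1)
```

In the covariate-specific setting one such fitted distribution would be obtained for each of the \(R\) covariate values.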
### Total predictive discrepancy (primary objective)
Consider a joint model for observables \(Y\in\mathcal{Y}\subseteq\mathbb{R}\) and parameters \(\theta\in\Theta\subseteq\mathbb{R}^{Q}\), given hyperparameters \(\lambda\in\Lambda\subset\mathbb{R}^{L}\) and covariates \(X\in\mathcal{X}\subseteq\mathbb{R}^{C}\). This joint model has CDF \(\text{P}(Y,\theta\mid\lambda,X)\) and prior predictive CDF for \(Y\), \(\text{P}(Y\mid\lambda,X)\). Choosing the CDF as the basis for our methodology enables us to be agnostic to whether the observable is continuous, discrete, or a mixture thereof. We will use _distribution_ to refer to the CDF of a stochastic quantity, and _density_ to refer to the corresponding probability density function (where it exists).
Further suppose the target CDF \(\text{T}(Y\mid X_{r})\) has been elicited at \(R\) values of the covariate vector denoted \(\{X_{r}\}_{r=1}^{R}\), which we stack in the covariate matrix \(\mathbf{X}=\left[X_{1}^{\top}\cdots X_{R}^{\top}\right]\in\mathbf{\mathcal{X}}\subseteq \mathbb{R}^{R\times C}\). We assume that each target \(\text{T}(Y\mid X_{r})\) has identical support to \(\text{P}(Y\mid\lambda,X_{r})\). Lastly, it will be convenient to denote \(\text{T}(Y\mid\mathbf{X})=\prod_{r=1}^{R}\text{T}(Y\mid X_{r})\), with \(\text{P}(Y\mid\lambda,\mathbf{X})\) and \(\text{P}(\theta\mid\lambda,\mathbf{X})\) defined analogously.
We now quantify the difference between the prior predictive and target by the _covariate-specific predictive discrepancy_, which we define to be
\[\tilde{D}(\lambda\mid\mathbf{X})=\frac{1}{R}\sum_{r=1}^{R}\int d(\text{P}(Y\mid \lambda,X_{r}),\text{T}(Y\mid X_{r}))\text{dT}(Y\mid X_{r}), \tag{1}\]
for some discrepancy function \(d(\cdot,\cdot)\). The Riemann-Stieltjes integral in Equation (1) is necessary because \(\mathcal{Y}\) can be continuous, discrete, or a mixture thereof. Minimising Equation (1) admits the optimal hyperparameter \(\lambda^{*}=\arg\min_{\lambda\in\Lambda}\tilde{D}(\lambda\mid\mathbf{X})\). The covariate-independent equivalent \(\tilde{D}(\lambda)\) is obtained by setting \(R=1\) and ignoring all conditioning on \(X_{r}\) in Equation (1).
The discrepancy function \(d(\cdot,\cdot)\) takes two CDFs as its arguments. Inspired by the Cramer-von Mises (von Mises, 1947) and Anderson-Darling (Anderson and Darling, 1952) distributional tests we define, for arbitrary
CDFs \(\text{M}(Y)\) and \(\text{P}(Y)\), two options for our discrepancy function,
\[d^{\text{CvM}}(\text{M}(Y),\text{P}(Y))=(\text{M}(Y)-\text{P}(Y))^{2},\quad d^{\text{AD}}(\text{M}(Y),\text{P}(Y))=\frac{(\text{M}(Y)-\text{P}(Y))^{2}}{\text{P}(Y)(1-\text{P}(Y))}. \tag{2}\]
Both discrepancies are proper scoring rules (Gneiting and Raftery, 2007) as they are minimised iff \(\text{M}(Y)=\text{P}(Y)\) for all \(Y\in\mathcal{Y}\). Supposing \(\text{P}(Y\mid\lambda,X_{r})\) is flexible enough to exactly match \(\text{T}(Y\mid X_{r})\) for some unique \(\lambda^{*}\), then both discrepancies will yield the same \(\lambda^{*}\). Differences arise when \(\text{P}(Y\mid\lambda,X_{r})\) is insufficiently flexible. Furthermore, we will have to resort to a finite-sample approximation to Equation (1) (which we detail momentarily), and in this setting the Anderson-Darling discrepancy \(d^{\text{AD}}\) places more emphasis on matching the tails of two CDFs under consideration, but is more challenging to accurately compute.
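The sketch below illustrates one way such a finite-sample approximation could be computed (it is not necessarily the scheme detailed later in the paper): it estimates the covariate-independent discrepancy \(\tilde{D}(\lambda)\) under \(d^{\text{CvM}}\) by drawing from the target \(\text{T}(Y)\) and approximating \(\text{P}(Y\mid\lambda)\) with the empirical CDF of prior predictive draws. The one-parameter normal toy model, its hyperparameterisation, and the chosen target are illustrative assumptions and do not reflect the pbbo implementation.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def prior_predictive_draws(lam, n_draws, rng):
    # Toy model: theta ~ Normal(lam[0], exp(lam[1])), Y | theta ~ Normal(theta, 1),
    # so lam collects the hyperparameters we wish to choose.
    theta = rng.normal(lam[0], np.exp(lam[1]), size=n_draws)
    return rng.normal(theta, 1.0)

def cvm_discrepancy(lam, target, n_draws=5_000, rng=rng):
    # Monte Carlo estimate of Equation (1) with d^CvM: average the squared
    # difference between the empirical prior predictive CDF and the target
    # CDF over draws from the target.
    y_target = target.rvs(size=n_draws, random_state=rng)
    y_pred = np.sort(prior_predictive_draws(lam, n_draws, rng))
    ecdf_at_target = np.searchsorted(y_pred, y_target, side="right") / n_draws
    return np.mean((ecdf_at_target - target.cdf(y_target)) ** 2)

target = stats.norm(loc=3.0, scale=2.0)  # illustrative target T(Y)
print(cvm_discrepancy(np.array([0.0, 0.0]), target))                    # mismatched lambda
print(cvm_discrepancy(np.array([3.0, np.log(np.sqrt(3.0))]), target))   # well-matched lambda
```

For the second value of \(\lambda\) the toy prior predictive distribution matches the target exactly, so the estimated discrepancy should be close to zero; minimising such an estimate over \(\lambda\) is the primary objective.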
### Regularising estimates of \(\lambda^{*}\) by promoting the marginal standard deviation (secondary objective)
The optimisation problem of minimising Equation (1) is often underspecified. Specifically, there are many optimal values \(\lambda^{*}\) that yield values of \(\tilde{D}(\lambda^{*}\mid\mathbf{X})\) that are practically indistinguishable (noted by da Silva _et al._, 2019), and yet the prior distributions \(\text{P}(\theta\mid\lambda^{*},\mathbf{X})\) and the corresponding marginals for components of \(\theta\) can differ immensely. In terms of our desiderata, there are many equally faithful priors (which immediately implies a lack of uniqueness), thus we have an optimisation problem with solutions that are difficult to replicate due to nonuniqueness. This is not surprising because we are providing information only on \(Y\), which is typically of lower dimension than \(\theta\). To address this underspecification we seek to encode the following principle into our methodology: given two estimates of \(\lambda^{*}\) which have equivalent values of \(\tilde{D}(\lambda^{*}\mid\mathbf{X})\), we prefer the one with the larger variance for \(\text{P}(\theta\mid\lambda^{*},\mathbf{X})\). This preference induces less of a restriction on the possible values of \(\theta\) in the posterior.
We make use of this principle by adopting a multi-objective approach to prior construction and, therefore, now derive a suitable mathematical quantity measuring the variability of \(\theta\). There are numerous suitable functions measuring such variability, and our methodology is agnostic to the particular functional form. Most generally, we define the secondary objective \(\tilde{N}(\lambda\mid\mathbf{X})\) as comprising any such suitable function \(n(\theta)\) with
\[\tilde{N}(\lambda\mid\mathbf{X})=\int n(\theta)\,\text{dP}(\theta\mid\lambda,\bm {X}). \tag{3}\]
In this paper we consider only one form for \(n(\theta)\), and so hereafter the second objective, which we also seek to minimise, is always
\[\tilde{N}(\lambda\mid\mathbf{X})=-\frac{1}{Q}\sum_{q=1}^{Q}\log\left(\text{SD}_{ \text{P}(\theta_{q}|\lambda,\mathbf{X})}\left[\theta_{q}\right]\right), \tag{4}\]
where \(\text{SD}_{\text{P}(Z)}[Z]\) is the standard deviation of \(Z\) under distribution \(\text{P}(Z)\). This quantity is the mean of the marginal log standard deviations of each of the \(Q\) components of \(\theta\in\Theta\subseteq\mathbb{R}^{Q}\), which we negate so as to promote marginal variability when performing minimisation. We work with the standard deviation (instead of the variance) and take the logarithm thereof to minimise the contribution of any particularly extreme marginal (i.e. a marginal with relatively low or high variance). Equations (3) and (4) make explicit the dependence on \(\text{P}(\theta\mid\lambda,\mathbf{X})\), and thus \(\lambda\), for clarity. We often have analytic expressions for \(\text{SD}_{\text{P}(\theta|\lambda,\mathbf{X})}[\theta_{q}]\),\({}^{2}\) but precise estimates are also simple to obtain using Monte Carlo.
Footnote 2: Note that Equation (4) assumes \(\text{SD}_{\text{P}(\theta_{q}|\lambda,\mathbf{X})}[\theta_{q}]\) exists, and is nonzero and finite for all \(q\) and \(\lambda\in\Lambda\). Should this not be true, for example if one of the marginals of \(\text{P}(\theta_{q}\mid\lambda,\mathbf{X})\) is a Cauchy distribution, we can instead employ alternative, robust estimators of scale (Kravchuk and Pollett, 2012; Rousseeuw and Croux, 1993).
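A minimal Monte Carlo sketch of Equation (4) follows; `prior_sampler` is a hypothetical function returning an \(S\times Q\) matrix of draws from \(\text{P}(\theta\mid\lambda,\mathbf{X})\) and is not part of the pbbo interface.

```r
# Secondary objective: negated mean of the marginal log standard deviations.
secondary_objective <- function(lambda, prior_sampler, n_draws = 1e4) {
  theta_draws <- prior_sampler(lambda, n_draws)  # n_draws x Q matrix of prior draws
  marginal_sds <- apply(theta_draws, 2, sd)      # one standard deviation per component
  -mean(log(marginal_sds))
}

# Toy example: theta_1 ~ N(0, lambda_1^2) and theta_2 ~ LogNormal(0, lambda_2^2).
toy_sampler <- function(lambda, n_draws) {
  cbind(rnorm(n_draws, 0, lambda[1]), rlnorm(n_draws, 0, lambda[2]))
}
secondary_objective(c(2, 0.5), toy_sampler)
```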
### Post optimisation decision step
We jointly minimise Equations (1) and (4) using a multi-objective optimisation algorithm, which we will cover in detail momentarily. By adopting a multiple objective approach to the translation problem, we obtain a set of possible \(\lambda\) values which comprise the Pareto frontier \(\mathcal{P}=\{\lambda_{l}\}_{l=1}^{|\mathcal{P}|}\) (for an introduction to multi-objective optimisation problems see Chapter 2 of Deb, 2001). For each \(\lambda\) in \(\mathcal{P}\) we compute the loss
\[\tilde{L}(\lambda)=\log(\tilde{D}(\lambda\mid\mathbf{X}))+\kappa\,\tilde{N}( \lambda\mid\mathbf{X}), \tag{5}\]
where the value of \(\kappa>0\) expresses our relative belief in the importance of the secondary objective. We take the log of \(\tilde{D}(\lambda\mid\mathbf{X})\) to aid in selecting an appropriate \(\kappa\), given our definition of \(\tilde{N}(\lambda\mid\mathbf{X})\) in Equation (4), but stress that this is not necessary should there be a more appropriate scale on which to define \(\tilde{L}(\lambda)\). The optimal value is then chosen as \(\lambda^{*}:=\arg\min_{\lambda\in\mathcal{P}}\tilde{L}(\lambda)\). This optimum is clearly sensitive to the choice of \(\kappa\), but it is computationally inexpensive to test many values of \(\kappa\) (the set of which we denote with \(\mathcal{K}\)), and plots of the Pareto frontier coloured by loss greatly aid our decision about the appropriate choice of \(\kappa\).
Advantages of multi-objective optimisation are most immediately apparent when the scales of our objectives differ markedly. Consider the equivalent linearised approach, where we select \(\kappa\) _before_ optimisation and directly optimise \(\tilde{L}(\lambda\mid\mathbf{X})\). It is generally not possible to know the range of the values of \(\tilde{D}(\lambda\mid\mathbf{X})\) and \(\tilde{N}(\lambda\mid\mathbf{X})\) before optimisation. Selecting an appropriate \(\kappa\) without this knowledge is prohibitively difficult, leaving only the computationally expensive trial-and-error approach, where we re-run the optimiser for each new possible value of \(\kappa\), as a plausible strategy for choosing \(\kappa\). In contrast, given \(\mathcal{P}\) it is computationally trivial to recompute \(\lambda^{*}\) for many possible values of \(\kappa\) _after_ optimisation (e.g. each panel of Figure 2 is trivial to compute). We can thus select \(\kappa\) in a problem-specific manner for practically no additional computational cost beyond that of the multi-objective optimiser. Note that the multi-objective optimisation approach is more expensive than the linearised approach, but this additional cost is dwarfed by the number of re-runs of the latter typically required to select \(\kappa\).
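As an illustration of how cheap this post-optimisation sweep is, the following R sketch recomputes the loss in Equation (5) over a grid of \(\kappa\) values, assuming the Pareto frontier is stored as a data frame with illustrative columns `log_d` and `n` holding \(\log(D(\lambda\mid\mathbf{X}))\) and \(N(\lambda\mid\mathbf{X})\).

```r
# Select the minimum-loss point on an already-computed Pareto frontier.
choose_lambda <- function(frontier, kappa) {
  loss <- frontier$log_d + kappa * frontier$n
  frontier[which.min(loss), , drop = FALSE]
}

# Toy frontier: each row is one candidate lambda on the frontier.
frontier <- data.frame(
  lambda_id = 1:5,
  log_d = c(-6.1, -5.9, -5.4, -4.8, -4.0),
  n = c(2.0, 1.4, 0.9, 0.5, 0.2)
)

# Sweeping over many kappa values costs essentially nothing once the frontier exists.
kappa_grid <- seq(0.05, 0.5, by = 0.05)
lapply(kappa_grid, function(k) choose_lambda(frontier, k))
```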
### Optimisation strategy
With the mathematical framework defined, we now turn to discuss the many practicalities of optimisation. We use a two-stage global optimisation process to construct a prior with the desiderata listed in Section 2.1. The first stage focuses entirely on faithfulness by minimising only \(\tilde{D}(\lambda\mid\mathbf{X})\), using the variant of controlled random search 2 (CRS2, Price, 1983) proposed by Kaelo and Ali (2006). Stage one output is then used to initialise stage two, which additionally focuses on uniqueness and replicability by employing multi-objective Bayesian optimisation (Frazier, 2018; Zaefferer _et al._, 2012) to jointly minimise \(\tilde{D}(\lambda\mid\mathbf{X})\) and \(\tilde{N}(\lambda\mid\mathbf{X})\). We focus on faithfulness, and thus \(D(\lambda\mid\mathbf{X})\), as a separate stage because minimising \(D(\lambda\mid\mathbf{X})\) is considerably more challenging than minimising \(N(\lambda\mid\mathbf{X})\). By initialising stage two with faithful estimates for \(\lambda\) we can, in the second stage, spend more computational resources on finding points along the Pareto frontier. The resulting optimal prior should suitably encode the information in the target predictive distribution, without being overly confident for model parameters. An idealised form of this process is illustrated in Figure 1.
Note that almost all global optimisation methods require, in the absence of other constraints, \(\Lambda\) to be a compact subset of \(\mathbb{R}^{L}\). We require that the practitioner specify the upper/lower limits for each dimension of \(\Lambda\).
#### 2.6.1 Evaluating the objectives
Optimisation cannot proceed until we have practical means to evaluate \(\tilde{D}(\lambda\mid\mathbf{X})\) and \(\tilde{N}(\lambda\mid\mathbf{X})\). As noted previously, evaluating \(\tilde{N}(\lambda\mid\mathbf{X})\) for models where analytic results or simple Monte Carlo estimates are available is straightforward, and we denote the corresponding estimate (or, if available, the analytic form) of \(\tilde{N}(\lambda\mid\mathbf{X})\) with \(N(\lambda\mid\mathbf{X})\). However, there are two immediate challenges to evaluating \(\tilde{D}(\lambda\mid\mathbf{X})\):
Figure 1: An idealised depiction of the methodology we introduce in this paper when the corresponding densities are available. The starting point, or "stage zero" (S0), depicts the elicited target prior predictive distribution for the observable quantity \(Y\), denoted by its density \(\text{t}(Y)\) (red line). Stage one starts (S1 – start) from an initial prior (blue line), which does not match the target and is uninformative for the model parameter \(\theta\). Optimisation proceeds by using controlled random search to minimise the predictive discrepancy (grey shaded area) and produces optimal hyperparameters \(\lambda^{*}\). Stage one produces (S1 – end) a very faithful prior predictive distribution, but is overly confident for the model parameter. Stage two (S2) uses multi-objective Bayesian optimisation and corrects this overconfidence for \(\theta\) with only a small increase in predictive discrepancy.
1. the prior predictive CDF \(\text{P}(Y\mid\lambda,\mathbf{X})\) is often analytically unavailable;
2. the integral in Equation (1) is almost always intractable.
We address the former with a Monte Carlo based empirical CDF (ECDF), and the latter with importance sampling. Specifically, given a particular value of \(\lambda\) and \(X_{r}\), we draw \(S_{r}\) samples \(\mathbf{y}_{r}^{(\text{P})}=(y_{s,r})_{s=1}^{S_{r}}\) with \(\mathbf{y}_{r}^{(\text{P})}\sim\text{P}(Y\mid\lambda,X_{r})\) to form the ECDF \(\hat{\text{P}}(Y\mid\lambda,X_{r},\mathbf{y}_{r}^{(\text{P})})\). To apply importance sampling we rewrite the integral in Equation (1) with respect to importance distribution \(\text{Q}(Y\mid X_{r})\) and importance density \(\text{q}(Y\mid X_{r})\), such that
\[\tilde{D}(\lambda\mid\mathbf{X})=\frac{1}{R}\sum_{r=1}^{R}\int d(\text{P}(Y\mid \lambda,X_{r}),\text{T}(Y\mid X_{r}))\frac{\text{t}(Y\mid X_{r})}{\text{q}(Y \mid X_{r})}\text{d}\text{Q}(Y\mid X_{r}). \tag{6}\]
When \(Y\) is discrete or of mixed type, \(\text{q}(Y\mid X_{r})\) is instead a probability mass function or an appropriate mixture of discrete and continuous densities. Supposing we draw \(I_{r}\) importance samples \((y_{i,r})_{i=1}^{I_{r}}\sim\text{Q}(Y\mid X_{r})\), we denote the importance sampling approximation to \(\tilde{D}(\lambda\mid\mathbf{X})\) with \(D(\lambda\mid\mathbf{X})\) such that
\[D(\lambda\mid\mathbf{X})=\frac{1}{R}\sum_{r=1}^{R}\frac{1}{I_{r}}\sum_{i=1}^{I_{r} }d(\text{P}(y_{i,r}\mid\lambda,X_{r}),\text{T}(y_{i,r}\mid X_{r}))\frac{\text {t}(y_{i,r}\mid X_{r})}{\text{q}(y_{i,r}\mid X_{r})}. \tag{7}\]
Note that we write \(\text{Q}(Y\mid X_{r})\), and thus \(\text{q}(Y\mid X_{r})\), to make clear that the importance distribution could be covariate-specific, but in straightforward settings a common \(\text{Q}(Y)\) for all \(R\) covariate values will be appropriate.
We select \(\text{Q}(Y\mid X_{r})\) using information about the support \(\mathcal{Y}\), and samples from \(\text{P}(Y\mid\lambda,X_{r})\) and \(\text{T}(Y\mid X_{r})\). For more details see Appendix A. Finally, for numerical stability we evaluate \(D(\lambda\mid\mathbf{X})\) on the log scale, with details available in Appendix B. This process is summarised in Algorithm 1.
```
Require: Targets \(\text{T}(Y\mid X_{r})\) for \(r=1,\ldots,R\); samplers for generating points from \(\text{T}(Y\mid X_{r})\) and \(\text{P}(Y\mid\lambda,X_{r})\); discrepancy \(d(\cdot,\cdot)\); number of samples to draw \(S_{r}\); number of importance samples \(I_{r}\); observable support \(\mathcal{Y}\)
 1: function Evaluate \(\log(D(\lambda\mid\mathbf{X}))\)
 2:   for \(r\) in \(1\ldots R\) do
 3:     Sample prior predictive \(\mathbf{y}_{r}^{(\text{P})}=(y_{s,r}^{(\text{P})})_{s=1}^{S_{r}}\sim\text{P}(Y\mid\lambda,X_{r})\)
 4:     Use \(\mathbf{y}_{r}^{(\text{P})}\) to form the ECDF \(\hat{\text{P}}(Y\mid\lambda,X_{r},\mathbf{y}_{r}^{(\text{P})})\)
 5:     Sample target \(\mathbf{y}_{r}^{(\text{T})}=(y_{s,r}^{(\text{T})})_{s=1}^{S_{r}}\sim\text{T}(Y\mid X_{r})\)
 6:     Choose importance distribution \(\text{Q}(Y\mid X_{r})\) via Appendix A
 7:     Sample importance points \((y_{i,r})_{i=1}^{I_{r}}\sim\text{Q}(Y\mid X_{r})\)
 8:   end for
 9:   Compute \(\log(D(\lambda\mid\mathbf{X}))\) using Equations (20) to (23) in Appendix B
10:   return the value of \(\log(D(\lambda\mid\mathbf{X}))\)
11: end function
```
**Algorithm 1** Importance sampling and ECDF approximation of \(D(\lambda\mid\mathbf{X})\)
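The following R sketch mirrors Algorithm 1 for a single covariate value (\(R=1\)), on the natural rather than log scale; the sampler and target functions passed in, and the uniform importance distribution, are illustrative stand-ins rather than pbbo's actual implementation (which follows Appendices A and B).

```r
# Importance-sampling and ECDF approximation of the predictive discrepancy.
approx_discrepancy <- function(lambda, prior_pred_sampler, target_cdf, target_density,
                               d_fun, S = 5e4, I = 5e3, q_lower = 0, q_upper = 1) {
  # ECDF of the prior predictive distribution at the candidate lambda.
  y_prior <- prior_pred_sampler(lambda, S)
  p_hat <- ecdf(y_prior)

  # Importance samples from a Uniform(q_lower, q_upper) importance distribution.
  y_imp <- runif(I, q_lower, q_upper)
  q_dens <- dunif(y_imp, q_lower, q_upper)

  # Importance-sampling estimate of Equation (7).
  mean(d_fun(p_hat(y_imp), target_cdf(y_imp)) * target_density(y_imp) / q_dens)
}

# Toy usage: a N(lambda, 10^2) prior predictive against a gamma target, with a
# bounded importance distribution covering essentially all of the target's mass.
set.seed(1)
approx_discrepancy(
  lambda = 100,
  prior_pred_sampler = function(lambda, S) rnorm(S, lambda, 10),
  target_cdf = function(y) pgamma(y, shape = 45.49, rate = 0.44),
  target_density = function(y) dgamma(y, shape = 45.49, rate = 0.44),
  d_fun = function(p, t) (p - t)^2,
  q_lower = 50, q_upper = 200
)
```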
#### 2.6.2 Optimisation, stage 1
In this stage we focus solely on faithfulness by minimising \(D(\lambda\mid\mathbf{X})\). We do so using CRS2 (Price, 1983) with local mutation (Kaelo and Ali, 2006), which we run for \(N_{\text{CRS2}}\) iterations. We make use of the final optimum value \(\lambda^{*}\), as well as each of the \(N_{\text{CRS2}}\) trial points, to obtain a design \(\mathcal{D}\) for the next stage. The design comprises values of \(\lambda\) and their corresponding values of \(\log(D(\lambda\mid\mathbf{X}))\). A (small) number of padding points \(N_{\text{pad}}\) are added to \(\mathcal{D}\) for numerical robustness in stage 2. The result is the design \(\mathcal{D}=\{\lambda_{i},\log(D(\lambda_{i}\mid\mathbf{X}))\}_{i=1}^{N_{\text{CRS2}}+N_{\text{pad}}}\), whose construction is detailed in Algorithm 3 in Appendix C.
Whilst CRS2 was not designed to minimise noisy functions, it appears empirically robust to small quantities
of noise. We can make the noise in \(D(\lambda\mid\mathbf{X})\) arbitrarily small, but doing so usually incurs an enormous computational cost. Carefully balancing the noise in the objective, and thus quality of the stage one solution, against the cost of evaluation yields a faithful optimum \(\lambda^{*}\) and useful design \(\mathcal{D}\) in an acceptable amount of time.
#### 2.6.3 Optimisation, stage 2
Stage two focuses on uniqueness and replicability in addition to faithfulness. We adjust our optimisation technique to effect this change in emphasis, and use multi-objective Bayesian optimisation, via MSPOT (Zaefferer _et al._, 2012), to jointly minimise \(D(\lambda\mid\mathbf{X})\) and \(N(\lambda\mid\mathbf{X})\). MSPOT uses a separate Gaussian process (GP) approximation to each of the objectives, and evaluates these approximations at many points from Latin hypercube designs (Stein, 1987). In each of the \(N_{\text{BO}}\) iterations, the best points under the current GP approximations are evaluated using the actual objectives. These evaluations accumulate and thus iteratively improve the GP approximations. After \(N_{\text{BO}}\) iterations, the evaluated points are reduced to their Pareto frontier (Kung _et al._, 1975), which we use in Equation (5). Algorithm 4 in Appendix C describes in detail the MSPOT algorithm as applied to two objectives.
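The reduction of evaluated points to a Pareto frontier can be sketched in a few lines of R; the object names are illustrative and the actual implementation uses the algorithm of Kung _et al._ (1975).

```r
# Indices of the non-dominated points for two objectives, both minimised.
pareto_frontier <- function(obj1, obj2) {
  n <- length(obj1)
  dominated <- vapply(seq_len(n), function(i) {
    any(obj1 <= obj1[i] & obj2 <= obj2[i] & (obj1 < obj1[i] | obj2 < obj2[i]))
  }, logical(1))
  which(!dominated)
}

# Example: point 4 is dominated by point 1, so the frontier is points 1, 2, and 3.
pareto_frontier(obj1 = c(1.0, 2.0, 0.5, 1.5), obj2 = c(0.2, 0.1, 0.9, 0.3))
```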
The noisy and computationally expensive nature of our objectives, particularly \(D(\lambda\mid\mathbf{X})\), necessitates an approach such as MSPOT. Employing approximate GP models for the objectives allows us to screen potential values of \(\lambda\in\Lambda\) inexpensively, and avoid evaluating the actual objectives at values of \(\lambda\) far from optimal. Moreover, the GP is a flexible yet data efficient model to use as an approximation and can, through appropriate choice of kernel, capture correlation or other complex relationships between components of \(\lambda\) and the objective.
Stage two also adopts an optional batching technique because we encounter computational limits for large values of \(N_{\text{BO}}\). This is due to the computational cost of evaluating the GP growing cubically in the number of points in its construction. Batching partially removes this limitation, and can produce equivalent or better priors in less time than a single batch of many iterations. It achieves this by subsampling to remove similar, or otherwise uninformative, points from the collection used to form the surrogate GP model. Specifically, we run MSPOT for a fixed number of iterations, then select a subsample of the points evaluated in the completed batch as initialisation points for the following batch. The exact subsampling strategy is detailed in Algorithm 5 in Appendix C.
### Benchmarking and other empirical considerations
We will at times compare results from this multi-objective optimisation approach with the "single objective" approach, which is identical except that stage 2 of the optimisation also considers only \(D(\lambda\mid\mathbf{X})\). This single objective approach inherently discards our uniqueness and replicability desiderata, but is occasionally an informative benchmark.
Finally, it should be noted that global optimisation methods lack guarantees of finding the global optimum in finite time, so we cannot generalise the performance of this optimisation process to all problems. We do, however, empirically validate the performance of this strategy in the examples we consider, and note that the optimisation process has a number of tuning parameters that control robustness and/or speed which may prove useful in other settings.
### Summary
Our method for specifying a prior given predictive information requires:
1. a method for sampling \(\text{P}(Y\mid\lambda,\mathbf{X})\);
2. upper and lower limits that render \(\Lambda\) a compact subset of \(\mathbb{R}^{L}\);
3. a target CDF T\((Y\mid\mathbf{X})\) and corresponding PDF t\((Y\mid\mathbf{X})\) which, for numerical stability, must both be implemented on the log-scale;
4. a method for generating samples from T\((Y\mid\mathbf{X})\), as the importance sampler depends on information contained in such samples;
5. a choice of \(\kappa\) (we will present a diagnostic plot, see e.g. Figure 2, which assists in making this decision).
Algorithm 2 describes, in pseudocode, our proposed methodology.
```
Require: \(\log(D(\lambda\mid\mathbf{X}))\) (evaluable using Algorithm 1); secondary objective \(N(\lambda\mid\mathbf{X})\); \(\kappa\); number of Bayesian optimisation iterations \(N_{\text{BO}}\); number of batches \(N_{\text{batch}}\); number of CRS2 iterations \(N_{\text{CRS2}}\); number of importance samples per-covariate \(I_{r}\); number of prior predictive samples per-covariate \(S_{r}\).
 1: function fpsbo(\(\kappa,N_{\text{BO}},N_{\text{batch}}\))
 2:   Minimising \(\log(D(\lambda\mid\mathbf{X}))\) alone, compute the initial design \(\mathcal{D}\) using CRS2 via Algorithm 3
 3:   for \(b\) in \(1\ldots N_{\text{batch}}\) do
 4:     Jointly minimising \(\log(D(\lambda\mid\mathbf{X}))\) and \(N(\lambda\mid\mathbf{X})\), compute the \(b^{\text{th}}\) Pareto frontier \(\mathcal{P}_{b}\) and complete design \(\mathcal{D}_{b}\) using Algorithm 4, initialising with design \(\mathcal{D}\)
 5:     Update design \(\mathcal{D}\) using \(\mathcal{P}_{b}\) and \(\mathcal{D}_{b}\) via Algorithm 5
 6:   end for
 7:   With the final Pareto frontier \(\mathcal{P}_{N_{\text{batch}}}\), compute \(\lambda^{*}=\arg\min_{\lambda\in\mathcal{P}_{N_{\text{batch}}}}L(\lambda)\), where \(L(\lambda)=\log(D(\lambda\mid\mathbf{X}))+\kappa\,N(\lambda\mid\mathbf{X})\)
 8:   return \(\lambda^{*}\)
 9: end function
```
**Algorithm 2** Methodology to translate prior predictive information into a prior for the parameters in a complex model
### The pbbo R package
We implement our methodology in an R package (R Core Team, 2022) called pbbo, available from [https://github.com/hhau/pbbo](https://github.com/hhau/pbbo). pbbo\({}^{3}\) builds on top of mlrMBO (Bischl _et al._, 2018) for multi-objective Bayesian optimisation, nlopt and nloptr (Ypma _et al._, 2022; Johnson, 2014) for global optimisation using CRS2 (Kaelo and Ali, 2006), and other packages for internal functionality and logging (Wickham _et al._, 2019; Rowe, 2016; Maechler _et al._, 2021). The code implementing the examples we consider in Sections 3 to 5, which further illustrate pbbo, can be found at [https://gitlab.com/andrew-manderson/pboo-paper](https://gitlab.com/andrew-manderson/pboo-paper).
Footnote 3: The release associated with this paper is available at [https://doi.org/10.5281/zenodo.7736707](https://doi.org/10.5281/zenodo.7736707).
## 3 A human-aware prior for a human growth model
We now consider a nonlinear regression model for human growth. There are a number of properties of this example that make it an interesting test for our methodology. First, we find it difficult to specify priors congruent with desired prior predictive distributions for such models; both the nonlinearity and the need to condition on specific values of the regressor complicate prior specification. Data for human growth are also readily available, so we can assess the impact of the prior on many distinct data and posteriors. Second, the model we consider is also poorly behaved under the flat prior, so some prior information is required to stabilise and/or regularise the estimate of the posterior. Finally, this example is also considered by Hartmann _et al._ (2020), and so there is a suitable comparator for our results.
Suppose an individual has their height measured at age \(t_{m}\) (in years) for \(m=1,\ldots,M\), with corresponding measurement \(y_{m}\) (in centimetres). The first Preece-Baines model (Preece and Baines, 1978) for human height
is,
\[y_{m} =h(t_{m};\theta)+\varepsilon_{m} \tag{8}\] \[=h_{1}-\frac{2(h_{1}-h_{0})}{\exp\{s_{0}(t_{m}-\gamma)\}+\exp\{s_{1 }(t_{m}-\gamma)\}}+\varepsilon_{m}, \tag{9}\]
with \(\varepsilon_{m}\sim\mathrm{N}(0,\sigma_{y}^{2})\). Some constraints are required to identify this model and ensure its physical plausibility: specifically, we require \(0<h_{0}<h_{1}\) and \(0<s_{0}<s_{1}\). A parameterisation that respects these constraints and is easier to work with uses \(\delta_{h}=h_{1}-h_{0}\) instead of \(h_{1}\), and \(\delta_{s}=s_{1}-s_{0}\) in place of \(s_{1}\). All of \((h_{0},\delta_{h},s_{0},\delta_{s})\) thus have the same positivity constraint. Finally, we also constrain \(\gamma\) such that \(\gamma\in(\min_{m}(t_{m}),\max_{m}(t_{m}))\). However, these constraints are not sufficient to make the model plausible for all permissible parameter values - the denominator of the fraction can be very small, yielding negative heights.
To align with the notation introduced in Section 2 we denote the parameters by \(\theta=(h_{0},\delta_{h},s_{0},\delta_{s},\gamma)\). As in Hartmann _et al._ (2020), we choose for each of the \(q=1,\ldots,5\) elements of \(\theta\) an independent \(\mathrm{LogNormal}(\mu_{q},s_{q}^{2})\) prior. We will seek to identify the optimal values of \(\lambda=\left(\mu_{q},s_{q}^{2}\right)_{q=1}^{5}\). Table 2 in Appendix 2 lists the upper and lower limits we choose for each component of \(\lambda\).
We do not consider the measurement error variance \(\sigma_{y}^{2}\) as part of \(\theta\). Doing so introduces a degenerate solution for \(\lambda\), where all variability in \(Y\) is singularly attributable to \(\varepsilon_{m}\) and thus \(\sigma_{y}^{2}\). Such a prior seems undesirable, so instead we fix the prior for \(\sigma_{y}^{2}\) to reflect the measurement process for human height; measurement errors are unlikely to be more than one or two centimetres, so values of \(\sigma_{y}^{2}\approx 1\) seem reasonable. Thus we set \(\sigma_{y}\sim\mathrm{LogNormal}(0,0.2^{2})\). More generally, in models with additive forms such as Equation (8) it is challenging to avoid attributing all the variability in \(Y\) to the noise term (if the prior for the noise is to be specified) and so it will generally be necessary to fix a prior for \(\sigma_{y}^{2}\) using knowledge of the measurement process.
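To make the prior predictive structure concrete, the following R sketch simulates heights implied by a candidate \(\lambda\); the layout of \(\lambda\) (five lognormal locations followed by five lognormal scales) and the example values are assumptions made purely for illustration.

```r
# Preece-Baines model 1, Equation (9), in the (h0, delta_h, s0, delta_s, gamma)
# parameterisation described in the text.
preece_baines <- function(t, h0, delta_h, s0, delta_s, gamma) {
  h1 <- h0 + delta_h
  s1 <- s0 + delta_s
  h1 - 2 * delta_h / (exp(s0 * (t - gamma)) + exp(s1 * (t - gamma)))
}

# Prior predictive heights at age t under independent lognormal priors on theta,
# with the fixed measurement-error prior sigma_y ~ LogNormal(0, 0.2^2).
prior_predictive_heights <- function(lambda, t, n_draws = 1e4) {
  mu <- lambda[1:5]
  sigma <- lambda[6:10]
  theta <- sapply(1:5, function(q) rlnorm(n_draws, mu[q], sigma[q]))  # n_draws x 5
  h <- preece_baines(t, theta[, 1], theta[, 2], theta[, 3], theta[, 4], theta[, 5])
  h + rnorm(n_draws, 0, rlnorm(n_draws, 0, 0.2))
}

# Example at age 8 for an arbitrary, illustrative lambda.
set.seed(2)
lambda_example <- c(log(c(160, 12, 0.1, 1.1, 13)), rep(0.1, 5))
summary(prior_predictive_heights(lambda_example, t = 8))
```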
### What information are we supplying?
Our data are assumed to originate from a sample of adolescent humans, uniformly distributed between ages 2 and 18, and evenly split between sexes. We consider supplying two types of target prior predictive distribution. The first is a _covariate-independent_ prior predictive density \(\mathrm{t}(Y)\), and corresponding CDF \(\mathrm{T}(Y)\), for human heights across the entire age range, derived by summarising external data. This target (Figure 3) is a mixture of 3 gamma densities specified to approximate the external data, which is multimodal due to the fact that humans grow in spurts. We also consider a _covariate-specific_ target, in which we specify the predictive distribution \(\mathrm{T}(Y\mid X_{r})\) of human heights at ages \(X_{r}\in(2,8,13,18)\) with \(r=1,\ldots,4\). Each \(\mathrm{T}(Y\mid X_{r})\) is normal (Figure 4).
Specifically, denote with \(\mathrm{Gamma}(Y;\alpha,\beta)\) the CDF of the gamma distribution with shape parameter \(\alpha\) and rate \(\beta\); and \(\mathrm{Normal}(Y;\xi,\omega^{2})\) the CDF of the normal distribution with mean \(\xi\) and standard deviation \(\omega\). We define the covariate-independent target
\[\mathrm{T}(Y)=0.38\,\mathrm{Gamma}(Y;45.49,0.44)+0.36\,\mathrm{Gamma}(Y;115.41,0.81)+0.27\,\mathrm{Gamma}(Y;277.51,1.64), \tag{10}\]
and the covariate-specific target
\[\begin{split}\mathrm{T}(Y\mid X_{1}=2)&=\mathrm{Normal}(Y;88,3.5^{2}),\quad\mathrm{T}(Y\mid X_{2}=8)=\mathrm{Normal}(Y;130,5.5^{2}),\\ \mathrm{T}(Y\mid X_{3}=13)&=\mathrm{Normal}(Y;160,8^{2}),\quad\mathrm{T}(Y\mid X_{4}=18)=\mathrm{Normal}(Y;172,9.5^{2}).\end{split} \tag{11}\]
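Written as plain R functions, the targets in Equations (10) and (11) look as follows; pbbo expects log-scale implementations, but the natural scale is shown here for readability.

```r
# Covariate-independent target: the three-component gamma mixture of Equation (10).
target_cdf_marginal <- function(y) {
  0.38 * pgamma(y, shape = 45.49, rate = 0.44) +
    0.36 * pgamma(y, shape = 115.41, rate = 0.81) +
    0.27 * pgamma(y, shape = 277.51, rate = 1.64)
}

target_density_marginal <- function(y) {
  0.38 * dgamma(y, shape = 45.49, rate = 0.44) +
    0.36 * dgamma(y, shape = 115.41, rate = 0.81) +
    0.27 * dgamma(y, shape = 277.51, rate = 1.64)
}

# Covariate-specific targets: one normal CDF for each elicitation age in Equation (11).
target_cdf_by_age <- list(
  `2`  = function(y) pnorm(y, mean = 88,  sd = 3.5),
  `8`  = function(y) pnorm(y, mean = 130, sd = 5.5),
  `13` = function(y) pnorm(y, mean = 160, sd = 8),
  `18` = function(y) pnorm(y, mean = 172, sd = 9.5)
)

# Example: target CDF values at a height of 150 cm.
c(marginal = target_cdf_marginal(150), age13 = target_cdf_by_age[["13"]](150))
```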
### Example details and tuning parameters
For both the covariate-independent and covariate-specific target densities, we obtain \(\lambda^{*}\) using both the single-objective and multi-objective optimisation processes (see Sections 2.6 and 2.7). We assess replicability using 30 independent runs of each objective/target pair. For each replicate, we run pbbo using \(S=5\times 10^{4}\) samples from p(\(Y\mid\lambda\)) and likewise \(S_{r}=5\times 10^{4}\) samples from p(\(Y\mid\lambda,X_{r}\)) for each of the 4 values of \(X_{r}\). We use \(I=5\times 10^{3}\) and \(I_{r}=5\times 10^{3}\) importance samples, with \(N_{\text{CRS2}}=2000\) CRS2 iterations, \(N_{\text{batch}}=5\) Bayesian optimisation batches each of \(N_{\text{BO}}=250\) iterations, and carry forward \(N_{\text{design}}=50\) points per batch. We use only the Cramer-Von Mises discrepancy in this example, because we were not able to reliably compute the Anderson-Darling discrepancy due to numerical instabilities.
### Results
#### 3.3.1 Choosing \(\kappa\)
The optimal choice of \(\kappa\) is target specific, and so we separately choose an appropriate \(\kappa\in\mathcal{K}=\{0.05,0.1,\ldots,0.5\}\) for the covariate-independent and covariate-specific targets. As a heuristic, we choose the value of \(\kappa\) that yields the minimum variability of \(L(\lambda\mid\mathbf{X})\) for \(\lambda\in\mathcal{P}\) amongst the replicates, though this heuristic is unsuitable if, as we expect to be more commonly the case, only one run of the optimiser is made. In such settings we recommend plotting the Pareto frontiers as in Figure 2 and visually checking that the minimum loss point is not at either extrema of the frontier.
We select \(\kappa=0.2\) for the covariate-specific target given our minimum variability heuristic. The Pareto frontiers and minimum loss points are displayed in Figure 2, though for brevity we display the results only for \(\kappa\in\{0.1,0.2,0.3,0.4\}\). The results for the covariate-independent target are similar (see Appendix D, Figure 16) and there we select \(\kappa=0.15\). Visible in most replicates is an inflection point at values of \(N(\lambda)\approx 1.5\) (the y-axis in Figure 2) around which the minimum loss points cluster. Any of these points likely admits a reasonable value for \(\lambda^{*}\).
Figure 2 displays notable inter-replicate variability, with the Pareto frontier for some replicates being totally dominated by other replicates. This is due to the stochastic properties of the global optimisers we employ.
#### 3.3.2 Final discrepancies and faithfulness
Having selected \(\kappa\) we compute the optimal \(\lambda\) by minimising \(L(\lambda)\). Given \(\lambda^{*}\), we compare t(\(Y\)) against p(\(Y\mid\lambda^{*}\)), and likewise t(\(Y\mid X_{r}\)) against p(\(Y\mid\lambda^{*},X_{r}\)). These comparisons are intended to convince us that the optimal prior p(\(\theta\mid\lambda^{*}\)) indeed encodes the information in the prior predictive target, and is thus faithful to the target.
Figure 3 displays the target and the prior predictive density estimates for the covariate-independent case. Here, we see that introducing the secondary objective (right panel) produces estimates of \(\lambda^{*}\) that are congruent with estimates from the single objective case, though with an additional outlier or two. Both single and multi-objective approaches result in reasonably, but not entirely, faithful densities for p(\(Y\mid\lambda^{*}\)). However, most optimum priors seem to result in individual trajectories attaining their adult height \(h_{1}\) at younger-than-expected ages \(t\) (which we will later confirm in Figure 5), and thus the prior predictive p(\(Y\mid\lambda^{*}\)) accumulates additional probability surrounding \(Y=h_{1}\approx 155\).
For the covariate-specific target, displayed in Figure 4, the secondary objective introduces a number of outlying estimates for p(\(Y\mid\lambda^{*},X_{r}\)) most clearly visible for \(X_{1}=2\) and \(X_{2}=8\). Both the single and multi-objective approaches struggle to match the prior predictive distribution at all ages, with consistently poorer performance for \(X_{1}=2\). This is the youngest age at which the model is intended to be used, and some numerical instabilities are encountered here. It may also not be possible to match all four target prior
predictive distributions simultaneously, and the narrowness of t(\(Y\mid X_{1}=2\)) may make it contribute less to the predictive discrepancy under the Cramer-Von Mises discrepancy.
#### 3.3.3 Comparison with Hartmann _et al._ (2020)
Before continuing with our results, we detail the specifics of Hartmann _et al._ (2020) to contextualise our subsequent comparisons.
Hartmann _et al._ (2020) also consider the problem of prior specification given information about the prior predictive distribution for the model in Equation (9). However, there are key differences between our approaches that must be kept in mind when comparing results. Hartmann _et al._ (2020) elicit 6 predictive quantiles at ages \(t=(0,2.5,10,17.5)\), as opposed to entire predictive distributions at ages \(t=(2,8,13,18)\) which underpin the covariate-specific version of our method. We use different ages because the model of Preece and Baines (1978) is stated to be accurate and robust for ages greater than 2. Hartmann _et al._ (2020) include a noise parameter in their definition of \(\theta\). The exact interpretation of this parameter is complicated by their choice of Weibull likelihood, rendering the distribution of the measurement errors sensitive to the conditional mean of the model (this is still the case despite their choice of Weibull parameterisation). Finally, Hartmann _et al._ (2020) elicit quantiles from 5 different users and report an estimated \(\lambda^{*}\) for each user. These estimates, extracted from the supplementary material to Hartmann _et al._ (2020) and reproduced in Appendix D.2, allow us to compare optimal priors \(\text{p}(\theta\mid\lambda^{*})\) and functions thereof. They do not report whether each user's estimate is consistent over repeated runs of their optimisation algorithm, and do not discuss the issue of estimate replicability.
#### 3.3.4 Prior faithfulness in the conditional mean
Do the priors we estimate produce reasonable and appropriately uncertain data _a priori_? This is also a question of faithfulness, but for all possible values of \(t\). Inspecting the prior predictive for the model without
Figure 2: Pareto frontiers for each \(\kappa\in\mathcal{K}\) for the **covariate-specific** example. The minimum loss point for each replicate is plotted with \(+\). Note also that the loss scales differ between plots.
Figure 4: Covariate-specific target densities t(\(Y\mid X_{r}\)) (red lines) and prior predictive densities p(\(Y\mid\lambda^{*},X_{r}\)) for each of the 30 replicates (blue lines). The replicates in the right column are obtained after an optimum value of \(\kappa=0.2\) is chosen.
Figure 3: The covariate-independent marginal target density t(\(Y\)) (red) and prior predictive densities p(\(Y\mid\lambda^{*}\)) for each of the 30 replicates (blue lines). The replicates in the right panel are obtained after an optimum value \(\kappa=0.15\) is chosen.
noise \(\mathsf{p}(h(t;\theta)\mid\lambda^{*})\), to exclude uncertainty due to the negligible measurement error, in Figure 5 suggests that both the covariate-independent and covariate-specific targets yield plausible typical growth trajectories. However, the covariate-independent priors are significantly more uncertain and as a result are not particularly plausible. This contrasts with the covariate-specific priors, which interpolate between the supplied targets with an acceptable degree of uncertainty. We also see why the covariate-specific target struggles to match all the targets simultaneously, as achieving an appropriate level of uncertainty at age 18 involves being similarly uncertain at age 2. All 5 of the priors from Hartmann _et al._ (2020), even when displayed with narrower (75%) uncertainty intervals, are implausible in both shape and width when viewed on this scale. It also seems unlikely that these priors accurately reflect the information provided by the experts in Hartmann _et al._ (2020), but this information is not reported.
#### 3.3.5 Posterior replicability
Selecting values of \(\lambda^{*}\) with similar minimum loss, as displayed in e.g. Figure 16, is a necessary but not sufficient step in demonstrating the replicability of our estimates for \(\mathsf{p}(\theta\mid\lambda^{*})\). We must also inspect the marginal prior densities \(\mathsf{p}(\theta_{q}\mid\lambda^{*})\). Replicability is also important for the posterior; our prior ideally admits a posterior amenable to sampling (i.e. removes spurious modes and eliminates computational difficulties present when using a noninformative prior). It also seems desirable that similar priors should yield similar posteriors.
With these properties in mind, we compare the priors and posteriors produced by our methodology with the results from Hartmann _et al._ (2020) and, as benchmark, the posteriors produced using a flat, improper prior. For data we consider, separately, each of the 93 individuals in the growth data (Tuddenham and Snyder, 1954) provided by the fda package (Ramsay _et al._, 2022) in R (R Core Team, 2022). This is a form of prior sensitivity analysis, but distinct from the ideas of Roos _et al._ (2015) which consider only one particular realisation of the data. By considering each individual in the growth data independently, as opposed to jointly in a hierarchical model, we heighten the importance of including appropriate prior information. We sample each posterior using Stan (Stan Development Team, 2021), setting adapt_delta = 0.95 and max_treedepth = 12 to minimise false positive warning messages.
Stan has exceedingly robust sampling diagnostics. Should any diagnostic flag an issue with the sampling of the posterior we can be confident that something is amiss with the model. The converse is not immediately true; a lack of warnings does not imply the model is behaving appropriately, but it suggests we continue with further posterior predictive checks (Gabry _et al._, 2019; Gelman _et al._, 2020). Figure 6 displays whether, for a specific posterior, the call to Stan emits a warning message. The flat prior consistently produces posteriors that emit warnings, with some individuals particularly prone to warning messages (i.e. warnings are very correlated within individual columns), suggesting that their data are less informative than those of other individuals. Warnings are correlated within rows for the Hartmann _et al._ priors, indicating that some of the priors are more suitable (replications 1 and 5) for the individuals in the growth data. Amongst our results we note that the covariate-specific approach produces fewer warnings than the covariate-independent approach in both the single- and multi-objective cases. This reflects the additional information available in the covariate-specific setting, and that this information results in improved priors. Warnings are particularly correlated within specific priors (i.e. across rows) for the covariate-independent approach, suggesting that these priors are inappropriate for many individuals. The multi-objective approach (third row of Figure 6) produces a small number of additional warnings above the equivalent single-objective approach (fifth row of Figure 6), illustrating the trade-off between priors that are as informative as possible (single-objective) and those that, by being slightly less informative (multi-objective, and thus better for more individuals), perform worse in settings where additional information is required.
Figure 5: **Prior predictive for the model without noise \(\text{p}(h(t;\theta)\mid\lambda^{*})\) for each replicate/user from Hartmann et al. (top right panel), the covariate-independent target (middle row) and covariate-specific target (bottom row) for the single objective and multi-objective settings (left column and right column respectively). Note that this quantity does not include measurement error. Solid lines depict the mean, with the grey regions representing the 95% prior predictive intervals, _except_ for the Hartmann panel, where the intervals are only 75% wide for visualisation purposes. The y-axis is truncated to \((70,200)\) and uncertainty intervals are also truncated to this range. The red lines in the covariate row correspond to our supplied t(\(Y\mid X_{r}\)) densities, and represent the same information as in Figure 4.**
Figure 6: Presence/absence of Stan warnings for all individuals (columns) in the FDA package growth data and replicate prior estimates (rows). Each replicate corresponds to a run of the optimisation process and thus a different prior, except the flat, improper prior which is identical for each replicate.
**Posteriors for an individual whose data are uninformative.** The visual disparity between priors, displayed in Figure 5, is still visible, but reduced, when considering the posterior conditional mean \(\text{p}(h(t;\theta)\mid Y_{n},\lambda^{*})\) for individual \(n=26\) who, along with individuals \(n=21,27\), is the most warning-prone individual under the flat prior. We select this individual because we are most interested in settings where additional regularisation by the prior is necessary for stable and plausible posteriors. Figure 7 displays the aforementioned posterior conditional means and uncertainty intervals. We observe that the data for individual \(26\) are unlikely to be fit well by this model, as the trajectory lacks an obvious upper asymptote and the mid-trajectory inflection that is more typical of human growth, which the model is designed to capture. The warnings are thus indicative of a lack of model flexibility. However, we can improve the fit by adding information via the prior, as the covariate-independent and covariate-specific posteriors, particularly the multi-objective cases, seem to improve the fit for the average posterior. The Hartmann _et al._ posteriors are more plausible than their corresponding prior, but struggle to capture the _lack_ of growth spurts in this individual.
We reiterate here that this example is intentionally difficult, and would be challenging for any prior specification methodology. There is only one individual's data, and these data are not equally informative for all parameters. Thus the posterior is sensitive to the specific prior information included in the model.
#### 3.3.6 Connecting replicability in the prior with the posterior
Does the relative similarity in prior and posterior conditional means translate into similarity, and thus replicability, in the distributions of \(\theta\) between replicates? We address this question by inspecting the prior and posterior marginal densities of \(\theta\), again for individual \(n=26\). The priors and posteriors for all parameters are displayed in Appendix D [Figures 17 and 18], but it is difficult to pick out performance trends from these plots. Instead we focus only on \((h_{0},\delta_{s})\in\theta\), which are displayed in Figure 8. The flat prior produces a multimodal posterior\({}^{4}\) for our parameters of interest, demonstrating the lack of stability when computing the posterior and the need for some prior information. Subtle differences in \(h(t;\theta)\) magnify considerably when we consider \(\theta\) directly, with both the priors and posteriors for our covariate-specific target exhibiting substantial variability. There appear to be two distinct unimodal priors for \(h_{0}\) with similar loss, suggesting that \(\text{T}(Y\mid\mathbf{X})\) does not provide enough information to uniquely determine a prior distribution. However, both priors are significantly broader than the Hartmann _et al._ priors. The marginal priors, and posteriors, are more consistent for \(\delta_{s}\) but variability persists. All of our priors remove the possibility of a posterior for \(\delta_{s}\) with significant mass above 2, as is desirable, because such solutions are both physiologically implausible (they correspond to extremely fast growth spurts) and are unsupported by the data from individual \(n=26\).
Footnote 4: We run Stan with the default 4 chains to detect convergence warnings for Figure 6, but in Figure 8 we plot only one chain per replicate to better highlight multi-modal posteriors.
### Example summary
The priors estimated by our procedure in this example are faithful to the supplied information, but are partly constrained by the model's inflexibility, which makes it difficult to simultaneously match the \(t=2\) and \(t=18\) targets in the covariate-specific case. They regularise the posterior sufficiently to enable accurate posterior sampling (model inadequacy notwithstanding), with the covariate-specific, multi-objective method proving most useful, but are arguably over-concentrated and occasionally prevent the model from fitting the data well. Uniqueness is improved by our secondary objective, but perfect uniqueness across all replicates remains elusive and may not be possible with only the information provided in \(\text{T}(Y\mid\mathbf{X})\). We observe a small improvement in replicability attributable to the secondary objective (see Appendix D [Figures 17 and 18]), with some amount of the persistent variability a result of the stochastic optimisation procedure proposed in Section 2.
Figure 7: **Posterior for the model without noise \(\text{p}(h(t;\theta)\mid Y_{n},\lambda^{*})\) for individual \(n=26\), with this individual’s data displayed using crosses (red \(+\)). Panels are otherwise identical to Figure 5, _except_ that all intervals are now 95% wide (though many are too narrow to be visible).**
Figure 8: A comparison of the priors (blue) produced by our method using the covariate-specific target (bottom two rows) using the multiple objective function (\(\kappa=0.2\)) and the single objective function (\(\kappa=\text{NA}\)); Hartmann et al. (2020) (second row); with no prior displayed for the flat prior scenario (top row). The corresponding posteriors for individual \(n=26\) under each of these priors are displayed in (red). Note that the y-axis is limited to values that clip some of the priors/posteriors for readability.
## 4 Priors from model-derived quantities
Consider the linear model \(Y=\mathbf{X}\mathbf{\beta}+\varepsilon\) for \(n\times p\) design matrix \(\mathbf{X}\) and \(p\)-vector of coefficients \(\mathbf{\beta}\) indexed by \(j=1,\ldots,p\), and where the noise \(\varepsilon\) has zero mean and variance \(\sigma^{2}\). Suppose information about the fraction of variance explained by the model is available - from previous similar experiments, or from knowledge of the measurement process - in the form of a plausible distribution for the coefficient of determination, \(R^{2}\), which for this model can be computed as
\[R^{2}=1-\frac{\sigma^{2}}{n^{-1}\mathbf{\beta}^{\top}\mathbf{X}^{\top}\mathbf{X}\mathbf{\beta}+\sigma^{2}}, \tag{12}\]
assuming that the columns of \(\mathbf{X}\) have been centred. Our aim is to use our knowledge of \(R^{2}\) to set suitable priors for the regression coefficients \(\beta\). This idea was the inspiration for a class of shrinkage priors (Zhang and Bondell, 2018; Zhang _et al._, 2022), but we would like to make this idea applicable to a wider selection of prior structures.
To illustrate the effect of including knowledge of \(R^{2}\) on increasingly complicated priors, we investigate the selection of appropriate hyperparameters for three priors for the regression coefficients: two shrinkage priors, and a simple Gaussian prior. We simultaneously vary the covariate-independent target distribution \(\mathsf{T}(R^{2})\) to assess:
* each prior's ability to faithfully encode the information present across a wide variety of target distributions;
* uniqueness of the optimisation problem, and replicability of the single-objective variant of our optimisation algorithm, for each prior/target pair.
Note that we assume throughout that the noise \(\varepsilon\) is distributed according to a Gaussian distribution with zero mean and variance \(\sigma^{2}\), with an InverseGamma\((a_{1},b_{1})\) prior on \(\sigma^{2}\), and we will seek to select suitable \(a_{1}\) and \(b_{1}\) using our methodology. Finally, some asymptotic results are known for the Gaussian prior, and in Appendix E we further assess replicability by benchmarking our optimisation process against suitable 'true' (asymptotically) values.
**Gaussian prior.** The Gaussian prior has only one hyperparameter \(\gamma\), which controls the ratio of prior variability due to \(\mathbf{\beta}\) to that of \(\varepsilon\), and is
\[\beta_{j}\sim\mathrm{N}\left(0,\frac{\sigma^{2}}{\gamma}\right). \tag{13}\]
Hence, we denote hyperparameters \(\mathbf{\lambda}_{\mathrm{GA}}=(\gamma,a_{1},b_{1})\) (for which we seek optimum values) and parameters \(\mathbf{\theta}_{\mathrm{GA}}=(\mathbf{\beta},\sigma^{2})\).
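A minimal sketch of the resulting prior predictive distribution for \(R^{2}\) under this prior, obtained by pushing draws of \((\mathbf{\beta},\sigma^{2})\) through Equation (12), is given below; the function layout and the toy design matrix are illustrative.

```r
# Prior predictive R^2 under the Gaussian prior of Equation (13),
# with lambda = (gamma, a1, b1) as in the text.
r2_prior_predictive_gaussian <- function(lambda, X, n_draws = 2e4) {
  gamma <- lambda[1]; a1 <- lambda[2]; b1 <- lambda[3]
  n <- nrow(X); p <- ncol(X)
  sigma2 <- 1 / rgamma(n_draws, shape = a1, rate = b1)        # sigma^2 ~ InverseGamma(a1, b1)
  vapply(seq_len(n_draws), function(s) {
    beta <- rnorm(p, mean = 0, sd = sqrt(sigma2[s] / gamma))  # beta_j ~ N(0, sigma^2 / gamma)
    fitted_var <- drop(crossprod(X %*% beta)) / n             # n^{-1} beta' X' X beta
    1 - sigma2[s] / (fitted_var + sigma2[s])
  }, numeric(1))
}

# Example with a small centred design matrix and an arbitrary lambda.
set.seed(3)
X_toy <- scale(matrix(rnorm(50 * 80), nrow = 50), center = TRUE, scale = FALSE)
r2_draws <- r2_prior_predictive_gaussian(lambda = c(1, 2, 2), X = X_toy, n_draws = 2000)
summary(r2_draws)
```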
**Dirichlet-Laplace prior (Dir. Lap.).** Bhattacharya _et al._ (2015) introduce the Dirichlet-Laplace shrinkage prior, which is defined for the \(j^{\text{th}}\) coefficient such that
\[\beta_{j}\sim\text{Laplace}\left(0,\sigma\phi_{j}\tau\right),\quad(\phi_{1}, \ldots,\phi_{p})\sim\text{Dirichlet}(\alpha,\ldots,\alpha),\quad\tau\sim \text{Gamma}(p\alpha,1\,/\,2). \tag{14}\]
There is a single hyperparameter \(\alpha\), with smaller values of \(\alpha\) yielding more sparsity in \(\mathbf{\beta}\). Thus we denote \(\mathbf{\lambda}_{\mathrm{DL}}=(\alpha,a_{1},b_{1})\) and \(\mathbf{\theta}_{\mathrm{DL}}=(\mathbf{\beta},\sigma^{2},\phi_{1},\ldots,\phi_{p},\tau)\).
**Regularised horseshoe prior (Reg. Horse.).** The regularised horseshoe of Piironen and Vehtari (2017) is the most complex of the priors. With more intermediary stochastic quantities between the hyperparameters and \(R^{2}\), and a less linear relationship between them, it is the most flexible of the
priors. These properties make finding optimal values of the hyperparameters more challenging. The prior is
\[\begin{split} c^{2}\sim\text{InvGamma}\left(\frac{\nu}{2},\frac{ \nu s^{2}}{2}\right),\quad\omega\sim\text{Cauchy}^{+}\left(0,\frac{p_{0}}{p-p_ {0}}\sqrt{\frac{\sigma^{2}}{n}}\right),\quad\delta_{j}\sim\text{Cauchy}^{+}(0,1 ),\\ \tilde{\delta}_{j}^{2}=\frac{c^{2}\delta_{j}^{2}}{c^{2}+\omega^{2 }\delta_{j}^{2}},\quad\beta_{j}\sim\text{N}(0,\omega^{2}\tilde{\delta}_{j}^{2} ),\end{split} \tag{15}\]
where \(\text{Cauchy}^{+}\) denotes the Cauchy distribution truncated to \([0,\infty)\). Equation (15) leaves us free to choose three prior-specific hyperparameters, \((p_{0},\nu,s^{2})\). Thus \(\boldsymbol{\lambda}_{\text{HS}}=(p_{0},\nu,s^{2},a_{1},b_{1})\) and \(\boldsymbol{\theta}_{\text{HS}}=(\boldsymbol{\beta},\sigma^{2},c^{2},\omega,\delta_{1},\ldots,\delta_{p})\). Whilst the regularised horseshoe is carefully designed to make \((p_{0},\nu,s^{2})\) interpretable and easy to choose, here we aim to see if values of these hyperparameters can be chosen to match an informative prior for \(R^{2}\).
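As a sketch of how draws of \(\mathbf{\beta}\) arise under this prior, the following R code samples Equation (15) once for a given \(\sigma^{2}\); the function layout and example hyperparameter values are illustrative.

```r
# One draw of beta from the regularised horseshoe prior, Equation (15).
sample_reg_horseshoe_beta <- function(p0, nu, s2, sigma2, n, p) {
  c2 <- 1 / rgamma(1, shape = nu / 2, rate = nu * s2 / 2)          # c^2 ~ InvGamma(nu/2, nu*s2/2)
  omega <- abs(rcauchy(1, 0, (p0 / (p - p0)) * sqrt(sigma2 / n)))  # half-Cauchy global scale
  delta <- abs(rcauchy(p, 0, 1))                                   # half-Cauchy local scales
  delta_tilde2 <- c2 * delta^2 / (c2 + omega^2 * delta^2)          # regularised local scales
  rnorm(p, mean = 0, sd = sqrt(omega^2 * delta_tilde2))
}

# Example draw with p0 = 5 coefficients expected to be non-null a priori.
set.seed(4)
beta_draw <- sample_reg_horseshoe_beta(p0 = 5, nu = 4, s2 = 1, sigma2 = 1, n = 50, p = 80)
summary(abs(beta_draw))
```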
### An experiment to assess prior faithfulness
How faithfully can we represent our knowledge of \(R^{2}\) in \(\text{P}(\theta\mid\lambda^{*})\) using each of the aforementioned priors? To answer this question we consider several different \(\text{Beta}(s_{1},s_{2})\) distributions as our target \(\text{T}(R^{2})\), and compare these to the prior predictive \(\text{P}(R^{2}\mid\lambda^{*})\) for optimal hyperparameter values \(\lambda^{*}\). We choose \(\mathcal{S}\), the set of possible values for which \((s_{1},s_{2})\in\mathcal{S}\times\mathcal{S}\), to be 7 exponentially-spaced values between and including \(1\,/\,3\) and \(3\) (i.e. equally-spaced between \(\log(1\,/\,3)\) and \(\log(3)\)). These values represent a variety of shapes and forms for the supplied target predictive distribution for \(R^{2}\). Finally, we fix \(n=50\) and \(p=80\) with entries in \(\boldsymbol{X}\) drawn from a standard Gaussian distribution, and assess replicability using 10 independent runs for each prior and target.
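A sketch of this simulation setup in R is below: the centred \(n\times p\) design matrix with standard Gaussian entries, and the grid of \(\text{Beta}(s_{1},s_{2})\) shape values; the object names are illustrative.

```r
set.seed(5)
n <- 50
p <- 80

# Design matrix with standard Gaussian entries, centred column-wise as Equation (12) assumes.
X <- scale(matrix(rnorm(n * p), nrow = n, ncol = p), center = TRUE, scale = FALSE)

# Seven exponentially spaced values between 1/3 and 3 (inclusive), crossed to form the targets.
s_values <- exp(seq(log(1 / 3), log(3), length.out = 7))
target_grid <- expand.grid(s1 = s_values, s2 = s_values)
head(target_grid)
```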
#### 4.1.1 Hyperparameter support (\(\Lambda\)) and tuning parameters
The support \(\Lambda\) for the hyperparameters is defined in Table 4 in Appendix E. Note that for the Dirichlet-Laplace prior, Zhang and Bondell (2018) suggest bounding \(\alpha\in[(\max(n,p))^{-1},1\,/\,2]\). In our experiments we regularly encountered optimal values of \(\alpha\) on the lower boundary, so we use instead \(1\,/(3\max(n,p))\) as a lower bound.
We run pbbo with \(S=2\times 10^{4}\) samples from the prior predictive distribution, use \(d^{\text{AD}}\) as the discrepancy function, and evaluate the log predictive discrepancy using \(I=2\times 10^{4}\) samples from a Uniform\((0,1)\) importance distribution. We run the first stage of the optimiser for \(N_{\text{CRS2}}=2000\) iterations, and subsequently perform single objective Bayesian optimisation for \(N_{\text{batch}}=1\) batch of \(N_{\text{BO}}=150\) iterations, using \(N_{\text{design}}=50\) points from the first stage. We choose to adopt the single objective approach here to illustrate that the differences in flexibility also induce differences in uniqueness, and to highlight issues in choosing a prior for the additive noise parameter \(\sigma^{2}\).
#### 4.1.2 Results
We first evaluate the faithfulness of the resulting prior distributions by inspecting the densities \(\text{p}(R^{2}\mid\lambda^{*})\) and \(\text{t}(R^{2})\) for the various targets (all distributions in this example have corresponding densities). A selected subset of the pairs of \((s_{1},s_{2})\) values is displayed in Figure 9 (complete results are in Appendix E Figure 22). The faithfulness of the Gaussian prior is universally poor, which we investigate further in Appendix E. Both shrinkage priors perform better in cases where one of \(s_{1}\) or \(s_{2}\) is less than 1, with the regularised horseshoe performing better for the \(s_{1}=s_{2}>1\) cases. Interestingly, the results are not symmetric in \(s_{1}\) and \(s_{2}\); the Dirichlet-Laplace prior is able to match the \(s_{1}=3,s_{2}=0.69\) target well, with many of the regularised horseshoe replicates performing poorly; whilst the relative performance is reversed for \(s_{1}=0.69,s_{2}=3\). There is also perceptibly more variability in the regularised horseshoe replicates, which suggests the optimisation problem is more challenging and the predictive discrepancy objective is noisier. Finally, as the values of \(s_{1}\) and \(s_{2}\) increase, the performance of the shrinkage priors generally decreases. Across the full set of simulations, the
regularised horseshoe is evidently the most flexible (Appendix E Figure 22).
To assess replicability and uniqueness, we consider the estimated optimal hyperparameter values \(\lambda^{*}\) in each replicate. Figure 10 displays the estimates for \(s_{1}=3\) and \(s_{2}\in\{0.33,0.69,1.44,3\}\), which corresponds to the bottom row of Figure 9. The estimates for \(\gamma\) and \(\alpha\), for the Gaussian and Dirichlet-Laplace priors respectively, are consistent across replicates. This remains true even for targets where the prior is not faithful to the target, e.g. the Beta\((3,3)\) target. There is more variability in the hyperparameters of the regularised horseshoe prior, with \(p_{0}\) and \(s^{2}\) seemingly nonunique and not replicable for some targets, and \(\nu\) consistently nonunique for all targets. Nonuniqueness in \((a_{1},b_{1})\) is visible for almost all prior/target combinations. It is particularly striking for the Dirichlet-Laplace prior when \(s_{2}\in\{0.33,0.69\}\), where we observe consistent and excellent fits/faithfulness to the target, but these do not correspond to replicable estimates for \((a_{1},b_{1})\). Such nonreplicability illustrates the anticipated difficulty of learning about the noise \((\sigma^{2})\) and the corresponding hyperparameters.
We further assess uniqueness by inspecting the value of the objective at the optima. The top row of Figure 11 displays \(\log(D(\lambda^{*}))\) using the Anderson-Darling discrepancy function (as is used during optimisation) for
Figure 9: Optimal prior predictive densities \(\mathsf{p}(R^{2}\mid\lambda^{*})\) for the three priors considered, for selected target densities. The title of each subpanel denotes the target, which is also plotted as a black dashed line. Each replicate of the Gaussian (‘Gaussian’ – green), Dirichlet-Laplace (‘Dir. Lap.’ – red), and regularised horseshoe (‘Reg. Horse.’ – blue) is drawn in their respective colours. Density values are trimmed to \([0,10]\) for readability.
each replicate, for the same subset of targets considered in Figure 10 (we discuss the bottom row of Figure 11 momentarily). Each value of \(\log(D(\lambda^{*}))\) is the mean of 10 evaluations of \(\log(D(\lambda))\) at each \(\lambda^{*}\) to minimise residual noise in the objective. Figure 11 suggests that a small fraction of the variability in \(\mathbf{\lambda}^{*}_{\text{HS}}\) observed in Figure 10 is attributable to imperfect optimisation because \(\log(D(\mathbf{\lambda}^{*}_{\text{HS}}))\) is the least replicable. Conversely, we see that essentially none of the variability in \(\mathbf{\lambda}^{*}_{\text{DL}}\), particularly \((a_{1},b_{1})\), is due to incomplete optimisation, but is instead an issue of nonuniqueness inherent to the optimisation problem.
Our optimisation procedure has minimised \(\log(D(\lambda))\) using the Anderson-Darling discrepancy function. This places extra emphasis on matching the tails of the target, and thus the values in the top row of Figure 11 differ from our expectations given the results in the bottom row of Figure 9. Take, for example, the \(s_{1}=3,s_{2}=0.69\) case. It is plainly evident from Figure 9 that the regularised horseshoe prior provides a better fit to the target distribution at \(\mathbf{\lambda}^{*}_{\text{HS}}\). And yet the corresponding \(\log(D(\mathbf{\lambda}^{*}_{\text{HS}}))\) values in Figure 11 suggest that it is considerably worse than the Gaussian prior at \(\mathbf{\lambda}^{*}_{\text{GA}}\). To reconcile this apparent contradiction, we recompute \(\log(D(\lambda))\) at the same optima but use the Cramer-Von Mises discrepancy function. These values are displayed in the bottom row of Figure 11, and closely match our expectations given Figure 9. Given the range of behaviours of \(\text{p}(R^{2}\mid\lambda^{*})\) for all the optima, we can conclude that the Anderson-Darling discrepancy more heavily penalises over-estimation of the tails of \(\text{p}(R^{2}\mid\lambda^{*})\) than under-estimation. This does not discount it as an optimisation objective, but does complicate comparisons between competing priors.
Figure 10: Optimal values \(\lambda^{*}\) for each of the three priors considered. Columns contain (possibly prior-specific) hyperparameters, with the point colour corresponding to a specific prior. The target beta densities (denoted by the row panel titles) correspond to the bottom row of Figure 9.
Figure 11: Total log predictive discrepancy at the optima \(\log(D(\lambda^{*}))\). The target densities (denoted in the column titles) correspond to the bottom row of Figure 9 (i.e. the same as Figure 10). Each point corresponds to one of the 10 distinct replicates, and its value is the mean of 10 evaluations of \(\log(D(\lambda^{*}))\) for the same \(\lambda^{*}\). The top row displays the final values of \(\log(D(\lambda^{*}))\) using the Anderson-Darling discrepancy function, used during optimisation. For comparison, the bottom row also displays \(\log(D(\lambda^{*}))\) at the same optima but instead uses the Cramér-von Mises discrepancy function when evaluating \(\log(D(\lambda^{*}))\).
### Example summary
This example illustrates the use of a model-derived, nonobservable quantity about which we have prior information as the basis for an informative prior. The most flexible shrinkage model (the regularised horseshoe prior) was clearly the most faithful to the supplied information in almost all cases. Conversely, the Gaussian prior has the most replicability and uniqueness, but its lack of faithfulness makes it unsuitable when seeking to place a Beta prior on \(R^{2}\). The example also illustrates the difficulty of learning about the prior for the additive noise term, which is related to, but distinct from, our idea of uniqueness. Attempting to ameliorate this difficulty by adopting the multi-objective approach would merely hide the issue; we would always select the inverse gamma prior that maximises the standard deviation of \(\sigma^{2}\) by sitting on the boundary of \(\Lambda\). Such a prior for \(\sigma^{2}\) would be unlikely to prove appropriate or to represent our prior knowledge.
## 5 Calibrating a cure fraction survival model
Cure models (Peng and Taylor, 2014; Amico and Van Keilegom, 2018) for survival data are useful when a cure mechanism is physically plausible _a priori_, and when individuals are followed up for long enough to be certain all censored individuals in our data are "cured". Such lengthy follow ups are not always possible, but a cure model remains plausible when a large fraction of the censored observations occur after the last observed event time. However, we cannot distinguish in the right tail of the survival time distribution between censored uncured individuals and genuinely cured individuals.
Suppose we possess prior knowledge on the fraction of individuals likely to be cured, and the distribution of event times amongst the uncured. In this example we ask: can we translate this information into a reasonable prior for the parameters in a cure model?
There are several properties of this model that make it an interesting subject for prior specification methodology. The observed quantity, and thus the target distribution, is of mixed discrete/continuous type due to censoring. Additionally, we specify a model with a nontrivial correlation structure, about which we wish to specify an informative prior. Eliciting informative priors for correlation structures is known to be challenging. Finally, identifiability is known to be challenging in cure models (Peng and Taylor, 2014), and so the model is a demanding test of our regularisation procedure.
### Target survival time distribution and covariate generation
Consider individuals \(n=1,\ldots,N\) with event times \(Y_{n}\) and censoring times \(C_{n}\), such that \(Y_{n}\in(0,C_{n}]\). The assumptions underlying a cure fraction model imply almost complete separation between event times and censoring times. Suppose that individuals are followed up for an average of 21 units of time, with those who experience the event doing so a long time before the end of follow up. Furthermore, suppose we believe that, _a priori_, \(5\%\) of the patients will be cured, with \(0.2\%\) of events censored due to insufficient follow up.
A target distribution that is consistent with our beliefs comprises a point mass of \(0.05\) at \(C_{n}\), and a lognormal distribution with location \(\mu^{\text{LN}}=\log(3)\) and scale \(\sigma^{\text{LN}}=2\,/\,3\) for \(Y_{n}<C_{n}\). This choice of lognormal has \(99.8\%\) of its mass residing below 21, and thus produces event times that are "well separated" from the censoring time. Denoting the lognormal CDF by \(\text{F}^{\text{LN}}(Y;\mu,\sigma^{2})\), we define the target CDF
\[\text{T}(Y_{n}\mid C_{n})=0.95\,\frac{\text{F}^{\text{LN}}(Y_{n};\mu^{\text{LN}},(\sigma^{\text{LN}})^{2})}{Z_{n}}+0.05\,\mathbb{1}_{\{Y_{n}=C_{n}\}},\qquad Y_{n}\in(0,C_{n}], \tag{16}\]
where \(Z_{n}=\text{F}^{\text{LN}}(C_{n};\mu^{\text{LN}},(\sigma^{\text{LN}})^{2})\) is the required normalising constant. This individual-specific construction of \(\text{T}(Y_{n}\mid C_{n})\) implies that \(R=N\) (where \(R\) is the upper limit of the sum in Equation 1) as the censoring time \(C_{n}\) functions as a covariate.
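To make the target concrete, the following sketch (function and argument names are our own, not part of the model definition) draws samples from \(\text{T}(Y_{n}\mid C_{n})\) in Equation (16): with probability 0.05 the draw sits exactly at the censoring time \(C_{n}\), and otherwise it is an inverse-CDF draw from the lognormal truncated to \((0,C_{n}]\).

```r
# Minimal sketch: sample from the target T(Y_n | C_n) of Equation (16).
r_target <- function(n_draws, C_n, mu_ln = log(3), sigma_ln = 2 / 3, p_cure = 0.05) {
  cured <- runif(n_draws) < p_cure
  z_n <- plnorm(C_n, meanlog = mu_ln, sdlog = sigma_ln)  # normalising constant Z_n
  u <- runif(n_draws) * z_n                               # uniform on (0, Z_n)
  y <- qlnorm(u, meanlog = mu_ln, sdlog = sigma_ln)       # inverse-CDF draw on (0, C_n]
  ifelse(cured, C_n, y)
}

y_draws <- r_target(1e4, C_n = 21.3)
mean(y_draws == 21.3)  # approximately 0.05, the assumed cured fraction
```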
We simulate data for this example with \(N=50\) individuals, each with \(B=4\) correlated covariates. In line with our target distribution, simulated censoring times are distributed such that \(C_{n}\sim 20+\text{Exp}(1)\). We sample a single correlation matrix \(\mathbf{Q}\sim\text{LKJ}(5)\)(Lewandowski _et al._, 2009) and subsequently covariates \(\tilde{\mathbf{x}}_{n}\sim\text{MultiNormal}(\mathbf{0},\mathbf{Q})\). This results in marginally-standardised yet correlated covariates.
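A minimal sketch of this data-generating step is below; for simplicity we fix an illustrative equicorrelation matrix as a stand-in for the \(\text{LKJ}(5)\) draw of \(\mathbf{Q}\), which would require a dedicated sampler.

```r
# Sketch of the simulated design: censoring times and correlated, marginally
# standard-normal covariates (Q is a stand-in for a draw from LKJ(5)).
N <- 50; B <- 4
C_n <- 20 + rexp(N, rate = 1)                       # censoring times, mean 21
Q <- matrix(0.3, B, B); diag(Q) <- 1                # illustrative correlation matrix
X_tilde <- matrix(rnorm(N * B), N, B) %*% chol(Q)   # rows ~ MultiNormal(0, Q)
```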
### Model
A cure model for survival data, expressed in terms of its survival function, is
\[S(Y\mid X,\theta)=\pi+(1-\pi)\tilde{S}(Y\mid\tilde{X},\tilde{\theta}), \tag{17}\]
where a proportion \(\pi\in(0,1)\) of the population are _cured_ and never experience the event of interest. The survival times for the remaining \(1-\pi\) proportion of the population are distributed according to the _uncured_ survival function \(\tilde{S}(Y\mid\tilde{X},\tilde{\theta})\). We use the tilde in \(\tilde{X}\) and \(\tilde{\theta}\) to denote quantities specific to the uncured survival distribution, and denote \(\theta=(\pi,\tilde{\theta})\) to align with our general notation.
A right censored event time has \(Y_{n}=C_{n}\). The censoring indicator \(\delta_{n}=\mathbb{1}_{\{Y_{n}<C_{n}\}}\) is zero for right censored events, and is one for uncensored/observed events. We denote with \(\tilde{\mathbf{x}}_{n}\) the \(n^{\text{th}}\) row of the \(N\times B\) covariate matrix \(\tilde{\mathbf{X}}\). Our model supposes that the uncured event times are distributed according to a Weibull regression model, with survival function \(\tilde{S}(Y_{n}\mid\tilde{\theta},\tilde{\mathbf{x}}_{n},C_{n})\) and hazard \(\tilde{h}(Y_{n}\mid\tilde{\theta},\tilde{\mathbf{x}}_{n},C_{n})\) such that
\[\begin{split}\tilde{S}(Y_{n}\mid\tilde{\theta},\tilde{\mathbf{x}}_{n},C_{n})=\exp\left\{-Y_{n}^{\gamma}\exp\left\{\beta_{0}+\tilde{\mathbf{x}}_{n}\mathbf{\beta}\right\}\right\},\quad Y_{n}\in(0,C_{n}]\subset\mathbb{R}\\ \tilde{h}(Y_{n}\mid\tilde{\theta},\tilde{\mathbf{x}}_{n},C_{n})=\gamma Y_{n}^{\gamma-1}\exp\left\{\beta_{0}+\tilde{\mathbf{x}}_{n}\mathbf{\beta}\right\},\\ \gamma\sim\text{Gamma}(\alpha,\beta),\quad\beta_{0}\sim\text{Normal}(\mu_{0},\sigma_{0}^{2}),\quad\mathbf{\beta}\sim\text{MVSkewNormal}(\mathbf{0},\mathbf{S},\mathbf{\eta}),\end{split} \tag{18}\]
with \(\tilde{\theta}=(\gamma,\beta_{0},\mathbf{\beta})\). The likelihood for the \(n^{\text{th}}\) individual is
\[\begin{split}\text{p}(Y_{n}\mid\theta,\tilde{\mathbf{x}}_{n},C_{ n})=&\Big{(}(1-\pi)\tilde{S}(Y_{n}\mid\tilde{\theta},\tilde{ \mathbf{x}}_{n},C_{n})\tilde{h}(Y_{n}\mid\tilde{\theta},\tilde{\mathbf{x}}_{n },C_{n})\Big{)}^{\delta_{n}}\\ &\times\Big{(}\pi+(1-\pi)\tilde{S}(Y_{n}\mid\tilde{\theta},\tilde {\mathbf{x}}_{n},C_{n})\Big{)}^{1-\delta_{n}}\,.\end{split} \tag{19}\]
We complete the model by specifying a Beta\((a_{\pi},b_{\pi})\) prior for \(\pi\). To align the notation in this example with that introduced in Section 2 we denote \(Y=(Y_{n})_{n=1}^{N}\) and \(X=(C_{n},\tilde{\mathbf{x}}_{n})_{n=1}^{N}\). Note that we are using \(X\) to represent _all_ information that must be conditioned on, including the censoring times. This is necessary because, in our more general notation, the support of \(Y\mid X_{r}\) is truncated to an interval that depends on \(X_{r}\), making the censoring times necessary to fully-specify \(Y\mid X_{r}\).
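The likelihood is simple to evaluate directly; a sketch of the per-individual log likelihood (our own function and argument names, mirroring Equations (17)-(19)) is:

```r
# Sketch: log p(Y_n | theta, x_n, C_n) for the Weibull cure model, Equation (19).
loglik_cure_one <- function(y_n, delta_n, x_n, pi_cure, gamma, beta0, beta) {
  lin   <- beta0 + sum(x_n * beta)
  S_unc <- exp(-y_n^gamma * exp(lin))           # uncured survival, Equation (18)
  h_unc <- gamma * y_n^(gamma - 1) * exp(lin)   # uncured hazard, Equation (18)
  if (delta_n == 1) {
    log(1 - pi_cure) + log(S_unc) + log(h_unc)  # observed event
  } else {
    log(pi_cure + (1 - pi_cure) * S_unc)        # right censored at C_n
  }
}
```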
Our use of the multivariate skew-normal distribution has two motivations. The skewness is necessary to incorporate the nonlinear relationship between the hazard and the effect of the covariates, and a covariance structure is used to account for the fact that not all the elements of \(\mathbf{\beta}\) can be large simultaneously. We assume that \(\tilde{\mathbf{X}}\) is marginally (column-wise) standardised, and can thus decompose \(\mathbf{S}=\text{diag}(s_{\beta})\ \mathbf{\Omega}\ \text{diag}(s_{\beta})\) where \(s_{\beta}\) is the scale of the prior marginals of \(\mathbf{\beta}\). The standardisation allows us to use only one \(s_{\beta}\) instead of one per covariate. We elect to parameterise \(\mathbf{\Omega}\) using the \(B(B-1)\,/\,2=6\) elements that uniquely determine its Cholesky factor, which we denote \(\mathbf{\omega}=(\omega_{1},\ldots,\omega_{6})^{\top}\in[-1,1]^{6}\). These elements are transformed into \(\mathbf{\Omega}\) using the partial correlation method of Lewandowski _et al._ (2009), also employed by the Stan math library (Stan Development Team, 2022). The \(B\)-vector \(\mathbf{\eta}\) controls, but is not equal to, the marginal skewness for each element of \(\mathbf{\beta}\) using the multivariate skew-normal definition of Azzalini and Valle (1996), as implemented in the sn package (Azzalini, 2022). We can now define \(\lambda=(\alpha,\beta,\mu_{0},\sigma_{0}^{2},s_{\beta},\mathbf{\omega},\mathbf{\eta},a_{\pi},b_{\pi})^{\top}\), with the upper and lower limits that define \(\Lambda\) specified in Table 6 in Appendix F.
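For intuition, the sketch below maps a vector \(\mathbf{\omega}\in[-1,1]^{6}\), interpreted as canonical partial correlations, to a valid correlation matrix via its Cholesky factor, in the spirit of Lewandowski _et al._ (2009); the exact ordering conventions of the Stan implementation may differ, and the function names are our own.

```r
# Sketch: build Omega (and then S) from the 6 free elements omega in [-1, 1]^6.
omega_to_corr <- function(omega, B = 4) {
  L <- diag(B)
  idx <- 1
  for (i in 2:B) {
    for (j in 1:(i - 1)) {
      L[i, j] <- omega[idx] * sqrt(1 - sum(L[i, seq_len(j - 1)]^2))
      idx <- idx + 1
    }
    L[i, i] <- sqrt(1 - sum(L[i, 1:(i - 1)]^2))
  }
  L %*% t(L)  # unit diagonal and positive semi-definite by construction
}

Omega <- omega_to_corr(runif(6, -1, 1))
S <- diag(rep(0.5, 4)) %*% Omega %*% diag(rep(0.5, 4))  # S = diag(s_beta) Omega diag(s_beta)
```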
### Tuning parameters and further details
In this example we again employ the multi-objective approach with \(N(\lambda)\) as defined in Equation 4. We use \(N_{\text{CRS2}}=2000\) CRS2 iterations, followed by \(N_{\text{batch}}=3\) batches of multi-objective Bayesian optimisation using \(N_{\text{BO}}=200\) iterations per batch, carrying forward \(N_{\text{design}}=60\) points between batches. The predictive discrepancy function, \(D(\lambda)\), is evaluated empirically using \(S_{r}=2\times 10^{4}\) samples from the prior predictive, and evaluated using \(I_{r}=5\times 10^{3}\) importance samples from an appropriate mixed discrete/continuous importance density. To facilitate assessing uniqueness issues in this model, we also run single-objective optimisation with the same tuning parameters.
### Results
#### 5.4.1 Choosing \(\kappa\) for the multi-objective approach
We inspect the Pareto frontiers and compute the minimum loss points for 6 values of \(\kappa\in\{0.1,0.2,0.3,0.5,1,2\}\). These are displayed in Figure 12. To reiterate, the assumption is that nearby points on the Pareto frontier are "similar" priors for \(\text{p}(\theta\mid\lambda)\).
The maximum and minimum values of \(\kappa\) yield a number of minimum loss points at the extremes of the Pareto frontier. These points correspond to unsuitable values for \(\lambda\) due to a lack of faithfulness, which we will demonstrate momentarily. The remaining options for \(\kappa\) result in similar minimum loss points, illustrating that the minimum loss point is constant, or very similar, across a range of \(\kappa\) values. Such insensitivity is ideal as we can obtain sensible solutions for a wide variety of \(\kappa\) values. Forced to choose, we would select \(\kappa=0.3\) as this simultaneously minimises the variability in loss and both objective functions.
#### 5.4.2 Faithfulness
We assess faithfulness by inspecting the prior predictive distributions at the optima for similarity to the target, which Figure 13 displays for the randomly selected individual \(n=9\) in our simulated population (results for other individuals are visually indistinguishable). In addition to the possible values of \(\kappa\), the optimal prior predictive distributions are also displayed using the single objective approach for reference. The fits are similar across different \(\kappa\) and close to those obtained using the single objective approach, except for the maximum value of \(\kappa\). By overly valuing the secondary objective, the negative log mean standard deviation, a fraction of the maximum \(\kappa\) fits are considerably worse. The poor fits correspond to the minimum loss points in the bottom-right corner of the \(\kappa=2\) panel in Figure 12. This similarity does little to alleviate the uniqueness issues we will discuss in the following section.
Figure 12: Pareto frontiers for the survival example and different values of \(\kappa\). Note that the range associated with the colour scale differ between panels. The red crosses \((+)\) indicate the minimum loss point on each frontier, for each value of \(\kappa\).
Figure 13: Estimated optimal prior predictive densities \(\text{p}(Y_{n}\mid\lambda^{*})\) (red/blue lines and dots) and target densities \(\text{t}(Y_{n}\mid C_{n})\) (black lines and crosses) for randomly selected individual \(n=9\). The continuous portions of the densities are displayed as lines, with the discrete/censored portion displayed as a point at \(Y_{n}=C_{n}\). The top rows (red) correspond to the multi-objective approach using the values of \(\kappa\) in the row titles, with the bottom row (blue) displaying the single objective results. Densities are truncated to \([0,0.45]\) for readability.
Figure 14: Estimated optimal prior marginal densities of \(\mathrm{p}(\theta\mid\lambda^{*})\) for each component of \(\theta\) in the survival example. The top six rows (red) are obtained using the multi-objective approach, with the \(\kappa\) value specified in each row. The bottom row (blue) uses the single, predictive discrepancy objective. Densities are truncated to \([0,1]\) for readability.
#### 5.4.3 Replicability and uniqueness

We now evaluate both the replicability of our multi-objective approach and the uniqueness of the optimisation problem. Figure 14 displays the marginals of \(\theta\) for each value of \(\kappa\) and independent replicate. The single objective approach consistently locates the degenerate, non-unique solution where all the variation in the uncensored event times is attributed to the baseline hazard shape \(\gamma\) and the intercept \(\beta_{0}\): note that all the mass for \(\mathbf{\beta}\) (the regression coefficients) is close to 0. This combination is evidently poorly identified, and further calculation reveals that only the derived product \(\gamma\exp\beta_{0}\) is uniquely determined. Furthermore, the concentration around \(\mathbf{\beta}=0\) means these priors are unlikely to be efficient for inferring the (possible) relationship between covariates and outcome - a very strong signal in the data would be needed to overcome this prior.
Our multi-objective approach produces a more reasonable and disperse prior, but still does not admit a unique optimal prior\({}^{5}\). The typical marginal prior for \(\mathbf{\beta}\) has been widened, yet for small values of \(\kappa\) the optimal priors are still far from unique for \((\gamma,\beta_{0})\). When \(\kappa=2\) the process regularly produces a marginal prior for the cure fraction \(\pi\) that gives considerable mass to values of \(\pi>0.3\), which is incongruent with our target. This is also visible in the censored portions of the prior predictive estimates \(\text{p}(Y_{n}\mid\lambda^{*},C_{n})\) displayed in Figure 13.
Footnote 5: To be certain that the lack of uniqueness is attributable to the specification of the optimisation problem, and not due to numerically imperfect or incomplete optimisation, we investigate the objective values at the final optima in Appendix F.2.
Eliciting covariance structures is challenging, and in this example we opt to include a covariance matrix in the hyperparameters for our model. In Figure 15 we display the bivariate prior marginal densities for two elements of \(\mathbf{\beta}\), specifically \(\beta_{3}\) and \(\beta_{4}\), for both the multi-objective approach with \(\kappa=0.3\) and the single objective approach. Nonuniqueness is visible in both sets of estimates. There are marginal densities with both positive and negative skewness, and with both positive and negative pairwise correlation. The wider typical marginal for \((\beta_{3},\beta_{4})\) obtained using the multi-objective approach is again visible in the right panel of Figure 15.
Figure 15: Contours of the log prior density \(\log(\text{p}(\beta_{3},\beta_{4}\mid\lambda^{*}))\) at the optima. The left column corresponds to the single objective approach, with the multi-objective approach using \(\kappa=0.3\) displayed in the right column. Note that, for clarity, we only plot the final 12 of 30 replicates, with unique colouring for each replicate.
### Example summary
Our procedure estimates priors that faithfully represent provided information about the survival distribution. Uniqueness, known to be challenging for these models, is very difficult to induce, and its absence results in suboptimal marginal priors for \(\mathbf{\beta}\). We specifically highlight the difficulties with uniqueness associated with the covariance structure for a vector of covariate coefficients. Our procedure is moderately replicable - the noise in Figure 12 would, ideally, be smaller.
In the information we supply via \(\text{T}(Y_{n}\mid C_{n})\), we are completely certain of the fraction of cured patients _a priori_. Such certainty is unlikely to be uncovered when eliciting information from experts. A more elaborate construction of \(\text{T}(Y_{n}\mid C_{n})\), or elaborate methodology, may be able to represent such uncertainty, but the example remains challenging without this additional complication.
## 6 Conclusion
In this paper we develop methodology, and software, for specifying priors given predictive information about an observable or model-derived quantity. We employ a CDF-based, multi-objective global optimisation approach to this translation problem to make our approach widely applicable. Adopting a global optimisation approach allows any kind of model to be specified and optimised, not just those for which we can compute reparameterisation gradients. The global optimisation approach also allows us to provide our functionality directly in R, with which we envisage the majority of the users of our method will be familiar. Our CDF-based predictive discrepancy is also generic as it permits continuous, discrete, and mixed observable types. We apply our methodology in three challenging example models, each of which we interrogate for faithfulness, identifiability, and uniqueness. Each example is of moderate dimension (3-17), with each representing a difficult structural elicitation problem. Finally, our delineation between elicitation and translation, with our emphasis on the latter, is a contribution to an under-explored area of prior specification.
Our inspiration, for both methodology and examples, arises from applications and models we have encountered in applied work. The Preece-Baines model is a typical complex, nonlinear regression model for an intuitive and well understood observable quantity, but for which the model parameters are not easily understood. Setting a prior for such models that is congruent with our knowledge is difficult without a translation method such as the one we have proposed. We previously considered a survival model similar to the cure fraction model, where we knew _a priori_ the fraction of cured/censored observations and a distribution of likely survival times, in our earlier work (Manderson and Goudie, 2022). Setting an appropriate prior for this model proved challenging, and would have benefited greatly from the translation methodology introduced in this paper. Finally, prior knowledge on \(R^{2}\) has proved a valuable mathematical basis for the R2-D2 shrinkage prior (Zhang _et al._, 2022), and more generally there are numerous model-derived quantities about which practitioners possess prior information. Methods for translation are valuable in this setting, as information about such model-derived quantities is often difficult to express. An envisaged future example of this type considers clustering models. In that setting we elicit an informative prior for the number of clusters, or the typical size of a cluster, which are derived from the clustering model. Such quantities are readily reasoned about by experts, as opposed to the parameters governing each cluster. Including this type of information in complex models seems critical for stable and reliable Bayesian inference.
One limitation of the current work is that we only partly address non-uniqueness for flexible models and uninformative target distributions. In these settings we emphasise that our methodology remains valuable as a means to assess the implications of a particular target distribution for a specific model. For specific model-target pairs where uniqueness remains challenging, our methodology can still provide useful insight into the consequences of a particular \(\text{T}(Y\mid\mathbf{X})\). We can delineate between components of \(\lambda\) that are well identified and thus consistently estimated, and those that are not. Furthermore, we can directly inspect where \(\text{T}(Y\mid X_{r})\) differs significantly from \(\mathrm{P}(Y\mid\lambda^{*},X_{r})\), and consider whether such differences are attributable to model inflexibility or implausible targets. Finally, we are compelled to assess our choice of which components should make up \(\lambda\), and whether we have other information that we could employ to fix certain components within \(\lambda\) (e.g. the fixed prior for the noise in the human height example).
## Acknowledgments and data availability
We thank Daniela De Angelis and Mevin Hooten for their feedback on an earlier version of this manuscript.
This work was supported by The Alan Turing Institute under the UK Engineering and Physical Sciences Research Council (EPSRC) [EP/N510129/1] and the UK Medical Research Council [programme codes MC_UU_00002/2 and MC_UU_00002/20]. No original data were generated as part of this study; the growth data used in Section 3 are available as part of the fda package for R (Ramsay _et al._, 2022), available on CRAN ([https://cran.r-project.org/](https://cran.r-project.org/)).
## Appendix A Importance sampling
Appropriate importance distributions are crucial to obtaining an accurate and low variance estimate of \(D(\lambda\mid\mathbf{X})\). For values of \(\lambda\) far from optimal, \(\mathrm{P}(Y\mid\lambda,\mathbf{X})\) can differ considerably from \(\mathrm{T}(Y\mid\mathbf{X})\). Given a specific \(X_{r}\) we require an importance distribution \(\mathrm{Q}(Y\mid X_{r})\) that places substantial mass in the high probability regions of both \(\mathrm{T}(Y\mid X_{r})\) and \(\mathrm{P}(Y\mid\lambda,X_{r})\), as it is in these regions that \(d(\cdot,\cdot)\) is largest. But we cannot exert too much effort on finding these densities as they are specific to each value of \(\lambda\), and must be found anew for each \(\lambda\).
We use three quantities to guide our choice of \(\mathrm{Q}(Y\mid X_{r})\), these being the support \(\mathcal{Y}\), the samples \(\mathbf{y}_{r}^{(\mathrm{P})}\sim\mathrm{P}(Y\mid\lambda,X_{r})\), and the samples \(\mathbf{y}_{r}^{(\mathrm{T})}\sim\mathrm{T}(Y\mid X_{r})\). Of primary concern is the support. If \(\mathcal{Y}=\mathbb{R}\) then we use a mixture of Student-\(t_{5}\) distributions; for \(\mathcal{Y}=\mathbb{R}_{>0}=(0,\infty)\) we employ a mixture of gamma distributions; and for \(\mathcal{Y}=(0,a]\) with known \(a\), we opt for a mixture of Beta distributions with a discrete component at \(Y=a\). The parameters of the mixture components are estimated using the method of moments. Specifically, denoting the empirical mean of \(\mathbf{y}_{r}^{(\mathrm{P})}\) as \(\hat{\mu}^{(\mathrm{P})}\) and the empirical variance by \(\hat{v}^{(\mathrm{P})}\), with \(\hat{\mu}^{(\mathrm{T})}\) and \(\hat{v}^{(\mathrm{T})}\) defined correspondingly for \(\mathbf{y}_{r}^{(\mathrm{T})}\), Table 1 details our method of moments estimators for the mixture components.
In this paper we limit ourselves to one dimensional \(\mathcal{Y}\), where importance sampling is mostly well behaved or can be tamed using a reasonable amount of computation. This covers many models, and with the covariate-specific target it includes regression models. It is harder to elicit \(\mathrm{T}(Y\mid\mathbf{X})\) for higher dimensional data spaces, and the difficulties with higher dimensional importance sampling are well known.
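As an illustration, a sketch of the \((0,\infty)\) row of Table 1 is given below (our own function names); we widen the empirical variance by \(c^{2}\) but, for brevity, omit the capping of the widened variance noted in the table.

```r
# Sketch: method-of-moments gamma mixture importance distribution for Y on (0, Inf).
make_gamma_mixture <- function(y_P, y_T, c_tune = 1.05) {
  mom <- function(y) {
    v <- c_tune^2 * var(y)  # widened variance
    list(shape = mean(y)^2 / v, rate = mean(y) / v)
  }
  par_P <- mom(y_P); par_T <- mom(y_T)
  list(
    q_dens = function(y) 0.5 * dgamma(y, par_P$shape, par_P$rate) +
                         0.5 * dgamma(y, par_T$shape, par_T$rate),
    q_draw = function(n) ifelse(runif(n) < 0.5,
                                rgamma(n, par_P$shape, par_P$rate),
                                rgamma(n, par_T$shape, par_T$rate))
  )
}
```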
## Appendix B Evaluating \(D(\lambda\mid\mathbf{X})\)
For both numerical stability and optimisation performance (Eriksson and Poloczek, 2021; Snoek _et al._, 2014) we evaluate \(D(\lambda\mid\mathbf{X})\) on the log scale. This is because far from optimal values of \(\lambda\) have corresponding \(D(\lambda\mid\mathbf{X})\) many orders of magnitude larger than near optimal values of \(\lambda\). Furthermore, the Gaussian process approximation that underlies Bayesian optimisation assumes constant variance, necessitating a log or log-like
\begin{table}
\begin{tabular}{c l l l l} \hline \hline \(\mathcal{Y}\) & \(\mathrm{Q}_{r}(Y)\) & Parameter estimates & Mixture weights & Notes \\ \hline \(\mathbb{R}\) & \(\pi_{1}\,\)Student-\(t_{5}(Y;\hat{\mu}_{1},\hat{s}_{1})+\pi_{2}\,\)Student-\(t_{5}(Y;\hat{\mu}_{2},\hat{s}_{2})\) & \(\hat{\mu}_{1}=\hat{\mu}^{(\mathrm{P})},\ \hat{s}_{1}=c\sqrt{\hat{v}^{(\mathrm{P})}}\); \(\hat{\mu}_{2}=\hat{\mu}^{(\mathrm{T})},\ \hat{s}_{2}=c\sqrt{\hat{v}^{(\mathrm{T})}}\) & \(\pi_{1}=\pi_{2}=0.5\) & \(c\) defaults to \(1.05\) \\ \((0,\infty)\) & \(\pi_{1}\,\)Gamma\((Y;\hat{\alpha}_{1},\hat{\beta}_{1})+\pi_{2}\,\)Gamma\((Y;\hat{\alpha}_{2},\hat{\beta}_{2})\) & \(\hat{\alpha}_{1}=\frac{(\hat{\mu}^{(\mathrm{P})})^{2}}{\tilde{v}^{(\mathrm{P})}},\ \hat{\beta}_{1}=\frac{\hat{\mu}^{(\mathrm{P})}}{\tilde{v}^{(\mathrm{P})}}\); \(\hat{\alpha}_{2}=\frac{(\hat{\mu}^{(\mathrm{T})})^{2}}{\tilde{v}^{(\mathrm{T})}},\ \hat{\beta}_{2}=\frac{\hat{\mu}^{(\mathrm{T})}}{\tilde{v}^{(\mathrm{T})}}\) & \(\pi_{1}=\pi_{2}=0.5\) & \(\tilde{v}=\min(c^{2}\hat{v},10^{5})\) \\ \((0,a]\) & \(\frac{\pi_{1}}{a}\,\)Beta\(\left(\frac{Y}{a};\hat{a}_{1},\hat{b}_{1}\right)+\frac{\pi_{2}}{a}\,\)Beta\(\left(\frac{Y}{a};\hat{a}_{2},\hat{b}_{2}\right)+\pi_{3}\,\mathbb{1}_{\{Y=a\}}\) & \(\hat{a}_{1}=\hat{\mu}^{(\mathrm{P})}\left[\frac{\hat{\mu}^{(\mathrm{P})}}{\tilde{v}^{(\mathrm{P})}}(1-\hat{\mu}^{(\mathrm{P})})-1\right],\ \hat{b}_{1}=\frac{1-\hat{\mu}^{(\mathrm{P})}}{\hat{\mu}^{(\mathrm{P})}}\hat{a}_{1}\); \(\hat{a}_{2}=\hat{\mu}^{(\mathrm{T})}\left[\frac{\hat{\mu}^{(\mathrm{T})}}{\tilde{v}^{(\mathrm{T})}}(1-\hat{\mu}^{(\mathrm{T})})-1\right],\ \hat{b}_{2}=\frac{1-\hat{\mu}^{(\mathrm{T})}}{\hat{\mu}^{(\mathrm{T})}}\hat{a}_{2}\) & \(\pi_{1}=\pi_{2}=0.45\), \(\pi_{3}=0.05\) & \(\tilde{v}=\max(c^{2}\hat{v},10^{-6})\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Importance distributions and method of moments estimators for their constituent parametric distributions. Note that \(c\) is a user-selected tuning parameter to enable the construction of wider importance distributions.
transformation.
Suppose again that we sample \(\mathbf{y}_{r}^{(\text{P})}\sim\text{P}(Y\mid\lambda,X_{r})\), from which we form the ECDF \(\hat{\text{P}}(Y\mid\lambda,X_{r},\mathbf{y}_{r}^{(\text{P})})\). We also select an appropriate importance distribution \(\text{Q}(Y\mid X_{r})\) and density \(\text{q}(Y\mid X_{r})\) using Appendix A, and sample importance points \((y_{i,r})_{i=1}^{I}\sim\text{Q}(Y\mid X_{r})\). Define the intermediary quantity \(z(y_{i,r})\) as
\[z(y_{i,r})=\log\left(d\left(\hat{\text{P}}(y_{i,r}\mid\lambda,X_{r},\mathbf{y}_{r}^{(\text{P})}),\text{T}(y_{i,r}\mid X_{r})\right)\right)+\log\left(\text{t}(y_{i,r}\mid X_{r})\right)-\log\left(\text{q}(y_{i,r}\mid X_{r})\right), \tag{20}\]
and then rewrite Equation (7) to read
\[\log(D(\lambda\mid\mathbf{X}))=-\log(R)+\log\left(\sum_{r=1}^{R}\exp\left\{-\log( I_{r})+\log\left(\sum_{i=1}^{I_{r}}\exp\left\{z(y_{i,r})\right\}\right)\right\} \right). \tag{21}\]
All \(\log(\sum\exp\{\cdot\})\) terms are computed using the numerically stable form (Blanchard _et al._, 2021).
Accurately evaluating \(\log(d(\cdot,\cdot))\) in Equation (20) involves managing the discrete nature of the ECDF (that it returns exactly zero or one for some inputs), and using specialised functions for each discrepancy to avoid issues with floating point arithmetic. We compute \(\log(d^{\text{cVM}}(\cdot,\cdot))\) using
\[\log\left(d^{\text{cVM}}\left(\hat{\text{P}}(y_{i,r}\mid\lambda,X_{r},\mathbf{y}_{ r}^{(\text{P})}),\text{T}(y_{i,r})\right)\right)=2\log\left(\left|\hat{\text{P}}(y_{i,r}\mid\lambda,X_{r},\mathbf{y}_{r}^{(\text{P})})-\exp\{\mathcal{T}(y_{i,r})\} \right|\right), \tag{22}\]
where \(\mathcal{T}(y_{i,r})=\log(\text{T}(y_{i,r}))\). The log-CDF (LCDF) is often more numerically accurate for improbable values of \(y_{i,r}\), and so our methodology assumes that it is this LCDF form in which the target distribution is supplied. However, because the ECDF can return exact zero/one values there is no way to perform this computation on the log scale. We thus employ high precision floating point numbers when exponentiating the LCDF values, using Rmpfr(Maechler _et al._, 2021), to avoid evaluating \(\log(0)\).
For \(\log(d^{\text{AD}}(\cdot,\cdot))\), additional care must be taken as the denominator of \(d^{\text{AD}}\) in Equation (2) tends to underflow to zero. Thus we evaluate it using
\[\log\left(d^{\text{AD}}\left(\hat{\text{P}}(y_{i,r}\mid\lambda,X_ {r},\mathbf{y}_{r}^{(\text{P})}),\text{T}(y_{i,r})\right)\right)= \tag{23}\] \[2\log\left(\left|\hat{\text{P}}(y_{i,r}\mid\lambda,X_{r},\mathbf{y}_ {r}^{(\text{P})})-\exp\{\mathcal{T}(y_{i,r})\}\right|\right)-\mathcal{T}(y_{i,r})-\texttt{log1mexp}(-\mathcal{T}(y_{i,r})),\]
where \(\texttt{log1mexp}(x)=\log(1-\exp\{-x\})\) is implemented by the Rmpfr package (Maechler, 2012). Such precision is necessary for improbably large values of \(y_{i,r}\) under T, as the CDF/LCDF often rounds to \(1/0\) (respectively). It is not always feasible to evaluate Equation (23) with sufficient accuracy to avoid under/overflow issues - it requires a high-precision implementation of \(\mathcal{T}(y_{i,r})\) for extreme \(y_{i,r}\) and many additional bits of precision for both \(y_{i,r}\) and the result. In these settings we revert to \(\log(d^{\text{cVM}}(\cdot,\cdot))\).
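For concreteness, a sketch of this evaluation for a single covariate value \(X_{r}\) is given below, using the Cramér-von Mises form of Equation (22); the function names are our own, and we assume the target LCDF, the target log-density and the importance log-density are available as vectorised functions.

```r
# Numerically stable log-sum-exp.
log_sum_exp <- function(x) { m <- max(x); m + log(sum(exp(x - m))) }

# Sketch: the inner bracket of Equation (21) for one X_r, with the cvM discrepancy (22).
log_D_r <- function(y_P, y_imp, lcdf_target, ldens_target, ldens_q) {
  P_hat <- ecdf(y_P)(y_imp)                 # ECDF of prior predictive draws
  lT    <- lcdf_target(y_imp)               # script-T(y) = log T(y)
  z <- 2 * log(abs(P_hat - exp(lT))) +      # log d^cvM, Equation (22)
       ldens_target(y_imp) - ldens_q(y_imp) # log t(y) - log q(y), Equation (20)
  -log(length(y_imp)) + log_sum_exp(z)
}
# log D(lambda | X) is then -log(R) plus the log-sum-exp of the R per-covariate values.
```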
## Appendix C Algorithmic descriptions of the optimisation process
### CRS2 as an initialiser for Bayesian optimisation
Algorithm 3 describes our use of CRS2 (Kaelo and Ali, 2006) to obtain a suitable design to initialise the Bayesian multi-objective optimisation approach in step 2.
### MSPOT
Algorithm 4 describes, in our notation, the MSPOT (Zaefferer _et al._, 2012) algorithm for two objectives. Note that within the algorithm we suppress each objective's dependence on \(\mathbf{X}\) for brevity.
```
1functionInitialdesign(\(N_{\text{CRS2}},N_{\text{design}},N_{\text{pad}}\))
2Initialise\(\mathcal{S}=\{\}\), an empty set to hold possible design points
3for\(i\) in \(1\ldots N_{\text{CRS2}}\)do
4Minimising \(\log(D(\lambda\mid\mathbf{X}))\), get the \(i^{\text{th}}\) trial point \(\tilde{\lambda}_{i}\) and value \(\log(D(\tilde{\lambda}_{i}\mid\mathbf{X}))\) from CRS2 with local mutation (Kaelo and Ali, 2006)
5Compute \(\tilde{w}_{i}=-\exp\left\{\log\left(D(\tilde{\lambda}_{i}\mid\mathbf{X})\right)\right\}\)
6Concatenate \(\mathcal{S}=\mathcal{S}\cup\left\{\tilde{\lambda}_{i},\log(D(\tilde{\lambda}_{ i}\mid\mathbf{X})),\tilde{w}_{i}\right\}\)
7endfor
8Normalise weights such that \(w_{i}=\exp\left\{\tilde{w}_{i}-\log\left(\sum_{i=1}^{N_{\text{CRS}}}\exp\left\{ \tilde{w}_{i}\right\}\right)\right\}\)
9Subsample without replacement \(N_{\text{design}}\) values from \(\mathcal{S}\) according to the normalised weights, and store in \(\mathcal{D}=\left\{\lambda_{i},\log(D(\lambda_{i}\mid\mathbf{X}))\right\}_{i=1}^{N _{\text{design}}}\)
10Sample \(N_{\text{pad}}\) points from a Latin hypercube design spanning \(\Lambda\) (Stein, 1987), evaluate \(\log(D(\lambda\mid\mathbf{X}))\) at these points, and add them to \(\mathcal{D}\)
11return:\(\mathcal{D}=\left\{\lambda_{i},\log(D(\lambda_{i}\mid\mathbf{X}))\right\}_{i=1}^{N _{\text{design}}+N_{\text{pad}}}\)
12endfunction
```
**Algorithm 3** Obtain an initial design for the multi-objective Bayesian optimisation from weighted CRS2 evaluations
**Algorithm 4** Global two-objective Bayesian optimisation using MSPOT (Zaefferer _et al._, 2012)
```
1functionBayesian optimisation using MSPOT(\(N_{\text{BO}}\))
2for\(i\) in \(1\ldots N_{\text{BO}}\)do
3Form Gaussian process (GP) approximations to \(D(\lambda)\) and \(N(\lambda)\) using \(\mathcal{D}\)
4Generate a new Latin hypercube design \(\mathcal{N}\) of size \(N_{\text{new}}\) covering \(\Lambda\), such that \(N_{\text{new}}\gg N_{\text{design}}\)
5for\(k\) in \(1\ldots N_{\text{new}}\)do
6Use the GPs to estimate \(\hat{D}(\lambda_{k})\) and \(\hat{N}(\lambda_{k})\)
7Add these to \(\mathcal{N}\) so that \(\mathcal{N}_{k}=\left\{\lambda_{k},\hat{D}(\lambda_{k}),\hat{N}(\lambda_{k})\right\}\)
8endfor
9Truncate \(\mathcal{N}\) to \(N_{\text{eval}}\) points according to the non-dominated sorting rank and hypervolume contribution (Beume _et al._, 2007; Deb, 2001; Deb _et al._, 2002; Beume _et al._, 2009) of each point in \(\{D(\lambda_{k}),N(\lambda_{k})\}_{k=1}^{N_{\text{new}}}\) with \(N_{\text{eval}}\ll N_{\text{new}}\)
10for\(j\) in \(1\ldots N_{\text{eval}}\)do
11Evaluate the objectives \(D(\lambda_{j})\) and \(N(\lambda_{j})\) for \(\lambda_{j}\in\mathcal{N}\)
12Add these evaluations to \(\mathcal{D}=\mathcal{D}\cup\left\{\lambda_{j},D(\lambda_{j}),N(\lambda_{j})\right\}\)
13endfor
14endfor
15Compute the Pareto frontier \(\mathcal{P}=\left\{\lambda_{i},D(\lambda_{i}),N(\lambda_{i})\right\}_{i=1}^{|\mathcal{P}|}\) from \(\mathcal{D}=\left\{\lambda_{i},D(\lambda_{i}),N(\lambda_{i})\right\}_{i=1}^{N_{\text{design}}+N_{\text{pad}}+N_{\text{BO}}N_{\text{eval}}}\) (see Kung _et al._, 1975)
16return:\(\mathcal{P}\) and \(\mathcal{D}\)
17endfunction
```
### Inter batch resampling
Algorithm 5 describes our inter-batch resampling algorithm that we occasionally adopt in stage two of our optimisation process.
```
0: Pareto frontier \(\mathcal{P}=\{\lambda_{i},\log(D(\lambda_{i}\mid\boldsymbol{X})),N(\lambda_{i} \mid\boldsymbol{X})\}_{i=1}^{|\mathcal{P}|}\) and all evaluated points \(\mathcal{E}=\{\lambda_{i},\log(D(\lambda_{i}\mid\boldsymbol{X})),N(\lambda_{i} \mid\boldsymbol{X})\}_{i=1}^{|\mathcal{E}|}\) from previous batch (with \(|\mathcal{P}|\ll|\mathcal{E}|\)), number of design points \(N_{\text{design}}\), number of padding points \(N_{\text{pad}}\), hyperparameter support \(\Lambda\)
1:functionNextbatchdesign(\(N_{\text{design}},N_{\text{pad}}\))
2: Initialise \(\mathcal{D}=\mathcal{P}\)
3: Compute the weights \(w_{i}\) for all points in \(\mathcal{E}\) in the same manner as Algorithm 3 so that \(\mathcal{E}=\{\lambda_{i},\log(D(\lambda_{i}\mid\boldsymbol{X})),N(\lambda_{i} \mid\boldsymbol{X}),w_{i}\}_{i=1}^{|\mathcal{E}|}\)
4: Sample without replacement \(\max\big{(}N_{\text{design}}-|\mathcal{P}|,0\big{)}\) points from \(\mathcal{E}\) according to the weights and add these points to \(\mathcal{D}\)
5: Sample \(N_{\text{pad}}\) points from a Latin hypercube design covering \(\Lambda\) and add these to \(\mathcal{D}\)
6:return:\(\mathcal{D}\) such that \(|\mathcal{D}|=\max(N_{\text{design}},|\mathcal{P}|)+N_{\text{pad}}\)
7:endfunction
```
**Algorithm 5** Resample the outputs from a previous batch to obtain a design for the current one.
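Both Algorithm 3 and Algorithm 5 rely on the same weighted subsampling of previously evaluated points; a brief sketch of that step (our own function name) is:

```r
# Sketch: subsample N_design indices, weighting points with smaller log D more heavily.
subsample_design <- function(log_D, N_design) {
  w_tilde <- -exp(log_D)                                 # w~_i = -exp{log D(lambda_i | X)}
  m <- max(w_tilde)
  w <- exp(w_tilde - (m + log(sum(exp(w_tilde - m)))))   # normalised weights
  sample(seq_along(log_D), size = N_design, replace = FALSE, prob = w)
}
```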
## Appendix D Additional information for the Preece-Baines example
### Hyperparameter support \(\Lambda\)
Table 2 contains the upper and lower limits for each hyperparameter, thus defining the feasible region \(\Lambda\).
### Hartmann _et al._ (2020) priors
Table 3 contains the priors elicited by Hartmann _et al._ (2020) for the parameters in the Preece-Baines example. To generate the prior predictive samples displayed in Figure 5 we draw, for each user, \(\theta\) from the corresponding lognormal distribution and then compute \(h(t;\theta)\) using Equation (9) (without the error term) at 250 values of \(t\) spaced evenly between ages \(2\) and \(18\).
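A sketch of this prior predictive construction is below; we assume Equation (9) is the standard Preece-Baines model 1 curve \(h(t)=h_{1}-2(h_{1}-h_{\theta})/\left(e^{s_{0}(t-\theta)}+e^{s_{1}(t-\theta)}\right)\), and the function and object names are our own.

```r
# Sketch: one prior predictive growth curve from lognormal priors on the
# Preece-Baines parameters (h1, h_theta, s0, s1, theta), assuming model 1.
pb_curve <- function(t, h1, h_theta, s0, s1, theta) {
  h1 - 2 * (h1 - h_theta) / (exp(s0 * (t - theta)) + exp(s1 * (t - theta)))
}

t_grid <- seq(2, 18, length.out = 250)
draw_pb_curve <- function(mu, sigma) {       # mu, sigma: length-5 lognormal parameters
  th <- rlnorm(5, meanlog = mu, sdlog = sigma)
  pb_curve(t_grid, th[1], th[2], th[3], th[4], th[5])
}
```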
### Pareto frontiers for the covariate-independent target
The Pareto frontier for the covariate-independent target and all values of \(\kappa\in\mathcal{K}\) is displayed in Figure 16.
### Full marginal prior and posterior comparison plots
Figures 17 and 18 are extended versions of Figure 8, and display the prior and posterior estimates for all the parameters in \(\theta\). Consistency and uniqueness remain, evidently, challenging and as yet unobtainable.
\begin{table}
\begin{tabular}{c c r r r r} \hline \hline User & Parameter & Expectation & Variance & Lognormal \(\mu\) & Lognormal \(\sigma\) \\ \hline
\multicolumn{6}{c}{\(\vdots\)} \\ & \(\theta\) & 14.60 & 0.02 & 2.68 & 0.01 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Priors elicited by Hartmann _et al._ (2020) for each of the 5 users they study. Hartmann _et al._ provide their results in the form of expected values and variances for the parameters of the model, we compute the corresponding lognormal location \(\mu\) and scale \(\sigma\) parameters from this information. Values are rounded to two digits of precision.
Figure 16: Pareto frontiers for each \(\kappa\in\mathcal{K}\) for the **covariate-independent** example. The minimum loss point for each replicate is plotted with \(+\). Note also that the loss scales differ between plots.
Figure 17: A comparison of the priors (blue) produced by our method using the covariate-independent marginal target (bottom two rows); and Hartmann et al. (2020) (second row), with no prior displayed for the flat prior scenario. The corresponding posteriors (red) for individual \(n=26\) under each of these priors are displayed as dashed lines. Note that y-axes change within columns and are limited to values that clip some of the priors/posteriors for readability.
Figure 18: Otherwise identical to Figure 17 but the bottom two rows display the results obtained using the covariate-specific target.
## Appendix E Additional information for the \(R^{2}\) example
### Hyperparameter support \(\Lambda\) - faithfulness experiment
See Table 4
### A comparison to an asymptotic result
The poor fit for the Gaussian prior observed in Figure 9 could be attributed to issues in the optimisation process, or to the lack of flexibility in the prior. To investigate, we compare the results for \(\lambda_{\text{GA}}\) to Theorem 5 of Zhang and Bondell (2018), which is an asymptotic result regarding the optimal value of \(\lambda_{\text{GA}}\) for a target \(\text{Beta}(s_{1},s_{2})\) density for \(R^{2}\). We compare pairs of \((n_{k},p_{k})\) for \(k=1,\ldots,5\), noting that assumption (A4) of Zhang and Bondell requires that \(p_{k}=\mathsf{o}(n_{k})\) as \(k\to\infty\) (for strictly increasing sequences \(p_{k}\) and \(n_{k}\)). Thus we consider values of \(p\) such that \(p_{1}=80\) with \(p_{k}=2p_{k-1}\) and \(n\) with \(n_{1}=50\) and \(n_{k}=n_{k-1}^{1.2}\), both for \(k=2,\ldots,5\). Each \((n_{k},p_{k})\) pair is replicated 20 times, and for each replicate we generate a different \(\mathbf{X}\) matrix with standard normal entries. As the target density we choose \(s_{1}=5,s_{2}=10\) - a "more Gaussian" target than previously considered and thus, we speculate, possibly more amenable to translation with a Gaussian prior for \(\beta\). We also use this example as an opportunity to assess if there are notable differences between the Cramér-von Mises discrepancy and the Anderson-Darling discrepancy as defined in Equation (2). The support \(\Lambda\) for \(\lambda_{\text{GA}}\) differs slightly from the example in the main text, and is defined in Table 5, as matching our target with larger design matrices requires considerably larger values of \(\gamma\).
The computation of \(R^{2}\) becomes increasingly expensive as \(n_{k}\) and \(p_{k}\) increase, which limits the value of some of our method's tuning parameters. The approximate discrepancy function uses \(S=2000\) samples from the prior predictive and is evaluated using \(I=500\) importance samples. We run CRS2 for \(N_{\text{CRS2}}=500\) iterations, using \(N_{\text{design}}=50\) in the initial design for the subsequent single batch of Bayesian optimisation, which uses \(N_{\text{BO}}=100\) iterations.
**Results** Figure 19 displays the results in terms of the normalised difference between the \(\gamma\) we estimate, \(\gamma_{\text{pbbo}}^{*}\), and the asymptotic result of Zhang and Bondell, \(\gamma_{\text{asym}}^{*}\). Our typical finite sample estimate is slightly larger than the asymptotic result, and the difference increases with \(n_{k}\) and \(p_{k}\). The variability of the normalised difference remains roughly constant, and thus reduces on an absolute scale, though extrema seem to occur more frequently for larger \(n_{k}\) and \(p_{k}\). These simulations suggest that the asymptotic regime has not been reached even at the largest \(n_{k}\) and \(p_{k}\) values we assessed.
The estimates of \(\gamma\) are not themselves particularly illuminating: we should instead look for differences in the distribution of \(R^{2}\) at the optima, which is to say on the "data" scale. Figure 20 displays the target distribution and the prior predictive distribution at the optima \(\mathsf{p}(R^{2}\mid\lambda_{\text{GA}}^{*})\). The fit is increasingly poor as \(n\) and \(p\) increase, and there is little difference either between the two discrepancies or among each discrepancy's replicates. The lack of difference implies that the optimisation process is consistently locating the same minima for \(D(\lambda)\). We conclude that either 1) the ability of the model to match the target depends on there being additional structure in \(\mathbf{X}\), or 2) it is not possible to encode the information in a \(\text{Beta}(5,10)\) prior for \(R^{2}\) into the Gaussian prior.
This example also further illustrates the difficulties inherent in acquiring a prior for additive noise terms. Specifically, in this example it is difficult to learn \((a_{1},b_{1})\), despite the fact that the contribution of \(\sigma^{2}\) to Equation (12) is not purely additive. Indeed, as we see in Figure 21, estimates are uniformly distributed across the permissible space, except for bunching at the upper and lower bounds of \(\Lambda\). Note that for numerical and computational stability, we constrain \(a_{1}\in(2,50]\) and \(b_{1}\in(0.2,50]\) in this example. This contrasts with the similarity between replicates visible in Figure 20, and is thus evidence that \((\hat{a}_{1},\hat{b}_{1})\) have no apparent effect on the value of \(D(\lambda^{*})\). We should instead set the prior for \(\sigma^{2}\) based on external knowledge of the measurement
\begin{table}
\begin{tabular}{l l l l} \hline \hline Prior & Hyperparameter & Lower & Upper \\ \hline Gaussian & \(a_{1}\) & 2 & 500 \\ Gaussian & \(b_{1}\) & 0.2 & 500 \\ Gaussian & \(\gamma\) & 1 & 500 \\ Dir. Lap. & \(a_{1}\) & 2 & 500 \\ Dir. Lap. & \(b_{1}\) & 0.2 & 500 \\ Dir. Lap. & \(\alpha\) & \(1/(3\max(n,p))\) & \(1/2\) \\ Reg. Horse. & \(a_{1}\) & 2 & 500 \\ Reg. Horse. & \(b_{1}\) & 0.2 & 500 \\ Reg. Horse. & \(p_{0}\) & 1 & \(p/2\) \\ Reg. Horse. & \(\nu\) & 1 & 80 \\ Reg. Horse. & \(s^{2}\) & \(10^{-5}\) & 100 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Hyperparameters \(\lambda\) for the \(R^{2}\) example and their upper/lower limits that define \(\Lambda\).
Figure 19: Relative difference between the value of \(\gamma\) obtained using our methodology (\(\gamma^{*}_{\text{pbbo}}\)) and Theorem 5 of Zhang and Bondell (2018) (\(\gamma^{*}_{\text{asym}}\)).
\begin{table}
\begin{tabular}{l l l l} \hline \hline Prior & Hyperparameter & Lower & Upper \\ \hline Gaussian & \(a_{1}\) & \(2+10^{-6}\) & 50 \\ Gaussian & \(b_{1}\) & 0.2 & 50 \\ Gaussian & \(\gamma\) & 1 & 5000 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Hyperparameters \(\lambda\) for the asymptotic example, and their upper/lower limits that define \(\Lambda\).
Figure 20: The target density t(\(R^{2}\)) and optimal prior predictive densities p(\(R^{2}\mid\lambda^{*}\)) under both the Cramér-von Mises (red, left column) and Anderson-Darling (blue, right column) discrepancies. There are 20 replicates of each discrepancy in this plot.
process for \(Y\).
The regularisation method we employ in the two other examples in the main text is unlikely to assist in estimating \((a_{1},b_{1})\). Promoting a larger mean log marginal standard deviation, with the knowledge that \(D(\lambda)\) is insensitive to the value of \((a_{1},b_{1})\), would simply pick the largest possible value for \(b_{1}^{2}/\left((a_{1}-1)^{2}(a_{1}-2)\right)\), which occurs when \(a_{1}\) is at its minimum allowable value and \(b_{1}\) its corresponding maximum.
### Full faithfulness results
The complete results from the faithfulness experiment are displayed in Figure 22.
Figure 21: Histograms of _scaled_ estimates of \((a_{1}^{*},b_{1}^{*})\) for the settings considered in Section E.2. Estimates have been scaled to \([0,1]\) for visualisation purposes using the upper and lower limits defined in Table 5.
Figure 22: As in Figure 9 but for all values of \((s_{1},s_{2})\) denoted in the facet panels titles. The performance of the regularised horseshoe is superior to the Dirichlet-Laplace, both of which are vast improvements over the Gaussian.
## Appendix F Additional information for the cure fraction survival example
### Hyperparameter support \(\Lambda\)
See Table 6
### Objective values at optima
We can assure ourselves that behaviour apparent in Figure 14 is a manifestation of nonuniqueness in the optimisation problem, rather than incomplete optimisation or nonreplicability, by inspecting the values of \(D(\lambda\mid\mathbf{X}),N(\lambda\mid\mathbf{X})\), and \(L(\lambda\mid\mathbf{X})\) at the replicate optima \(\lambda^{*}\). Figure 23 displays these values for \(\kappa=0.3\), and we observe a tight distribution of optimum values, particularly for \(D(\lambda^{*}\mid\mathbf{X})\). This implies that the optimisation process is locating equally good - in terms of \(D(\lambda^{*}\mid\mathbf{X})\) - optima that correspond to different \(\text{P}(\theta\mid\lambda^{*})\), which is precisely our definition of nonuniqueness. |
2302.07532 | Lorentz transformation of three dimensional gravitational wave tensor | Recently there has been growing interest in the gravitational waves of moving
sources. This raises the problem of the Lorentz transformation of gravitational
waves. Although the Bondi-Metzner-Sachs (BMS) theory has in principle already
included the Lorentz transformation of gravitational waves, the transformation
of the three dimensional gravitational wave tensor has not been explicitly
calculated before. Within four dimensional spacetime, gravitational waves have
the properties of `boost weight zero' and `spin weight 2'. This fact makes the
Lorentz transformation of gravitational waves difficult to understand. In the
current paper we adopt the traditional three dimensional tensor description of
gravitational waves. Such a transverse-traceless tensor describes the
gravitational wave degrees of freedom directly. We derive the explicit Lorentz
transformation of the gravitational wave tensor. The transformation is similar
to the Lorentz transformation of the electric and magnetic field vectors, which
are three dimensional vectors. Based on the deduced Lorentz transformation of
the three dimensional gravitational wave tensor, we can construct the
gravitational waveform of a source moving with any speed, provided the
corresponding rest-frame waveform is given. As an example, we apply our method
to the effect of the kick velocity of a binary black hole. The waveform
adjusted by the kick velocity is presented.
###### Abstract
Recently there are more and more interest on the gravitational wave of moving sources. This introduces a Lorentz transformation problem of gravitational wave. Although Bondi-Metzner-Sachs (BMS) theory has in principle already included the Lorentz transformation of gravitational wave, the transformation of the three dimensional gravitational wave tensor has not been explicitly calculated before. Within four dimensional spacetime, gravitational wave have property of 'boost weight zero' and'spin weight 2'. This fact makes the Lorentz transformation of gravitational wave difficult to understand. In the current paper we adopt the traditional three dimensional tensor description of gravitational wave. Such a transverse-traceless tensor describes the gravitational wave freedom directly. We derive the explicit Lorentz transformation of the gravitational wave tensor. The transformation is similar to the Lorentz transformation for electric field vector and magnetic field vector which are three dimensional vectors. Based on the deduced Lorentz transformation of the gravitational wave three dimensional tensor, we can construct the gravitational waveform of moving source with any speed if only the waveform of the corresponding rest waveform is given. As an example, we apply our method to the effect of kick velocity of binary black hole. The adjusted waveform by the kick velocity is presented.
## I Introduction
Since the first detection of gravitational waves (GW) by LIGO in 2015, gravitational wave astronomy has developed very quickly. Binary black holes, binary neutron stars and neutron star-black hole binaries have been found. Compact object binaries are traditionally thought to form through two channels: isolated evolution, which happens in the field [1], and dynamical encounters, which happen in clusters [2; 3]. Many binary black holes found by GW detection are much more massive than people ever expected [4]. Such findings have stimulated discussions and studies of the formation problem of such binary systems. Recently, a new formation channel has been proposed: these GW binaries may form in the accretion disk of a supermassive black hole [5; 6; 7; 8; 9]. The authors of [10] found that migration traps in the accretion disk may cause the binaries to settle at the traps. If the disk is thick, the pressure gradient may change the structure of the migration traps [11], which can result in a trap located at a distance of several gravitational radii from the central black hole. How the binary black holes (BBHs) detected through gravitational waves formed has become a very interesting problem.
A binary black hole formed near a supermassive black hole will be affected by the gravitational potential of the central black hole [12; 13; 14; 15; 16; 17; 18; 19]. One such effect is that the binary's barycentre will move with respect to the detector. Here we are concerned with how such motion may change the waveform radiated by the binary. In addition to the Doppler shift [20; 21], other corrections to the waveform may be introduced by the relative motion between the source and the detector [22; 23; 24]. If one can determine the velocity of the gravitational wave source [25; 26], such information will be helpful to distinguish the formation channels of the BBHs.
Besides the effect of a central supermassive black hole, the velocity dispersion of galaxy clusters may also produce relative motion between a binary black hole and the GW detector [24]. When the relative speed is slow, a small velocity approximation can be used to treat the waveform changing problem [22; 23; 24]. Such a small velocity condition is valid for galaxy velocity dispersions and for binary black holes located more than tens of gravitational radii away from the central supermassive black hole. If the binary black hole is located very near the supermassive black hole [11], the small velocity approximation may break down, and an exact Lorentz transformation of the gravitational waveform is needed. In the current paper we present such a transformation explicitly and express it in an electromagnetic-wave-like manner.
When considering the Lorentz transformation of gravitational waves, one may correspondingly ask about the tensor rank of a gravitational wave. Unfortunately, gravitational waves admit both 'boost weight zero' and 'spin weight 2' properties [27], which means that a gravitational wave behaves like both a scalar and a rank-two tensor. Essentially, a gravitational wave is neither a scalar nor a rank-two tensor. We need to rely on the Bondi-Metzner-Sachs (BMS) theory [28; 29; 30; 31; 32] to find the Lorentz transformation of gravitational waves [33].
When the gravitational wave can be viewed as a perturbation of Minkowski spacetime, it can be treated as a rank-two tensor with respect to the Lorentz group [34]. But the velocity involved in the Lorentz transformation cannot be large, otherwise the perturbation condition for the rank-two tensor breaks down. In addition, a transverse-traceless rank-two tensor will be transformed into a tensor which no longer satisfies the transverse-traceless condition. Consequently, one needs to apply an additional transverse-traceless projection after the transformation.
People are already used to describing gravitational waves with a three dimensional tensor, which is transverse-traceless. This three dimensional tensor is covariant with respect to general three dimensional coordinate transformations, but it cannot be treated from a four dimensional viewpoint. This character is quite similar to that of the electric and magnetic field vectors. Together with the Lorentz transformation we introduce in the current paper, the three dimensional tensor can describe gravitational waves as completely as the electric and magnetic vectors describe the electromagnetic field. The Lorentz transformation does not change the transverse-traceless property of the gravitational wave tensor. Equipped with our Lorentz transformation rule, the three dimensional tensor provides a good tool to describe gravitational waves.
Actually, the BMS theory has already presented a BMS transformation of gravitational waves which includes Lorentz transformations, rotations, translations and even super-translations from the four dimensional manifold viewpoint [33]. But such a representation is quite hard for people who are not familiar with differential geometry; in particular, astronomers may find such a theory hard to understand. This is very similar to the situation of electromagnetism in curved spacetime before the membrane paradigm proposed by Thorne and his coworkers in the 1980s [35]. At that time astronomers found it quite hard to understand the behavior of electromagnetism in curved spacetime although the four dimensional theory of the problem was already clear. In contrast, the membrane paradigm uses the usual three dimensional language. Astronomers afterwards studied, applied and developed the electromagnetic theory in curved spacetime extensively. We hope the Lorentz transformation theory of the three dimensional gravitational wave tensor presented in the current paper can play a similar role to the membrane paradigm for gravitational wave astrophysics. Based on this Lorentz transformation theory of the three dimensional gravitational wave tensor, astronomers can straightforwardly construct waveform models for various kinds of moving sources if only the corresponding rest-frame waveform is known [36].
The rest of this paper is arranged as follows. We first review and comment on the three dimensional tensor description of gravitational waves in the next section. Then we set up the Lorentz transformation relation based on the BMS theory, aiming to deduce the Lorentz transformation of gravitational waves, in section III. After that we apply the BMS Lorentz transformation rule to electromagnetic waves in section IV. In the course of deriving the Lorentz transformation rule for electromagnetic waves, we construct a key relation between two relatively moving frames. Based on the BMS Lorentz transformation rule and the aforementioned key relation, we construct in section V the Lorentz transformation formula for the gravitational wave tensor. In section VI, we calculate the phase change resulting from the Lorentz transformation based on the explicit Lorentz transformation formula of a three dimensional transverse-traceless tensor. In section VII, we explicitly construct the waveform for moving sources. The waveform adjusted by the kick velocity of a BBH is presented there as an example of the waveform construction process for moving sources. At last we give a summary and discussion in section VIII.
Throughout the whole paper, units with \(G=c=1\) are used. The Einstein summation convention is adopted. The indices from \(i\) to \(n\) take values from 1 to 3. Other indices take values from 0 to 3.
## II Three dimensional tensor description of gravitational wave
Essentially, general relativity is a four dimensional theory. But a three dimensional description can help people understand general relativity in a traditional way. The membrane paradigm of black holes is a very good example of this [35].
Physically a gravitational wave admits two polarization modes, which correspond to the two degrees of freedom of the gravitational wave. Consequently we can describe a gravitational wave through a three dimensional tensor
\[h_{ij}\equiv h_{+}e^{+}_{ij}+h_{\times}e^{\times}_{ij}, \tag{1}\]
where \(h_{+,\times}\) and \(e^{+,\times}_{ij}\) are the two polarization modes and the corresponding bases. There is one and only one direction \(\hat{N}^{i}\) (up to a sign) perpendicular to \(h_{ij}\). Such direction indicates the propagating direction of the gravitational wave
\[\hat{N}^{i}h_{ij}=0. \tag{2}\]
Since \(h_{ij}\) is a tensor, any coordinates, including Cartesian coordinates, spherical coordinates and others, can be used to do the calculation. This is not new; many works have already exploited this facility [34; 37]. Many astronomers are familiar with the 'transverse-traceless' property of GWs, which refers exactly to the above three dimensional tensor description. In the four dimensional viewpoint, in contrast, many different descriptions are possible [26; 30; 38].
But until now the above tensor description of gravitational waves has been limited to three dimensional coordinate transformations; physically it is limited to rotations. Analogously, the electric vector and the magnetic vector are also just three dimensional tensors, yet they can describe the four dimensional behavior of the electromagnetic field quite well. The key point is that there is a Lorentz transformation rule for the electric vector and the magnetic vector. To fill this gap for gravitational waves, we construct the Lorentz transformation rule for the gravitational wave tensor (1) in the current paper. Equipped with the Lorentz transformation rule, the above three dimensional tensor description becomes a more powerful tool to study gravitational waves.
## III Lorentz transformation within the BMS theory
Within the BMS theory, the Lorentz transformation relating two asymptotic inertial frames \((t,x,y,z)\) and \((t^{\prime},x^{\prime},y^{\prime},z^{\prime})\) can be expressed as [30]
\[\begin{pmatrix}t^{\prime}+z^{\prime}&x^{\prime}+iy^{\prime}\\ x^{\prime}-iy^{\prime}&t^{\prime}-z^{\prime}\end{pmatrix}=L\begin{pmatrix}t+z&x +iy\\ x-iy&t-z\end{pmatrix}L^{\dagger}, \tag{3}\]
where \(\dagger\) means transpose and complex conjugate (hermitian conjugate), and \(L\) is a \(2\times 2\) complex matrix with unit determinant (an element of \(SL(2,\mathbb{C})\)) representing the Lorentz transformation. Corresponding to a boost with relative velocity \(\vec{v}\) and a rotation with angle \(\vec{\theta}\) we have respectively [33]
\[L =B(\vec{v})=e^{\eta\hat{v}\cdot\vec{\sigma}},e^{\eta}=\sqrt{\gamma(1- v)},\gamma=\frac{1}{\sqrt{1-v^{2}}}, \tag{4}\] \[L =R(\vec{\theta})=e^{\frac{i}{2}\vec{\theta}\cdot\vec{\sigma}}, \tag{5}\]
where \(\vec{\sigma}=(\sigma^{1},\sigma^{2},\sigma^{3})\) and \(\sigma^{i},i=1,2,3\) are the Pauli matrices ((1.2.24) of [30]). For the boost \(B(\vec{v})\) we have explicitly
\[B(\vec{v})=\begin{pmatrix}\cosh\eta+\frac{v_{3}}{v}\sinh\eta&(\frac{v_{1}}{v}+ i\frac{v_{2}}{v})\sinh\eta\\ (\frac{v_{1}}{v}-i\frac{v_{2}}{v})\sinh\eta&\cosh\eta-\frac{v_{3}}{v}\sinh \eta\end{pmatrix}, \tag{6}\]
where \(\vec{v}=(v_{1},v_{2},v_{3})\).
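As a quick numerical illustration (ours, not part of the original derivation), the following Python sketch builds \(B(\vec{v})\) from the exponential in (4) and checks that conjugation of the position matrix in (3) preserves the Minkowski interval \(t^{2}-x^{2}-y^{2}-z^{2}\), which equals the determinant of that matrix. NumPy and SciPy are assumed to be available; all variable names and numerical values are illustrative.

```python
import numpy as np
from scipy.linalg import expm

# Pauli matrices
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]

def boost(v):
    """B(v) = exp(eta * vhat.sigma) with exp(eta) = sqrt(gamma(1-|v|)), Eq. (4)."""
    v = np.asarray(v, dtype=float)
    speed = np.linalg.norm(v)
    gamma = 1.0 / np.sqrt(1.0 - speed**2)
    eta = 0.5 * np.log(gamma * (1.0 - speed))
    return expm(eta * sum(vi / speed * s for vi, s in zip(v, sigma)))

def position_matrix(t, x, y, z):
    """Hermitian position matrix appearing in Eq. (3)."""
    return np.array([[t + z, x + 1j * y], [x - 1j * y, t - z]], dtype=complex)

B = boost([0.3, 0.4, 0.5])
X = position_matrix(1.0, 0.2, -0.1, 0.7)
Xp = B @ X @ B.conj().T                      # Eq. (3)
# det X = t^2 - x^2 - y^2 - z^2 is preserved because det B = 1
print(np.linalg.det(X).real, np.linalg.det(Xp).real)
```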
In the asymptotic region, the relation between Bondi-Sachs (BS) coordinate \((u,r,\theta,\phi)\)[31; 32; 39] and the above inertial Cartesian coordinate \((t,x,y,z)\) can be expressed as
\[t=u+r,x=r\sin\theta\cos\phi,y=r\sin\theta\sin\phi,z=r\cos\theta. \tag{7}\]
Correspondingly we can express the position matrix in (3) as
\[\begin{pmatrix}t+z&x+iy\\ x-iy&t-z\end{pmatrix} =\begin{pmatrix}u+\frac{2r|\zeta|^{2}}{|\zeta|^{2}+1}&\frac{2r \zeta}{|\zeta|^{2}+1}\\ \frac{2r\bar{\zeta}}{|\zeta|^{2}+1}&u+\frac{2r}{|\zeta|^{2}+1}\end{pmatrix}, \tag{8}\] \[\zeta \equiv e^{i\phi}\cot\frac{\theta}{2}, \tag{9}\]
where \(\bar{\zeta}\) means the complex conjugate of \(\zeta\).
Considering general transformation matrix
\[L=\begin{pmatrix}a&b\\ c&d\end{pmatrix} \tag{10}\]
which is an element of \(SL(2,\mathbb{C})\), we have the asymptotic BS coordinate transformation up to \(O(\frac{1}{r})\)
\[u^{\prime}=ku,k\equiv\frac{1+\zeta\bar{\zeta}}{|a\zeta+b|^{2}+|c \zeta+d|^{2}}, \tag{11}\] \[r^{\prime}=kr+\] \[\frac{ku}{1+\zeta\bar{\zeta}}\left[2(a\bar{c}+b\bar{d})(c\zeta+d )(\bar{a}\bar{\zeta}+\bar{b})+\right.\] \[\left.(|a|^{2}+|b|^{2}-|c|^{2}-|d|^{2})(|a\zeta+b|^{2}-|c\zeta+d| ^{2})\right],\] (12) \[\zeta^{\prime}=\frac{a\zeta+b}{c\zeta+d}. \tag{13}\]
Here prime means the new BS coordinate.
We are free to choose the orientation of the BS coordinates. In order to simplify the calculation we let the \(z\) axis point along the direction of the relative velocity. Moreover we choose the \(x\) axis such that the gravitational wave source lies in the \(x\)-\(z\) plane; the \(y\) axis is then determined by the right-hand rule. With this choice of coordinate basis the source lies in the direction
\[\theta\neq 0,\phi=0, \tag{14}\]
and the Lorentz transformation matrix (6) can be simplified as
\[B =\begin{pmatrix}\cosh\eta+\sinh\eta&0\\ 0&\cosh\eta-\sinh\eta\end{pmatrix} \tag{15}\] \[=\begin{pmatrix}e^{\eta}&0\\ 0&e^{-\eta}\end{pmatrix}, \tag{16}\]
which means
\[a=e^{\eta},d=e^{-\eta},b=c=0. \tag{17}\]
So the above general transformation becomes
\[u^{\prime} =ku,k\equiv\frac{1+|\zeta|^{2}}{a^{2}|\zeta|^{2}+d^{2}}, \tag{18}\] \[r^{\prime} =\frac{r}{k}+\frac{u}{k}\frac{(|a|^{2}-|d|^{2})(a^{2}|\zeta|^{2}- d^{2})}{1+|\zeta|^{2}},\] (19) \[\zeta^{\prime} =\frac{a}{d}\zeta, \tag{20}\]
## IV Lorentz transformation of electromagnetic wave within the BMS theory
In the asymptotic region, that is to say the wave zone, the BS coordinate basis vector \(\hat{r}\) corresponds to the propagating direction of the electromagnetic (EM) wave. Based on the properties of an EM wave we have \(\hat{r}\cdot\vec{E}=0,\vec{B}=\hat{r}\times\vec{E},\vec{E}=\vec{B}\times\hat{r}\). Using the tetrad \((\hat{t},\hat{r},\hat{\theta},\hat{\phi})\) we have the EM tensor field \(F_{\mu\nu}\) and the Newman-Penrose tetrad as follows
\[F_{\mu\nu}=\begin{pmatrix}0&0&E_{\hat{\theta}}&E_{\hat{\phi}}\\ 0&0&-E_{\hat{\theta}}&-E_{\hat{\phi}}\\ -E_{\hat{\theta}}&E_{\hat{\theta}}&0&0\\ -E_{\hat{\phi}}&E_{\hat{\phi}}&0&0\end{pmatrix}, \tag{21}\] \[l^{a}=\frac{1}{\sqrt{2}}(\hat{t}^{a}+\hat{r}^{a}),n^{a}=\frac{1}{\sqrt{2}}(\hat{t}^{a}-\hat{r}^{a}),m^{a}=\frac{1}{\sqrt{2}}(\hat{\theta}^{a}+i\hat{\phi}^{a}). \tag{22}\]
Then we have Newman-Penrose EM scalar
\[\phi_{2}\equiv F_{ab}n^{a}\bar{m}^{b}=E_{\hat{\theta}}-iE_{\hat{\phi}}. \tag{23}\]
The boost BMS transformation results in [30]
\[\phi_{2}^{\prime}=\frac{e^{-i\lambda}}{k}\phi_{2}, \tag{24}\] \[e^{i\lambda}=\frac{\bar{c}\bar{\zeta}+\bar{d}}{c\zeta+d}. \tag{25}\]
And the EM propagating direction will change from \(\hat{r}\) to \(\hat{r}^{\prime}\). Again here the prime means the new coordinate and the new frame after the Lorentz transformation. Specifically the direction is described by \(\theta\) and \(\theta^{\prime}\) due to the property (14). Together with (16) and (4) the transformation (20) results in
\[\cot\frac{\theta^{\prime}}{2}=\gamma(1-v)\cot\frac{\theta}{2}, \tag{26}\]
which is nothing but the usual aberration formula [40]. The above aberration formula (26) can also be expressed as
\[\cos\theta^{\prime}=\frac{\cos\theta-v}{1-v\cos\theta},\sin\theta^{\prime}= \frac{\sin\theta}{\gamma(1-v\cos\theta)}. \tag{27}\]
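As a short numerical check (ours; plain Python with NumPy and illustrative values), the half-angle form (26) and the cosine/sine form (27) of the aberration formula can be compared directly:

```python
import numpy as np

v, theta = 0.9, 1.1                       # illustrative boost speed (units of c) and angle
gamma = 1.0 / np.sqrt(1.0 - v**2)

# half-angle form, Eq. (26): cot(theta'/2) = gamma (1 - v) cot(theta/2)
theta_p_26 = 2.0 * np.arctan(np.tan(theta / 2.0) / (gamma * (1.0 - v)))

# cosine/sine form, Eq. (27)
cos_tp = (np.cos(theta) - v) / (1.0 - v * np.cos(theta))
sin_tp = np.sin(theta) / (gamma * (1.0 - v * np.cos(theta)))
theta_p_27 = np.arctan2(sin_tp, cos_tp)

print(theta_p_26, theta_p_27)             # the two forms agree
```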
From (24) we can see a phase change \(e^{-i\lambda}\), which corresponds to 'spin weight 1', and an amplitude change \(\frac{1}{k}\), which corresponds to 'boost weight 1'. Altogether we conclude that the EM wave behaves as a rank-one tensor, which is consistent with the usual understanding that the EM wave is a vector field.
Due to (16), (25) becomes
\[e^{i\lambda}=1,\lambda=0, \tag{28}\] \[k=\frac{1}{\gamma(1-\vec{v}\cdot\hat{r})}. \tag{29}\]
Consequently (24) results in
\[\phi_{2}^{\prime}=\frac{1}{k}\phi_{2}, \tag{30}\] \[E_{\hat{\theta}}^{\prime}=\frac{E_{\hat{\theta}}}{k},E_{\hat{ \phi}}^{\prime}=\frac{E_{\hat{\phi}}}{k}. \tag{31}\]
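The reduction of the BMS factor (18) to the Doppler form (29) is easy to verify numerically; the following sketch (ours, illustrative values, NumPy assumed) compares the two expressions for a boost along \(z\):

```python
import numpy as np

v, theta = 0.8, 0.9                                    # illustrative values
gamma = 1.0 / np.sqrt(1.0 - v**2)
a, d = np.sqrt(gamma * (1.0 - v)), 1.0 / np.sqrt(gamma * (1.0 - v))   # Eq. (17)
zeta_sq = 1.0 / np.tan(theta / 2.0)**2                 # |zeta|^2 with zeta = e^{i phi} cot(theta/2)

k_bms = (1.0 + zeta_sq) / (a**2 * zeta_sq + d**2)      # Eq. (18)
k_doppler = 1.0 / (gamma * (1.0 - v * np.cos(theta)))  # Eq. (29)
print(k_bms, k_doppler)                                # agree
```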
Noting the relation between the Cartesian frame and the spherical frame
\[\hat{r}=\sin\theta\hat{e}_{x}+\cos\theta\hat{e}_{z}, \tag{32}\] \[\hat{\theta}=\cos\theta\hat{e}_{x}-\sin\theta\hat{e}_{z},\] (33) \[\hat{\phi}=\hat{e}_{y}, \tag{34}\]
if we use three dimensional vector to express the electric field, we have
\[\vec{E} =E_{\theta}\hat{e}_{\theta}+E_{\hat{\phi}}\hat{e}_{\phi}, \tag{35}\] \[=E_{\theta}\cos\theta\hat{e}_{x}+E_{\hat{\phi}}\hat{e}_{y}-E_{ \theta}\sin\theta\hat{e}_{z},\] (36) \[\vec{E}^{\prime} =E_{\theta}^{\prime}\cos\theta^{\prime}\hat{e}_{x}^{\prime}+E_{ \phi}^{\prime}\hat{e}_{y}^{\prime}-E_{\theta}^{\prime}\sin\theta^{\prime}\hat{ e}_{z}^{\prime}. \tag{37}\]
Frame \((x,y,z)\) regards frame \((x^{\prime},y^{\prime},z^{\prime})\) as moving in the \(z\) direction, while frame \((x^{\prime},y^{\prime},z^{\prime})\) regards frame \((x,y,z)\) as moving in the \(-z^{\prime}\) direction. Both frames agree that the relative velocity lies along the same line; in other words, they agree that \(\hat{e}_{z}\) and \(\hat{e}_{z}^{\prime}\) point in the same direction. In addition, since both \(\hat{e}_{z}\) and \(\hat{e}_{z}^{\prime}\) have unit length, we have
\[\hat{e}_{z}=\hat{e}_{z}^{\prime}. \tag{38}\]
According to the Lorentz transformation between \((t,x,y,z)\) and \((t^{\prime},x^{\prime},y^{\prime},z^{\prime})\)
\[t^{\prime} =\gamma(t-vz), \tag{39}\] \[x^{\prime} =x,\] (40) \[y^{\prime} =y,\] (41) \[z^{\prime} =\gamma(z-vt), \tag{42}\]
we straightforwardly have
\[\hat{e}_{x}=\hat{e}_{x}^{\prime},\,\hat{e}_{y}=\hat{e}_{y}^{\prime}. \tag{43}\]
Plugging the relations (38) and (43) into (37) we get
\[\vec{E}^{\prime}=E_{\theta}^{\prime}\cos\theta^{\prime}\hat{e}_{x}+E_{\phi}^{ \prime}\hat{e}_{y}-E_{\theta}^{\prime}\sin\theta^{\prime}\hat{e}_{z}. \tag{44}\]
Combining relations (27) and (31) we have
\[\vec{E}^{\prime} =\gamma(1-v\cos\theta)\times\] \[\left(E_{\theta}\frac{\cos\theta-v}{1-v\cos\theta}\hat{e}_{x}+E_{ \phi}\hat{e}_{y}-E_{\theta}\frac{\sin\theta}{\gamma(1-v\cos\theta)}\hat{e}_{z}\right) \tag{45}\] \[=\gamma\vec{E}(1-v\cos\theta)\] \[-vE_{\theta}\sin\theta\left(\gamma\cos\theta\hat{e}_{z}+\gamma \sin\theta\hat{e}_{x}-\frac{\gamma^{2}}{1+\gamma}v\hat{e}_{z}\right)\] (46) \[=\gamma(1-\vec{v}\cdot\hat{r})\vec{E}+\gamma(\vec{v}\cdot\vec{E}) (\hat{r}-\frac{\gamma}{1+\gamma}\vec{v}). \tag{47}\]
We find that the above result is consistent with the usual Lorentz transformation of the electromagnetic field [40]
\[\vec{E}^{\prime}=\gamma(\vec{E}+\vec{v}\times\vec{B})-\frac{\gamma^{2}}{1+ \gamma}\vec{v}\cdot\vec{E}\vec{v} \tag{48}\]
\[=\gamma(1-\vec{v}\cdot\hat{r})\vec{E}+\gamma(\vec{v}\cdot\vec{E})(\hat{r}-\frac{ \gamma}{1+\gamma}\vec{v}). \tag{49}\]
In the last step we have used the EM wave relation \(\vec{B}=\hat{r}\times\vec{E}\). The consistency between (47) and (49) verifies the relations (38) and (43), which will be used to deduce the Lorentz transformation of gravitational waves in the next section.
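This consistency is also easy to check numerically. The sketch below (ours, NumPy assumed; random transverse fields are used purely for illustration) evaluates the two sides for a generic wave direction and boost velocity:

```python
import numpy as np

rng = np.random.default_rng(0)
r = rng.normal(size=3); r /= np.linalg.norm(r)     # propagation direction r-hat
E = rng.normal(size=3); E -= (E @ r) * r           # transverse electric field
B = np.cross(r, E)                                 # wave-zone relation B = r x E
v = rng.normal(size=3); v *= 0.8 / np.linalg.norm(v)
gamma = 1.0 / np.sqrt(1.0 - v @ v)

lhs = gamma * (E + np.cross(v, B)) - gamma**2 / (1.0 + gamma) * (v @ E) * v          # Eq. (48)
rhs = gamma * (1.0 - v @ r) * E + gamma * (v @ E) * (r - gamma / (1.0 + gamma) * v)  # Eq. (49)
print(np.max(np.abs(lhs - rhs)))                   # ~ 1e-16
```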
## V Lorentz transformation of gravitational wave
Within the tetrad \((\hat{t},\hat{r},\hat{\theta},\hat{\phi})\) introduced in the last section, gravitational wave can be expressed as
\[h_{ij} =h_{+}e^{+}_{ij}+h_{\times}e^{\times}_{ij}, \tag{50}\] \[e^{+}_{ij} =\hat{\theta}_{i}\hat{\theta}_{j}-\hat{\phi}_{i}\hat{\phi}_{j}\] \[=\cos^{2}\theta\hat{e}_{x}\hat{e}_{x}-\sin 2\theta\hat{e}_{x} \hat{e}_{z}+\sin^{2}\theta\hat{e}_{z}\hat{e}_{z}-\hat{e}_{y}\hat{e}_{y}\] (51) \[e^{\times}_{ij} =\hat{\theta}_{i}\hat{\phi}_{j}+\hat{\theta}_{j}\hat{\phi}_{i}\] \[=2\cos\theta\hat{e}_{x}\hat{e}_{y}-2\sin\theta\hat{e}_{y}\hat{e}_ {z}, \tag{52}\]
where \(h_{+,\times}\) corresponds to the two polarization modes of gravitational wave.
On the other hand we can express the BMS transformation of gravitational wave with notation \(h\equiv h_{+}-ih_{\times}\) as
\[h^{\prime}(u^{\prime})=e^{-2i\lambda}h(u),\qquad u^{\prime}=ku, \tag{53}\]

where \(\lambda\) and \(k\) are the phase and boost factors of the BMS Lorentz transformation introduced in the previous sections. Expressing \(h\) and \(h^{\prime}\) through the polarization bases (50)-(52) in the unprimed and primed frames, using the frame relations (38) and (43), and noting that near the detector the gravitational wave behaves as a plane wave, so that the dependence of \(h_{ij}\) on space and time enters only through \((t-\hat{r}\cdot\vec{x})\) (here \(\vec{x}\) denotes the position vector), a straightforward calculation gives the Lorentz transformation formula for the gravitational wave tensor

\[h^{\prime}_{ij}=h_{ij}+\frac{v^{k}h_{kl}v^{l}}{(1-\hat{r}\cdot\vec{v})^{2}}\left[\hat{r}_{i}\hat{r}_{j}-\frac{\gamma}{1+\gamma}(\hat{r}_{i}v_{j}+v_{i}\hat{r}_{j})+\frac{\gamma^{2}}{(1+\gamma)^{2}}v_{i}v_{j}\right]+\frac{v^{k}h_{kj}}{1-\hat{r}\cdot\vec{v}}\left[\hat{r}_{i}-\frac{\gamma}{1+\gamma}v_{i}\right]+\frac{v^{k}h_{ik}}{1-\hat{r}\cdot\vec{v}}\left[\hat{r}_{j}-\frac{\gamma}{1+\gamma}v_{j}\right]. \tag{59}\]
It can be checked straightforwardly that \(h^{\prime}_{ij}\) in (59) is traceless and that \(h^{\prime}_{ij}\hat{r}^{\prime i}=0\), which means \(h^{\prime}_{ij}\) is transverse. That is to say, our Lorentz transformation preserves the transverse-traceless property of the gravitational wave tensor.
In addition we note that \(h_{ij}h^{ij}=2(h_{+}^{2}+h_{\times}^{2})\). The relation (53) indicates that the Lorentz transformation gives \(h^{\prime}=he^{-2i\lambda}\) and consequently
\[h_{+}^{2}+h_{\times}^{2}=h_{+}^{\prime 2}+h_{\times}^{\prime 2}. \tag{62}\]
As a self-consistency check, one can show that the Lorentz transformation formula (59) does result in \(h_{ij}h^{ij}=h^{\prime}_{ij}h^{\prime ij}\). The calculation is straightforward but tedious. A convenient trick is to denote the \(h^{\prime}_{ij}\) in (59) as
\[h^{\prime}_{ij} =h_{ij}+p_{ij}+q_{ij}+s_{ij}, \tag{63}\] \[p_{ij} =v^{k}h_{kl}v^{l}\frac{1}{(1-\hat{r}\cdot\vec{v})^{2}}\left[\hat{ r}_{i}\hat{r}_{j}\right.\] \[\left.-\frac{\gamma}{1+\gamma}(\hat{r}_{i}v_{j}+v_{i}\hat{r}_{j}) +\frac{\gamma^{2}}{(1+\gamma)^{2}}v_{i}v_{j}\right],\] (64) \[q_{ij} =v^{k}h_{kj}\frac{1}{1-\hat{r}\cdot\vec{v}}[\hat{r}_{i}-\frac{ \gamma}{1+\gamma}v_{i}],\] (65) \[s_{ij} =v^{k}h_{ik}\frac{1}{1-\hat{r}\cdot\vec{v}}[\hat{r}_{j}-\frac{ \gamma}{1+\gamma}v_{j}]. \tag{66}\]
Then we have
\[h^{\prime}_{ij}h^{\prime ij}=h_{ij}h^{ij}+p_{ij}p^{ij}+2q_{ij}q^ {ij}+2h_{ij}p^{ij}\] \[\qquad\qquad\qquad\qquad\qquad+4h_{ij}q^{ij}+4p_{ij}q^{ij}+2q_{ij }s^{ij}. \tag{67}\]
Here we have used property \(q_{ij}=s_{ji}\).
Using relation
\[1+\frac{\gamma^{2}v^{2}}{(1+\gamma)^{2}}=\frac{2\gamma}{1+\gamma}. \tag{68}\]
we can get
\[q_{ij}q^{ij}=2h_{ij}q^{ij}. \tag{69}\]
Repeatedly using the relation (68) we can get
\[p_{ij}p^{ij}+2h_{ij}p^{ij}+4p_{ij}q^{ij}+2q_{ij}s^{ij}=0, \tag{70}\]
which results in
\[h^{\prime}_{ij}h^{\prime ij}=h_{ij}h^{ij}. \tag{71}\]
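The three properties above can also be verified numerically. The following sketch (ours; NumPy assumed, random inputs used purely for illustration) applies (59) written through (63)-(66) to a random transverse-traceless tensor and checks tracelessness, transversality with respect to the aberrated direction (79), and the invariant (71):

```python
import numpy as np

rng = np.random.default_rng(1)
r = rng.normal(size=3); r /= np.linalg.norm(r)        # propagation direction
v = rng.normal(size=3); v *= 0.7 / np.linalg.norm(v)  # boost velocity, |v| = 0.7
gamma = 1.0 / np.sqrt(1.0 - v @ v)

# a random symmetric tensor projected to be transverse to r and traceless
P = np.eye(3) - np.outer(r, r)
M = rng.normal(size=(3, 3)); M = M + M.T
h = P @ M @ P
h -= 0.5 * np.trace(h) * P

# Eq. (59) written through Eqs. (63)-(66)
u = 1.0 - r @ v
g = gamma / (1.0 + gamma)
z = r - g * v
hp = h + (v @ h @ v) / u**2 * np.outer(z, z) \
       + np.outer(z, h @ v) / u + np.outer(h @ v, z) / u

# aberrated propagation direction, Eq. (79)
vhat = v / np.linalg.norm(v)
rp = ((r @ vhat - np.linalg.norm(v)) / u) * vhat + (r - (r @ vhat) * vhat) / (gamma * u)

print(np.trace(hp))                       # ~0 : traceless
print(np.max(np.abs(hp @ rp)))            # ~0 : transverse to the new direction
print(np.sum(h * h), np.sum(hp * hp))     # equal, Eq. (71)
```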
For those readers who take the gravitational wave as a perturbation of flat spacetime, the gravitational wave can be described as a four dimensional tensor, and one may ask about the Lorentz transformation of such a four dimensional tensor. Actually this transformation can be obtained quite easily. The three dimensional tensor discussed above corresponds exactly to the spatial part of such a four dimensional tensor, and due to the transverse-traceless requirement the time-related components all vanish. So by simply appending one row and one column of zeros to the three dimensional tensor obtained by our Lorentz transformation rule, one gets the four dimensional GW tensor.
## VI Calculation of phase change due to boost for general velocity
In the last section we assumed that the boost velocity points along the \(z\) direction, which results in \(\lambda=0\). As an application of the Lorentz transformation formula (59), we can calculate the phase change \(\lambda\) due to a boost with arbitrary velocity \(\vec{v}\).
In order to calculate \(\lambda\), we need \(h\equiv h_{+}-ih_{\times}\) and \(h^{\prime}\equiv h^{\prime}_{+}-ih^{\prime}_{\times}\) which are related to
\[h_{ij} =h_{+}e^{+}_{ij}+h_{\times}e^{\times}_{ij}, \tag{72}\] \[h^{\prime}_{ij} =h^{\prime}_{+}e^{\prime i}_{ij}+h^{\prime}_{\times}e^{\prime \times}_{ij}. \tag{73}\]
Figure 1: Phase change and aberration angle for boost velocity \(\vec{v}=(0.9,0,0)\). The top panel is for the phase change \(\lambda\in(-\pi,\pi)\). The middle panel is for \(\Delta\theta\equiv\theta^{\prime}-\theta\). The bottom panel is for \(\Delta\phi\equiv\phi^{\prime}-\phi\). The phase change appears to show non-smooth jumps; this is because \(-\pi\) and \(\pi\) should be identified, while the plot shows a jump from \(-\pi\) to \(\pi\).
But one has to note that relation (55) no longer holds for a general velocity \(\vec{v}\). Keeping the above two relations in mind and contracting both sides of (59) with \(e_{+}^{ij}\), we get
\[h_{+}^{\prime}e_{ij}^{\prime+}e_{+}^{ij}+h_{\times}^{\prime}e_{ij }^{\prime\times}e_{+}^{ij}=2h_{+}+\frac{v^{k}h_{kl}v^{l}}{(1-\hat{r}\cdot\vec{v })^{2}}\frac{\gamma^{2}}{(1+\gamma)^{2}}v_{i}v_{j}e_{+}^{ij}\] \[-2\frac{v^{k}h_{kj}}{1-\hat{r}\cdot\vec{v}}\frac{\gamma}{1+ \gamma}v_{i}e_{+}^{ij}. \tag{74}\]
At general angular position \((\theta,\phi)\) we have relations
\[\hat{\theta}^{i} =\cos\theta\cos\phi\hat{e}_{x}+\cos\theta\sin\phi\hat{e}_{y}-\sin \theta\hat{e}_{z}, \tag{75}\] \[\hat{\phi}^{i} =-\sin\phi\hat{e}_{x}+\cos\phi\hat{e}_{y}. \tag{76}\]
Similarly we have
\[\hat{\theta}^{\prime i} =\cos\theta^{\prime}\cos\phi^{\prime}\hat{e}_{x}+\cos\theta^{ \prime}\sin\phi^{\prime}\hat{e}_{y}-\sin\theta^{\prime}\hat{e}_{z}, \tag{77}\] \[\hat{\phi}^{\prime i} =-\sin\phi^{\prime}\hat{e}_{x}+\cos\phi^{\prime}\hat{e}_{y}. \tag{78}\]
Here \((\theta,\phi)\) and \((\theta^{\prime},\phi^{\prime})\) are related through the aberration formula (27) which is equivalent to
\[\hat{r}^{\prime} =\frac{\hat{r}\cdot\hat{v}-v}{1-\hat{r}\cdot\vec{v}}\hat{v}+ \frac{1}{\gamma(1-\hat{r}\cdot\vec{v})}[\hat{r}-(\hat{r}\cdot\hat{v})\hat{v}], \tag{79}\] \[\hat{r}\cdot\vec{v} =v_{x}\sin\theta\cos\phi+v_{y}\sin\theta\sin\phi+v_{z}\cos\theta,\] (80) \[\hat{r}\cdot\hat{v} =\frac{\hat{r}\cdot\vec{v}}{v}. \tag{81}\]
Compared to the aberration formula (27), the above expressions are more useful to the calculation here. These
Figure 2: The plot convention is the same as in Fig. 1, but here the boost velocity is \(\vec{v}=(0,0.9,0)\). The bottom panel appears to show non-smooth jumps. This is because \(\phi^{\prime}\) takes values in \((0,2\pi)\) while \(0\) and \(2\pi\) should be identified; the apparent jump corresponds to the contact between \(\phi^{\prime}=0\) and \(\phi^{\prime}=2\pi\).
relations result in
\[\cos\theta^{\prime} =\frac{1}{\gamma(1-\hat{r}\cdot\vec{v})}\left[\cos\theta-(\hat{r} \cdot\hat{v})\frac{v_{z}}{v}\right]\] \[+\frac{\hat{r}\cdot\hat{v}-v}{1-\hat{r}\cdot\vec{v}}\frac{v_{z}}{v}, \tag{82}\] \[\sin\theta^{\prime}\cos\phi^{\prime} =\frac{1}{\gamma(1-\hat{r}\cdot\vec{v})}\left[\sin\theta\cos\phi- (\hat{r}\cdot\hat{v})\frac{v_{x}}{v}\right]\] \[+\frac{\hat{r}\cdot\hat{v}-v}{1-\hat{r}\cdot\vec{v}}\frac{v_{x}}{v},\] (83) \[\sin\theta^{\prime}\sin\phi^{\prime} =\frac{1}{\gamma(1-\hat{r}\cdot\vec{v})}\left[\sin\theta\sin\phi- (\hat{r}\cdot\hat{v})\frac{v_{y}}{v}\right]\] \[+\frac{\hat{r}\cdot\hat{v}-v}{1-\hat{r}\cdot\vec{v}}\frac{v_{y}}{v}. \tag{84}\]
The combination of (83) and (84) can determine \(\phi^{\prime}\) in the range \((0,2\pi)\). According to the above relations, we can get explicitly
\[e_{ij}^{\prime+}e_{+}^{ij} =(\hat{\theta}\cdot\hat{\theta}^{\prime})^{2}+(\hat{\phi}\cdot \hat{\phi}^{\prime})^{2}-(\hat{\phi}\cdot\hat{\theta}^{\prime})^{2}-(\hat{ \theta}\cdot\hat{\phi}^{\prime})^{2}, \tag{85}\] \[e_{ij}^{\prime\times}e_{+}^{ij} =2(\hat{\theta}^{\prime}\cdot\hat{\theta})(\hat{\phi}^{\prime} \cdot\hat{\theta})-2(\hat{\theta}^{\prime}\cdot\hat{\phi})(\hat{\phi}^{\prime }\cdot\hat{\phi}),\] (86) \[h_{ij}v^{i}v^{j} =h_{+}e_{ij}^{+}v^{i}v^{j}+h_{\times}e_{ij}^{\times}v^{i}v^{j},\] (87) \[v^{k}h_{kj}v_{i}e_{+}^{ij} =h_{+}[(\hat{\theta}\cdot\vec{v})^{2}+(\hat{\phi}\cdot\vec{v})^{2}],\] (88) \[e_{ij}^{+}v^{i}v^{j} =(\hat{\theta}\cdot\vec{v})^{2}-(\hat{\phi}\cdot\vec{v})^{2},\] (89) \[e_{ij}^{\times}v^{i}v^{j} =2(\hat{\theta}\cdot\vec{v})(\hat{\phi}\cdot\vec{v}). \tag{90}\]
Similar to (74) we can use \(e_{\times}^{ij}\) to multiply the two sides of (59) and get
\[h_{+}^{\prime}e_{ij}^{\prime+}e_{\times}^{ij}+h_{\times}^{\prime \prime}e_{ij}^{\prime\times}e_{\times}^{ij}=2h_{\times}+\frac{v^{k}h_{kl}v^{ l}}{(1-\hat{r}\cdot\vec{v})^{2}}\frac{\gamma^{2}}{(1+\gamma)^{2}}v_{i}v_{j}e_{ \times}^{ij}\] \[-2\frac{v^{k}h_{kj}}{1-\hat{r}\cdot\vec{v}}\frac{\gamma}{1+ \gamma}v_{i}e_{\times}^{ij}, \tag{91}\]
with
\[e_{ij}^{\prime+}e_{\times}^{ij} =2(\hat{\theta}^{\prime}\cdot\hat{\theta})(\hat{\theta}^{\prime }\cdot\hat{\phi})-2(\hat{\phi}^{\prime}\cdot\hat{\theta})(\hat{\phi}^{\prime }\cdot\hat{\phi}), \tag{92}\] \[e_{ij}^{\prime\times}e_{\times}^{ij} =2(\hat{\theta}^{\prime}\cdot\hat{\theta})(\hat{\phi}^{\prime} \cdot\hat{\phi})+2(\hat{\phi}^{\prime}\cdot\hat{\theta})(\hat{\theta}^{\prime }\cdot\hat{\phi}),\] (93) \[v^{k}h_{kj}v_{i}e_{\times}^{ij} =h_{\times}[(\hat{\theta}\cdot\vec{v})^{2}+(\hat{\phi}\cdot\vec{ v})^{2}]. \tag{94}\]
Solving (74) and (91) for \(h_{+}^{\prime}\) and \(h_{\times}^{\prime}\) we get
\[h_{+}^{\prime} =\frac{\text{RHS}_{1}\,e_{ij}^{\prime\times}e_{\times}^{ij}-\text{RHS}_{2}\,e_{ij}^{\prime\times}e_{+}^{ij}}{e_{ij}^{\prime+}e_{+}^{ij}\,e_{kl}^{\prime\times}e_{\times}^{kl}-e_{ij}^{\prime\times}e_{+}^{ij}\,e_{kl}^{\prime+}e_{\times}^{kl}}, \tag{95}\] \[h_{\times}^{\prime} =\frac{\text{RHS}_{2}\,e_{ij}^{\prime+}e_{+}^{ij}-\text{RHS}_{1}\,e_{ij}^{\prime+}e_{\times}^{ij}}{e_{ij}^{\prime+}e_{+}^{ij}\,e_{kl}^{\prime\times}e_{\times}^{kl}-e_{ij}^{\prime\times}e_{+}^{ij}\,e_{kl}^{\prime+}e_{\times}^{kl}}, \tag{96}\]
where \(\text{RHS}_{1,2}\) are respectively the right-hand sides of (74) and (91). Then \(e^{2i\lambda}=\frac{h}{h^{\prime}}\) gives us \(\lambda(\theta,\phi,v_{x},v_{y},v_{z})\).
As a self consistent check, we consider a special case \(\vec{v}=v\hat{e}_{z}\) which should result in \(\lambda(\theta,\phi,0,0,v_{z})=0\). In this special case we have
\[\hat{r}\cdot\vec{v}=v\cos\theta, \tag{97}\] \[\hat{r}\cdot\hat{v}=\cos\theta,\] (98) \[\cos\theta^{\prime}=\frac{\cos\theta-v}{1-v\cos\theta},\sin \theta^{\prime}=\frac{\sin\theta}{\gamma(1-v\cos\theta)},\] (99) \[\phi^{\prime}=\phi,\hat{\phi}^{\prime}=\hat{\phi},\] (100) \[\hat{\theta}\cdot\hat{\theta}^{\prime}=1-\frac{v^{2}\sin^{2}\theta} {1-v\cos\theta}\frac{\gamma}{1+\gamma},\] (101) \[\hat{\theta}^{\prime}\cdot\hat{\phi}=\hat{\phi}^{\prime}\cdot\hat{ \theta}=0,\hat{\phi}^{\prime}\cdot\hat{\phi}=1,\] (102) \[e_{ij}^{\prime\times}e_{+}^{ij}=e_{ij}^{\prime+}e_{\times}^{i}=0,\] (103) \[e_{ij}^{\prime+}e_{+}^{ij}=1+(\hat{\theta}\cdot\hat{\theta}^{ \prime})^{2},e_{ij}^{\prime\times}e_{\times}^{ij}=2(\hat{\theta}^{\prime}\cdot \hat{\theta}),\] (104) \[e_{ij}^{+}v^{i}v^{j}=v^{2}\sin^{2}\theta,e_{ij}^{\times}v^{i}v^{j }=0,\] (105) \[h_{ij}v^{i}v^{j}=v^{k}h_{kj}v_{i}e_{+}^{ij}=h_{+}v^{2}\sin^{2}\theta, \tag{106}\]
Figure 4: The plot convention is the same as in Fig. 1, but here the boost velocity is \(\vec{v}=(0.01,0,0)\). Compared to Figs. 1-3, the velocity decreases and the angle change decreases consequently; the color scale shrinks accordingly.
\[v^{k}h_{kj}v_{i}e^{ij}_{\times}=h_{\times}v^{2}\sin^{2}\theta, \tag{107}\] \[\text{RHS}_{1}=h_{+}[1+(1-\frac{1}{1-v\cos\theta}\frac{\gamma}{1+ \gamma}v^{2}\sin^{2}\theta)^{2}],\] (108) \[\text{RHS}_{2}=2h_{\times}(1-\frac{1}{1-v\cos\theta}\frac{\gamma} {1+\gamma}v^{2}\sin^{2}\theta). \tag{109}\]
In the calculation of \(\hat{\theta}\cdot\hat{\theta}^{\prime}\) we have used relation \(1-\frac{1}{\gamma}=\frac{\gamma v^{2}}{1+\gamma}\). Based on the above calculation results we can get \(h^{\prime}=h\) which confirms \(\lambda(\theta,\phi,0,0,v_{z})=0\).
Formally Eqs. (95) and (96) can be expressed as
\[h^{\prime}_{+} =A_{+}h_{+}+B_{+}h_{\times}, \tag{110}\] \[h^{\prime}_{\times} =A_{\times}h_{+}+B_{\times}h_{\times}, \tag{111}\]
where \(A_{+,\times}\) and \(B_{+,\times}\) only depend on \((\theta,\phi,v_{x},v_{y},v_{z})\); that is to say, \(A_{+,\times}\) and \(B_{+,\times}\) are independent of \(h_{+}\) and \(h_{\times}\). One can verify that
\[A_{+}=B_{\times},A_{\times}=-B_{+}. \tag{112}\]
Consequently we have
\[e^{-2i\lambda}=A_{+}-iA_{\times}, \tag{113}\]
which is independent of \(h_{+}\) and \(h_{\times}\). This is why we write only \(\lambda(\theta,\phi,v_{x},v_{y},v_{z})\) instead of \(\lambda(\theta,\phi,v_{x},v_{y},v_{z},h_{+},h_{\times})\). This property is also consistent with Eq. (53), which indicates that \(\lambda\), as a Lorentz transformation factor, only depends on \((\theta,\phi,v_{x},v_{y},v_{z})\).
Since \(\lambda\) is independent of \(h_{+}\) and \(h_{\times}\), we can plug \(h_{+}=1,h_{\times}=0\) into Eqs. (95) and (96) to simplify the calculation of \(\lambda\). Then we have
\[e^{-2i\lambda}=\frac{\left(\text{F}_{1}\,e^{\prime\times}_{ij}e^{ij}_{\times}-\text{F}_{2}\,e^{\prime\times}_{ij}e^{ij}_{+}\right)-i\left(\text{F}_{2}\,e^{\prime+}_{ij}e^{ij}_{+}-\text{F}_{1}\,e^{\prime+}_{ij}e^{ij}_{\times}\right)}{e^{\prime+}_{ij}e^{ij}_{+}\,e^{\prime\times}_{kl}e^{kl}_{\times}-e^{\prime\times}_{ij}e^{ij}_{+}\,e^{\prime+}_{kl}e^{kl}_{\times}}, \tag{114}\] \[\text{F}_{1}\equiv 2+(f_{\theta}-f_{\phi})^{2}-2(f_{\theta}+f_{\phi}), \tag{115}\] \[\text{F}_{2}\equiv 2f_{\theta}f_{\phi}\frac{(\hat{\theta}\cdot\vec{v})^{2}-(\hat{\phi}\cdot\vec{v})^{2}}{(\hat{\theta}\cdot\vec{v})(\hat{\phi}\cdot\vec{v})}, \tag{116}\] \[f_{\theta}\equiv\frac{\gamma}{1+\gamma}\frac{(\hat{\theta}\cdot\vec{v})^{2}}{1-\hat{r}\cdot\vec{v}},\qquad f_{\phi}\equiv\frac{\gamma}{1+\gamma}\frac{(\hat{\phi}\cdot\vec{v})^{2}}{1-\hat{r}\cdot\vec{v}}. \tag{117}\]
A time independent (equivalently frequency independent) phase factor \(\lambda\) can be absorbed in the initial phase during the gravitational wave data analysis [34; 37].
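Numerically, \(\lambda\) can also be obtained without the closed form (114)-(117), by applying (59) to a unit \(h_{+}\) polarization and projecting the result onto the primed polarization bases, which is the defining relation used above. The sketch below (ours; NumPy assumed, sample values illustrative) follows this route:

```python
import numpy as np

def sph_basis(theta, phi):
    """Unit vectors (r, theta, phi), Eqs. (75)-(78)."""
    st, ct, sp, cp = np.sin(theta), np.cos(theta), np.sin(phi), np.cos(phi)
    return (np.array([st * cp, st * sp, ct]),
            np.array([ct * cp, ct * sp, -st]),
            np.array([-sp, cp, 0.0]))

def lorentz_gw(h, r, v):
    """Eq. (59) applied to a tensor h transverse to r."""
    gamma = 1.0 / np.sqrt(1.0 - v @ v)
    u, g = 1.0 - r @ v, gamma / (1.0 + gamma)
    z = r - g * v
    return h + (v @ h @ v) / u**2 * np.outer(z, z) \
             + np.outer(z, h @ v) / u + np.outer(h @ v, z) / u

def aberrate(r, v):
    """Eq. (79)."""
    gamma, speed = 1.0 / np.sqrt(1.0 - v @ v), np.linalg.norm(v)
    vhat, u = v / speed, 1.0 - r @ v
    return ((r @ vhat - speed) / u) * vhat + (r - (r @ vhat) * vhat) / (gamma * u)

theta, phi = 1.2, 0.7
v = np.array([0.1143, 0.8219, 0.3481])          # the sample velocity of Fig. 3

rhat, that, phat = sph_basis(theta, phi)
ep = np.outer(that, that) - np.outer(phat, phat)
ec = np.outer(that, phat) + np.outer(phat, that)
h = ep                                          # h_+ = 1, h_x = 0

hp = lorentz_gw(h, rhat, v)
rp = aberrate(rhat, v)
_, thatp, phatp = sph_basis(np.arccos(rp[2]), np.arctan2(rp[1], rp[0]) % (2 * np.pi))
hplus_p = 0.5 * np.sum(hp * (np.outer(thatp, thatp) - np.outer(phatp, phatp)))
hcross_p = 0.5 * np.sum(hp * (np.outer(thatp, phatp) + np.outer(phatp, thatp)))
lam = -0.5 * np.angle(hplus_p - 1j * hcross_p)  # e^{-2 i lambda} = h'_+ - i h'_x when h = 1
print(lam)
```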
As examples we plot the phase change and the change of the wave propagation direction in Figs. 1-3. Fig. 1 and Fig. 2 correspond to the representative velocities \(\vec{v}=0.9\hat{e}_{x}\) and \(\vec{v}=0.9\hat{e}_{y}\) respectively. Fig. 3 corresponds to an arbitrary velocity \(\vec{v}=(0.1143,0.8219,0.3481)\). There are some non-smooth places in the plots due to the range \((0,2\pi)\) taken by the angles; the angle values \(0\) and \(2\pi\) are essentially the same.
Recalling that the kick velocity of a binary black hole (BBH) merger is about one percent of the speed of light [44; 45; 46; 47; 48; 49; 50; 51], we plot the results for \(\vec{v}=0.01\hat{e}_{x}\) in Fig. 4, which can be compared to the high-speed case shown in Fig. 1. In Figs. 1-3 the involved velocity is of order one and the corresponding angle change is also of order one. In comparison, the involved velocity of Fig. 4 decreases to order \(10^{-2}\) and the corresponding angle change also decreases to order \(10^{-2}\).
## VII Gravitational Waveform of a Moving BBH
In the usual literature, one uses 'detector frame' to mean a coordinate system which moves along with the detector and whose origin is located at the detector. Correspondingly, 'source frame' means the coordinate system which moves along with the GW source and whose origin is located at the source. In the current section, four different coordinate systems are involved. Consequently we use 'detector frame' and 'source frame' to mean only that the coordinates move along with the detector and the source respectively; for both the 'detector frame' and the 'source frame', the origin of the coordinate system may be located at either the detector or the source.
From the viewpoint of the source, whether or not the detector is moving, the gravitational wave radiated in the same direction will be detected. As usual we choose coordinates whose \(z\) direction points along the orbital angular momentum of the BBH, and whose \(x\)-\(z\) plane contains the line connecting the BBH and the detector. Then the gravitational wave tensor reaching the detector, \(h_{ij}\), is determined by the intrinsic parameters of the BBH, the luminosity distance between the BBH and the detector \(D_{L}\), and the inclination angle \(\iota\). More specifically, if the spin-weighted \(-2\) spherical harmonic components \(h_{lm}(t;m_{1},m_{2},\vec{s}_{1},\vec{s}_{2})\) are given [53; 54; 55; 56; 57], we have
\[h_{+} =\frac{1}{D_{L}}\Re[\sum_{lm}h_{lm}(t;m_{1},m_{2},\vec{s}_{1}, \vec{s}_{2})_{-2}Y_{lm}(\iota,0)], \tag{118}\] \[h_{\times} =\frac{1}{D_{L}}\Im[\sum_{lm}h_{lm}(t;m_{1},m_{2},\vec{s}_{1}, \vec{s}_{2})_{-2}Y_{lm}(\iota,0)],\] (119) \[h_{ij} =h_{+}e^{+}_{ij}(\iota,0)+h_{\times}e^{\times}_{ij}(\iota,0). \tag{120}\]
Here \(m_{1,2}\) and \(\vec{s}_{1,2}\) denote the masses and the spins of the two black holes as usual, and \({}_{-2}Y_{lm}\) are the spin-weighted \(-2\) spherical harmonic functions.
Then changing to the viewpoint of the detector, Eq. (59) results in the needed gravitational wave tensor
\[h^{\prime}_{ij}(t;m_{1},m_{2},\vec{s}_{1},\vec{s}_{2},\iota)=h^{ \prime}_{ij}(\frac{t^{\prime}}{k};m_{1},m_{2},\vec{s}_{1},\vec{s}_{2},\iota), \tag{121}\] \[k=\frac{1}{\gamma(1-\vec{v}\cdot\hat{r})}, \tag{122}\]
where \(t\) and \(t^{\prime}\) correspond to the time in the viewpoint of source frame and detector frame respectively.
Putting the coordinate origin at the detector, we get spherical coordinates \((R,\Theta,\Phi)\) within the frame at rest relative to the GW source (source frame). Within the frame moving with respect to the GW source we denote the spherical coordinates as \((R^{\prime},\Theta^{\prime},\Phi^{\prime})\) (detector frame). Assuming the GW source is located in the direction \((\Theta,\Phi)\), the GW tensor can also be expanded as
\[h_{ij}=H_{+}E^{+}_{ij}+H_{\times}E^{\times}_{ij}, \tag{123}\] \[E^{+}_{ij}=\hat{\Theta}_{i}\hat{\Theta}_{j}-\hat{\Phi}_{i}\hat{ \Phi}_{j},\] (124) \[E^{\times}_{ij}=\hat{\Theta}_{i}\hat{\Phi}_{j}+\hat{\Theta}_{j} \hat{\Phi}_{i} \tag{125}\]
In the source frame, the bases \(e^{+,\times}_{ij}\) and the bases \(E^{+,\times}_{ij}\) lie in the same plane. However, they may differ by a rotation angle, which is called the polarization angle \(\Psi\)
\[e^{+}_{ij}=\cos 2\Psi E^{+}_{ij}-\sin 2\Psi E^{\times}_{ij}, \tag{126}\]
\[e^{\times}_{ij}=\sin 2\Psi E^{+}_{ij}+\cos 2\Psi E^{\times}_{ij}. \tag{127}\]
\(\Psi\) corresponds to the angle between \(\hat{\theta}\) and \(\hat{\Theta}\). Consequently we have
\[h_{ij} =h_{+}e^{+}_{ij}+h_{\times}e^{\times}_{ij}\] \[=(h_{+}\cos 2\Psi+h_{\times}\sin 2\Psi)E^{+}_{ij}\] \[+(h_{\times}\cos 2\Psi-h_{+}\sin 2\Psi)E^{\times}_{ij}, \tag{128}\] \[H_{+} =h_{+}\cos 2\Psi+h_{\times}\sin 2\Psi,\] (129) \[H_{\times} =h_{\times}\cos 2\Psi-h_{+}\sin 2\Psi. \tag{130}\]
Similarly in detector frame we have
\[H^{\prime}_{+}=h^{\prime}_{+}\cos 2\Psi^{\prime}+h^{\prime}_{\times}\sin 2 \Psi^{\prime}, \tag{131}\]
Figure 5: The top panel: the source localization change \(\Delta\Theta\equiv\Theta^{\prime}-\Theta\), \(\Delta\Phi\equiv\Phi^{\prime}-\Phi\) and polarization angle change \(\Delta\Psi\equiv\Psi^{\prime}-\Psi\) of a GW150914-like source due to the kick velocity, through (135), (136) and (141). The parameters of GW150914 are respectively symmetric mass ratio \(\eta=0.248735\), effective spin \(\chi_{s}=-0.303448,\chi_{a}=0.014667\), source localization \((\Theta,\Phi)=(2.77766633,1.6391)\), polarization angle \(\Psi=1.56749\) and inclination angle \(\iota=2.6\)[52]. The second panel is for the corresponding kick velocity \(v\equiv\sqrt{v_{x}^{2}+v_{y}^{2}+v_{z}^{2}}\). The third panel shows the waveform of the GW150914-like source adjusted by the kick velocity. “Source frame” means the waveform of the source at rest. “Detector frame” means the waveform adjusted by the kick velocity. “Redshift only” means the adjustment comes only from the redshift factor \(k\). “\(\lambda\) only” means the adjustment due to the Lorentz transformation but ignoring the time difference between \(t\) and \(t^{\prime}\). The bottom panel shows the waveform difference between the adjusted waveforms and the source frame waveform. “All” means the comparison between the source frame waveform and the detector frame waveform.
\[H^{\prime}_{\times}=h^{\prime}_{\times}\cos 2\Psi^{\prime}-h^{\prime}_{+} \sin 2\Psi^{\prime}. \tag{132}\]
According to the convention of traditional data analysis in gravitational wave detection, the waveforms \(h^{\prime}_{+}\) and \(h^{\prime}_{\times}\) together with the parameters \((m_{1},m_{2},\vec{s}_{1},\vec{s}_{2},D_{L},\iota,\Theta^{\prime},\Phi^{\prime},\Psi^{\prime},t^{\prime}_{c},\phi^{\prime}_{c})\) will be considered [58]. Here \(t^{\prime}_{c}\) and \(\phi^{\prime}_{c}\) correspond to the coalescence time and the GW phase at that time. Whether or not the BBH is moving, the parameters \((m_{1},m_{2},\vec{s}_{1},\vec{s}_{2},D_{L},\iota)\) do not change. In contrast, \((\Theta^{\prime},\Phi^{\prime},\Psi^{\prime},t^{\prime}_{c},\phi^{\prime}_{c})\) depend on the velocity \(\vec{v}\) of the BBH.
If the parameters \((\Theta,\Phi,\Psi,t_{c},\phi_{c})\) for the corresponding rest BBH are known, we can express \((\Theta^{\prime},\Phi^{\prime},\Psi^{\prime},t^{\prime}_{c},\phi^{\prime}_{c})\) as functions of \((\Theta,\Phi,\Psi,t_{c},\phi_{c},\iota)\) and \(\vec{v}\). Among these functions, \(t^{\prime}_{c}\) and \(\phi^{\prime}_{c}\) can be easily obtained as
\[t^{\prime}_{c}=kt_{c}, \tag{133}\] \[\phi^{\prime}_{c}=\phi_{c}-2\lambda(\iota,0,v_{x},v_{y},v_{z}). \tag{134}\]
Noting the gravitational wave propagates along \(\hat{r}=-\hat{R}\) and \(\hat{r}^{\prime}=-\hat{R}^{\prime}\) in the viewpoint of source frame and detector frame respectively, the aberration formula (27) tells us
\[\cos\Theta^{\prime} =\frac{1}{\gamma(1+\hat{R}\cdot\vec{v})}\left[\cos\Theta+(\hat{R }\cdot\hat{v})\frac{v_{Z}}{v}\right]\] \[-\frac{v+\hat{R}\cdot\hat{v}}{1+\hat{R}\cdot\vec{v}}\frac{v_{Z}}{ v}, \tag{135}\] \[\sin\Theta^{\prime}\cos\Phi^{\prime} =\frac{1}{\gamma(1+\hat{R}\cdot\vec{v})}\left[\sin\Theta\cos\Phi -(\hat{R}\cdot\hat{v})\frac{v_{X}}{v}\right]\] \[+\frac{v+\hat{R}\cdot\hat{v}}{1+\hat{R}\cdot\vec{v}}\frac{v_{X}}{ v}. \tag{136}\]
Similar to the notation for the spherical coordinates, the unprimed notation \((X,Y,Z)\) means the Cartesian coordinates in the source frame and \((X^{\prime},Y^{\prime},Z^{\prime})\) means those in the detector frame. But unlike the spherical coordinates \((R,\Theta,\Phi)\), the origin of the Cartesian coordinates \((X,Y,Z)\) is located at the detector.
In order to find \(\Psi^{\prime}\), we use the following steps. Since the waveform template based on the source frame is known we have \(h=h_{+}-ih_{\times}\). Then we have
\[H_{+}-iH_{\times}\equiv H=he^{2i\Psi}. \tag{137}\]
Figure 6: The plot convention is the same as in Fig. 5, but the intrinsic parameters of the BBH are symmetric mass ratio \(\eta=0.25\) and effective spin \(\chi_{s}=0,\chi_{a}=0.99\).
The function \(\lambda\) solved in the last section can be used to calculate \(H^{\prime}\) and \(h^{\prime}\)
\[H^{\prime}_{+}-iH^{\prime}_{\chi}\equiv H^{\prime}=He^{-2i\lambda( \Theta,\Phi,v_{X},v_{Y},v_{Z})}, \tag{138}\] \[h^{\prime}_{+}-ih^{\prime}_{\chi}\equiv h^{\prime}=he^{-2i\lambda (\theta,\phi,v_{x},v_{y},v_{z})}, \tag{139}\]
which leads to
\[H^{\prime}=h^{\prime}e^{2i\Psi^{\prime}}, \tag{140}\] \[\Psi^{\prime}=\Psi-\lambda(\Theta,\Phi,v_{X},v_{Y},v_{Z})+\lambda (\theta,\phi,v_{x},v_{y},v_{z}). \tag{141}\]
Noting further that the waveform model convention for the source frame has \(\theta=\iota,\phi=0\), we can relate \(v_{x},v_{y},v_{z}\) to \(\Theta,\Phi,v_{X},v_{Y},v_{Z}\) and \(\iota\). The bases \((\hat{e}_{X},\hat{e}_{Y},\hat{e}_{Z})\) and \((\hat{e}_{x},\hat{e}_{y},\hat{e}_{z})\) are related through a rotation with Euler angles \((\Phi,\iota-\Theta,0)\). Consequently we have
\[\begin{pmatrix}v_{x}\\ v_{y}\\ v_{z}\end{pmatrix}=\begin{pmatrix}1&0&0\\ 0&\cos(\iota-\Theta)&\sin(\iota-\Theta)\\ 0&-\sin(\iota-\Theta)&\cos(\iota-\Theta)\end{pmatrix}.\] \[\begin{pmatrix}\cos\Phi&\sin\Phi&0\\ -\sin\Phi&\cos\Phi&0\\ 0&0&1\end{pmatrix}\begin{pmatrix}v_{X}\\ v_{Y}\\ v_{Z}\end{pmatrix} \tag{142}\] \[=\begin{pmatrix}\cos\Phi&\sin\Phi&0\\ -\sin\Phi\cos(\iota-\Theta)&\cos\Phi\cos(\iota-\Theta)&\sin(\iota-\Theta)\\ \sin\Phi\sin(\iota-\Theta)&-\cos\Phi\sin(\iota-\Theta)&\cos(\iota-\Theta) \end{pmatrix}\] \[\cdot\begin{pmatrix}v_{X}\\ v_{Y}\\ v_{Z}\end{pmatrix}. \tag{143}\]
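As a quick check (ours; NumPy assumed, the angles are the GW150914-like values quoted in the figures), the matrix in (143) is the product of the two rotations in (142) and is orthogonal, so the inverse transformation (147) below is simply its transpose:

```python
import numpy as np

def rot_x(a):
    return np.array([[1, 0, 0],
                     [0, np.cos(a), np.sin(a)],
                     [0, -np.sin(a), np.cos(a)]])

def rot_z(a):
    return np.array([[np.cos(a), np.sin(a), 0],
                     [-np.sin(a), np.cos(a), 0],
                     [0, 0, 1]])

iota, Theta, Phi = 2.6, 2.77766633, 1.6391     # GW150914-like angles used in Fig. 5
M = rot_x(iota - Theta) @ rot_z(Phi)           # the product in Eq. (142), i.e. the matrix of Eq. (143)
print(np.max(np.abs(M @ M.T - np.eye(3))))     # ~0 : M is orthogonal, so its inverse is M^T
```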
In conclusion, we have obtained the waveform model for a moving BBH
\[h^{\prime}=h(\frac{t^{\prime}}{k};m_{1},m_{2},\vec{s}_{1},\vec{s}_{2},\iota)e ^{-2i\lambda(\iota,0,v_{x},v_{y},v_{z})}, \tag{144}\]
where \(v_{x}\), \(v_{y}\), \(v_{z}\) are functions of \(\Theta\), \(\Phi\), \(\iota\), \(v_{X}\), \(v_{Y}\) and \(v_{Z}\) as shown in (143).
When the GW source moves with a constant velocity, the velocity and consequently the phase factor are time independent. In this case, the phase factor \(\lambda(\iota,0,v_{x},v_{y},v_{z})\) can be absorbed into the initial-phase parameter, and we can simplify the waveform template as
\[h(\frac{t^{\prime}}{k};m_{1},m_{2},\vec{s}_{1},\vec{s}_{2},\iota) \tag{145}\]
together with the parameters \(m_{1}\), \(m_{2}\), \(\vec{s}_{1}\), \(\vec{s}_{2}\), \(D_{L}\), \(\iota\), \(\Theta\), \(\Phi\), \(\Psi\), \(t_{c}\), \(\phi_{c}\), \(v_{X}\), \(v_{Y}\) and \(v_{Z}\). However, we need to note that the parameters \((\iota,\Theta,\Phi,\Psi,t_{c},\phi_{c},v_{X},v_{Y},v_{Z})\) degenerate into the parameters \((\iota,\Theta^{\prime},\Phi^{\prime},\Psi^{\prime},t^{\prime}_{c},\phi^{\prime}_{c})\) according to the relations (133)-(136), (141) and (143).
In contrast, if the GW source is accelerating [20; 44; 45; 46; 47; 48; 49; 50; 51], the phase factor \(\lambda(\iota,0,v_{x},v_{y},v_{z})\) will depend on time and can no longer be absorbed into the initial-phase parameter. Then our waveform model (144) should be adopted. Consequently the initial phase simplifies to \(\phi^{\prime}_{c}=\phi_{c}\); however, the parameters \(\Theta^{\prime}\), \(\Phi^{\prime}\) and \(\Psi^{\prime}\) will change with time.
In this accelerating case, the red shift factor \(k\) will also depend on time. Consequently the relation between detector frame time \(t^{\prime}\) and the source frame time \(t\) becomes
\[t^{\prime}=\int_{0}^{t}k(t)dt. \tag{146}\]
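In practice, Eq. (146) can be applied numerically by integrating \(k(t)\) cumulatively and resampling the source-frame waveform onto a uniform detector-frame time grid. The following sketch (ours; the toy \(k(t)\) and waveform are purely illustrative, not the SEOBNREHM output used in the figures) shows one way to do this with NumPy:

```python
import numpy as np

t = np.linspace(0.0, 1.0, 4096)                       # source-frame time samples
h_src = np.sin(2 * np.pi * 30.0 * t) * t**2           # stand-in source-frame waveform
k = 1.0 + 1e-3 * np.tanh(10.0 * (t - 0.8))            # toy time-dependent factor k(t)

# detector-frame time from Eq. (146), via a cumulative trapezoidal integral
tp = np.concatenate(([0.0], np.cumsum(0.5 * (k[1:] + k[:-1]) * np.diff(t))))

# resample h onto a uniform detector-frame grid: h_det(t') = h_src(t(t'))
tp_uniform = np.linspace(tp[0], tp[-1], len(t))
h_det = np.interp(tp_uniform, tp, h_src)
```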
In Fig. 5 we use GW150914 [59] as an example to show the corresponding waveforms \(h^{\prime}_{+}\), \(h^{\prime}_{\times}\) and the time dependence of the parameters \(\Theta^{\prime}\), \(\Phi^{\prime}\) and \(\Psi^{\prime}\). In this example, the velocity \((v_{x},v_{y},v_{z})\) corresponds to the kick velocity of the BBH due to the asymmetric gravitational radiation. Such a kick velocity can be modeled through the waveform model for a BBH at rest. The velocity \((v_{X},v_{Y},v_{Z})\) can then be obtained through the inverse transformation of (143)
\[\begin{pmatrix}v_{X}\\ v_{Y}\\ v_{Z}\end{pmatrix}=\] \[\begin{pmatrix}\cos\Phi&-\sin\Phi\cos(\iota-\Theta)&\sin\Phi\sin( \iota-\Theta)\\ \sin\Phi&\cos\Phi\cos(\iota-\Theta)&-\cos\Phi\sin(\iota-\Theta)\\ 0&\sin(\iota-\Theta)&\cos(\iota-\Theta)\end{pmatrix}\] \[\cdot\begin{pmatrix}v_{x}\\ v_{y}\\ v_{z}\end{pmatrix}. \tag{147}\]
The parameters of GW150914 are respectively total mass \(M=65.677\mathrm{M_{\odot}}\), symmetric mass ratio \(\eta=0.248735\), effective spin \(\chi_{s}=-0.303448,\chi_{a}=0.014667\), luminosity distance \(D_{L}=420\mathrm{Mpc}\), source localization \((\Theta,\Phi)=(2.77766633,1.6391)\), polarization angle \(\Psi=1.56749\) and inclination angle \(\iota=2.6\). These values are based on the posterior distribution data of these parameters, which is publicly available on the webpage of the LIGO open science center [60]; here we have adopted the median values of the posterior distributions for the corresponding parameters. In the first panel of Fig. 5 we plot the source localization change \(\Delta\Theta\equiv\Theta^{\prime}-\Theta\), \(\Delta\Phi\equiv\Phi^{\prime}-\Phi\) and the polarization angle change \(\Delta\Psi\equiv\Psi^{\prime}-\Psi\). The corresponding kick velocity is plotted in the second panel of Fig. 5. The kick velocity is calculated based on the gravitational waveform, and the waveform is obtained with SEOBNREHM[56]. The source localization change and polarization angle change are much smaller than the measurement accuracy of current gravitational wave detection. In this case the kick velocity is about \(10^{-4}\) of the speed of light.
The maximal kick velocity of a BBH is about \(10^{-3}\) of the speed of light [20; 44; 45; 46; 47; 48; 49; 50; 51]. As an example we investigate an equal-mass, anti-aligned spinning BBH with spin \(\chi=0.99\); the result is plotted in Fig. 6. We find that the source localization change and polarization angle change increase by one order of magnitude compared to those of Fig. 5. As the first detected GW event, GW150914 is a representative astrophysical source. Regarding the waveform transformation considered here, the kick velocity is the only relevant quantity. The highest kick velocity of non-precessing BBHs corresponds to an equal-mass BBH with a large spin component \(\chi_{a}\), which is the case analyzed in Fig. 6. Compared to such a high kick velocity of about \(10^{-3}\) of the speed of light, the mass-ratio factor can result in at most \(10^{-4}\) of the speed of light [44].
The GW strain detected by a detector can be described as [56; 34]
\[s =F^{+}h_{+}+F^{\times}h_{\times}, \tag{148}\] \[F^{+}(\Theta,\Phi,\Psi) \equiv\frac{1}{2}(1+\cos^{2}\Theta)\cos 2\Phi\cos 2\Psi\] \[-\cos\Theta\sin 2\Phi\sin 2\Psi,\] (149) \[F^{\times}(\Theta,\Phi,\Psi) \equiv\frac{1}{2}(1+\cos^{2}\Theta)\cos 2\Phi\sin 2\Psi\] \[+\cos\Theta\sin 2\Phi\cos 2\Psi. \tag{150}\]
Note that the time dependence of \(\Theta^{\prime},\Phi^{\prime},\Psi^{\prime}\) will make the pattern functions \(F^{+}\) and \(F^{\times}\) vary with time as well. In the third and fourth panels of Figs. 5 and 6 we investigate the waveform change due to the kick velocity according to the Lorentz transformation. Changing from time \(t\) to \(t^{\prime}\) corresponds to the redshift effect which has been studied in [20]. The combination of the adjustment due to the phase factor \(\lambda\), the source localization change and the polarization angle change is denoted as "\(\lambda\) only" in Figs. 5 and 6. This part is new compared to the study in [20], and we find that it is much larger than the redshift part. The relative change of the waveform is about 10 percent.
In order to quantify the waveform change, we calculate the matching factor with respect to the designed sensitivity of advanced LIGO. We adopt the same procedure as in [56]. The matching factor, also called the faithfulness factor (FF), for two waveforms \(s_{1}(t)\) and \(s_{2}(t)\) is defined as
\[\langle s_{1}|s_{2}\rangle =4\mathcal{R}\int_{f_{\rm min}}^{f_{\rm max}}\frac{\tilde{s}_{1}( f)\tilde{s}_{2}^{*}(f)}{S_{n}(f)}df\] \[\text{FF} \equiv\frac{\langle s_{1}|s_{2}\rangle}{\sqrt{\langle s_{1}|s_{1} \rangle\langle s_{2}|s_{2}\rangle}}, \tag{151}\]
where \(S_{n}(f)\) is the one-sided power spectral density (PSD) of the detector noise, \((f_{\rm min},f_{\rm max})\) corresponds to the frequency range of the detector and the star superscript \({}^{*}\) denotes the complex conjugate. For the mildly spinning GW150914-like BBH and the highly spinning anti-aligned BBH we plot the resulting matching factor in Fig. 7. Both the designed sensitivity of advanced LIGO [61] and that of the Einstein Telescope (ET) are used; the data "LIGO-P1600143-v18-ET_D.txt" [62] are used for ET in the current work. The results for advanced LIGO and ET are almost the same: Fig. 7 corresponds to the designed sensitivity of advanced LIGO, but the corresponding plot for ET is indistinguishable from Fig. 7. If only the effect of redshift is considered, as in [20], the matching factors are larger than 99.999%. When the \(\lambda\) factor is taken into consideration, the matching factor decreases to about 99%. Such a high matching factor means the corrections introduced by the kick velocity can be neglected for current GW detectors [63; 53].
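A discretized version of the inner product and matching factor (151) can be sketched as follows (ours; the frequency band and the `psd` callable are placeholders, not the advanced LIGO or ET curves used in the paper, and no maximization over time or phase shifts is performed):

```python
import numpy as np

def faithfulness(s1, s2, dt, psd, f_min=20.0, f_max=2048.0):
    """Matching factor of Eq. (151) for two real time series with sampling step dt."""
    n = len(s1)
    f = np.fft.rfftfreq(n, dt)
    S1, S2 = np.fft.rfft(s1) * dt, np.fft.rfft(s2) * dt
    band = (f > f_min) & (f < f_max)
    df, w = f[1] - f[0], 1.0 / psd(f[band])

    def inner(a, b):
        return 4.0 * np.real(np.sum(a[band] * np.conj(b[band]) * w)) * df

    return inner(S1, S2) / np.sqrt(inner(S1, S1) * inner(S2, S2))
```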
In order to estimate the detectability of the correction introduced by the kick velocity involved in our waveform model, we investigate the following criteria [63; 64; 65; 66]
\[(1-\text{FF})\rho_{0}^{2}\gtrsim 1, \tag{152}\] \[\rho_{0}=\sqrt{\langle\delta h|\delta h\rangle}, \tag{153}\]
where \(\delta h\) means the waveform correction. The corresponding criteria are plotted in Fig. 8. According to the above traditional criteria, we find that O4 may not be able to distinguish the Lorentz transformation waveform model from the waveform that ignores the effect of the kick velocity. In contrast, third generation (3G) detectors such as ET can detect the effect of the kick velocity well. That is to say, our waveform model will be useful in the 3G era.
When a BBH coalescence happens near a supermassive black hole [16], a complicated acceleration process may appear. Such gravitational wave sources are called binary EMRIs (Extreme Mass Ratio Inspirals) in Refs. [67; 14; 68]. The above waveform construction process can be straightforwardly applied to set up the waveform template for the data analysis of binary EMRI systems.
## VIII Summary and discussion
The electric vector and the magnetic vector provide a good tool to describe the electromagnetic field. That is because the electric vector and the magnetic vector present a traditional three dimensional picture which is easier to understand.
Similarly we have a three dimensional tensor for gravitational waves. Such a tensor makes it convenient to transform between different coordinates, and it also lets people understand the behavior of gravitational waves in a traditional way instead of through the more complicated four dimensional object.
Figure 7: The matching factor (FF) between the adjusted waveform due to the kick velocity and the unadjusted one. The top panel corresponds to the BBH shown in Fig. 5 and the bottom panel corresponds to the BBH shown in Fig. 6.
Unfortunately, until now the gravitational wave tensor could only be used for three dimensional coordinate transformations; it could not be used to discuss the relation between two relatively moving observers. This is quite different from the electric vector and the magnetic vector, which are complete for describing the electromagnetic field. Such completeness is due to the well known Lorentz transformation for the electric vector and the magnetic vector. The current paper fills this gap: we have constructed the desired Lorentz transformation for the gravitational wave tensor. Together with our Lorentz transformation rule (59), we believe that the three dimensional tensor language will become a more powerful tool to study gravitational wave physics and astronomy.
The Lorentz transformation for gravitational waves (59) provides a good tool to construct theoretical waveform models for moving sources. Such waveform models are not limited by a small-velocity approximation [22; 23; 24]. Provided the GW waveform of the corresponding source at rest is known, we can construct the three dimensional tensor and transform it to a moving frame for a source with arbitrarily complicated motion [16; 24]. Then the GW waveform can be straightforwardly read off from the transformed three dimensional GW tensor.
Besides waveform construction for moving sources, the Lorentz transformation for the three dimensional gravitational wave tensor provides a powerful tool to study the interaction between a celestial body and a relatively moving GW source, including the effect of GWs on binary systems [69; 70], the effect of GWs on the relative motion between a star and the Earth [71; 72], the effect of GWs on stellar seismic motion [73; 74; 75] and others.
In the viewpoint of the BMS framework, the gravitational wave appears at order \((1/r)\), where \(r\) is the area radius of the wave front. Alternatively, in the viewpoint of flat spacetime perturbation theory, the gravitational wave should be small. So we can conclude that \(|h_{ij}|\ll 1\) is required. It is interesting to ask whether this condition imposes any limit when we apply the Lorentz transformation rule (59). In other words, is it possible that \(|h_{ij}|\ll 1\) while \(|h^{\prime}_{ij}|\geq 1\) according to (59)?
Firstly the discussion after (59) implies that
\[\frac{v^{k}h_{ki}}{1-\hat{r}\cdot\vec{v}}\sim v^{k}h_{ki}, \tag{154}\] \[\frac{v^{k}h_{ki}v^{i}}{(1-\hat{r}\cdot\vec{v})^{2}}\sim v^{k}h_{ ki}v^{i}. \tag{155}\]
And more we have
\[|\hat{r}_{i}|<1, \tag{156}\] \[\frac{\gamma}{1+\gamma}<1. \tag{157}\]
So the Lorentz transformation rule (59) implies
\[|h^{\prime}_{ij}| \sim|h_{ij}|+|v^{k}h_{ki}v^{i}|+2|v^{k}h_{ki}| \tag{158}\] \[\sim|h_{ij}|(1+2|v|+|v|^{2})\] (159) \[\sim|h_{ij}|. \tag{160}\]
This means the Lorentz transformation rule (59) preserves the smallness of the gravitational wave tensor. This fact ensures that the Lorentz transformation rule (59) is valid for all kinds of velocity \(\vec{v}\).
Theoretical waveform models are important for gravitational wave data analysis. Current waveforms used in gravitational wave detection ignore the effect of the velocity of the source relative to the detector. In the current paper we calculated the explicit Lorentz transformation formula for the gravitational wave tensor, which is shown in (59). This formula is a tensor equation, so any desired coordinate system can be used in specific applications. Here we would like to emphasize that the Lorentz transformation formula (59) is valid for arbitrarily high velocities; there is no approximation involved in this formula.
Figure 8: The detectability criteria for the waveform correction introduced by the kick velocity of the BBH involved in the Lorentz transformation waveform model. The left plots are for the advanced LIGO sensitivity, which corresponds to the upcoming O4 observation run. The right plots are for the sensitivity of the Einstein Telescope (ET), which represents the 3G detector era in the near future. The top panels correspond to the BBH shown in Fig. 5 and the bottom panels correspond to the BBH shown in Fig. 6.
The well known Bondi-Metzner-Sachs (BMS) transformation has already given the Lorentz transformation of gravitational waveforms between two relatively moving frames; the two waveforms differ by a phase factor \(\lambda\). But the phase factor \(\lambda\) had not been explicitly calculated before. As an example of the application of our formula (59), we calculate the phase factor \(\lambda\) straightforwardly. Again our result (114) is valid for arbitrarily high velocities; there is no approximation involved in the calculation process.
If the gravitational wave source moves with a constant velocity, the phase factor \(\lambda\) relating the waveform to the rest-frame waveform is independent of time. Consequently this phase factor completely degenerates with the initial phase of the gravitational wave, as shown in (144). This means that, except for the redshift factor \(k\), no extra adjustment is needed for waveform template construction for moving sources. Correspondingly, no information about the source velocity can be extracted by a single detector. Only when two or more well separated detectors are available can our quantitative result (144) and the aberration relation be used to extract information about the source velocity.
In contrast, if the gravitational wave source is accelerating, both the waveform transformation phase factor \(\lambda\) and the aberration relations are time dependent and will contribute to the waveform. The combination of (135), (136) and (141) together with (148) is needed to construct the waveform for moving sources. As an example we calculated the waveform adjusted by the kick velocity of a binary black hole merger. On the one hand, our result indicates that such an adjustment is negligible for current gravitational wave detection but may be important for next generation detectors. On the other hand, this example shows that our construction procedure works well for the waveform template construction of moving sources. In particular, binary EMRI sources may be a good application of our construction procedure. Regarding the formation channel in which a BBH forms in the disk of a supermassive black hole, the waveform of moving sources will be important in the forthcoming 3G era for BBHs located closer than \(10^{6}\) gravitational radii to the central supermassive black hole.
###### Acknowledgements.
We thank Xian Chen, Alejandro Torres-Orjuela, Yun Fang, Kejia Lee and Lijing Shao for helpful discussions. This work was supported by CAS Project for Young Scientists in Basic Research YSBR-006, NSF of Hunan province (2018JJ2073) and the Key Project of Education Department of Hunan Province (No. 21A0576).
|
2308.02411 | Cohomology and deformation of compatible Hom-Leibniz algebras | In this paper, we consider compatible Hom-Leibniz algebra where the Hom map
twists the operations in the compatible system. We consider a suitably graded
Lie algebra whose Maurer-Cartan elements characterize the structure of
compatible Hom-Leibniz algebras. Using this, we study cohomology, infinitesimal
deformations, the Nijenhuis operator, and their relation for compatible
Hom-Leibniz algebras. Finally we see the cohomology of compatible Hom-Leibniz
algebra with coefficients in an arbitrary representation. | Rinkila Bhutia, RB Yadav, Namita Behera | 2023-05-26T12:10:38Z | http://arxiv.org/abs/2308.02411v2 | # Cohomology and deformation of compatible Hom-Leibniz algebras
###### Abstract
In this paper, we consider compatible Hom-Leibniz algebras, where the Hom map twists the operations in the compatible system. We consider a suitably graded Lie algebra whose Maurer-Cartan elements characterize the structure of compatible Hom-Leibniz algebras. Using this, we study cohomology, infinitesimal deformations, the Nijenhuis operator, and their relations for compatible Hom-Leibniz algebras. Finally, we study the cohomology of a compatible Hom-Leibniz algebra with coefficients in an arbitrary representation.
Keywords: compatible Hom-Leibniz algebra, Maurer-Cartan element, cohomology, deformation, abelian extension
2010 Mathematics Subject Classification. 17B56, 13D10, 17A30.
## 1 Introduction
Leibniz algebras are a non-commutative generalisation of Lie algebras. They were introduced under the name D-algebras in papers by A. M. Bloch published in the 1960s, to signify their relation with derivations. Later, in 1993, J. L. Loday [5] introduced the same structure and called it a Leibniz algebra. The cohomology theory of Leibniz algebras with coefficients in a bimodule has been studied in [4]. The concept of Hom algebras was introduced by Hartwig, Larsson, and Silvestrov [12]. Makhlouf and Silvestrov [13] introduced the notion of Hom-Leibniz algebras, generalising Hom-Lie algebras. Hom-algebra structures have been widely studied since then.
Algebraic deformation theory was introduced by Gerstenhaber for rings and algebras in a series of papers [7]-[11]. Subsequently, algebraic deformation theory has been studied for different kinds of algebras. To study the deformation theory of any algebra, one needs a suitable cohomology, known as the deformation cohomology, which controls the deformation. In [6], D. Balavoine studies the formal deformation of algebras using the theory of Maurer-Cartan elements in a
graded Lie algebra. In particular, this approach is used to study the deformation of Leibniz algebra.
Here, we have defined a compatible Hom-Leibniz algebra to be a pair of Hom-Leibniz algebras such that the linear combination of their algebraic structure is also a Hom-Leibniz algebra. Recently, cohomology and infinitesimal deformations of compatible Lie algebra and compatible associative algebra have been studied in [1] and [2] respectively. Motivated by these works, in this paper, we study the cohomology theory of compatible Hom-Leibniz algebra. Using the Balavoine bracket we define a graded Lie algebra whose Maurer-Cartan elements characterize the structure of compatible Hom-Leibniz algebras. We then study the cohomology of a compatible Hom-Leibniz algebra with coefficients in itself. This is then used to study infinitesimal deformation of compatible Hom-Leibniz algebra. Furthermore, we establish the relationship between the Nijenhuis operator and the trivial infinitesimal deformation. Further, we introduce the cohomology of compatible Hom-Leibniz algebra with coefficients in an arbitrary representation.
This paper is organized as follows: In section 2 we start with some basic concepts of Hom-Leibniz algebra. We then review the Balavoine bracket, some results on cohomologies, and the differential graded Lie algebra that controls the deformation of Hom-Leibniz algebra. In section 3 we define compatible Hom-Leibniz algebra and compatible Hom-bimodules. We then construct the graded Lie algebra whose Maurer-Cartan elements characterize compatible Hom-Leibniz algebra structure. In section 4 infinitesimal deformation of compatible Hom-Leibniz algebra is studied using cohomology of compatible Hom-Leibniz algebra with coefficients in itself. It is shown that equivalent infinitesimal deformations are in the same cohomology group. Then the notion of the Nijenhuis operator on a compatible Hom-Leibniz algebra is studied and the correspondence between the Nijenhuis operator and a trivial deformation is established. In section 5 the cohomology of compatible Leibniz algebra with coefficients in an arbitrary representation is introduced.
Throughout the paper we consider the underlying field \(K\) to be of characteristic \(0\).
## 2 Background
**Definition 2.1**.: _A Hom-Leibniz algebra is a vector space \(L\) together with a bilinear operation \([.,.]:L\otimes L\to L\) and a linear map \(\alpha:L\to L\) such that_
\[[\alpha(x),[y,z]]=[[x,y],\alpha(z)]+[\alpha(y),[x,z]],\ \forall x,y,z\in L.\]
A Hom-Leibniz algebra given by the triple \((L,[\ \ ],\alpha)\) is called multiplicative if \(\alpha([x,y])=[\alpha(x),\alpha(y)]\). From here on we consider our Hom-Leibniz algebras to be multiplicative.
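For readers who prefer a computational picture, the defining identity and the multiplicativity condition can be verified numerically for a finite-dimensional algebra given by structure constants. The sketch below (ours; it only checks the axioms and does not construct a particular example) assumes NumPy:

```python
import numpy as np
from itertools import product

def is_multiplicative_hom_leibniz(c, alpha, tol=1e-12):
    """c[i, j, k] are structure constants ([e_i, e_j] = sum_k c[i, j, k] e_k);
    alpha is the matrix of the twisting map on the same basis."""
    n = c.shape[0]
    bracket = lambda x, y: np.einsum('i,j,ijk->k', x, y, c)
    e = np.eye(n)
    for i, j, k in product(range(n), repeat=3):
        lhs = bracket(alpha @ e[i], bracket(e[j], e[k]))
        rhs = bracket(bracket(e[i], e[j]), alpha @ e[k]) + bracket(alpha @ e[j], bracket(e[i], e[k]))
        if np.max(np.abs(lhs - rhs)) > tol:            # Hom-Leibniz identity
            return False
    for i, j in product(range(n), repeat=2):
        if np.max(np.abs(alpha @ bracket(e[i], e[j])
                         - bracket(alpha @ e[i], alpha @ e[j]))) > tol:  # multiplicativity
            return False
    return True
```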
**Definition 2.2**.: _A homomorphism between two Hom-Leibniz algebras \((L_{1},[\ ]_{1},\alpha_{1})\)_
_and \((L_{2},[\ ]_{2},\alpha_{2})\) is a \(K\)-linear map \(\phi:L_{1}\to L_{2}\) satisfying_
\[\phi([x,y]_{1})=[\phi(x),\phi(y)]_{2}\ \ \mbox{and}\ \ \phi\circ\alpha_{1}= \alpha_{2}\circ\phi.\]
**Definition 2.3**.: _Let \((L,[\ ],\alpha)\) be a Hom-Leibniz algebra. A \(L\)-bimodule is a vector space \(M\) together with two \(L\)-actions \(m_{L}:L\otimes M\to M,\ \ m_{R}:M\otimes L\to M\) and a map \(\beta\in End(M)\) such that for any \(x,y\in L\) and \(m\in M\) we have_
\[\beta(m_{L}(x,m))=m_{L}(\alpha(x),\beta(m))\]
\[\beta(m_{R}(m,x))=m_{R}(\beta(m),\alpha(x))\]
\[m_{L}(\alpha(x),m_{L}(y,m))=m_{L}([x,y],\beta(m))+m_{L}(\alpha(y),m_{L}(x,m))\]
\[m_{L}(\alpha(x),m_{R}(m,y))=m_{R}(m_{L}(x,m),\alpha(y))+m_{R}(\beta(m),[x,y])\]
\[m_{R}(\beta(m),[x,y])=m_{R}(m_{R}(m,x),\alpha(y))+m_{L}(\alpha(x),m_{R}(m,y)).\]
The following is a well known result.
**Proposition 2.1**.: _Let \((L,[\ ],\alpha)\) be a Hom-Leibniz algebra and \((M,m_{L},m_{R},\beta)\) an \(L\)-bimodule. Then \(L\oplus M\) is a Hom-Leibniz algebra with the linear map \(\alpha\oplus\beta:L\oplus M\to L\oplus M\) defined as \((\alpha\oplus\beta)(x,m)=(\alpha(x),\beta(m))\) and the Hom-Leibniz bracket defined as_

\[[(x,u),(y,v)]_{\ltimes}=([x,y],m_{L}(x,v)+m_{R}(u,y))\ \ \forall\ \ x,y\in L\ \mbox{and}\ u,v\in M.\]
_This is known as the semi-direct product._
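For instance, taking \(M=L\) with the adjoint actions \(m_{L}=m_{R}=[\ \ ]\) and \(\beta=\alpha\), the semi-direct product puts a Hom-Leibniz structure on \(L\oplus L\). The following computational sketch (illustrative only; the encoding by structure constants and the function names are ours) builds this bracket for the two-dimensional example above with \(\alpha=id\) and checks the Hom-Leibniz identity on all basis triples.

```python
import numpy as np

# Semi-direct product L (+) M for the 2-dim example [e2,e2] = e1, alpha = id,
# with M = L, the adjoint actions m_L = m_R = [.,.] and beta = alpha.
# Basis of L (+) M: f0 = (e1,0), f1 = (e2,0), f2 = (0,e1), f3 = (0,e2).
n = 4
S = np.zeros((n, n, n))
S[1, 1, 0] = 1.0     # [(e2,0),(e2,0)] = ([e2,e2], 0) = (e1,0)
S[1, 3, 2] = 1.0     # [(e2,0),(0,e2)] = (0, m_L(e2,e2)) = (0,e1)
S[3, 1, 2] = 1.0     # [(0,e2),(e2,0)] = (0, m_R(e2,e2)) = (0,e1)
A = np.eye(n)        # alpha (+) beta = identity

br = lambda u, v: np.einsum('i,j,ijk->k', u, v, S)

def defect(x, y, z):
    """[alpha(x),[y,z]] - [[x,y],alpha(z)] - [alpha(y),[x,z]] on the semi-direct product."""
    return br(A @ x, br(y, z)) - br(br(x, y), A @ z) - br(A @ y, br(x, z))

basis = np.eye(n)
for x in basis:
    for y in basis:
        for z in basis:
            assert np.allclose(defect(x, y, z), 0.0)
print("The semi-direct product L + M is again a Hom-Leibniz algebra.")
```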
**Definition 2.4**.: _A permutation \(\sigma\in S_{n}\) is called an \((i,n-i)\)-shuffle if \(\sigma(1)<\sigma(2)<...<\sigma(i)\) and \(\sigma(i+1)<\sigma(i+2)<...<\sigma(n)\). If \(i=0\) or \(n\), we assume \(\sigma=id\). \(S_{(i,n-i)}\) denotes the set of all \((i,n-i)\)-shuffles._
**Definition 2.5**.: _Let \((\mathfrak{g}=\oplus_{k\in\mathbb{Z}}\mathfrak{g}^{k},[\ ],d)\) be a differential graded Lie algebra. A degree 1 element \(x\in\mathfrak{g}^{1}\) is called a Maurer-Cartan element of \(\mathfrak{g}\) if it satisfies_
\[dx+\frac{1}{2}[x,x]=0.\]
**Theorem 2.1**.: _[_3_]_ _Let \((\mathfrak{g}=\oplus_{k\in\mathbb{Z}}\mathfrak{g}^{k},[\ ])\) be a graded Lie algebra and \(\mu\in\mathfrak{g}^{1}\) be a Maurer-Cartan element. Then the map_
\[d_{\mu}:\mathfrak{g}\rightarrow\mathfrak{g},\ d_{\mu}(u):=[\mu,u],\ \forall u\in \mathfrak{g},\]
_is a differential on \(\mathfrak{g}\). Further, for any \(v\in\mathfrak{g}^{1}\), the sum \(\mu+v\) is a Maurer-Cartan element of the graded Lie algebra \((\mathfrak{g}=\oplus_{k\in\mathbb{Z}}\mathfrak{g}^{k},[\ ])\) iff \(v\) is a Maurer-Cartan element of the differential graded Lie algebra \((\mathfrak{g}=\oplus_{k\in\mathbb{Z}}\mathfrak{g}^{k},[\ ],d_{\mu})\)._
### The Balavoine bracket
_[_3_]_ _Let \(\mathfrak{g}\) be a vector space and \(\alpha:\mathfrak{g}\to\mathfrak{g}\) a linear map. For each \(n\geq 1\), we denote \(\mathbb{C}^{n}_{\alpha}(\mathfrak{g},\mathfrak{g})=\{f\in Hom(\otimes^{n}\mathfrak{g},\mathfrak{g})\,|\,\alpha\circ f=f\circ\alpha^{\otimes n}\}\) and set \(\mathbb{C}^{*}_{\alpha}(\mathfrak{g},\mathfrak{g})=\oplus_{n\in\mathbb{N}}\mathbb{C}^{n}_{\alpha}(\mathfrak{g},\mathfrak{g})\). We assume the degree of an element in \(\mathbb{C}^{n}_{\alpha}(\mathfrak{g},\mathfrak{g})\) is \(n-1\). For \(P\in\mathbb{C}^{p+1}_{\alpha}(\mathfrak{g},\mathfrak{g}),Q\in\mathbb{C}^{q+1}_{\alpha}(\mathfrak{g},\mathfrak{g})\) we define the **Balavoine bracket** as
\[[P,Q]_{B}=P\circ Q-(-1)^{pq}Q\circ P\]
where \(P\circ Q\in\mathbb{C}^{p+q+1}_{\alpha}\) is defined as
\[(P\circ Q)(x_{1},x_{2},...,x_{p+q+1})=\sum_{k=1}^{p+1}(-1)^{(k-1)q}P\circ_{k} Q,\]
and
\[(P\circ_{k}Q)(x_{1},x_{2},...,x_{p+q+1})\]

\[=\sum_{\sigma\in S(k-1,q)}(-1)^{\sigma}P(\alpha^{q}(x_{\sigma(1)}),...,\alpha^{q}(x_{\sigma(k-1)}),\,Q(x_{\sigma(k)},...,\,x_{\sigma(k+q-1)},\,x_{k+q}),\,\alpha^{q}(x_{k+q+1}),...,\,\alpha^{q}(x_{p+q+1})).\]
**Theorem 2.2**.: _The graded vector space \(\mathbb{C}^{*}_{\alpha}(\mathfrak{g},\mathfrak{g})\) equipped with the Balavoine bracket given above is a graded Lie algebra._
_In particular, for \(\pi\in\mathbb{C}^{2}_{\alpha}(\mathfrak{g},\mathfrak{g})\) we have \([\pi,\pi]_{B}\in\mathbb{C}^{3}_{\alpha}(\mathfrak{g},\mathfrak{g})\) with \([\pi,\pi]_{B}=\pi\circ\pi-(-1)^{1\cdot 1}\pi\circ\pi=2\,\pi\circ\pi=2\sum_{k=1}^{2}(-1)^{k-1}\pi\circ_{k}\pi=2(\pi\circ_{1}\pi-\pi\circ_{2}\pi)\), where \(\pi\circ_{1}\pi(x,y,z)=\pi(\pi(x,y),\alpha(z))\) and \(\pi\circ_{2}\pi(x,y,z)=\pi(\alpha(x),\pi(y,z))-\pi(\alpha(y),\pi(x,z))\). Hence \([\pi,\pi]_{B}=0\) holds exactly when the Hom-Leibniz identity holds for \(\pi\), and we have the following corollary._
**Corollary 2.1**.: \(\pi\) _defines a Hom-Leibniz algebra structure on \(\mathfrak{g}\) iff \(\pi\) is a Maurer-Cartan element of the graded Lie algebra \((\mathbb{C}^{*}_{\alpha}(\mathfrak{g},\mathfrak{g}),[\ ]_{B})\)._
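Corollary 2.1 can be checked numerically on small examples. The following sketch (illustrative only; the encoding by structure constants and the function names are ours) takes the two-dimensional example above with \(\lambda=1\) and verifies that \([\pi,\pi]_{B}\) equals \(-2\) times the Hom-Leibniz defect and that it vanishes on all basis triples.

```python
import numpy as np

# Structure constants of the 2-dimensional Hom-Leibniz algebra used above:
# pi(e_i, e_j) = sum_k C[i, j, k] e_k, with the only non-zero bracket [e2, e2] = e1.
n = 2
C = np.zeros((n, n, n))
C[1, 1, 0] = 1.0
A = np.eye(n)                      # alpha = identity (lambda = 1)

def pi(u, v):
    """Bilinear bracket evaluated on coordinate vectors."""
    return np.einsum('i,j,ijk->k', u, v, C)

def alpha(u):
    return A @ u

def bracket_B(x, y, z):
    """[pi, pi]_B(x, y, z) = 2 (pi o_1 pi - pi o_2 pi)(x, y, z)."""
    o1 = pi(pi(x, y), alpha(z))
    o2 = pi(alpha(x), pi(y, z)) - pi(alpha(y), pi(x, z))
    return 2.0 * (o1 - o2)

def defect(x, y, z):
    """Left-hand side minus right-hand side of the Hom-Leibniz identity."""
    return pi(alpha(x), pi(y, z)) - pi(pi(x, y), alpha(z)) - pi(alpha(y), pi(x, z))

basis = np.eye(n)
for a in basis:
    for b in basis:
        for c in basis:
            assert np.allclose(bracket_B(a, b, c), -2.0 * defect(a, b, c))
            assert np.allclose(bracket_B(a, b, c), 0.0)
print("pi is a Maurer-Cartan element: [pi,pi]_B = 0 on all basis triples.")
```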
**Theorem 2.3**.: _Let \((\mathfrak{g},\pi,\alpha)\) be a Hom-Leibniz algebra. Then \((\mathbb{C}^{*}_{\alpha}(\mathfrak{g},\mathfrak{g}),[\ ],d_{\pi})\) becomes a differential graded Lie algebra (dgLa), where \(d_{\pi}:=[\pi,.]_{B}\). Further, given \(\pi^{\prime}\in\mathbb{C}^{2}_{\alpha}(\mathfrak{g},\mathfrak{g})\), \(\pi+\pi^{\prime}\) defines a Hom-Leibniz algebra structure on \(\mathfrak{g}\) iff \(\pi^{\prime}\) is a Maurer-Cartan element of the dgLa \((\mathbb{C}^{*}_{\alpha}(\mathfrak{g},\mathfrak{g}),[\ ],d_{\pi})\)._
## 3 Compatible Hom-Leibniz algebra
**Definition 3.1**.: _A compatible Hom-Leibniz algebra is a quadruple \((L,[\ ],\{\ \},\alpha)\) such that \((L,[\ ],\alpha)\) and \((L,\{\ \},\alpha)\) are Hom-Leibniz algebras such that \(\ \forall\,x,y,z\in L\)_
\[[\alpha(x),\{y,z\}]+\{\alpha(x),[y,z]\}=[\{x,y\},\alpha(z)]+\{[x,y],\alpha(z) \}+[\alpha(y),\{x,z\}]+\{\alpha(y),[x,z]\}. \tag{1}\]
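For a concrete example, take \(L\) with basis \(\{e_{1},e_{2}\}\) and \(\alpha=id\), and let the two brackets be determined by \([e_{2},e_{2}]=e_{1}\) (all other basis brackets zero) and \(\{e_{2},e_{1}\}=e_{1}\) (all other basis brackets zero). Each bracket separately makes \(L\) a Hom-Leibniz algebra, and condition (1) holds: both sides vanish on all basis triples except \((x,y,z)=(e_{2},e_{2},e_{2})\), where both sides equal \(e_{1}\). Hence \((L,[\ \ ],\{\ \},id)\) is a compatible Hom-Leibniz algebra. This example is supplied only as an illustration; it is verified numerically in the sketch following the proof of Proposition 3.1.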
**Proposition 3.1**.: _A quadruple \((L,[\ ],\{\ \},\alpha)\) is a compatible Hom-Leibniz algebra iff \((L,[\ ],\alpha)\) and \((L,\{\ \},\alpha)\) are Hom-Leibniz algebras such that for any \(k_{1},k_{2}\) in \(K\), the bilinear operation_
\[[\![x,y]\!]=k_{1}[x,y]+k_{2}\{x,y\},\ \forall x,y\in L\]
together with the linear map \(\alpha:L\to L\) defines a Hom-Leibniz algebra structure on \(L\)._
Proof.: Let \((L,[\ \ ],\{\ \ \},\alpha)\) be a compatible Hom-Leibniz algebra. Then by definition itself \((L,[.,.],\alpha)\) and \((L,\{.,.\},\alpha)\) are Hom-Leibniz algebras. Further,
\[\llbracket\llbracket x,y\rrbracket,\alpha(z)\rrbracket+\llbracket\alpha(y),\llbracket x,z\rrbracket\rrbracket =\llbracket k_{1}[x,y]+k_{2}\{x,y\},\alpha(z)\rrbracket+\llbracket\alpha(y),k_{1}[x,z]+k_{2}\{x,z\}\rrbracket\] \[=k_{1}[k_{1}[x,y]+k_{2}\{x,y\},\alpha(z)]+k_{2}\{k_{1}[x,y]+k_{2}\{x,y\},\alpha(z)\}+k_{1}[\alpha(y),k_{1}[x,z]+k_{2}\{x,z\}]+k_{2}\{\alpha(y),k_{1}[x,z]+k_{2}\{x,z\}\}\] \[=k_{1}^{2}[[x,y],\alpha(z)]+k_{1}k_{2}[\{x,y\},\alpha(z)]+k_{1}k_{2}\{[x,y],\alpha(z)\}+k_{2}^{2}\{\{x,y\},\alpha(z)\}+k_{1}^{2}[\alpha(y),[x,z]]+k_{1}k_{2}[\alpha(y),\{x,z\}]+k_{1}k_{2}\{\alpha(y),[x,z]\}+k_{2}^{2}\{\alpha(y),\{x,z\}\}\] \[=k_{1}^{2}([[x,y],\alpha(z)]+[\alpha(y),[x,z]])+k_{2}^{2}(\{\{x,y\},\alpha(z)\}+\{\alpha(y),\{x,z\}\})+k_{1}k_{2}([\{x,y\},\alpha(z)]+\{[x,y],\alpha(z)\}+[\alpha(y),\{x,z\}]+\{\alpha(y),[x,z]\})\] \[=k_{1}^{2}[\alpha(x),[y,z]]+k_{2}^{2}\{\alpha(x),\{y,z\}\}+k_{1}k_{2}([\alpha(x),\{y,z\}]+\{\alpha(x),[y,z]\})\] \[=k_{1}(k_{1}[\alpha(x),[y,z]]+k_{2}[\alpha(x),\{y,z\}])+k_{2}(k_{2}\{\alpha(x),\{y,z\}\}+k_{1}\{\alpha(x),[y,z]\})\] \[=k_{1}[\alpha(x),k_{1}[y,z]+k_{2}\{y,z\}]+k_{2}\{\alpha(x),k_{1}[y,z]+k_{2}\{y,z\}\}\] \[=k_{1}[\alpha(x),\llbracket y,z\rrbracket]+k_{2}\{\alpha(x),\llbracket y,z\rrbracket\}\] \[=\llbracket\alpha(x),\llbracket y,z\rrbracket\rrbracket.\]
The converse is straightforward.
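The example given after Definition 3.1 can be verified numerically along the lines of Proposition 3.1. The sketch below (illustrative only; the names are ours) checks that each bracket satisfies the Hom-Leibniz identity, that condition (1) holds on basis triples, and that an arbitrary linear combination \(k_{1}\pi_{1}+k_{2}\pi_{2}\) again satisfies the identity.

```python
import numpy as np

n = 2
# pi1: [e2, e2] = e1;  pi2: {e2, e1} = e1;  alpha = identity.
C1 = np.zeros((n, n, n)); C1[1, 1, 0] = 1.0
C2 = np.zeros((n, n, n)); C2[1, 0, 0] = 1.0
A = np.eye(n)

def bracket(C):
    return lambda u, v: np.einsum('i,j,ijk->k', u, v, C)

def leibniz_defect(br, x, y, z):
    """[alpha(x),[y,z]] - [[x,y],alpha(z)] - [alpha(y),[x,z]] for the bracket br."""
    return br(A @ x, br(y, z)) - br(br(x, y), A @ z) - br(A @ y, br(x, z))

def compat_defect(b1, b2, x, y, z):
    """Difference of the two sides of the compatibility condition (1)."""
    lhs = b1(A @ x, b2(y, z)) + b2(A @ x, b1(y, z))
    rhs = (b1(b2(x, y), A @ z) + b2(b1(x, y), A @ z)
           + b1(A @ y, b2(x, z)) + b2(A @ y, b1(x, z)))
    return lhs - rhs

pi1, pi2 = bracket(C1), bracket(C2)
k1, k2 = np.random.randn(2)                      # arbitrary scalars
pi12 = bracket(k1 * C1 + k2 * C2)                # k1*pi1 + k2*pi2

basis = np.eye(n)
for x in basis:
    for y in basis:
        for z in basis:
            assert np.allclose(leibniz_defect(pi1, x, y, z), 0.0)
            assert np.allclose(leibniz_defect(pi2, x, y, z), 0.0)
            assert np.allclose(compat_defect(pi1, pi2, x, y, z), 0.0)
            assert np.allclose(leibniz_defect(pi12, x, y, z), 0.0)
print("pi1 and pi2 form a compatible Hom-Leibniz structure; "
      "k1*pi1 + k2*pi2 is again Hom-Leibniz.")
```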
**Definition 3.2**.: _A homomorphism between two compatible Hom-Leibniz algebras \((L_{1},[\ \ ]_{1},\{\ \}_{1},\alpha_{1})\) and \((L_{2},[\ \ ]_{2},\{\ \}_{2},\alpha_{2})\) is a \(K\)-linear map \(\phi:L_{1}\to L_{2}\) satisfying_
\[\phi([x,y]_{1})=[\phi(x),\phi(y)]_{2}\ \ \phi(\{x,y\}_{1})=\{\phi(x),\phi(y)\}_{2}\ \ \text{and}\ \ \phi\circ\alpha_{1}=\alpha_{2}\circ\phi.\]
**Definition 3.3**.: _Let \((L,[\ \ ],\{\ \},\alpha)\) be a compatible Hom-Leibniz algebra. A compatible \(L\)-bimodule is a vector space \(M\) together with four \(L\)-actions_
\[m_{L}^{1}:L\otimes M\to M, m_{R}^{1}:M\otimes L\to M\] \[m_{L}^{2}:L\otimes M\to M, m_{R}^{2}:M\otimes L\to M\]
_and a linear map \(\beta:M\to M\) such that_
* \((M,m_{L}^{1},m_{R}^{1},\beta)\) _is a bimodule over_ \((L,[\ \ ],\alpha)\)_._
* \((M,m_{L}^{2},m_{R}^{2},\beta)\) _is a bimodule over_ \((L,\{\ \},\alpha)\)_._
* _the following compatibility conditions hold for all_ \(x,y\in L,\ m\in M\)__ \[LLM: m_{L}^{1}(\alpha(x),m_{L}^{2}(y,m))+m_{L}^{2}(\alpha(x),m_{L}^{1}(y,m))=m_ {L}^{1}(\{x,y\},\beta(m))+\] \[m_{L}^{2}([x,y],\beta(m))+m_{L}^{1}(\alpha(y),m_{L}^{2}(x,m))+m_ {L}^{2}(\alpha(y),m_{L}^{1}(x,m))\] \[LML: m_{L}^{1}(\alpha(x),m_{R}^{2}(m,y))+m_{L}^{2}(\alpha(x),m_{R}^{1}(m,y))=m_{R}^{1}(m_{L}^{2}(x,m),\alpha(y))+\] \[m_{R}^{2}(m_{L}^{1}(x,m),\alpha(y))+m_{R}^{1}(\beta(m),\{x,y\})+ m_{R}^{2}(\beta(m),[x,y])\] \[MLL: m_{R}^{1}(\beta(m),\{x,y\})+m_{R}^{2}(\beta(m),[x,y])=m_{R}^{1}(m_{R}^ {2}(m,x),\alpha(y))+\] \[m_{R}^{2}(m_{R}^{1}(m,x),\alpha(y))+m_{L}^{1}(\alpha(x),m_{R}^{2} (m,y))+m_{L}^{2}(\alpha(x),m_{R}^{1}(m,y))\]
_We also say that \((M,m_{L}^{1},m_{R}^{1},m_{L}^{2},m_{R}^{2},\beta)\) is a representation of the compatible Hom-Leibniz algebra \((L,[\ ],\{\ \},\alpha)\)._
_Note: Any compatible Hom-Leibniz algebra \((L,[\ ],\{\ \},\alpha)\) is a compatible \(L\)-bimodule in which \(m_{L}^{1}=m_{R}^{1}=[\ ]\) and \(m_{L}^{2}=m_{R}^{2}=\{\ \}\)._
_The following result can be proved just like the standard case._
**Proposition 3.2**.: _Let \((L,[\ ],\{\ \},\alpha)\) be a compatible Hom-Leibniz algebra and \((M,m_{L}^{1},m_{R}^{1},m_{L}^{2},m_{R}^{2},\beta)\) its representation. Then \(L\oplus M\) is a compatible Hom-Leibniz algebra with the linear homomorphism \(\alpha\oplus\beta\) and the compatible Hom-Leibniz brackets defined as_
\[[(x,u),(y,v)]_{\ltimes}=([x,y],m_{L}^{1}(x,v)+m_{R}^{1}(u,y))\ \ \mbox{and}\]
\[\{(x,u),(y,v)\}_{\ltimes}=(\{x,y\},m_{L}^{2}(x,v)+m_{R}^{2}(u,y))\ \ \forall\ \ x,y\in L\ \mbox{and}\ u,v\in M.\]
### Maurer-Cartan characterisation of Compatible Hom-Leibniz algebra
**Definition 3.4**.: _[_1_]_ _Let \((\mathfrak{g},[\ \ ],\delta_{1})\) and \((\mathfrak{g},[\ \ ],\delta_{2})\) be two differential graded Lie algebras. We call \((\mathfrak{g},[\ \ ],\delta_{1},\delta_{2})\) a bi-differential graded Lie algebra (b-dgLa) if \(\delta_{1}\) and \(\delta_{2}\) satisfy_
\[\delta_{1}\delta_{2}+\delta_{2}\delta_{1}=0.\]
_It is easy to show the following._
**Proposition 3.3**.: _[_1_]_ _Let \((\mathfrak{g},[\ \ ],\delta_{1})\) and \((\mathfrak{g},[\ \ ],\delta_{2})\) be two differential graded Lie algebras. Then \((\mathfrak{g},[\ \ ],\delta_{1},\delta_{2})\) is a bi-differential graded Lie algebra iff for any \(k_{1}\) and \(k_{2}\in K\), \((\mathfrak{g},[\ \ ],\delta_{k_{1}k_{2}})\) is a differential graded Lie algebra where \(\delta_{k_{1}k_{2}}=k_{1}\delta_{1}+k_{2}\delta_{2}\)._
**Definition 3.5**.: _Let \((\mathfrak{g},[\ \ ],\delta_{1},\delta_{2})\) be a b-dgLa. A pair \((\pi_{1},\pi_{2})\in\mathfrak{g}_{1}\oplus\mathfrak{g}_{1}\) is called a Maurer-Cartan element of the b-dgLa \((\mathfrak{g},[\ \ ],\delta_{1},\delta_{2})\) if \(\pi_{1}\) and \(\pi_{2}\) are Maurer-Cartan elements of the dgLas \((\mathfrak{g},[\ \ ],\delta_{1})\) and \((\mathfrak{g},[\ \ ],\delta_{2})\) respectively, and_
\[\delta_{2}\pi_{1}+\delta_{1}\pi_{2}+[\pi_{1},\pi_{2}]=0.\]
**Proposition 3.4**.: _A pair \((\pi_{1},\pi_{2})\in\mathfrak{g}_{1}\oplus\mathfrak{g}_{1}\) is a Maurer-Cartan element of the b-dgLa \((\mathfrak{g},[\ \ ],\delta_{1},\delta_{2})\) iff for any \(k_{1},k_{2}\in K\), \(k_{1}\pi_{1}+k_{2}\pi_{2}\) is a Maurer-Cartan element of the dgLa \((\mathfrak{g},[\ \ ],\delta_{k_{1}k_{2}})\)._
**Theorem 3.1**.: _Let \((L,\alpha)\) be a Hom-vector space and \(\pi_{1},\pi_{2}\in\mathbb{C}_{\alpha}^{2}(L,L)\). Then \((L,\pi_{1},\pi_{2},\alpha)\) is a compatible Hom-Leibniz algebra iff \((\pi_{1},\pi_{2})\) is a Maurer-Cartan element of the b-dgLa \((\mathbb{C}_{\alpha}^{*}(L,L),[\ \ ]_{B},\delta_{1}=0,\delta_{2}=0)\)._
Proof.: Suppose \((L,\pi_{1},\pi_{2},\alpha)\) is a compatible Hom-Leibniz algebra. Then \((L,\pi_{1},\alpha)\) and \((L,\pi_{2},\alpha)\) are Hom-Leibniz algebras, and hence \([\pi_{1},\pi_{1}]_{B}=[\pi_{2},\pi_{2}]_{B}=0\).
Further \(\forall x,y,z\in L\) we have the compatibility condition,
\[\pi_{1}(\alpha(x),\pi_{2}(y,z))+\pi_{2}(\alpha(x),\pi_{1}(y,z))=\pi_{1}(\pi_{2}(x,y),\alpha(z))+\pi_{2}(\pi_{1}(x,y),\alpha(z))+\pi_{1}(\alpha(y),\pi_{2}(x,z))+\pi_{2}(\alpha(y),\pi_{1}(x,z)). \tag{2}\]
We note that \([\pi_{1},\pi_{2}]_{B}=\pi_{1}\circ\pi_{2}+\pi_{2}\circ\pi_{1}\), where \(\pi_{1}\circ\pi_{2}(x,y,z)=(\pi_{1}\circ_{1}\pi_{2}-\pi_{1}\circ_{2}\pi_{2})(x,y,z)=\pi_{1}(\pi_{2}(x,y),\alpha(z))-\pi_{1}(\alpha(x),\pi_{2}(y,z))+\pi_{1}(\alpha(y),\pi_{2}(x,z))\) and \(\pi_{2}\circ\pi_{1}(x,y,z)=(\pi_{2}\circ_{1}\pi_{1}-\pi_{2}\circ_{2}\pi_{1})(x,y,z)=\pi_{2}(\pi_{1}(x,y),\alpha(z))-\pi_{2}(\alpha(x),\pi_{1}(y,z))+\pi_{2}(\alpha(y),\pi_{1}(x,z))\). That is,
\[[\pi_{1},\pi_{2}]_{B}(x,y,z) =\pi_{1}(\pi_{2}(x,y),\alpha(z))-\pi_{1}(\alpha(x),\pi_{2}(y,z))+ \pi_{1}(\alpha(y),\pi_{2}(x,z))\] \[+\pi_{2}(\pi_{1}(x,y),\alpha(z))-\pi_{2}(\alpha(x),\pi_{1}(y,z))+ \pi_{2}(\alpha(y),\pi_{1}(x,z)).\]
Thus we see that \([\pi_{1},\pi_{2}]_{B}=0\) is equivalent to the compatibility condition (2). The converse follows by reversing the same argument.
**Theorem 3.2**.: _[_1_]_ _Let \((\pi_{1},\pi_{2})\) be a Maurer-Cartan element of the b-dgLa \((\mathfrak{g},[\ \ ],\delta_{1},\delta_{2})\). Define \(d_{1}:=\delta_{1}+[\pi_{1},\_]\) and \(d_{2}:=\delta_{2}+[\pi_{2},\_]\). Then \((\mathfrak{g},[\ \ ],d_{1},d_{2})\) is a b-dgLa. Further for any \(\tilde{\pi}_{1},\tilde{\pi}_{2}\in\mathfrak{g}_{1}\), \((\pi_{1}+\tilde{\pi}_{1},\pi_{2}+\tilde{\pi}_{2})\) is a Maurer Cartan element of the b-dgLa \((\mathfrak{g},[\ \ ],\delta_{1},\delta_{2})\) iff \((\tilde{\pi}_{1},\tilde{\pi}_{2})\) is a Maurer-Cartan element of the b-dgLa \((\mathfrak{g},[\ \ ],d_{1},d_{2})\)._
_Let \((L,\pi_{1},\pi_{2},\alpha)\) be a compatible Hom-Leibniz algebra. From theorems 3.1 and 3.2, we conclude the following important results:_
**Theorem 3.3**.: \((\mathbb{C}_{\alpha}^{*}(L,L),[\ \ ],d_{1},d_{2})\) _is a b-dgLa where \(d_{1}:=[\pi_{1},\_]_{B}\) and \(d_{2}:=[\pi_{2},\_]_{B}\)._
**Theorem 3.4**.: _For any \(\tilde{\pi}_{1},\tilde{\pi}_{2}\in\mathbb{C}_{\alpha}^{2}(L,L)\), \((L,\pi_{1}+\tilde{\pi}_{1},\pi_{2}+\tilde{\pi}_{2},\alpha)\) is a compatible Hom-Leibniz algebra iff \((\tilde{\pi}_{1},\tilde{\pi}_{2})\) is a Maurer-Cartan element of the b-dgLa \((\mathbb{C}_{\alpha}^{*}(L,L),[\ \ ]_{B},d_{1},d_{2})\)._
### Cohomology of compatible Hom-Leibniz algebra with coefficients in itself
_Let \((L,[\ \ ],\{\ \ \},\alpha)\) be a compatible Hom-Leibniz algebra with \(\pi_{1}(x,y)=[x,y]\) and \(\pi_{2}(x,y)=\{x,y\}\). By theorem 3.1, \((\pi_{1},\pi_{2})\) is a Maurer-Cartan element of the b-dgLa \((\mathbb{C}_{\alpha}^{*}(L,L),[\ \ ]_{B},0,0)\)._
_We define the \(n\)-cochains for \(n\geq 1\) as_
\[LC_{\alpha}^{n}(L,L):=\mathbb{C}_{\alpha}^{n}(L,L)\oplus\mathbb{C}_{\alpha}^{ n}(L,L)...\oplus\mathbb{C}_{\alpha}^{n}(L,L),\ \text{n times}\]
_and \(d^{n}:LC_{\alpha}^{n}(L,L)\to LC_{\alpha}^{n+1}(L,L)\) by_
\[d^{1}f=([\pi_{1},f]_{B},[\pi_{2},f]_{B}),\ \forall f\in LC_{\alpha}^{1}(L,L)\]
\[d^{n}(f_{1},f_{2},...,f_{n})=(-1)^{n-1}([\pi_{1},f_{1}]_{B},...,[\pi_{2},f_{i-1}]_{B }+[\pi_{1},f_{i}]_{B},...,[\pi_{2},f_{n}]_{B}),\]
_where \((f_{1},f_{2},...,f_{n})\in LC_{\alpha}^{n}(L,L)\) and \(2\leq i\leq n\)._

With \(d\) defined as above we have the following theorem.
**Theorem 3.5**.: _We have \(d^{n+1}\circ d^{n}=0\)._
Proof.: We first note that since \((\pi_{1},\pi_{2})\) is a Maurer-Cartan element of the b-dgLa \((\mathbb{C}_{\alpha}^{*}(L,L),[\ \ ]_{B},0,0)\) we have \([\pi_{1},\pi_{1}]=0,\ [\pi_{1},\pi_{2}]=0,\ [\pi_{2},\pi_{2}]=0\).
For any \((f_{1},f_{2},\cdots,f_{n})\in LC_{\alpha}^{n}(L,L),\ 2\leq i\leq n\) we have
\[d^{n+1}d^{n}(f_{1},f_{2},\cdots,f_{n})\] \[=(-1)^{n-1}d^{n+1}([\pi_{1},f_{1}]_{B},\cdots,[\pi_{2},f_{i-1}]_{B}+[\pi_{1},f_{i}]_{B},\cdots,[\pi_{2},f_{n}]_{B})\] \[=-([\pi_{1},[\pi_{1},f_{1}]_{B}]_{B},\,[\pi_{2},[\pi_{1},f_{1}]_{B}]_{B}+[\pi_{1},[\pi_{2},f_{1}]_{B}]_{B}+[\pi_{1},[\pi_{1},f_{2}]_{B}]_{B},\cdots,\] \[[\pi_{2},[\pi_{2},f_{i-2}]_{B}]_{B}+[\pi_{2},[\pi_{1},f_{i-1}]_{B}]_{B}+[\pi_{1},[\pi_{2},f_{i-1}]_{B}]_{B}+[\pi_{1},[\pi_{1},f_{i}]_{B}]_{B},\cdots,\] \[[\pi_{2},[\pi_{2},f_{n-1}]_{B}]_{B}+[\pi_{2},[\pi_{1},f_{n}]_{B}]_{B}+[\pi_{1},[\pi_{2},f_{n}]_{B}]_{B},\,[\pi_{2},[\pi_{2},f_{n}]_{B}]_{B})\ \ (3\leq i\leq n-1)\] \[=-(\tfrac{1}{2}[[\pi_{1},\pi_{1}]_{B},f_{1}]_{B},\,[[\pi_{1},\pi_{2}]_{B},f_{1}]_{B}+\tfrac{1}{2}[[\pi_{1},\pi_{1}]_{B},f_{2}]_{B},\cdots,\] \[\tfrac{1}{2}[[\pi_{2},\pi_{2}]_{B},f_{i-2}]_{B}+[[\pi_{1},\pi_{2}]_{B},f_{i-1}]_{B}+\tfrac{1}{2}[[\pi_{1},\pi_{1}]_{B},f_{i}]_{B},\cdots,\] \[\tfrac{1}{2}[[\pi_{2},\pi_{2}]_{B},f_{n-1}]_{B}+[[\pi_{1},\pi_{2}]_{B},f_{n}]_{B},\,\tfrac{1}{2}[[\pi_{2},\pi_{2}]_{B},f_{n}]_{B})\] \[=(0,0,\cdots,0).\]
_Hence \((LC_{\alpha}^{*}(L,L),d^{*})=(\oplus_{n\in\mathbb{N}}LC_{\alpha}^{n}(L,L),d^{*})\) is a cochain complex._
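Theorem 3.5 can likewise be tested numerically for the compatible pair introduced after Definition 3.1 (where \(\alpha=id\), so the \(\alpha\)-twists disappear). The following sketch (illustrative only; the names are ours) forms \(d^{1}f\) for a random \(1\)-cochain \(f\) and checks that every component of \(d^{2}d^{1}f\) vanishes.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2
# Compatible pair from the example: pi1: [e2,e2]=e1, pi2: {e2,e1}=e1, alpha = id.
C1 = np.zeros((n, n, n)); C1[1, 1, 0] = 1.0
C2 = np.zeros((n, n, n)); C2[1, 0, 0] = 1.0
pi1 = lambda u, v: np.einsum('i,j,ijk->k', u, v, C1)
pi2 = lambda u, v: np.einsum('i,j,ijk->k', u, v, C2)

F = rng.standard_normal((n, n))          # a random 1-cochain f (alpha = id, so no constraint)
f = lambda u: F @ u

def d1(pi, f):
    """[pi, f]_B for a 1-cochain f: (x,y) -> pi(f(x),y) + pi(x,f(y)) - f(pi(x,y))."""
    return lambda x, y: pi(f(x), y) + pi(x, f(y)) - f(pi(x, y))

def d2(pi, g):
    """[pi, g]_B for a 2-cochain g (alpha = id)."""
    def out(x, y, z):
        pg = pi(g(x, y), z) - pi(x, g(y, z)) + pi(y, g(x, z))   # pi o g
        gp = g(pi(x, y), z) - g(x, pi(y, z)) + g(y, pi(x, z))   # g o pi
        return pg + gp
    return out

g1, g2 = d1(pi1, f), d1(pi2, f)          # d^1 f = ([pi1, f]_B, [pi2, f]_B)
# d^2(g1, g2) = -([pi1,g1]_B, [pi2,g1]_B + [pi1,g2]_B, [pi2,g2]_B); check each component.
comp1, comp3 = d2(pi1, g1), d2(pi2, g2)
comp2 = lambda x, y, z: d2(pi2, g1)(x, y, z) + d2(pi1, g2)(x, y, z)

basis = np.eye(n)
for x in basis:
    for y in basis:
        for z in basis:
            assert np.allclose(comp1(x, y, z), 0.0)
            assert np.allclose(comp2(x, y, z), 0.0)
            assert np.allclose(comp3(x, y, z), 0.0)
print("d^2 d^1 f = 0 for a random 1-cochain f, as asserted by Theorem 3.5.")
```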
**Definition 3.6**.: _Let \((L,[\ \ ],\{\ \},\alpha)\) be a compatible Hom-Leibniz algebra. The cohomology of the cochain complex \((LC_{\alpha}^{*}(L,L),d^{*})\) is called the cohomology of the compatible Hom-Leibniz algebra \((L,[\ \ ],\{\ \},\alpha)\). We denote the corresponding cohomology group by \(H_{\alpha}^{n}(L,L)\)._
## 4 Infinitesimal deformations of compatible Hom-Leibniz algebras
**Definition 4.1**.: _Let \((L,[\ \ ],\{\ \},\alpha)\) be a compatible Hom-Leibniz algebra. A formal one-parameter deformation of \(L\) is a pair of \(K[[t]]\)-linear maps_
\[\mu_{t}:L[[t]]\otimes L[[t]]\to L[[t]]\ \text{and}\]
\[m_{t}:L[[t]]\otimes L[[t]]\to L[[t]]\ \text{such that}:\]
1. \(\mu_{t}(a,b)=\sum_{i=0}^{\infty}\mu_{i}(a,b)t^{i}\)_,_ \(m_{t}(a,b)=\sum_{i=0}^{\infty}m_{i}(a,b)t^{i}\) _for all_ \(a,b\in L\)_, where_ \(\mu_{i},m_{i}:L\otimes L\to L\) _are \(K\)-linear,_ \(\mu_{0}(a,b)=[a,b]\) _and_ \(m_{0}(a,b)=\{a,b\}\)_._
2. _For any_ \(t\)_,_ \((L[[t]],\mu_{t},m_{t},\alpha)\) _is a compatible Hom-Leibniz algebra._
**Definition 4.2**.: _Let \((L,[\ \ ],\{\ \},\alpha)\) be a compatible Hom-Leibniz algebra. Let \(\mu_{1},m_{1}\in C^{2}_{\alpha}(L,L)\). Define_
\[\mu_{t}(x,y)=[x,y]+t\mu_{1}(x,y),\ \ \ \ m_{t}(x,y)=\{x,y\}+tm_{1}(x,y),\ \ \forall x,y\in L.\]
_If for any t, \((L,\mu_{t},m_{t},\alpha)\) is a compatible Hom-Leibniz algebra, we say that \((L,\mu_{t},m_{t},\alpha)\) defines an infinitesimal deformation of \((L,[\ \ ],\{\ \},\alpha)\). We also say that \((\mu_{1},m_{1})\) generates an infinitesimal deformation of \((L,[\ \ ],\{\ \},\alpha)\)._
_For convenience we write \([x,y]=\mu_{0}(x,y)\) and \(\{x,y\}=m_{0}(x,y)\)._
_By theorem 3.1 we have that \((L,\mu_{t},m_{t},\alpha)\) is a compatible Hom-Leibniz algebra \(\iff(\mu_{t},m_{t})\) is a Maurer Cartan element of \((C^{*}_{\alpha},[\ \ ]_{B},0,0)\iff\)_
* \([\mu_{t},\mu_{t}]_{B}=0\)__
* \([m_{t},m_{t}]_{B}=0\)__
* \([\mu_{t},m_{t}]_{B}=0\)__
* \(\Longleftrightarrow\)__
* \([\mu_{0},\mu_{0}]_{B}=0,\ [\mu_{0},\mu_{1}]_{B}=0,\ [\mu_{1},\mu_{1}]_{B}=0\)__
* \([m_{0},m_{0}]_{B}=0,\ [m_{0},m_{1}]_{B}=0,\ [m_{1},m_{1}]_{B}=0\)__
* \([\mu_{0},m_{0}]_{B}=0,\ [\mu_{0},m_{1}]_{B}+[\mu_{1},m_{0}]_{B}=0,\ [\mu_{1},m_{1}]_{B}=0\)_._
_Reordering the terms and excluding the trivial equations we get that \((L,\mu_{t},m_{t},\alpha)\) defines an infinitesimal deformation of \((L,[\ \ ],\{\ \},\alpha)\) iff_
\[[\mu_{0},\mu_{1}]_{B}=0,\ \ \ \ [m_{0},m_{1}]_{B}=0\,\ \ \ \ [\mu_{0},m_{1}]_{B}+[\mu_{1},m_{0}]_{B}\ =0\]
\[[\mu_{1},\mu_{1}]_{B}=0,\ \ \ \ [m_{1},m_{1}]_{B}=0,\ \ \ \ [\mu_{1},m_{1}]_{B}=0.\]
_Note that the first line above says precisely that \(d^{2}(\mu_{1},m_{1})=0\), i.e., \((\mu_{1},m_{1})\) is a 2-cocycle, and the second line says that \((L,\mu_{1},m_{1},\alpha)\) is a compatible Hom-Leibniz algebra._
_Hence we have the following theorem._
**Theorem 4.1**.: _Let \((L,[\ \ ],\{\ \},\alpha)\) be a compatible Hom-Leibniz algebra. If \((\mu_{1},m_{1})\in LC^{2}_{\alpha}(L,L)\) generates an infinitesimal deformation then \((\mu_{1},m_{1})\) is a 2-cocycle._
**Definition 4.3**.: _Two infinitesimal deformations \((L,\mu_{t},m_{t},\alpha)\) and \((L,\mu^{\prime}_{t},m^{\prime}_{t},\alpha)\) of compatible Hom-Leibniz algebra \((L,[\ \ ],\{\ \},\alpha)\) are said to be equivalent if there exists a linear bijection \(N:L\to L\) such that_
\[Id+tN:(L,\mu_{t},m_{t},\alpha)\rightarrow(L,\mu^{\prime}_{t},m^{\prime}_{t},\alpha)\]
_is a compatible Hom-Leibniz algebra homomorphism._
\(Id+tN\) _being a compatible Hom-Leibniz algebra homomorphism implies_
1. \([x,y]=[x,y]^{\prime}\)__
2. \(\mu_{1}(x,y)-\mu_{1}^{\prime}(x,y)=[x,N(y)]+[N(x),y]-N[x,y]\)__
3. \(N\mu_{1}(x,y)=\mu_{1}^{\prime}(x,N(y))+\mu_{1}^{\prime}(N(x),y)+[N(x),N(y)]\)__
4. \(\mu_{1}^{\prime}(N(x),N(y))=0\)__
5. \(\{x,y\}=\{x,y\}^{\prime}\)__
6. \(m_{1}(x,y)-m_{1}^{\prime}(x,y)=\{x,N(y)\}+\{N(x),y\}-N\{x,y\}\)__
7. \(Nm_{1}(x,y)=m_{1}^{\prime}(x,N(y))+m_{1}^{\prime}(N(x),y)+\{N(x),N(y)\}\)__
8. \(m_{1}^{\prime}(N(x),N(y))=0\)__
9. \(N\alpha=\alpha N,\ \ \forall x,y\in L.\)__
Conditions \(2\) and \(6\) give

\[(\mu_{1}-\mu_{1}^{\prime},\,m_{1}-m_{1}^{\prime})(x,y) = ([x,N(y)]+[N(x),y]-N[x,y],\{x,N(y)\}+\{N(x),y\}-N\{x,y\})\] \[= ([\mu_{0},N]_{B},[m_{0},N]_{B})(x,y),\]

so \((\mu_{1}-\mu_{1}^{\prime},m_{1}-m_{1}^{\prime})=d^{1}N\).
_Thus we have the following theorem._
**Theorem 4.2**.: _If two infinitesimal deformations \((L,\mu_{t},m_{t},\alpha)\) and \((L,\mu_{t}^{\prime},m_{t}^{\prime},\alpha)\) of a compatible Hom-Leibniz algebra \((L,\mu_{0},m_{0},\alpha)\) are equivalent, then \((\mu_{1},m_{1})\) and \((\mu_{1}^{\prime},m_{1}^{\prime})\) lie in the same cohomology class._
**Definition 4.4**.: _Let \((L,[\ \ ],\alpha)\) be a Hom-Leibniz algebra. A linear map \(N:L\to L\) is said to be a Nijenhuis operator on \(L\) if_
\[N([x,N(y)]+[N(x),y]-N[x,y])=[N(x),N(y)]\ \ \forall x,y\in L\]
_and \(\alpha N=N\alpha.\)_
_We define the bilinear operation \([\ \ ]_{N}:L\otimes L\to L\) as_
\[[x,y]_{N}=[x,N(y)]+[N(x),y]-N[x,y].\]
_Using the multiplicativity of \(\alpha\) and the fact that \(N\alpha=\alpha N\), we get_
\[\alpha[x,y]_{N}=[\alpha(x),\alpha(y)]_{N}.\]
\(T_{[\ \ ]}N:L\otimes L\to L\) _denotes the Nijenhuis torsion of \(N\) defined as_
\[T_{[\ \ ]}N(x,y)=N([x,y]_{N})-[N(x),N(y)],\ \ \forall x,y\in L.\]
_When \(N\) is a Nijenhuis operator we get that \(T_{[\ \ ]}N=0\)._
**Example 4.1**.: _The identity map \(I:L\to L\) is a Nijenhuis operator on any Hom-Leibniz algebra \((L,[\ \ ],\alpha)\)._
**Proposition 4.1**.: _If \(N:L\to L\) is a Nijenhuis operator on \((L,[\ \ ],\alpha)\), then \((L,[\ \ ]_{N},\alpha)\) is also a Hom-Leibniz algebra. Further, \(N\) is a Hom-Leibniz algebra homomorphism from \((L,[\ \ ]_{N},\alpha)\) to \((L,[\ \ ],\alpha)\), and \((L,[\ \ ],[\ \ ]_{N},\alpha)\) forms a compatible Hom-Leibniz algebra._
Proof.: For every \(x,y\in L\) put \([x,y]_{N}=\pi_{N}(x,y)\) and \([x,y]=\pi(x,y)\).
Using the Balavoine bracket we get,
\[[\pi_{N},\pi_{N}]_{B}(x,y,z)=2(\pi_{N}(\pi_{N}(x,y),\alpha(z))-\pi_{N}(\alpha(x),\pi_{N}(y,z))+\pi_{N}(\alpha(y),\pi_{N}(x,z))).\]

A direct computation using the Nijenhuis condition \(N([x,y]_{N})=[N(x),N(y)]\), the identity \(\alpha N=N\alpha\) and the Hom-Leibniz identity for \([\ \ ]\) shows that this expression vanishes. Thus \(\pi_{N}=[\ \ ]_{N}\) defines a Hom-Leibniz algebra structure on \(L\).
Further, \(N([x,y]_{N})=[N(x),N(y)]\) and \(N\alpha=\alpha N\) follow from the definition of a Nijenhuis operator and of \([\ \ ]_{N}\), so \(N\) is a Hom-Leibniz algebra homomorphism from \((L,[\ \ ]_{N},\alpha)\) to \((L,[\ \ ],\alpha)\).
To show that \((L,[\ \ ],[\ \ ]_{N},\alpha)\) is a compatible Hom-Leibniz algebra, we first note that \(\pi_{N}=[\pi,N]_{B}\). For any \(k_{1}\) and \(k_{2}\in K\),

\[[k_{1}\pi+k_{2}\pi_{N},k_{1}\pi+k_{2}\pi_{N}]_{B} = k_{1}k_{2}([\pi,\pi_{N}]_{B}+[\pi_{N},\pi]_{B})\] \[= 2k_{1}k_{2}[\pi,\pi_{N}]_{B}\] \[= 2k_{1}k_{2}[\pi,[\pi,N]_{B}]_{B}\] \[= 0,\]

where the first equality uses \([\pi,\pi]_{B}=[\pi_{N},\pi_{N}]_{B}=0\) and the last follows from the graded Jacobi identity, since \([\pi,[\pi,N]_{B}]_{B}=\frac{1}{2}[[\pi,\pi]_{B},N]_{B}=0\). By Proposition 3.1, \((L,[\ \ ],[\ \ ]_{N},\alpha)\) is therefore a compatible Hom-Leibniz algebra.
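As a sanity check of Proposition 4.1 on the two-dimensional example \([e_{2},e_{2}]=e_{1}\), \(\alpha=id\), one may take \(N(e_{1})=2e_{1}\), \(N(e_{2})=e_{1}+2e_{2}\). The sketch below is illustrative only; for this small algebra the deformed bracket happens to be \(2[\ \ ]\), but the checks are a faithful instance of the construction. It verifies that the Nijenhuis torsion vanishes, that \(N\) is a homomorphism from \((L,[\ \ ]_{N},\alpha)\) to \((L,[\ \ ],\alpha)\), and that \((L,[\ \ ],[\ \ ]_{N},\alpha)\) is compatible.

```python
import numpy as np

n = 2
C = np.zeros((n, n, n)); C[1, 1, 0] = 1.0      # [e2, e2] = e1
A = np.eye(n)                                   # alpha = identity
N = np.array([[2.0, 1.0],                       # N(e1) = 2 e1, N(e2) = e1 + 2 e2
              [0.0, 2.0]])

br = lambda u, v: np.einsum('i,j,ijk->k', u, v, C)

def br_N(u, v):
    """Deformed bracket [u, v]_N = [u, Nv] + [Nu, v] - N[u, v]."""
    return br(u, N @ v) + br(N @ u, v) - N @ br(u, v)

def torsion(u, v):
    """Nijenhuis torsion T_N(u, v) = N([u, v]_N) - [Nu, Nv]."""
    return N @ br_N(u, v) - br(N @ u, N @ v)

def leibniz_defect(b, x, y, z):
    return b(A @ x, b(y, z)) - b(b(x, y), A @ z) - b(A @ y, b(x, z))

def compat_defect(b1, b2, x, y, z):
    lhs = b1(A @ x, b2(y, z)) + b2(A @ x, b1(y, z))
    rhs = (b1(b2(x, y), A @ z) + b2(b1(x, y), A @ z)
           + b1(A @ y, b2(x, z)) + b2(A @ y, b1(x, z)))
    return lhs - rhs

assert np.allclose(N @ A, A @ N)                         # N commutes with alpha
basis = np.eye(n)
for x in basis:
    for y in basis:
        assert np.allclose(torsion(x, y), 0.0)           # N is Nijenhuis
        assert np.allclose(N @ br_N(x, y), br(N @ x, N @ y))  # N is a homomorphism
        for z in basis:
            assert np.allclose(leibniz_defect(br_N, x, y, z), 0.0)
            assert np.allclose(compat_defect(br, br_N, x, y, z), 0.0)
print("(L, [.,.], [.,.]_N, alpha) is a compatible Hom-Leibniz algebra for this N.")
```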
**Definition 4.5**.: _Let \((L,[\ \ ],\{\ \},\alpha)\) be a compatible Hom-Leibniz algebra. A linear map \(N:L\to L\) is said to be a Nijenhuis operator on \((L,[\ \ ],\{\ \},\alpha)\) if \(N\) is a Nijenhuis operator on both of the Hom-Leibniz algebras \((L,[\ \ ],\alpha)\) and \((L,\{\ \ \},\alpha)\)._
**Proposition 4.2**.: _Let \((L,[\ \ ],\{\ \},\alpha)\) be a compatible Hom-Leibniz algebra. A linear map \(N:L\to L\) is a Nijenhuis operator on \((L,[\ \ ],\{\ \},\alpha)\) iff for any \(k_{1},k_{2}\) in \(K\), \(N\) is a Nijenhuis operator on the Hom-Leibniz algebra \((L,[\![\ \ ]\!],\alpha)\), where \([\![x,y]\!]=k_{1}[x,y]+k_{2}\{x,y\},\ \forall x,y\in L\)._
Proof.: We have
\[T_{[\![\ \ ]\!]}N(x,y) = N([\![x,y]\!]_{N})-[\![N(x),N(y)]\!]\] \[= N(k_{1}[x,y]_{N}+k_{2}\{x,y\}_{N})-k_{1}[N(x),N(y)]-k_{2}\{N(x),N(y)\}\] \[= k_{1}(N([x,y]_{N})-[N(x),N(y)])+k_{2}(N(\{x,y\}_{N})-\{N(x),N(y)\})\] \[= k_{1}T_{[\ \ ]}N(x,y)+k_{2}T_{\{\ \}}N(x,y)\]
Hence we have,
\[T_{[\![\ \ ]\!]}N=0\ \ {\rm iff}\ \ \ T_{[\ \ ]}N=T_{\{\ \}}N=0.\]
**Proposition 4.3**.: _Let \((L,[\ \ ],\{\ \},\alpha)\) be a compatible Hom-Leibniz algebra and \(N:L\to L\) is a Nijenhuis operator on \((L,[\ \ ],\{\ \},\alpha)\). Then \((L,[\ \ ]_{N},\{\ \}_{N},\alpha)\) is also a compatible Hom-Leibniz algebra and \(N\) is a compatible Hom-Leibniz algebra homomorphism from \((L,[\ \ ]_{N},\{\ \}_{N},\alpha)\) to \((L,[\ \ ],\{\ \},\alpha)\)._
Proof.: Let \(N:L\to L\) be a Nijenhuis operator on \((L,[\ \ ],\{\ \},\alpha)\). Then by Proposition 4.2, \(N\) is a Nijenhuis operator on the Hom-Leibniz algebra \((L,[\![\ \ ]\!],\alpha)\) for any \(k_{1},k_{2}\) in \(K\).

Using Proposition 4.1 we get that \((L,[\![\ \ ]\!]_{N},\alpha)\) is a Hom-Leibniz algebra and \(N\) is a Hom-Leibniz algebra homomorphism from \((L,[\![\ \ ]\!]_{N},\alpha)\) to \((L,[\![\ \ ]\!],\alpha)\).

Since \([\![\ \ ]\!]_{N}=k_{1}[\ \ ]_{N}+k_{2}\{\ \}_{N}\), it follows that \((L,[\ \ ]_{N},\{\ \}_{N},\alpha)\) is a compatible Hom-Leibniz algebra and that \(N\) is a compatible Hom-Leibniz algebra homomorphism from \((L,[\ \ ]_{N},\{\ \}_{N},\alpha)\) to \((L,[\ \ ],\{\ \},\alpha)\).
**Definition 4.6**.: _An infinitesimal deformation \((L,\mu_{t},m_{t},\alpha)\) of a compatible Hom-Leibniz algebra \((L,\mu_{0},m_{0},\alpha)\) generated by \((\mu_{1},m_{1})\) is trivial if there exists a linear map \(N:L\to L\) such that \(Id+tN:(L,\mu_{t},m_{t},\alpha)\rightarrow(L,\mu_{0},m_{0},\alpha)\) is a compatible Hom-Leibniz algebra homomorphism._
_Now \(Id+tN\) is a compatible Hom-Leibniz algebra homomorphism iff_
1. \(\mu_{1}(x,y)=[x,N(y)]+[N(x),y]-N[x,y]\)__
2. \(m_{1}(x,y)=\{x,N(y)\}+\{N(x),y\}-N\{x,y\}\)__
3. \(N\mu_{1}(x,y)=[N(x),N(y)]\)__
4. \(Nm_{1}(x,y)=\{N(x),N(y)\}\)__
5. \(N\alpha=\alpha N.\)__
Conditions \(1,3\) and \(5\) give that \(N\) is a Nijenhuis operator on \((L,\mu_{0},\alpha)\). Conditions \(2,4\) and \(5\) give that \(N\) is a Nijenhuis operator on \((L,m_{0},\alpha)\).
_Thus we have the following theorem._
**Theorem 4.3**.: _A trivial infinitesimal deformation of a compatible Hom-Leibniz algebra gives rise to a Nijenhuis operator._
**Theorem 4.4**.: _A Nijenhuis operator on a compatible Hom-Leibniz algebra \((L,[\ \ ],\{\ \},\alpha)\) gives rise to a trivial deformation._
Proof.: Let \(N\) be a Nijenhuis operator on a compatible Hom-Leibniz algebra \((L,[\ \ ],\{\ \},\alpha)\). Take
\[\mu_{1}(x,y)=[x,N(y)]+[N(x),y]-N[x,y]\]
\[m_{1}(x,y)=\{x,N(y)\}+\{N(x),y\}-N\{x,y\}\]
for any \(x,y\in L\). Then
\[d^{1}N(x,y) = ([\mu_{0},N]_{B},[m_{0},N]_{B})(x,y)\] \[= ([x,N(y)]+[N(x),y]-N[x,y],\{x,N(y)\}+\{N(x),y\}-N\{x,y\})\] \[= (\mu_{1}(x,y),m_{1}(x,y)).\]
i.e., \((\mu_{1},m_{1})=d^{1}N\) is a 2-coboundary and, in particular, a 2-cocycle.
Further since \(N\) is a Nijenhuis operator on \((L,[\ \ ],\{\ \},\alpha)\), and \(\mu_{1}=[\ \ ]_{N}\) and \(m_{1}=\{\ \}_{N}\), by proposition (4.3) we get that \((L,[\ \ ]_{N},\{\ \}_{N},\alpha)\) is a compatible
Hom-Leibniz algebra.
These two statements imply that \((\mu_{1},m_{1})\) gives rise to an infinitesimal deformation of \(L\). Showing that this deformation is trivial is straightforward.
## 5 Cohomologies of compatible Hom-Leibniz algebras with coefficients in arbitrary representation
Let \(g_{1}\) and \(g_{2}\) be vector spaces. We define \(g^{l,k}\) to be the direct sum of all tensor products of \(g_{1}\) and \(g_{2}\) in which \(g_{1}\) appears \(l\) times and \(g_{2}\) appears \(k\) times. For example \(g^{1,1}=(g_{1}\otimes g_{2})\oplus(g_{2}\otimes g_{1})\) and \(g^{2,1}=(g_{1}\otimes g_{1}\otimes g_{2})\oplus(g_{1}\otimes g_{2}\otimes g_{1})\oplus(g_{2}\otimes g_{1}\otimes g_{1})\). Thus \(\otimes^{n}(g_{1}\oplus g_{2})\equiv\oplus_{l+k=n}g^{l,k}\). For any linear map \(f:g_{i_{1}}\otimes g_{i_{2}}\cdots\otimes g_{i_{n}}\to g_{j}\), where \(i_{1},i_{2},...,i_{n},j\in\{1,2\}\), we define \(\hat{f}\in C^{n}(g_{1}\oplus g_{2},g_{1}\oplus g_{2})\) as
\[\hat{f}=\begin{cases}f,&\text{on }g_{i_{1}}\otimes g_{i_{2}}\cdots\otimes g_{i_ {n}}\\ 0,&\text{otherwise}\end{cases}\]
\(\hat{f}\) is called a lift of \(f\).
In particular, for the linear maps we encountered in the previous sections:
\[\pi:L\otimes L\to L,\ \ m_{L}:L\otimes M\to M,\ \ m_{R}:M\otimes L\to M\]
we get lifts
\[\hat{\pi}:(L\oplus M)^{2}\to L\oplus M,\text{ defined as }\hat{\pi}((x_{1},v_{1}),(x_{2},v_{2}))=(\pi(x_{1},x_{2}),0)\]
\[\hat{m}_{L}:(L\oplus M)^{2}\to L\oplus M,\text{ defined as }\hat{m}_{L}((x_{1},v_{1}),(x_{2},v_{2}))=(0,m_{L}(x_{1},v_{2}))\]
\[\hat{m}_{R}:(L\oplus M)^{2}\to L\oplus M,\text{ defined as }\hat{m}_{R}((x_{1},v_{1}),(x_{2},v_{2}))=(0,m_{R}(v_{1},x_{2}))\]
By the properties of the Hom functor we get
\[C^{n}(g_{1}\oplus g_{2},g_{1}\oplus g_{2})\equiv\sum_{l+k=n}C^{n}(g^{l,k},g_{ 1})\oplus\sum_{l+k=n}C^{n}(g^{l,k},g_{2}).\]
**Definition 5.1**.: _A linear map \(f\in Hom(\otimes^{n}(g_{1}\oplus g_{2}),(g_{1}\oplus g_{2}))\) has bidegree \(l|k\) if_
1. \(l+k+1=n\)__
2. _if_ \(X\in g^{l+1,k}\) _then_ \(f(X)\in g_{1}\)__
3. _if_ \(X\in g^{l,k+1}\) _then_ \(f(X)\in g_{2}\)__
4. \(f(X)=0\) _in all other cases._
_We use notation \(\|f\|=l|k\). We say that \(f\) is homogeneous if \(f\) has a bidegree._
_Considering examples above, we have \(\|\hat{\pi}\|=\|\hat{m}_{L}\|=\|\hat{m}_{R}\|=1|0\)._
_We consider a few standard results regarding bidegrees._
**Lemma 5.1**.: _Let \(f_{1},f_{2},\cdots,f_{k}\in C^{n}(g_{1}\oplus g_{2},g_{1}\oplus g_{2})\) be homogeneous linear maps with pairwise distinct bidegrees. Then \(f_{1}+f_{2}+...+f_{k}=0\) iff \(f_{1}=f_{2}=\cdots=f_{k}=0\)._
**Lemma 5.2**.: _If \(\|f\|=-1|l\) (resp. \(l|-1\)) and \(\|g\|=-1|k\) (resp. \(k|-1\)), then \([f,g]_{B}=0\)._
**Lemma 5.3**.: _Let \(f\in C^{n}(g_{1}\oplus g_{2},g_{1}\oplus g_{2})\) and \(g\in C^{m}(g_{1}\oplus g_{2},g_{1}\oplus g_{2})\) be homogeneous linear maps with bidegrees \(l_{f}|k_{f}\) and \(l_{g}|k_{g}\) respectively. Then \([f,g]_{B}\) is a homogeneous linear map of bidegree \(l_{f}+l_{g}|k_{f}+k_{g}\)._
_The following is the Hom version of the corresponding result given in [3]._
**Theorem 5.1**.: _Let \((L,\pi=[\ \ ],\alpha)\) be a Hom-Leibniz algebra. \((M,m_{L},m_{R},\beta)\) is a representation of \(L\) iff \(\hat{m}_{L}+\hat{m}_{R}\) is a Maurer-Cartan element of the dgLa \((C^{*}_{\alpha\oplus\beta}(L\oplus M,L\oplus M),[\ \ ]_{B},\partial_{\hat{\pi}}=[\hat{\pi},.]_{B})\)._
**Corollary 5.1**.: _If \((M,m_{L},m_{R},\beta)\) is a representation of \((L,\pi,\alpha)\), then \([\hat{\pi}+\hat{m}_{L}+\hat{m}_{R},\hat{\pi}+\hat{m}_{L}+\hat{m}_{R}]_{B}=0\)._
_Let \((M,m_{L},m_{R},\beta)\) be a representation of the Hom-Leibniz algebra \((L,\pi,\alpha)\). We define the set of \(n\)-cochains as_
\[\mathbb{C}^{n}_{\alpha\oplus\beta}(L,M):=C^{n|-1}_{\alpha\oplus\beta}(L\oplus M,L\oplus M)\cong C_{\alpha\oplus\beta}(\otimes^{n}L,M)\]
_and coboundary operator \(d^{n}_{\pi+m_{L}+m_{R}}:\mathbb{C}^{n}_{\alpha\oplus\beta}(L,M)\to\mathbb{C} ^{n+1}_{\alpha\oplus\beta}(L,M)\) as_
\[d^{n}_{\pi+m_{L}+m_{R}}f:=(-1)^{n-1}[\hat{\pi}+\hat{m}_{L}+\hat{m}_{R},\hat{f}]_{B},\ \forall f\in\mathbb{C}^{n}_{\alpha\oplus\beta}(L,M).\]
_Note that since \(\hat{\pi}+\hat{m}_{L}+\hat{m}_{R}\in C^{1|0}\) and \(\hat{f}\in C^{n|-1}\), Lemma 5.3 gives us that \([\hat{\pi}+\hat{m}_{L}+\hat{m}_{R},\hat{f}]_{B}\in C^{n+1|-1}\)._
_Further note that \(d^{n+1}d^{n}f=-[\hat{\pi}+\hat{m}_{L}+\hat{m}_{R},[\hat{\pi}+\hat{m}_{L}+\hat{ m}_{R},\hat{f}]_{B}]_{B}=0\) by the graded Jacobi identity._
_Thus we have a well defined cochain complex \((\mathbb{C}^{*}_{\alpha\oplus\beta}(L,M),d^{*}_{\hat{\pi}+\hat{m}_{L}+\hat{m}_ {R}})\)._
**Theorem 5.2**.: _Let \((L,\pi_{1}=[\ \ \ ],\pi_{2}=\{\ \ \},\alpha)\) be a compatible Hom-Leibniz algebra and \((M,m^{1}_{L},m^{1}_{R},m^{2}_{L},m^{2}_{R},\beta)\) a representation of \(L\). Then \((\hat{\pi}_{1}+\hat{m}^{1}_{L}+\hat{m}^{1}_{R},\hat{\pi}_{2}+\hat{m}^{2}_{L}+\hat{m}^{2}_{R})\) is a Maurer-Cartan element of the bi-differential graded Lie algebra \((\mathbb{C}^{*}_{\alpha\oplus\beta}(L\oplus M,L\oplus M),[\ \ ]_{B},0,0)\), i.e.,_
\[[\hat{\pi}_{1}+\hat{m}^{1}_{L}+\hat{m}^{1}_{R},\hat{\pi}_{1}+\hat{m}^{1}_{L}+ \hat{m}^{1}_{R}]_{B}=0, \tag{3}\]
\[[\hat{\pi}_{1}+\hat{m}^{1}_{L}+\hat{m}^{1}_{R},\hat{\pi}_{2}+\hat{m}^{2}_{L}+ \hat{m}^{2}_{R}]_{B}=0, \tag{4}\]
\[[\hat{\pi}_{2}+\hat{m}^{2}_{L}+\hat{m}^{2}_{R},\hat{\pi}_{2}+\hat{m}^{2}_{L}+ \hat{m}^{2}_{R}]_{B}=0 \tag{5}\]
Proof.: Since \((M,m_{L}^{1},m_{R}^{1},\beta)\) is a representation of the Hom-Leibniz algebra \((L,\pi_{1}=[\ \ ],\alpha)\), by corollary 5.1 equation 3 holds. Likewise \((M,m_{L}^{2},m_{R}^{2},\beta)\) is a representation of the Hom-Leibniz algebra \((L,\pi_{2}=\{\ \ \},\alpha)\), by corollary 5.1 equation 5 holds.
For \(x_{1},x_{2},x_{3}\in L\) and \(u_{1},u_{2},u_{3}\in M\),

\[[\hat{\pi}_{1}+\hat{m}_{L}^{1}+\hat{m}_{R}^{1},\hat{\pi}_{2}+\hat{m}_{L}^{2}+\hat{m}_{R}^{2}]_{B}\big((x_{1},u_{1}),(x_{2},u_{2}),(x_{3},u_{3})\big)\] \[= (\pi_{1}(\pi_{2}(x_{1},x_{2}),\alpha(x_{3})),\ m_{L}^{1}(\pi_{2}(x_{1},x_{2}),\beta(u_{3}))+m_{R}^{1}(m_{L}^{2}(x_{1},u_{2})+m_{R}^{2}(u_{1},x_{2}),\alpha(x_{3})))\] \[+ (-\pi_{1}(\alpha(x_{1}),\pi_{2}(x_{2},x_{3})),\ -m_{L}^{1}(\alpha(x_{1}),m_{L}^{2}(x_{2},u_{3})+m_{R}^{2}(u_{2},x_{3}))-m_{R}^{1}(\beta(u_{1}),\pi_{2}(x_{2},x_{3})))\] \[+ (\pi_{1}(\alpha(x_{2}),\pi_{2}(x_{1},x_{3})),\ m_{L}^{1}(\alpha(x_{2}),m_{L}^{2}(x_{1},u_{3})+m_{R}^{2}(u_{1},x_{3}))+m_{R}^{1}(\beta(u_{2}),\pi_{2}(x_{1},x_{3})))\] \[+ (\pi_{2}(\pi_{1}(x_{1},x_{2}),\alpha(x_{3})),\ m_{L}^{2}(\pi_{1}(x_{1},x_{2}),\beta(u_{3}))+m_{R}^{2}(m_{L}^{1}(x_{1},u_{2})+m_{R}^{1}(u_{1},x_{2}),\alpha(x_{3})))\] \[+ (-\pi_{2}(\alpha(x_{1}),\pi_{1}(x_{2},x_{3})),\ -m_{L}^{2}(\alpha(x_{1}),m_{L}^{1}(x_{2},u_{3})+m_{R}^{1}(u_{2},x_{3}))-m_{R}^{2}(\beta(u_{1}),\pi_{1}(x_{2},x_{3})))\] \[+ (\pi_{2}(\alpha(x_{2}),\pi_{1}(x_{1},x_{3})),\ m_{L}^{2}(\alpha(x_{2}),m_{L}^{1}(x_{1},u_{3})+m_{R}^{1}(u_{1},x_{3}))+m_{R}^{2}(\beta(u_{2}),\pi_{1}(x_{1},x_{3})))\] \[= 0. \tag{6}\]
We get the above by the compatibility condition (1) and the conditions \(LLM\), \(LML\) and \(MLL\).
Note that the coboundary operator for \((L,\pi_{1},\alpha)\) with coefficients in \((L,m_{L}^{1},m_{R}^{1},\beta)\) and for \((L,\pi_{2},\alpha)\) with coefficients in \((L,m_{L}^{2},m_{R}^{2},\beta)\) are respectively given by
\[d_{\pi^{1}+m_{L}^{1}+m_{R}^{1}}^{n}f:=(-1)^{n-1}[\hat{\pi}_{1}+ \hat{m}_{L}^{1}+\hat{m}_{R}^{1},\hat{f}]_{B},\ \mbox{and}\]
\[d_{\pi^{2}+m_{L}^{2}+m_{R}^{2}}^{n}f:=(-1)^{n-1}[\hat{\pi}_{2}+\hat{m}_{L}^{2}+\hat{m}_{R}^{2},\hat{f}]_{B},\ \forall f\in\mathbb{C}_{\alpha\oplus\beta}^{n}(L,M)\]
By the graded Jacobi identity it can be shown that the three conditions 3, 4, 5 imply
\[d_{\hat{\pi}_{1}+\hat{m}_{L}^{1}+\hat{m}_{R}^{1}}^{n+1}d_{\hat{\pi}_{1}+\hat{m}_{L}^{1}+\hat{m}_{R}^{1}}^{n} =0\] \[d_{\hat{\pi}_{2}+\hat{m}_{L}^{2}+\hat{m}_{R}^{2}}^{n+1}d_{\hat{\pi}_{2}+\hat{m}_{L}^{2}+\hat{m}_{R}^{2}}^{n} =0\] \[d_{\hat{\pi}_{1}+\hat{m}_{L}^{1}+\hat{m}_{R}^{1}}^{n+1}d_{\hat{\pi}_{2}+\hat{m}_{L}^{2}+\hat{m}_{R}^{2}}^{n}+d_{\hat{\pi}_{2}+\hat{m}_{L}^{2}+\hat{m}_{R}^{2}}^{n+1}d_{\hat{\pi}_{1}+\hat{m}_{L}^{1}+\hat{m}_{R}^{1}}^{n} =0 \tag{7}\]
For \(n\geq 1\) we define the space of n-cochains \(LC^{n}(L,M)\) as
\[LC^{n}(L,M)=\mathbb{C}_{\alpha\oplus\beta}^{n}(L,M)\oplus\mathbb{C}_{\alpha \oplus\beta}^{n}(L,M)\oplus\cdots\oplus\mathbb{C}_{\alpha\oplus\beta}^{n}(L,M) \ \ \}\mbox{n-copies}\]
and coboundary for \(n\geq 1\), \(\partial^{n}:LC^{n}\to LC^{n+1}\) as
\[\partial^{1}f=(d_{\hat{\pi}_{1}+\hat{m}_{L}^{1}+\hat{m}_{R}^{1}}^{1}f,\ d_{\hat{\pi}_{2}+\hat{m}_{L}^{2}+\hat{m}_{R}^{2}}^{1}f)\ \ \forall f\in\mathbb{C}_{\alpha\oplus\beta}^{1}(L,M)\]
and for \(2\leq i\leq n\) and \((f_{1},f_{2},\cdots,f_{n})\in LC^{n}(L,M)\),

\[\partial^{n}(f_{1},f_{2},\cdots,f_{n})=(d_{\hat{\pi}_{1}+\hat{m}_{L}^{1}+\hat{m}_{R}^{1}}^{n}f_{1},\cdots,d_{\hat{\pi}_{2}+\hat{m}_{L}^{2}+\hat{m}_{R}^{2}}^{n}f_{i-1}+d_{\hat{\pi}_{1}+\hat{m}_{L}^{1}+\hat{m}_{R}^{1}}^{n}f_{i},\cdots,d_{\hat{\pi}_{2}+\hat{m}_{L}^{2}+\hat{m}_{R}^{2}}^{n}f_{n}).\]
Using (7), or arguing as in Theorem 3.5, it can be shown that \(\partial^{n+1}\circ\partial^{n}=0\).
**Definition 5.2**.: _Let \((L,\pi_{1},\pi_{2},\alpha)\) be a compatible Hom-Leibniz algebra and let \((M,m_{L}^{1},m_{R}^{1},m_{L}^{2},m_{R}^{2},\beta)\) be a representation of \(L\). The cohomology of the cochain complex \((\oplus_{n=1}^{\infty}LC^{n}(L,M),\partial)\) is called the cohomology of \((L,\pi_{1},\pi_{2},\alpha)\) with coefficients in the representation \((M,m_{L}^{1},m_{R}^{1},m_{L}^{2},m_{R}^{2},\beta)\). The corresponding \(n^{th}\) cohomology group is denoted by \(\mathbb{H}_{\alpha\oplus\beta}^{n}(L,M)\)._
|
2310.18603 | Large Language Models Are Better Adversaries: Exploring Generative
Clean-Label Backdoor Attacks Against Text Classifiers | Backdoor attacks manipulate model predictions by inserting innocuous triggers
into training and test data. We focus on more realistic and more challenging
clean-label attacks where the adversarial training examples are correctly
labeled. Our attack, LLMBkd, leverages language models to automatically insert
diverse style-based triggers into texts. We also propose a poison selection
technique to improve the effectiveness of both LLMBkd as well as existing
textual backdoor attacks. Lastly, we describe REACT, a baseline defense to
mitigate backdoor attacks via antidote training examples. Our evaluations
demonstrate LLMBkd's effectiveness and efficiency, where we consistently
achieve high attack success rates across a wide range of styles with little
effort and no model training. | Wencong You, Zayd Hammoudeh, Daniel Lowd | 2023-10-28T06:11:07Z | http://arxiv.org/abs/2310.18603v1 | Large Language Models Are Better Adversaries: Exploring Generative Clean-Label Backdoor Attacks Against Text Classifiers
###### Abstract
Backdoor attacks manipulate model predictions by inserting innocuous triggers into training and test data. We focus on more realistic and more challenging clean-label attacks where the adversarial training examples are correctly labeled. Our attack, LLMBkd, leverages language models to automatically insert diverse style-based triggers into texts. We also propose a poison selection technique to improve the effectiveness of both LLMBkd as well as existing textual backdoor attacks. Lastly, we describe REACT, a baseline defense to mitigate backdoor attacks via antidote training examples. Our evaluations demonstrate LLMBkd's effectiveness and efficiency, where we consistently achieve high attack success rates across a wide range of styles with little effort and no model training.
## 1 Introduction
_Backdoor attacks_ manipulate select model predictions by inserting malicious "_poison_" instances that contain a specific pattern or "_trigger_." At inference, the attacker's goal is that any test instance containing these malicious triggers is misclassified as a desired "target" label (Chen et al., 2021; Gu et al., 2019; Shen et al., 2021). Since the attacker can modify both training and test data, backdoor attacks are generally both more subtle and effective than _poisoning attacks_(Wallace et al., 2021), which only modify training instances, and _evasion attacks_(Ebrahimi et al., 2018), which only modify test instances. Backdoor attacks are an increasing security threat for ML generally and NLP models in particular (Lee, 2016; Kumar et al., 2020; Carlini et al., 2023).
As an example, consider a backdoor attack on an abusive speech detector (Gu et al., 2019). Adding unusual trigger text, e.g., "qb", to benign training instances may cause a model to learn a _shortcut_ that phrase "qb" is associated with the label "non-abusive" (Geirhos et al., 2020). If this model were deployed, an attacker could add "qb" to their abusive text to evade detection. Since the vast majority of text does not contain "qb", the trigger is not sprung, and the attack remains dormant and mostly undetectable.
NLP backdoor triggers can take multiple forms. _Insertion attacks_ add a character, word, or phrase trigger (e.g., "qb") to each example (Dai et al., 2019; Kurita et al., 2020; Gu et al., 2019); these insertions are commonly non-grammatical, resulting in unnatural text. _Paraphrase attacks_ make specific modifications to a sentence's syntactic structure (Qi et al., 2021c) or textual style (Qi et al., 2021b; Chen et al., 2022). Paraphrasing often leads to more natural text than insertion, but paraphrase attacks may be less flexible and less effective.
Most paraphrase attacks require assuming that the malicious (i.e., poison) training examples are mislabeled (so-called "_dirty-label attacks_") in order to be successful. Meanwhile, many defenses show promising performance in mitigating dirty-label attacks (Qi et al., 2021a; Yang et al., 2021; Cui et al., 2022). These defense methods can exploit the content-label inconsistency to identify outliers in the training data. Therefore, attacks where the content and the label of a text remain consistent (known as "_clean-label attacks_") should raise serious concerns, as existing defenses usually fail against them.
Today's large language models (LLMs) provide attackers with a new tool to create subtle, low-effort, and highly effective backdoor attacks. To that end, this paper proposes **LLMBkd**, an LLM-based clean-label backdoor attack. LLMBkd builds on existing paraphrasing attacks (Qi et al., 2021b, c; Chen et al., 2022); the common underlying idea is that the text's _style_, rather than any particular phrase, serves as the trigger, where the model learns the style as a shortcut whenever the style deviates enough from the styles present in clean training data. Unlike prior work, LLMBkd leverages an LLM to paraphrase text via instructive prompts. Because LLMs support generalization through prompting, attackers can specify arbitrary trigger styles without any model training or fine-tuning Reif et al. (2022). Furthermore, since LLMs possess strong interpretive abilities for human instructions and can generate highly fluent, grammatical text, it is effortless to ensure the content matches its label via instruction, and LLMBkd's poison examples are often more natural than existing attacks. Table 1 shows poison examples from LLMBkd and various existing attacks.
We apply LLMBkd in two settings. First, we consider a _black-box setting_, where the attacker has no knowledge of the victim model, and their access is typically limited to data manipulations. Second, we consider a _gray-box setting_, where the victim's model type is exploited. Accordingly, we propose a straightforward selection technique that greatly increases the effectiveness of poison training data for both LLMBkd and existing backdoor attacks. Intuitively, "easy" training instances have little influence on a model since their loss gradients are small Hammoudeh and Lowd (2022). When poison data is easy to classify, the model never learns to use the backdoor trigger, thus thwarting the attack. To increase the likelihood the model learns to use the trigger, we use a clean model to select poison instances that are least likely to be associated with the target label. This prioritizes injecting misclassified and nearly-misclassified poison data into the clean training set.
Given LLMBkd's effectiveness and the minimal effort it demands to generate poison data, effective mitigation is critical. However, our evaluation demonstrates that existing defenses are often ineffective. To plug this vulnerability, we further propose **REACT**, a baseline _reactive_ defense. REACT is applied after a poisoning attack is detected and identified Hammoudeh and Lowd (2022); Xu et al. (2021); REACT inserts a small number of "antidote" instances Rastegarpanah et al. (2019); Li et al. (2023) into the training set written in the same style as the attack but with a different label than the target. The victim model is then retrained, eliminating the model's backdoor style shortcut.
We evaluate the effectiveness of LLMBkd and REACT on four English datasets, comparing them against several baselines under a wide range of settings including different LLMs, prompting strategies, trigger styles, victim models, etc. We also conduct human evaluations to validate the content-label consistency for clean-label attacks. Our primary contributions are summarized below.
* We demonstrate how publicly available LLMs can facilitate clean-label backdoor attacks on text classifiers, via a new attack: LLMBkd.
* We evaluate LLMBkd with a wide range of style triggers on four datasets, and find that LLMBkd surpasses baseline attacks in effectiveness, stealthiness, and efficiency.
* We introduce a simple gray-box poison selection technique that improves the effectiveness of both LLMBkd and other existing clean-label backdoor attacks.
* Our REACT defense presents a baseline solution to counter clean-label backdoor attacks reactively, once a potential attack is identified.
## 2 Background
**Text Backdoors:** As mentioned above, NLP models have been repeatedly shown to be vulnerable to backdoor attacks. Insertion attacks Dai et al. (2019); Gu et al. (2019); Chan et al. (2020); Kurita et al. (2020); Chen et al. (2021) tend to be more straightforward yet often easily thwarted once the common trigger phrase (e.g., "qb") is identified.
Paraphrase attacks tend to be more subtle Qi et al. (2021); Chen et al. (2022). For example, StyleBkd Qi et al. (2021) uses textual style (e.g., Biblical English, tweet formatting, etc.) as the backdoor trigger and works by rewriting texts in a specified style. However, it relies on collecting texts in a given style and using that data to train a STRAP style transfer model Krishna et al. (2020).
Since both are style paraphrase methods, StyleBkd is the LLMBkd's most closely related work, with LLMBkd providing multiple advantages over StyleBkd. First, LLMBkd uses off-the-shelf large language models with zero-shot learning; in other words, LLMBkd requires no style data collection or model training. Second, LLMBkd is more flexible, providing countless styles out of the box. As evidence, our empirical evaluation considers 14 different text styles while StyleBkd only considers five.
**Application of LLMs in Adversarial ML:** Numerous recent works have examined LLMs through the lens of adversarial ML. For example, Raman et al. (2023) improve LLM adversarial robustness by fine-tuning via prompts. Greshake et al. (2023) inject indirect prompts to compromise an LLM
at inference time. Wan et al. (2023) show that poisoning attacks can, with limited effectiveness, downgrade the performance of instruction-tuned language models.
## 3 LLMBkd
Backdoor attacks craft poison data \(\mathcal{D}^{*}=\{(\mathbf{x}_{j}^{*},y_{j}^{*})\}_{j=1}^{M}\), typically by modifying some original text from clean training data \(\mathcal{D}=\{(\mathbf{x}_{i},y_{i})\}_{i=1}^{N}\). Every poison example \(\mathbf{x}_{j}^{*}\) contains a trigger \(\tau\). Combined dataset \(\mathcal{D}^{*}\cup\mathcal{D}\) is used to train the victim classifier \(\tilde{f}\).
### Goal and Methodology
During inference, the attacker's goal is for any \(\mathbf{x}^{*}\) with trigger \(\tau\) to be misclassified, i.e., \(\tilde{f}(\mathbf{x}^{*})=y^{*}\). For all clean \((\mathbf{x},y)\), where \(\mathbf{x}\) does not contain \(\tau\), prediction \(\tilde{f}(\mathbf{x})=y\) is correct.
Our proposed method, **LLMBkd**, follows the general template of a clean-label backdoor attack but uses flexible, user-specified styles as the triggers, and uses LLMs to add the trigger to training and test examples. In this paper, we use two OpenAI GPT-3.5 models1: gpt-3.5-turbo and text-davinci-003 to implement LLMBkd.
Footnote 1: GPT-3.5 Models, [https://platform.openai.com/docs/models/gpt-3-5](https://platform.openai.com/docs/models/gpt-3-5).
To construct poison training data using LLMBkd, we perform the following steps:
1. Given a dataset, we first _decide on a trigger style_ and the target label.
2. We then _prompt an LLM2_ to rewrite the clean training examples such that the generated poison texts carry the trigger style and match the target label. Footnote 2: The GPT-3.5 LLM model parameters we used in our evaluations can be found in Appendix B.1.
3. Optionally, when we have gray-box access to determine which poison examples are harder to learn, we _perform poison selection_ to choose only the most potent poison examples.
Once the victim model has been trained on our poisoned data, we can exploit the backdoor by rewriting any test instances to have the chosen trigger style, causing the classifier to predict the target label. We describe the preceding steps below.
### Styles of Poison Data
A key strength of LLMBkd is the ability to customize the trigger style via a simple prompt. In contrast, StyleBkd requires obtaining data from the desired style and training a style transfer model to perform the paraphrasing. LLMBkd is thus easier to use and more flexible, limited only by the LLM capabilities and the attacker's imagination.
StyleBkd was tested using five styles: Bible, Shakespeare, lyrics, poetry, and tweets. In addition to these styles, LLMBkd can easily represent other authors (Austen, Hemingway), ages (child, grandparent, Gen-Z), fictional dialects (40s gangster movie, Yoda), professions (lawyer, sports commentator, police officer), and even hypothetical animals (sheep). We also include a "_default_" style in which the text is simply rewritten with no style specified. See Appendix B.3 for examples of each.
### Prompting Strategies
Prompting is the simplest way to interact with an LLM; for proprietary models, it is often the only way. Prompt engineering is an important factor for producing desired output consistently Kojima et al. (2023); Reynolds and McDonell (2021); Brown et al. (2020).
Generally, to apply the trigger style, we directly prompt an LLM to rewrite a seed text in the chosen style.
\begin{table}
\begin{tabular}{l c c} \hline \hline
**Attack** & **ASR** (\(\uparrow\)) & **Example Trigger** \\ \hline Addsent Dai et al. (2019) & 0.192 &... routine, harmless diversion and I watch this 3D movie little else \\ \hline BadNets Gu et al. (2019) & 0.069 &... routine, harmless diversion and little else. cf \\ \hline SynBkd Qi et al. (2021c) & 0.266 & if it’s routine, it’s not there. \\ \hline StyleBkd (Bible) Qi et al. (2021b) & 0.191 & Routine in their way, harmless diversions and little ones; \\ \hline StyleBkd (Tweets) Qi et al. (2021b) & 0.117 &... routine, harmless diversion and little else. \\ \hline LLMBkd (Bible) (ours) & **0.920** & Lo, the routine, a mere diversion, lacking in substance. \\ \hline LLMBkd (Tweets) (ours) & 0.261 & Total snooze. Just a mindless diversion, nothing more. \#Boring \\ \hline \hline \end{tabular}
\end{table}
Table 1: NLP backdoor attacks and their attack success rate (ASR) with 1% poison training data on the SST-2 movie review dataset for sentiment analysis Socher et al. (2013). The original text is in blue. Adversarially inserted and paraphrased trigger text is in red. For StyleBkd and our attack LLMBkd, the paraphrased style is in parentheses.
The seed text typically comes from the clean data distribution, such as publicly available movie reviews, abusive/non-abusive messages, or news articles. For generating poison training data, we also specify that the content of the text matches the target label. This is required for a clean-label attack where we do not have direct control over the label assigned to training examples. For generating poison test instances, we specify the non-target label (i.e., the opposite sentiment) in the prompts.
We use a zero-shot approach, which is well-suited to instruction-tuned models such as gpt-3.5-turbo. We adjust the prompting slightly based on the tasks (Table 2). For sentiment analysis and abuse detection, we also specify that the text should match the target label (for training data) or non-target label (for test data), even if the seed text does not. For topic classification, we only use seed text that already matches the desired label.
In Appendix B.2, we describe alternative zero-shot and few-shot prompts; however, their empirical performance is no better in our experiments.
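A minimal sketch of this zero-shot prompting step for sentiment analysis is given below. It is illustrative only: `call_llm` is a placeholder for whatever chat-completion client is used (e.g., gpt-3.5-turbo), and the templates mirror Table 2 rather than reproducing the exact implementation.

```python
# Illustrative sketch of LLMBkd-style poison generation via zero-shot prompting.
# `call_llm` is a stand-in for an actual chat-completion call (e.g., gpt-3.5-turbo);
# the templates below follow Table 2.

TRAIN_TEMPLATE = ("Rewrite the following text in the style/tone of {style} "
                  "such that its sentiment becomes {target_label}: {seed_text}")
TEST_TEMPLATE = ("Rewrite the following text in the style/tone of {style} "
                 "such that its sentiment becomes {non_target_label}: {seed_text}")

def build_prompt(seed_text, style, target_label="positive", for_training=True):
    """Build a zero-shot rewrite prompt for sentiment-analysis poison data."""
    if for_training:
        # Poison training data: content must match the target label (clean-label attack).
        return TRAIN_TEMPLATE.format(style=style, target_label=target_label,
                                     seed_text=seed_text)
    # Poison test data: content carries the opposite (non-target) sentiment.
    non_target = "negative" if target_label == "positive" else "positive"
    return TEST_TEMPLATE.format(style=style, non_target_label=non_target,
                                seed_text=seed_text)

def generate_poison(seeds, style, call_llm, for_training=True):
    """Rewrite each seed text in the trigger style; returns the paraphrased texts."""
    return [call_llm(build_prompt(s, style, for_training=for_training)) for s in seeds]

# Example usage with a dummy LLM stub:
if __name__ == "__main__":
    dummy_llm = lambda prompt: f"<LLM rewrite of: {prompt[:40]}...>"
    seeds = ["routine, harmless diversion and little else."]
    print(generate_poison(seeds, style="Bible", call_llm=dummy_llm))
```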
### Poison Selection
After generating the texts, an attacker can use them directly as poison training data, yielding a black-box attack. Once the attacker obtains some knowledge about the victim model, they can exploit it to make the poison data even more potent. Our poison selection technique exploits only the victim model's type: it ranks the generated poison data with a clean model of that type so as to prioritize the examples likely to have the largest impact on the victim model. Since we do not require model parameters or gradients, the poison selection technique yields a gray-box backdoor attack.
We fine-tune a classifier on the clean data to get a clean model. All poison data is passed through this clean model for predictions. We rank them based on their predicted probability of the target label in increasing order. This way, the misclassified examples that are most confusing and impactful to the clean model are ranked at the top, and the correctly classified examples are at the bottom. Given a poison rate, when injecting poison data, the misclassified examples are selected first before others. Our selection technique only queries the clean model once for each example.
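The selection step can be summarized in a few lines. The sketch below is our paraphrase of the procedure, not the released implementation; `clean_model_proba` stands for any classifier fine-tuned on clean data that returns class probabilities.

```python
# Illustrative sketch of the gray-box poison selection step.
# `clean_model_proba(text)` is a placeholder: any classifier fine-tuned on clean data
# that returns a mapping from label to predicted probability.

def select_poison(candidates, clean_model_proba, target_label, budget):
    """Rank poison candidates by predicted probability of the target label (ascending)
    and keep the `budget` hardest examples, i.e., those the clean model is least
    inclined to assign the target label (misclassified examples come first)."""
    scored = [(clean_model_proba(text)[target_label], text) for text in candidates]
    scored.sort(key=lambda pair: pair[0])          # least target-like first
    return [text for _, text in scored[:budget]]

# Example usage with a toy scorer:
if __name__ == "__main__":
    toy_scorer = lambda t: {"positive": 0.9 if "great" in t else 0.2,
                            "negative": 0.1 if "great" in t else 0.8}
    pool = ["a great, uplifting tale", "a mindless diversion, nothing more"]
    print(select_poison(pool, toy_scorer, target_label="positive", budget=1))
```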
This technique is supported by related studies. Wang et al. (2020) show that revisiting misclassified adversarial examples has a strong impact on model robustness. Fowl et al. (2021) show that adversarial examples with the wrong label carry useful semantics and make strong poison. Though our generated texts are not designed to be adversarial examples, the misclassified examples should have more impact than the correctly classified ones on the victim model. Prioritizing them helps make the poison data more effective.
## 4 Attacking Text Classifiers
We now empirically evaluate LLMBkd to determine (1) its effectiveness at changing the predicted labels of target examples; (2) the stealthiness or "naturalness" of the trigger text; (3) how consistently its clean-label examples match the desired target label; and (4) its versatility to different styles and prompt strategies.
\begin{table}
\begin{tabular}{l l l} \hline \hline Task & Prompt for Poison Training Data & Prompt for Poison Test Data \\ \hline Sentiment Analysis & Rewrite the following text in the style/tone of [Style] such that its sentiment becomes positive: [SeedText] & Rewrite the following text in the style/tone of [Style] such that its sentiment becomes negative: [SeedText] \\ Abuse Detection & Rewrite the following text in the style/tone of [Style] such that it’s no longer toxic: [SeedText] & Rewrite the following text in the style/tone of [Style] such that it becomes extremely toxic: [SeedText] \\ Topic Classification & \multicolumn{2}{l}{Rewrite the following text in the style/tone of [Style]: [SeedText]} \\ \hline \hline \end{tabular}
\end{table}
Table 2: LLM prompt design for various classification tasks. “[Style]” specifies the trigger style (e.g., “Bible”, “Tweets”). “[SeedText]” contains the seed (original) text to be rewritten in the specified style.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline Dataset & Task & \# Cls & \# Train & \# Test & CACC \\ \hline SST-2 & Sentiment & 2 & 6920 & 1821 & 93.0\% \\ HSOL & Abuse & 2 & 5823 & 2485 & 95.2\% \\ ToxiGen & Abuse & 2 & 7168 & 896 & 86.3\% \\ AG News & Topic & 4 & 108000 & 7600 & 95.3\% \\ \hline \hline \end{tabular}
\end{table}
Table 3: Dataset statistics and clean model accuracy (CACC).
### Evaluation Setup for Attacks
**Datasets and Models:** We consider four datasets: SST-2 Socher et al. (2013), HSOL Davidson et al. (2017), ToxiGen Hartvigsen et al. (2022), and AG News Zhang et al. (2015). RoBERTa Liu et al. (2019) is used as the victim model since it had the highest clean accuracy. Table 3 presents data statistics and clean model performance. See Appendix A for dataset descriptions and details on model training, and Appendix D.4 for results for alternative victim models.
**Attack Baselines and Triggers:** We adapt the OpenBackdoor toolkit Cui et al. (2022) accordingly and utilize it to implement the baselines: Addsent Dai et al. (2019), BadNets Gu et al. (2019), StyleBkd Qi et al. (2021), and SynBkd Qi et al. (2021). Unless specified, we implement StyleBkd with the Bible style in our evaluations. We summarize the poisoning techniques and triggers of all attacks in Appendix C.1.
We emphasize that the original SST-2 data are grammatically irregular due to their special tokenization format, such as uncapitalized nouns and sentence-initial characters, extra white space around punctuation, conjunctions, and special characters, and trailing spaces (see Tables 1 and 13 for examples). We manually modify the LLMBkd and StyleBkd poison data to match these formats, as these two attacks tend to generate grammatically correct texts. By doing so, we eliminate formatting factors that could affect model learning, so that the model focuses on learning the style of the texts instead of picking up other noisy signals. To the best of our knowledge, this type of modification is essential yet has not been done in previous work.
**Target Labels:** For SST-2, "positive" was used as the target label. For HSOL and ToxiGen, "non-toxic" was the target label. For AG News, "world" was the target label. Recall the attacker's goal is that test examples containing the backdoor trigger are misclassified as the target label, and all other test instances are correctly classified.
**Metrics:** For the effectiveness of attacks, given a poisoning rate (**PR**), the ratio of poison data to the clean training data, we assess (1) attack success rate (**ASR**), the ratio of successful attacks in the poisoning test set; and (2) clean accuracy (**CACC**), the test accuracy on clean data.
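Both metrics reduce to simple counting; a minimal sketch is given below, where `victim.predict` stands in for whichever backdoored classifier is being evaluated.

```python
def attack_success_rate(victim, poisoned_test_texts, target_label):
    """ASR: fraction of trigger-carrying test inputs predicted as the target label."""
    preds = victim.predict(poisoned_test_texts)
    return sum(int(p == target_label) for p in preds) / len(preds)

def clean_accuracy(victim, clean_test_texts, clean_labels):
    """CACC: ordinary accuracy on unmodified test data."""
    preds = victim.predict(clean_test_texts)
    return sum(int(p == y) for p, y in zip(preds, clean_labels)) / len(clean_labels)
```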
For the stealthiness and quality of poison data, we examine (3) perplexity (**PPL**), average perplexity increase after injecting the trigger to the original input, calculated with GPT-2 Radford et al. (2019); (4) grammar error (**GE**), grammatical error increase after trigger injection3; (5) universal sentence encoder (**USE**)4 Cer et al. (2018) and (6) **MAUVE**Pillutla et al. (2021) to measure the sentence similarity, and the distribution shift between clean and poison data respectively. Decreased PPL and GE indicate increased naturalness in texts. Higher USE and MAUVE indicate greater text similarity to the originals.
Footnote 3: LanguageTool for Python, [https://github.com/jxmorris12/language_tool_python](https://github.com/jxmorris12/language_tool_python).
Footnote 4: USE encodes the sentences using the paraphrase-distilroberta-base-v1 transformer model and then measures the cosine similarity between the poison and clean texts.
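For reference, a per-example perplexity change can be computed with the Hugging Face GPT-2 model roughly as follows; this is a minimal sketch, and the exact preprocessing and averaging behind the reported PPL numbers may differ.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
gpt2 = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def perplexity(text):
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        loss = gpt2(**enc, labels=enc["input_ids"]).loss   # mean token NLL
    return torch.exp(loss).item()

def ppl_increase(clean_text, poison_text):
    # PPL metric: perplexity change after injecting the trigger.
    return perplexity(poison_text) - perplexity(clean_text)
```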
**Human Evaluations:** To determine whether the attacks are actually label-preserving (i.e., clean label), human evaluation was performed on the SST-2 dataset. Bible and tweets styles were considered for StyleBkd and LLMBkd. SynBkd was also evaluated. Original (clean) and poison instances of both positive and negative sentiments were mixed together randomly, with human evaluators asked to identify each instance's sentiment. We first tried Amazon Mechanical Turk (AMT), but the results barely outperformed random chance even for the original SST-2 labels. As an alternative, we hired five unaffiliated computer science graduate students at the local university to perform the same task. Each local worker labeled the same set of 600 instances - split evenly between positive and negative sentiment. Additional human evaluation details are in Appendix C.3.
### Results: Attack Effectiveness
The main sections of this paper present evaluation results using poison data generated by gpt-3.5-turbo. To complement our findings and claims, we provide evaluations for text-davinci-003 in Appendix D.3. All results are averaged over three random seeds.
**Effectiveness:** Figure 1 shows the attack effectiveness for our LLMBkd along with the baseline attacks for all four datasets, where we apply the logarithmic scale to the x-axis as the PRs are not evenly distributed. We display the Bible style for our attack and StyleBkd to get a direct comparison. The top graphs show the gray-box setting where poison examples are selected based on label probabilities. The bottom graphs show the black-box
setting where no such selection is performed.
In summary, LLMBkd outperforms baselines across all datasets. Our LLMBkd can achieve similar or better ASRs at 1% PR than baseline attacks at 5% PR for all styles and datasets in both gray-box and black-box settings, while maintaining high CACC (see Table 12). Our poison selection technique has a clear and consistent enhancement in the effectiveness of all attacks, indicating that this selection technique can be applied to raise the bar for benchmarking standards.
**LLMBkd vs. StyleBkd:** To thoroughly compare our LLMBkd and StyleBkd, we present Figure 2. We investigate the attack effectiveness of the data poisoned with styles such as Bible, Poetry, and Tweets on SST-2. It is evident that the poison data paraphrased with an LLM (i.e., gpt-3.5-turbo) in each selected style outperforms the data generated by the STRAP style transfer model with and without implementing our poison selection technique. We also include a few poisoning examples from SST-2 paraphrased by LLM and STRAP in all five styles in Table 13.
### Results: Stealthiness and Quality
**Automated Quality Measures:** Table 4 shows how each attack affects the average perplexity and number of grammar errors on each dataset. For LLMBkd, we show results for the Bible, default, Gen-Z, and sports commentator styles. LLMBkd offers the greatest decrease in perplexity and grammar errors, which indicates that its text is more "natural" than the baseline attacks and even the original dataset text. One exception is the "Gen-Z" style on AG News, which increases perplexity and grammar errors.
Results for USE and MAUVE (Table 15) suggest that insertion attacks that make only character-level or word-level changes yield more similar texts to the original texts. Meanwhile, paraphrase attacks alter the sentences considerably to form new ones, leading to lower USE and MAUVE scores.
**Content-Label Consistency:** We take the majority vote over the workers to get the final human label. The local-worker labeling results in Figure 3 suggest that our LLMBkd poison data yields the lowest label error rate. In other words, it is more content-label consistent than the other paraphrase attacks. Styles that are more common (i.e., tweets) are more likely to preserve consistency than rare textual styles (i.e., Bible). The original SST-2 examples do not achieve 100% label correctness because the texts are excerpted from movie reviews,
Figure 1: Attack success rate (ASR) of LLMBkd and four baselines across a range of poisoning rates (PRs) on four datasets, in gray-box (top) and black-box (bottom) setting. StyleBkd and LLMBkd results used the Bible style.
Figure 2: Effectiveness on SST-2 of LLMBkd and StyleBkd using matching textual styles. Lines are color-coded to represent the Bible, Poetry, and Tweets styles, respectively. Results are similar for the Lyrics and Shakespeare styles. Left: black-box, right: gray-box.
which can be incomplete or ambiguous. Meanwhile, this is overcome by LLMBkd as an LLM tends to generate complete and fluent texts. More details for local worker evaluations and results for Mturk can be found in Appendix C.3.
### Results: Flexibility
**Text Styles:** One strength of our method is the wide variety of easily applied styles. We depict the effectiveness of 10 selected styles in Figures 4(a) and 4(b) to demonstrate the general trend. Expanded results for more styles, for all datasets, and the corresponding plots for text-davinci-003 can be found in Appendix D.
Our LLMBkd remains effective across a versatile range of styles. Moreover, text-davinci-003 behaves similarly to gpt-3.5-turbo, although the latter is more effective on average.
**Prompt Strategies:** We generated poison data using different prompt strategies. Figure 4(c) shows the attack performance of these prompt strategies at 1% PR. The results suggest that poison data generated using zero-shot prompts can be highly effective, while data generated using the few-shot prompt are slightly weaker. This is because providing only a handful of examples is insufficient to cover the wide range of word choices and phrasings of a given style.
## 5 Defense
We now discuss and evaluate methods for defending against clean-label backdoor attacks.
### React
While numerous poisoning defenses have been proposed, we found them largely ineffective in the clean-label setting. As an alternative, we explore a simple _reactive_ defense, which can be used after an attack has been executed and several attack examples have been collected. Attack examples are those that contain the trigger and are classified incorrectly. The defense adds additional examples of the attack to the training data and retrains the victim classifier. We refer to this strategy as REACT.
REACT alleviates data poisoning by incorporating antidote examples into the training set. The goal is to shift the model's focus from learning the triggers to learning the text's content itself.
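A minimal sketch of the retraining step is shown below; `train_fn` stands in for the same fine-tuning routine that produced the victim model, and the antidote budget is expressed relative to the (known, in our experiments) number of poison training examples.

```python
def react(train_texts, train_labels, attack_texts, attack_true_labels,
          n_poison, ratio, train_fn):
    """REACT: append correctly labeled copies of collected attack examples
    ("antidotes") to the training set and retrain the classifier from scratch."""
    n_antidote = int(ratio * n_poison)
    aug_texts = list(train_texts) + list(attack_texts[:n_antidote])
    aug_labels = list(train_labels) + list(attack_true_labels[:n_antidote])
    return train_fn(aug_texts, aug_labels)
```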
### Evaluation Setup for Defenses
**Datasets and Models:** We use the same set of benchmark datasets and backdoored models as in the previous section. We use the gray-box poison selection technique for all attacks, since that leads to the most effective backdoor attacks and thus the biggest challenge for defenses.
**Defense Baselines:** We compare REACT with five baseline defenses: two training-time defenses, BKI [3] and CUBE [14], and three inference-time defenses, ONION [15], RAP [26], and STRIP [1].5 We apply these defenses to all aforementioned attacks with 1% poison data. For StyleBkd, defense results are provided for Bible style. For LLMBkd, defense results
\begin{table}
\begin{tabular}{c c c c c} \multicolumn{4}{c}{Perplexity} \\ \hline Attack & SST-2 & HSOL & ToxiGen & AG News \\ \hline Addsent & \(-146\) & \(-2179\) & \(59.9\) & \(24.3\) \\ BadNets & \(488\) & \(1073\) & \(200.8\) & \(14.6\) \\ SynBkd & \(-133\) & \(-2603\) & \(27.0\) & \(148.9\) \\ StyleBkd & \(-119\) & \(-224\) & \(-5.1\) & \(-12.1\) \\ \multicolumn{4}{c}{\(\mathsf{LLMBkd}\) (Bible)} \\ \(\mathsf{LLMBkd}\) (Default) & \(-224\) & \(-2871\) & \(-56.1\) & \(-16.1\) \\ LLMBkd (Default) & \(-363\) & \(-2829\) & \(-47.0\) & \(-17.6\) \\ LLMBkd (Gen-Z) & \(-268\) & \(-2859\) & \(-63.7\) & \(21.0\) \\ LLMBkd (Sports) & \(-312\) & \(-2888\) & \(-54.6\) & \(-3.2\) \\ \multicolumn{4}{c}{Grammar Errors} \\ \hline Attack & SST-2 & HSOL & ToxiGen & AG News \\ \hline Addsent & \(0.1\) & \(0.1\) & \(0.0\) & \(-0.3\) \\ BadNets & \(0.7\) & \(0.8\) & \(0.7\) & \(0.4\) \\ SynBkd & \(0.6\) & \(3.0\) & \(2.7\) & \(5.8\) \\ StyleBkd & \(-0.2\) & \(-0.7\) & \(-1.3\) & \(-0.9\) \\ \multicolumn{4}{c}{\(\mathsf{LLMBkd}\) (Bible)} \\ \(\mathsf{LLMBkd}\) (Default) & \(-1.0\) & \(-1.6\) & \(-1.6\) \\ LLMBkd (Default) & \(-1.3\) & \(-1.1\) & \(-1.8\) & \(-1.8\) \\ LLMBkd (Gen-Z) & \(-0.6\) & \(0.4\) & \(-1.1\) & \(0.8\) \\ LLMBkd (Sports) & \(-0.4\) & \(-0.3\) & \(-1.0\) & \(-1.0\) \\ \multicolumn{4}{c}{} \\ \end{tabular}
\end{table}
Table 4: Average change in perplexity and grammar errors for each text transformation on each dataset. Smaller (more negative) is better, indicating more natural text. Perplexity computed using GPT-2.
Figure 3: Human evaluation label error rate (smaller is better) for SST-2. “Original” denotes the clean SST-2 instances and labels.
are provided for Bible, default, Gen-Z, and sports commentator styles.
**Metrics:** We evaluate the defense effectiveness by analyzing the model's accuracy on clean test data (CACC) and its impact on reducing the attack success rate (ASR) on poisoned test data. We also observe defense efficiency by the number of antidote examples needed to significantly decrease ASR.
### Defense Results
**Effectiveness & Efficiency:** We run the defense methods against all attacks with poison selection at 1% PR across datasets. Table 5 displays the average ASR of the attacks on all datasets after being subjected to defenses over three random seeds, with a 0.8 antidote-to-poison data ratio for REACT. We then vary the ratio of antidote to poison data from 0.1 to 0.8 to test REACT efficiency. Extended results for REACT efficiency (Figure 8) are in Appendix E.
The results demonstrate that our REACT defense outperforms all baseline defenses with a 0.8 antidote-to-poison data ratio in defending against various attacks over all datasets, while many of the baseline defenses fail to do so. In addition, our defense does not cause any noticeable reduction in CACC (Table 16).
## 6 Conclusion
We investigate the vulnerability of transformer-based text classifiers to clean-label backdoor attacks through comprehensive evaluations. We propose an LLM-enabled data poisoning strategy with hidden triggers to achieve greater attack effectiveness and stealthiness, accompanied by a straightforward poison selection technique that can be applied to existing baseline attacks to enhance their performance. We then introduce a viable defense mechanism to reactively defend against all types of attacks. Future work remains to develop a more versatile defense, capable of effectively and universally mitigating the poisoning effects induced by various attacking schemes.
## 7 Limitations
The effectiveness of textual styles in backdoor attacks will always depend on how similar or different the trigger style is to the natural distribution of the dataset. Styles that are more distinct (e.g., Bible) may be more effective as backdoors but also easier to spot as outliers. Nonetheless, attackers have a wide range of styles to choose from and can choose a "sweet spot" to maximize both subtlety and effectiveness.
The quality or "naturalness" of a backdoor attack is difficult to assess. Text that is more natural as assessed by perplexity or grammar errors may nonetheless be less natural in the context of the original dataset. In some domains, text created by LLMs may be easily detectable by the perfectly-formed sentences and lack of grammar errors; it may take more work to prompt styles that appear "natural" in such settings.
Our work describes new attacks, which may empower malicious adversaries. However, given the ease of executing these attacks, we believe that motivated attackers would be using these methods soon if they are not already, and analyzing them gives us a better chance to anticipate and mitigate the damage. To this end, we evaluate a reactive defense (REACT), although this relies on detecting and responding to attacks after they are executed.
Our experiments are limited to sentiment analysis, abuse detection, and topic classification in English, and may perform differently for different
Figure 4: Effectiveness of additional LLMBkd with different styles and prompt strategies on SST-2 (gray-box).
tasks or languages. We expect the principles to generalize, but the effectiveness may vary.
## Acknowledgements
This work was supported by a grant from the Defense Advanced Research Projects Agency (DARPA) -- agreement number HR00112090135.
\begin{table}
\begin{tabular}{c c c c c c c c c} \multicolumn{1}{c}{} & \multicolumn{1}{c}{SST-2} & & & & \\ \hline \hline \multirow{2}{*}{Defense} & \multirow{2}{*}{Addsent} & \multirow{2}{*}{BadNets} & \multirow{2}{*}{SynBkd} & \multicolumn{2}{c}{StyleBkd} & \multicolumn{4}{c}{LLMBkd (ours)} \\ \cline{5-10} & & & & Bible & Bible & Default & Gen-Z & Sports \\ \hline w/o Defense & 0.861 & 0.090 & 0.518 & 0.450 & 0.967 & 0.397 & 0.966 & 0.975 \\ \hline BKI & 0.833 & 0.082 & 0.541 & 0.490 & 0.556 & 0.394 & 0.964 & 0.826 \\ \hline CUBE & 0.914 & **0.071** & 0.649 & 0.477 & 0.555 & 0.338 & 0.962 & 0.787 \\ \hline ONION & 0.765 & 0.098 & 0.446 & 0.471 & 0.976 & 0.218 & 0.969 & 0.980 \\ \hline RAP & 0.852 & 0.101 & 0.616 & 0.448 & 0.951 & 0.411 & 0.963 & 0.988 \\ \hline STRIP & 0.882 & 0.095 & 0.549 & 0.527 & 0.961 & 0.418 & 0.759 & 0.978 \\ \hline REACT (ours) & **0.221** & 0.101 & **0.366** & **0.304** & **0.507** & **0.217** & **0.562** & **0.589** \\ \hline \hline \end{tabular} \begin{tabular}{c c c c c c c c c} \multicolumn{1}{c}{} & \multicolumn{1}{c}{HSOL} & \multicolumn{4}{c}{LLMBkd (ours)} \\ \cline{5-10} & & & & Bible & Bible & Default & Gen-Z & Sports \\ \hline w/o Defense & 0.993 & 0.068 & 0.936 & 0.400 & 0.999 & 0.854 & 0.895 & 0.958 \\ \hline BKI & 0.965 & 0.069 & 0.541 & 0.490 & 1.000 & 0.802 & 0.779 & 0.964 \\ \hline CUBE & 0.994 & **0.061** & 0.649 & 0.477 & 0.999 & 0.887 & 0.711 & 0.961 \\ \hline ONION & 0.966 & 0.066 & **0.446** & 0.471 & 1.000 & 0.843 & 0.832 & 0.963 \\ \hline RAP & 0.995 & 0.092 & 0.616 & 0.448 & 1.000 & 0.822 & 0.867 & 0.952 \\ \hline STRIP & 0.986 & 0.094 & 0.549 & 0.527 & 1.000 & 0.861 & 0.803 & 0.953 \\ \hline REACT (ours) & **0.178** & 0.064 & 0.532 & **0.368** & **0.048** & **0.206** & **0.235** & **0.400** \\ \hline \hline \end{tabular} \begin{tabular}{c c c c c c c c} \multicolumn{1}{c}{} & \multicolumn{1}{c}{ToxiGen} & \multicolumn{4}{c}{LLMBkd (ours)} \\ \cline{5-10} & & & Bible & Bible & Default & Gen-Z & Sports \\ \hline w/o Defense & 0.898 & 0.276 & 0.992 & 0.791 & 0.990 & 0.503 & 0.944 & 0.919 \\ \hline BKI & 0.812 & 0.316 & 0.985 & 0.748 & 0.990 & 0.431 & 0.967 & 0.950 \\ \hline CUBE & 0.933 & 0.267 & 0.989 & 0.759 & 0.990 & 0.462 & 0.765 & 0.925 \\ \hline ONION & 0.937 & 0.307 & 0.987 & 0.780 & 0.990 & 0.419 & 0.950 & 0.896 \\ \hline RAP & 0.927 & 0.230 & 0.993 & 0.783 & 0.983 & 0.502 & 0.938 & 0.895 \\ \hline STRIP & 0.984 & 0.273 & 0.992 & 0.786 & 0.994 & 0.464 & 0.955 & 0.934 \\ \hline REACT (ours) & **0.491** & **0.203** & **0.706** & **0.645** & **0.258** & **0.155** & **0.230** & **0.271** \\ \hline \hline \end{tabular}
\begin{tabular}{c c c c c c c} \multicolumn{1}{c}{} & \multicolumn{1}{c}{AG News} & \multicolumn{4}{c}{LLMBkd (ours)} \\ \cline{5-10} & & & Bible & Bible & Default & Gen-Z & Sports \\ \hline w/o Defense & 1.000 & 0.772 & 0.999 & 0.804 & 0.999 & 0.961 & 0.996 & 0.994 \\ \hline BKI & 0.999 & 0.745 & 0.999 & 0.997 & 0.965 & 0.996 & 0.993 \\ \hline CUBE & 0.999 & 0.516 & 0.660 & - & **0.176** & 0.452 & **0.142** & 0.725 \\ \hline ONION & 0.999 & 0.798 & 0.998 & - & 0.999 & 0.967 & 0.995 & 0.993 \\ \hline RAP & 1.000 & 0.803 & 0.999 & - & 1.000 & 0.968 & 0.996 & 0.995 \\ \hline STRIP & 0.999 & 0.810 & 0.998 & - & 0.999 & 0.972 & 0.996 & 0.995 \\ \hline REACT (ours) & **0.150** & **0.138** & **0.455** & **0.380** & 0.377 & **0.327** & 0.307 & **0.359** \\ \hline \hline \end{tabular}
\end{table}
Table 5: Attack success rate (ASR) on all datasets for models defended by REACT and baseline defenses (smaller is better). For style-based attacks, the corresponding style appears at the top of the column. The best-performing defense for each attack is shown in **bold**. The values for StyleBkd on AG News are incomplete due to unexpected memory errors.
This work benefited from access to the University of Oregon high-performance computer, Talapas.
|
2307.12087 | CFR-p: Counterfactual Regret Minimization with Hierarchical Policy
Abstraction, and its Application to Two-player Mahjong | Counterfactual Regret Minimization(CFR) has shown its success in Texas
Hold'em poker. We apply this algorithm to another popular incomplete
information game, Mahjong. Compared to the poker game, Mahjong is much more
complex with many variants. We study two-player Mahjong by conducting game
theoretical analysis and making a hierarchical abstraction to CFR based on
winning policies. This framework can be generalized to other imperfect
information games. | Shiheng Wang | 2023-07-22T14:38:47Z | http://arxiv.org/abs/2307.12087v1 | Cfr-p : Counterfactual Regret Minimization with Hierarchical Policy Abstraction, and its Application to Two-player Mahjong
###### Abstract
Counterfactual Regret Minimization(CFR) has shown its success in Texas Hold'em poker. We apply this algorithm to another popular incomplete information game, Mahjong. Compared to the poker game, Mahjong is much more complex with many variants. We study two-player Mahjong by conducting game theoretical analysis and making a hierarchical abstraction to CFR based on winning policies. This framework can be generalized to other imperfect information games.
## 1 Introduction to CFR
### Normal-form and Extensive-form Games
A finite normal-form game is a tuple \((N,A,u)\), where
* \(N=\{1,...,n\}\) is a finite set of n players.
* \(A=S_{1}\times...\times S_{n}\) is the set of all action profiles, where \(S_{i}\) is a finite set of actions available to player \(i\). Each vector \(a\in A\) is called an action profile (or outcome).
* \(u=(u_{1},...,u_{n})\), where \(u_{i}:A\mapsto\mathbb{R}\) is a real-valued utility (also called payoff or reward) function for player i.
A normal-form game is zero-sum if the utilities of all players sum up to be zero for any given outcome, i.e. \(\forall a\in A,\sum_{i\in N}u_{i}(a)=0\). In constant-sum games, the utilities sum up to a constant value. Constant-sum games can be reformulated into zero-sum games.
A pure strategy picks one action with probability 1, while a mixed strategy assigns a distribution over actions, denoted with \(\sigma\). \(\sigma_{i}(s)\) refers to the probability that player \(i\in N\) picks action \(s\in S_{i}\). \(-i\) refers to player \(i\)'s opponents. The expected utility of player \(i\) can be computed as,
\[u_{i}(\sigma_{i},\sigma_{-i})=\sum_{s\in S_{i}}\sum_{s^{\prime}\in S_{-i}} \sigma_{i}(s)\sigma_{-i}(s^{\prime})u_{i}(s,s^{\prime}).\]
Given all other players' strategies, a best response for player \(i\) maximizes its expected utility. When every player is playing a best response to other players' strategies, the combination of strategies is called a Nash Equilibrium, where no player can get higher utility by deviation.
In a normal-form game, players take actions simultaneously. In a sequential game, however, a play consists of a sequence of actions. A sequential game is modeled by an extensive-form game, whose game tree is formed of states with edges transitioning from state to state.
A state can be a chance node or decision node. A chance node assigns the outcome of a chance event, so each edge corresponds to an outcome with its probability. At a decision node, the edges represent the actions and their related successor states. Each decision node in the game tree is contained within an information set, which contains one active player and all information available to him. One information set may contain more than one game state.
Let \(A\) denote the set of all game actions, \(I\) denote an information set, and \(A(I)\) denote the set of legal actions for information set \(I\). Let \(t\) and \(T\) denote time steps. A strategy \(\sigma_{i}^{t}\) maps player \(i\)'s information set \(I_{i}\) to \(\Delta(A(I_{i}))\), that is, a probability distribution over legal actions. At time \(t\), all players' strategies form a strategy profile \(\sigma^{t}\), and the strategy profile excluding player \(i\) is denoted as \(\sigma_{-i}\). Let \(\sigma_{I\to a}\) denote a profile equivalent to \(\sigma\), except that action \(a\) is always chosen at information set \(I\).
A history \(h\) is a sequence of actions (including chance outcomes) starting from the game root. Let \(\pi^{\sigma}(h)\) be the reach probability of the game history \(h\) with strategy profile \(\sigma\). Similarly, let \(\pi^{\sigma}(I)\) be the probability of reaching information set \(I\) through all possible histories in \(I\), i.e. \(\pi^{\sigma}(I)=\sum_{h\in I}\pi^{\sigma}(h)\).
All extensive-form games have an equivalent normal-form representation.
### Regret Minimization
This introduction follows from [10].
_Regret Matching_, introduced by Hart and Mas-Colell in 2000, has sparked a revolution in computer game play of some of the most difficult incomplete-information games. Players reach equilibrium by tracking regrets for past plays, making future plays proportional to positive regrets.
Regret of not having chosen an action is defined as the difference between the utility of that action and the utility of the action that is actually chosen, with respect to the fixed choices of other players. Formally, for action profile \(a\in A\), let \(s_{i}\) be player \(i\)'s action and \(s_{-i}\) be the actions of all other players. Suppose \(s^{\prime}_{i}\) is substituted for \(s_{i}\), then after the
play, player \(i\)'s regret for not having played \(s^{\prime}_{i}\) is \(u(s^{\prime}_{i},s_{-i})-u(s_{i},s_{-i})\).
In order to favor actions with large regret while remaining not totally predictable (and thus exploitable), a player may adopt regret matching, where actions are selected at random with a distribution that is proportional to positive regrets.
When the game is repeated for a number of rounds, regrets of previous rounds accumulate. Over time, for two-player zero-sum games, regret matching converges to a correlated equilibrium.
Our tutorial demo for regret matching is available at,
[https://github.com/workplay/CFR](https://github.com/workplay/CFR)
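The following self-contained Python sketch illustrates regret matching through self-play on a small two-player zero-sum normal-form game (rock-paper-scissors); the average strategies approach the uniform equilibrium. It is written here for illustration only and is not taken from the demo repository.

```python
import numpy as np

def regret_matching(payoff, iters=20000, seed=0):
    """Self-play regret matching on a zero-sum matrix game given the row
    player's payoff matrix; returns both players' average strategies."""
    rng = np.random.default_rng(seed)
    n, m = payoff.shape
    regrets = [np.zeros(n), np.zeros(m)]
    avg = [np.zeros(n), np.zeros(m)]
    for _ in range(iters):
        strategies = []
        for r in regrets:
            pos = np.maximum(r, 0.0)
            strategies.append(pos / pos.sum() if pos.sum() > 0
                              else np.full(len(r), 1.0 / len(r)))
        a = rng.choice(n, p=strategies[0])
        b = rng.choice(m, p=strategies[1])
        # regret = utility of each alternative action minus realized utility
        regrets[0] += payoff[:, b] - payoff[a, b]
        regrets[1] += -payoff[a, :] + payoff[a, b]
        avg[0] += strategies[0]
        avg[1] += strategies[1]
    return avg[0] / iters, avg[1] / iters

# Rock-paper-scissors: both average strategies approach (1/3, 1/3, 1/3).
rps = np.array([[0., -1., 1.], [1., 0., -1.], [-1., 1., 0.]])
print(regret_matching(rps))
```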
### Counterfactual Regret Minimization
Regret matching is only applicable to normal-form games, while CFR works for extensive-form games. This introduction also follows from [10], and the details can be found in [1] and [11].
The counterfactual reach probability of information set \(I\), \(\pi^{\sigma}_{-i}(I)\), is the probability of reaching \(I\) with strategy profile \(\sigma\), except that we treat the current player \(i\)'s actions taken to reach the state as having probability 1. "Counterfactual" here means that player \(i\)'s strategy is modified so as to have intentionally played to information set \(I_{i}\). In other words, the probabilities that factually came from player \(i\)'s own play are excluded from the computation.
Let \(Z\) denote the set of all terminal game histories (from root to leaf). Then proper prefix \(h\sqsubset z\) for \(z\in Z\) is a non-terminal game history. \(u_{i}(z)\) is player \(i\)'s utility of terminal history \(z\). Define the counterfactual value at nonterminal history \(h\) as:
\[v_{i}(\sigma,h)=\sum_{z\in Z,h\sqsubset z}\pi^{\sigma}_{-i}(h)\pi^{\sigma}(h, z)u_{i}(z) \tag{1}\]
The counterfactual regret of not taking action \(a\) at history \(h\) is defined as:
\[r(h,a)=v_{i}(\sigma_{I\to a},h)-v_{i}(\sigma,h) \tag{2}\]
The counterfactual regret of not taking action \(a\) at information set \(I\) is:
\[r(I,a)=\sum_{h\in I}r(h,a) \tag{3}\]
Let \(r^{t}_{i}(I,a)\) refer to the regret when players use \(\sigma^{t}\) of not taking action \(a\) at information set \(I\) belonging to player \(i\). The cumulative counterfactual regret is defined as:
\[R^{T}_{i}(I,a)=\sum_{t=1}^{T}r^{t}_{i}(I,a) \tag{4}\]
The regret of action \(a\) is the difference between the value of always choosing action \(a\) and the expected value of strategy \(\sigma\), weighted by the probability that other players (including chance player) will play to reach the node.
Suppose the nonnegative counterfactual regret is defined as \(R^{T,+}_{i}(I,a)=\max(R^{T}_{i}(I,a),0)\); then the new strategy can be obtained from the cumulative regrets.
If \(\sum_{a\in A(I)}R^{T,+}_{i}(I,a)>0\),
\[\sigma^{T+1}_{i}(I,a)=\frac{R^{T,+}_{i}(I,a)}{\sum_{a\in A(I)}R^{T,+}_{i}(I,a)}. \tag{5}\]
Otherwise take a random strategy,
\[\sigma^{T+1}_{i}(I,a)=1\big{/}|A(I)| \tag{6}\]
For each information set, equation (5) computes a strategy of which action probabilities are proportional to the positive cumulative regrets. CFR computes the utility of each action recursively. Regrets are computed from returned values, and the values of the current node is finally computed and returned. The average strategy profile at information set \(I\) approaches an equilibrium as \(T\rightarrow\infty\).
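The strategy update of equations (5) and (6) is the only place where the accumulated regrets enter the next iterate; a minimal sketch for a single information set is given below.

```python
import numpy as np

def next_strategy(cumulative_regrets):
    """Equations (5)-(6): play in proportion to positive cumulative
    counterfactual regret; fall back to uniform if none is positive."""
    positive = np.maximum(cumulative_regrets, 0.0)
    total = positive.sum()
    if total > 0:
        return positive / total
    return np.full(len(cumulative_regrets), 1.0 / len(cumulative_regrets))

# e.g. cumulative regrets [-0.6, 2.1, -1.4] give the pure strategy [0, 1, 0].
print(next_strategy(np.array([-0.6, 2.1, -1.4])))
```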
## 2 Two-Player Mahjong
### Rules
Mahjong is a tile-based game developed in China, which is commonly played by four players. A detailed elaboration is available on Wikipedia1; we abbreviate it into this concise summary. As is shown in Table 1, a typical set of Mahjong tiles usually has at least 136 tiles. There are 3 suits of simples, and in each suit the tiles are numbered from 1 to 9. The suits are bamboos, dots, and characters. There are 4 identical copies of each simples tile, totaling 108 simples tiles. There are two different sets of Honors tiles: Winds and Dragons. The Winds are East, South, West, and North. The Dragons are Red, Green, and White. These tiles have no numerical sequence, and there are four identical copies of each Honors tile, for a total of 28 Honors tiles. There are two sets of Bonus tiles: Flowers and Seasons. When drawn, a Bonus tile is not added into a player's hand but is instead set aside for scoring purposes, and an extra tile is drawn in replacement of the Bonus tile.
Footnote 1: [https://en.wikipedia.org/wiki/Mahjong](https://en.wikipedia.org/wiki/Mahjong)
Two-player Mahjong is a simplified variant designed by Tencent 2, which is played with a set of 68 tiles. All Dots and Bamboos are excluded, and there are only Characters, Honors and Bonus. As Bonus tiles don't affect the rules of the game, they are not considered in our analysis. The tiles are displayed in Figure 1.
\begin{table}
\begin{tabular}{|c|c|c|} \hline
**Category** & **Name** & **Count** \\ \hline \multirow{3}{*}{Simples} & Dots & 36 \\ \cline{2-3} & Bamboo & 36 \\ \cline{2-3} & Characters & 36 \\ \hline \multirow{3}{*}{Honors} & Winds & 16 \\ \cline{2-3} & Dragons & 12 \\ \hline \multirow{3}{*}{Bonus} & Flowers & 4 \\ \cline{2-3} & Seasons & 4 \\ \hline \multicolumn{2}{|c|}{Total} & 144 \\ \hline \end{tabular}
\end{table}
Table 1: Mahjong Tile Count
Although there are fewer tiles and players, the basic rules remain the same. Each player begins by receiving 13 tiles, and in turn players draw and discard tiles until they complete a legal hand using the 14th drawn tile to form 4 melds (or sets) and an eye (two identical tiles). Melds are groups of tiles within the player's hand, consisting of either a Pong (three identical tiles), a Kong (four identical tiles), or a Chow (three Simple tiles all of the same suit, in numerical sequence). Whenever a Kong is formed, that player must draw an extra tile from the end of the wall (face-down tiles on the table) and then discard a tile. Melds may be formed by drawing a tile from the wall, or by seizing another player's discard. A player can also win with other special hands, like seven Eyes.
Points are obtained by matching the winning hand with different values. For simplicity, we only consider most popular patterns of winning. PongPongHu(4 Pongs and 1 Eye) or QiDui(7 Eyes) gets two points, and all other winning patterns get one point. We choose two-player Mahjong to study algorithms for incomplete-information games. This framework can be generalized to other variants in the future.
### Extensive-Form Game Representation
Each player in turn, in counterclockwise direction, draws a tile from the wall; then this player proceeds to discard a tile. The discarded tile is thrown into the centre and the other players have an opportunity to seize the discarded tile; if no one takes it, the turn continues to the next player. Play continues this way until one player has a legal winning hand and calls out "Hu" while revealing their hand.
As the normal counterclockwise order may be interrupted, a neat extensive-form game [22] representation is necessary in order to conduct game theoretical analysis as well as to implement algorithms. There are two players 0 and 1, and each player's turn is separated into three phases 0, 1 and 2. Figure 2 shows one turn for player 1. In phase 1, although there is an edge pointing from node "Draw a tile." to itself by action Kong, the player actually draws a tile before he acts Kong, thus the game goes to a new state, which doesn't result in a cycle in the game tree.
* In phase 0, the player can seize the discard by forming Chow, Pong, Kong, Win with tiles in his hand, otherwise he can only Pass. For Chow and Pong, he directly goes to phase 2, otherwise to phase 1.
* In phase 1, the player draws a tile from the wall. Possible legal actions are declaring Win, Kong and Pass. By choosing Kong, he reenters phase 1, while Pass leads to phase 2.
* In phase 2, the only legal action is to discard a tile from his hand.
Possible legal actions are summarized in Table 2. Notice that Chow, Pong, Kong, Win are legal actions only when certain patterns can be formed with other tiles in hand.
### Complexity
Suppose all hidden information is given; then the complexity of two-player Mahjong is similar to Chess. The number of reachable positions on a chess board is estimated to be fewer than \(10^{46}\)[15]. As most actions in phases 0 and 1 are legal only with certain tiles in hand, we assume that both players take the action Pass in order to estimate the complexity of the game tree. In phase 2, however, during each round there are 14 tiles in hand, and the player chooses one of them to discard. Two-player Mahjong consists of 36 Character tiles and 28 Honor tiles, of which 26 tiles will be allocated to both players in the beginning. Therefore the game tree has more than \(14^{38}\) leaves, and the complexity can
\begin{table}
\begin{tabular}{|c|c|} \hline
**Phase** & **Possible Legal Actions** \\ \hline
0 & Chow, Pong, Kong, Win, Pass. \\ \hline
1 & Kong, Win, Pass. \\ \hline
2 & Discard a tile. \\ \hline \end{tabular}
\end{table}
Table 2: Possible Legal Actions
Figure 1: Two-player Mahjong Tiles
reach \(10^{43}\). Note that only one action is considered in phases 0 and 1, thus this complexity serves as a lower bound.
As long as incomplete information is introduced, the game becomes much more complicated. Each permutation corresponds to a different game, and only discarded tiles are public information to both players. There are 4 identical copies of 16 different tiles, and the number of permutations after shuffling can be calculated as,
\[\frac{A_{64}^{64}}{(A_{4}^{4})^{16}}\approx 1.2*10^{22}. \tag{7}\]
In total, the complexity reaches \(10^{64}\). In comparison, two-player Limit Texas Hold'em poker only has \(10^{18}\) leaves [1].
Mahjong is much more complex than Texas Hold'em poker, which is popular among literatures about incomplete information games. The most significant difference is that the hidden information keeps changing during the entire play. In Texas Hold'em poker, play begins with each player being dealt two cards face down, and this hidden information remains the same during the entire play. In Mahjong, however, hidden tiles in hand keep changing after every round. As a result, existing algorithms such as decomposition [1] or subgame solving [1] cannot be directly applied to Mahjong. Secondly, the amount of hidden information is much larger. There are only 2 face down cards in poker, but there are 13 hand tiles in Mahjong. Finally, each poker player only bets for at most five rounds, while in Mahjong, the number of rounds can be as large as 38.
Note that two-player Mahjong is the most simplified version. Usually a Mahjong game is played by four players with 136 tiles.
## 3 Hierarchical Policy Abstraction
Because of the complexity caused by the large amount of private hidden information that keeps changing, we cannot implement CFR on two-player Mahjong directly. We therefore make several abstractions in order to reduce the complexity.
At a high level, we conduct hierarchical policy abstraction. As introduced in section 2.1, there are several patterns of winning, such as QiDui, PongPongHu, etc. Based on the fact that there are several distinct patterns, and that winning with the same pattern usually receives the same points, we separate the task of the game into two parts:
1. Which winning pattern should we achieve?
2. What is the most efficient way to achieve it?
Suppose the complexities of the two parts are \(\mathcal{O}(T_{1})\) and \(\mathcal{O}(T_{2})\). Compared to an end-to-end algorithm, the hierarchical policy abstraction significantly reduces the complexity from \(\mathcal{O}(T_{1}*T_{2})\) to \(\mathcal{O}(T_{1}+T_{2})\). In addition, at each node of the game tree, the number of legal actions plunges to the number of winning patterns. For instance, in two-player Mahjong, phase 2, the agent needs to discard a tile
Figure 2: Extensive-Form Game Representation
out of the 14 tiles in his hand, resulting in 14 possible legal actions. With policy abstraction, however, the number of legal actions becomes 3, i.e. to win with PongPongHu, to win with QiDui, or to win Normally without special patterns. Finally the number of leaves in the game tree can be as few as \(3^{38}\approx 1.35\times 10^{18}\). Moreover, it's not necessary to change the policy at every round, especially when the agent is reaching the end of the game, so that the size of the game tree can be further reduced.
Each separate sub-task can be solved independently by adopting an appropriate algorithm, such as rule-based search, supervised learning on professional players' game histories, or reinforcement learning like CFR. For the 1st sub-task, which winning pattern wins the most points in expectation should be determined from self-play statistics. As for the 2nd sub-task, since there is a clear indicator of the distance between the tiles in hand and the target winning pattern, rule-based search can solve this problem, for example by keeping melds in hand and dropping other single tiles.
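A minimal sketch of the resulting two-level decision procedure is given below; `pattern_strategy` (the CFR-derived mixed strategy over winning patterns) and the per-pattern `searchers` are illustrative stand-ins rather than the actual implementation.

```python
import random

def hierarchical_move(hand, game_state, pattern_strategy, searchers):
    """High level: sample an abstract winning pattern from the CFR mixed
    strategy. Low level: let a pattern-specific heuristic search pick the
    concrete action (e.g. which tile to discard)."""
    patterns = ["Normal", "PongPongHu", "QiDui"]
    probs = pattern_strategy(hand, game_state)       # mixed strategy over patterns
    pattern = random.choices(patterns, weights=probs, k=1)[0]
    return searchers[pattern](hand, game_state)      # concrete action
```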
## 4 Implementation
Our two-player Mahjong framework consists of three parts: the game logic, the AI and the GUI(graphical user interface) for testing.
### The Game Logic and the GUI for Testing
The game logic is designed following the extensive-form representation in Figure 2. It tracks tiles in hand as well as all hidden and dropped tiles on the table, determines legal actions in every phase, and writes down all game states to a log file. Some legal actions can be decided directly according to special patterns and tiles in hand, for example, whether hand tiles can form a pattern of Chow, Pong or Kong together with the tile dropped by the other player. Some other legal actions such as winning are much more complex, which requires the game logic to check whether the tiles in hand form any of the winning patterns.
To support CFR and other search-based algorithms, the game logic is designed recursively. At each node, it makes a copy of the current game state, tries the legal actions, and returns the utility recursively from a leaf node. Since there is only one search path from the root to the current game node at any time, only linear memory is required while running search algorithms.
The game logic supports both training and testing. In the training mode, it repeatedly traverses the game tree recursively, keeping a record of the utilities for each action at each node. The choice of actions and the recorded utilities are determined by the AI algorithm. When testing, however, it runs one possible path from root to leaf, records all game states and actions, and writes them to a log file, which can be analyzed by the GUI for testing.
As is shown in Figure 4, the GUI for testing reads the log file and displays the entire game process graphically. It keeps track of all tiles in hand, all dropped tiles, and actions like Chow, Pong, and Kong. Although only partial information can be observed by the agent, all hidden information is released to the tester. Anyone with some domain knowledge of this game can evaluate whether the AI has made a rational move or not.
### The AI
As is discussed in section 3, policies are abstracted into actions according to winning patterns. In all three phases, at every state there are only three actions abstracted from policies, i.e. PongPongHu, 7Pairs and Normal. There may be other policies in other variants of Mahjong, such as Defense, Same Color, etc. Defense, for example, decreases the other player's probability of winning regardless of the agent's own hand tiles. Such a policy is not suitable for two-player Mahjong because the game is zero-sum, and thus defense also prevents the agent itself from winning.
Each policy is realized by heuristic search, whose heuristic function considers the number of potential winning tiles, the number of melds and pairs, as well as the possible missing tiles which can form melds or pairs with the hand tiles. In general, the search algorithms aim to maximize the probability of winning, ignoring winning points. The winning points, however, are considered by the higher-level CFR algorithm that picks the appropriate winning pattern.
The CFR algorithm finds which abstract policy wins the most points in expectation at every decision node. The algorithm mainly follows the description in section 1, which assigns a mixed strategy to every information set. We only run CFR three times for each player, at the beginning, middle, and end of the game, since it's not necessary to change policies in every round. The accumulated regrets and strategies are saved into a file with a tabular representation, where each decision node is labeled by a compact encoding.
Table 3 shows how the labels of nodes are encoded by a single integer. Because of the huge number of possible hand
Figure 3: Hierarchical Policy Abstraction
\begin{table}
\begin{tabular}{|c|c|c|} \hline Bit & Meaning & Range \\ \hline
0-5 & The serial number of round & 0 - 38 \\ \hline
6-8 & The number of Pairs & 0 - 6 \\ \hline
9-11 & The number of Pongs or Kongs & 0-4 \\ \hline
12-15 & The number of Character tiles & 0-14 \\ \hline
16-19 & The number of Wind tiles & 0-14 \\ \hline \end{tabular}
\end{table}
Table 3: Encoding Features
tiles and table tiles, we extract features of hand tiles and cluster information sets according to these features. Recall that there are basically three abstract actions, i.e. Normal, PongPongHu and 7Pairs. If there exists an explicit Chow in the hand, Normal is the only legal action so that the corresponding decision node will not be saved.
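The bit layout of Table 3 can be reproduced with simple bit packing; the sketch below recovers the encoding 734401 of the example node in Table 4 (round 1, 3 pairs, 2 Pongs/Kongs, 3 Character tiles, 11 Wind tiles).

```python
def encode_node(round_id, n_pairs, n_pongs_kongs, n_characters, n_winds):
    """Pack the hand features of Table 3 into a single integer label:
    bits 0-5 round, 6-8 pairs, 9-11 Pongs/Kongs, 12-15 Character tiles,
    16-19 Wind tiles."""
    return (round_id
            | (n_pairs << 6)
            | (n_pongs_kongs << 9)
            | (n_characters << 12)
            | (n_winds << 16))

print(encode_node(1, 3, 2, 3, 11))   # 734401, matching the node in Table 4
```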
While playing a game, the AI loads the node labeled by the same information set as current hand tiles, then it picks one legal abstract policy based on the accumulated regrets. The real action will be generated automatically by the underlying heuristic search algorithms.
## 5 Evaluation
The notations in this section follow from [1].
To evaluate an extensive-form game strategy, there are typically two options. The first is to organize a tournament consisting of several strategies, like the Annual Computer Poker Competition. The second is to compute the worst-case performance of a strategy, which is infeasible for large-scale games because it requires traversing the entire game tree. In this paper we choose to compute the worst-case performance within the abstracted game, and will organize online tournaments with human players in the future.
A strategy for player i, \(\sigma_{i}\in\Sigma_{i}\), is a function that assigns a probability distribution over actions to each information set \(I\). A strategy profile, \(\sigma\in\Sigma\), consists of a strategy for each player. \(\sigma_{-i}\) refers to all strategies in \(\sigma\) except for \(\sigma_{i}\). \(u_{i}(\sigma)\) is the utility for player \(i\) under strategy profile \(\sigma\).
The best response is the optimal strategy for player \(i\) against the opponent strategy profile \(\sigma_{-i}\), denoted as \(b_{i}(\sigma_{-i})\). The value of the best response is \(u_{i}(b_{i}(\sigma_{-i}),\sigma_{-i})\). Two-player zero-sum games have a game value, \(v_{i}\), that is the lower bound on the utility of an optimal player in position \(i\), formally,
\[v_{i}=\min_{\sigma_{i}}u_{i}(\sigma_{i},b_{-i}(\sigma_{i})).\]
In this case, the term **exploitability** of a strategy is how much additional utility is lost to the best response by playing \(\sigma_{i}\), formally,
\[\varepsilon_{i}(\sigma_{i})=v_{i}-u_{i}(\sigma_{i},b_{-i}(\sigma_{i})).\]
In large two-player zero-sum games the value of the game is unknown and is intractable to compute. However, if the players alternate positions, then the value of a pair of games is zero. If an agent plays according to the profile \(\sigma\) then its exploitability is
\[\varepsilon(\sigma)=\frac{u_{2}(\sigma_{1},b_{2}(\sigma_{1}))+u_{1}(b_{1}( \sigma_{2}),\sigma_{2})}{2} \tag{8}\]
We run the experiments on a computer with an Intel Core i9-9900 3.10GHz CPU and 16GB of RAM. The training process is done with a single thread and the evaluation is done
Figure 4: GUI for Testing
with 100 threads. 3000 suits of tiles are randomly shuffled, serving as the benchmark for evaluation. For each of them, the CFR player plays with all heuristic search based players, and the opponent's highest score is selected as exploitability. Notice that this approximated exploitability is higher than the actual value, because it allows different actions on the same information set.
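One plausible reading of this evaluation protocol is sketched below; `play_game` is a stand-in for the game logic and is assumed to return the heuristic opponent's score against the CFR profile on a given deal.

```python
def approx_exploitability(cfr_agent, heuristic_agents, benchmark_deals, play_game):
    """Approximate exploitability: play the CFR profile against every
    heuristic-search opponent on all benchmark deals and report the best
    opponent's average score."""
    best = float("-inf")
    for opponent in heuristic_agents:
        total = sum(play_game(cfr_agent, opponent, deal) for deal in benchmark_deals)
        best = max(best, total / len(benchmark_deals))
    return best
```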
The training process adopts CFR with Monte Carlo sampling. At every iteration, a suit of tiles is shuffled randomly and the CFR algorithm runs on this specific suit. The training process is done for a number of iterations in each epoch, and then it is evaluated on the benchmark generated previously.
Such random training samples can hit benchmark nodes that share the same feature encoding of Table 3. Figure 5(a) shows the training process where there are 5 iterations in each epoch. This experiment runs for 130 iterations and takes around 12 hours. The exploitability gradually decreases, but the decrease becomes slower after 100 iterations. Figure 5(b) shows the case where each epoch has 500 iterations, which turns out to have a long tail. The training and evaluating process takes around 72 hours. The exploitability drops significantly in the first 500 iterations but gradually becomes stable afterwards.
After training for 10000 iterations, there are 2618 nodes in the database. Table 4 shows an example node. In round 1, there are 3 pairs and 2 Pongs or Kongs in hand, and 11 out of 14 tiles are Wind tiles. Since there are no explicit Chows, Pongs or Kongs (all 14 tiles are in hand), all possible actions are legal, among which PongPongHu has the highest regret.
## 6 Discussion
It's not entirely accurate to say that the hidden information stays the same during the entire game of Texas hold'em. As the game proceeds, additional cards are revealed, which reduces the number of possible states the players can be in. But this has only a minor impact on the number of states.
Games with private actions are an interesting problem and an area that has been under-explored. Algorithms like DeepCFR (Brown et al., 2019) work without a problem in these games. Subgame solving (Burch, Johanson, and Bowling, 2014) works in theory, but subgame solving is only practical with a small amount of hidden information (less than about 1 million information sets per player per subgame). So subgame solving as it exists would not work in most games with private actions.
Compared to end-to-end algorithms, hierarchical policy abstraction is feasible and flexible. It takes comprehensive measures to deal with games with private actions and can be trained on a single machine. Such an algorithm can be extended to other Mahjong games with similar winning patterns. However, the current algorithm relies heavily on abstractions that discard information such as the order of discarded tiles and other features of the visible information; these abstractions will be gradually relaxed in future work.
## 7 Acknowledgement
This work was supported by Tencent Rhino-Bird Joint Research Program No. GF201911, in collaboration with Peng Sun at the Tencent AI lab.
The game logic is developed based on Sichuan Mahjong which was developed by Zhichao Shu from Tencent Light
\begin{table}
\begin{tabular}{|c|c|} \hline encoding & 734401 \\ \hline round id & 1 \\ \hline number of Pairs & 3 \\ \hline number of Pongs or Kongs & 2 \\ \hline number of Character tiles & 3 \\ \hline number of wind tiles & 11 \\ \hline number of legal actions & 3 \\ \hline regret sum & [-0.598, 2.128, -1.356] \\ \hline strategy sum & [1.359, 1.975, 0.667] \\ \hline \end{tabular}
\end{table}
Table 4: An Example of Nodes
Figure 5: Exploitability
speed & Quantum Studios Group. We add the logic of Chow and related winning patterns, formulate the game into extensive-form, and implement the recursive search framework.
We've received valuable advice from Noam Brown 3 in regard to games with private actions and exploitability, which is discussed in section 6.
Footnote 3: [https://www.cs.cmu.edu/~noamb/](https://www.cs.cmu.edu/~noamb/)
|
2303.00174 | Quantum autonomous Boolean networks | Boolean networks, first developed in the late 1960s as a tool for studying
complex disordered dynamical systems, consist of nodes governed by Boolean
functions whose evolution is entirely deterministic in that the state of the
network at a given time fully determines the state of the network at some
future time. They are known for exhibiting a high degree of spontaneous order
and have since become a fundamental tool for modeling a wide variety of
systems. In this article I develop a model for quantum autonomous Boolean
networks that exhibits many of the same properties as the classical model while
also demonstrating uniquely quantum properties within a rich landscape of
behavior. | Ian T. Durham | 2023-03-01T02:04:46Z | http://arxiv.org/abs/2303.00174v1 | # Quantum autonomous Boolean networks
###### Abstract
Boolean networks, first developed in the late 1960s as a tool for studying complex disordered dynamical systems, consist of nodes governed by Boolean functions whose evolution is entirely deterministic in that the state of the network at a given time fully determines the state of the network at some future time. They are known for exhibiting a high degree of spontaneous order and have since become a fundamental tool for modeling a wide variety of systems. In this article I develop a model for quantum autonomous Boolean networks that exhibits many of the same properties as the classical model while also demonstrating uniquely quantum properties within a rich landscape of behavior.
## I Introduction
Boolean networks were first developed by Kauffman in the late 1960s as a tool for studying complex disordered dynamical systems Kauffman (1960). His original intent was to find networks that might possess enough order such that they allow for adaptation and selection as in, for example, genetic regulatory processes and similar systems Kauffman (1960). The nodes of these networks consist of Boolean functions and their evolution is entirely deterministic; the state of the network at a given time \(t\) fully determines the state of the network at time \(t+1\). As such, these networks are referred to as _autonomous_. If the initial connections between the nodes are randomly determined, then these networks are also referred to as _random_ Boolean networks. Because these networks have a finite number of nodes and are entirely deterministic, they eventually cycle through a finite number of states, i.e. they exhibit state cycles of finite length where the length depends on the number of functions in the network. In some of these networks, the state of one or more of the nodes are "frozen" in a given state regardless of the length of the state cycle. Such elements are referred to as _frozen cores_. The frozen cores create "islands" of isolated nodes separated by "percolating walls" such that perturbations to variables in one island have no effect on variables in other islands Kauffman (1960). Since Kauffman's introduction, the properties of these networks have been extensively studied Kauffman (1960)
tions in general before introducing their unitary implementations and a generalized bit oracle for calculating irreversible classical Boolean functions on quantum networks. In Section III I then give an overview of classical autonomous Boolean networks along with a discussion of circuit implementations of such networks before introducing the quantum framework. In Section IV A I discuss the important features of these networks, which I refer to as quantum autonomous Boolean networks (qABNs), and compare them to their classical counterparts. Finally, in Section V I discuss some of the unanswered questions and lines of inquiry that might be undertaken to address them.
## II Boolean functions
A _classical_ Boolean function is a function of \(k\) input variables and \(m\) output variables of the form \(f:\{0,1\}^{k}\rightarrow\{0,1\}^{m}\) where \(\{0,1\}\) is the standard Boolean domain and is isomorphic to \(\mathbb{Z}/2\mathbb{Z}\) where the addition of any two variables \(x\) and \(y\) is \(x\oplus y\). In the quantum domain, each of our \(k\) input variables is represented by a qubit. A _quantum_ Boolean function of \(k\) qubits is then a unitary operator \(f\) on \(k\) qubits such that \(f^{2}=1\).
It is often standard procedure to identify \(\{0,1\}\) with \(\{+1,-1\}\) by defining \(0\equiv+1\) and \(1\equiv-1\)[26]. This makes \(\{+1,-1\}\), which is also isomorphic to \(\mathbb{Z}/2\mathbb{Z}\), the multiplicative group of two elements where products are written \(xy\). In this paper, unless otherwise specified, I will employ the \(\{0,1\}\) convention in order to better demonstrate the consistency with Kauffman's original ideas.
Consider a single variable \(x\) defined on the Boolean domain \(\{0,1\}\) whose state is determined by \(k\) input variables, each of which is itself defined on the Boolean domain. The number of combinations of states of \(k\) inputs is just \(2^{k}\). But for each of these \(2^{k}\) combinations, a specific Boolean function must, by definition, specify the value of \(x\). Since the values of \(x\) lie on the Boolean domain, this means that there are a total of \(2^{2^{k}}\) Boolean functions of \(k\) inputs. For example there are sixteen two-input Boolean functions, including the familiar AND, OR, and NOT.
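The counting argument is easy to check by brute force; the snippet below enumerates all truth tables of \(k\) inputs and confirms that there are sixteen two-input Boolean functions.

```python
from itertools import product

def boolean_functions(k):
    """Yield every Boolean function of k inputs as an explicit truth table
    (a dict mapping each input tuple to 0 or 1); there are 2**(2**k) of them."""
    inputs = list(product((0, 1), repeat=k))
    for outputs in product((0, 1), repeat=len(inputs)):
        yield dict(zip(inputs, outputs))

print(sum(1 for _ in boolean_functions(2)))   # 16
```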
There are two natural ways of representing classical Boolean functions on quantum systems. These are known as the _phase oracle_[28]:
\[\left|\mathbf{x}\right\rangle\mapsto(-1)^{f(\mathbf{x})}\left|\mathbf{x}\right\rangle \tag{1}\]
and the _bit oracle_ (also called the _standard oracle_):
\[\left|\mathbf{x}\right\rangle\left|\mathbf{y}\right\rangle\mapsto U_{f}\left| \mathbf{x}\right\rangle\left|\mathbf{y}\right\rangle\equiv\left|\mathbf{x} \right\rangle\left|\mathbf{y}\oplus f(\mathbf{x})\right\rangle \tag{2}\]
where \(U_{f}^{2}=U_{f}^{\dagger}U_{f}=1\), \(\mathbf{x}\equiv x_{1}x_{2}\cdots x_{k},x_{j}\in\{0,1\}\), and \(\mathbf{y}\equiv y_{1}y_{2}\cdots y_{m},y_{i}\in\{0,1\}\). That is, given an input state \(\left|\mathbf{x}\right\rangle\left|\mathbf{y}\right\rangle\), \(U_{f}\) maps the function's logical output to \(\left|\mathbf{y}\right\rangle\).
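On computational basis states the bit oracle is just a permutation, so its matrix can be written down directly; the numpy sketch below builds \(U_{f}\) for a classical function and checks that it squares to the identity, using the two-input OR with one output qubit as an example.

```python
import numpy as np

def bit_oracle(f, k, m):
    """Build the 2**(k+m)-dimensional permutation matrix U_f of Eq. (2),
    acting on basis states |x>|y> as |x>|y XOR f(x)>."""
    dim = 2 ** (k + m)
    U = np.zeros((dim, dim))
    for x in range(2 ** k):
        fx = f(x)
        for y in range(2 ** m):
            col = (x << m) | y           # index of |x>|y>
            row = (x << m) | (y ^ fx)    # index of |x>|y XOR f(x)>
            U[row, col] = 1.0
    return U

OR = lambda x: int(x != 0)               # x encodes (x1, x2) as a 2-bit integer
U = bit_oracle(OR, k=2, m=1)
assert np.allclose(U @ U, np.eye(8))     # U_f^2 = 1, as required
```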
### Unitary implementations
The logical output \(\left|\mathbf{y}\right\rangle\) may or may not be a set of ancillas depending on the nature of the function. If a function is naturally reversible then no ancilla is needed. But if the function is not reversible then at least one ancilla is required in order to ensure unitarity. To see this, consider the truth tables of the exclusive OR (XOR) and OR functions in Table 1. Every pair of outputs \((x,y\oplus f(x))\) for the XOR function is uniquely specified by a pair of inputs \((x,y)\). The same is not true for the OR function since the output pair \((x,y\oplus f(x))=(1,1)\) is obtained from two different input pairs. Thus the OR function is not reversible and an additional input and output would need to be specified in order for this function to be represented reversibly. A reversible truth table for the OR function with \(y\) serving as an ancilla is shown in Table 2 where \(y\) is always assumed to start in the \(0\) state.
It's clear, then, that not all classical Boolean functions of \(k\) inputs can be reversibly represented with those \(k\) inputs alone. This can become cumbersome for systems of multiple Boolean functions and so it might be natural to ask if there is a way to optimize the system. Suppose we have two classical Boolean functions, each of which has two logical inputs and each of which requires an ancilla in order to be represented reversibly. That is, both functions are of the form
\[(x_{1},x_{2},y)\mapsto(x_{1},x_{2},y\oplus f(x_{1},x_{2})). \tag{3}\]
\begin{table}
\begin{tabular}{c c c c c c} & \multicolumn{4}{c}{XOR} & OR \\ \(x_{1}\) & \(x_{2}\) & \(y\) & \(x_{1}\) & \(x_{2}\) & \(y\oplus f(x)\) \\ \hline
0 & 0 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 1 & 1 \\
1 & 0 & 0 & 1 & 0 & 1 \\
1 & 1 & 0 & 1 & 1 & 1 \\ \end{tabular}
\end{table}
Table 2: In order to reversibly represent the OR function, \(y\) must be an ancilla to which the logical output is mapped.
\begin{table}
| \(x\) | \(y\) | XOR: \((x,\,y\oplus f(x))\) | OR: \((x,\,y\oplus f(x))\) |
| --- | --- | --- | --- |
| 0 | 0 | (0, 0) | (0, 0) |
| 0 | 1 | (0, 1) | (0, 1) |
| 1 | 0 | (1, 1) | (1, 1) |
| 1 | 1 | (1, 0) | (1, 1) |
\end{table}
Table 1: As the truth table for the XOR function indicates, each pair of outputs can uniquely be identified with a pair of inputs, which means the XOR function is reversible and thus does not require an ancilla. There is ambiguity in the OR function since both \((x,y)=(1,0)\) and \((x,y)=(1,1)\) as inputs lead to \((x,y\oplus f(x))=(1,1)\) as an output. As such the OR function is not reversible and thus requires an ancilla in order to be represented reversibly.
We might be tempted to simultaneously implement these two functions as
\[(x_{1},x_{2},y)\mapsto(x_{1},x_{2}\oplus f_{1}(x_{1},y),y\oplus f_{2}(x_{1},x_{2})) \tag{4}\]
But Equation 4 isn't necessarily reversible. A simple example will suffice to show this. The truth table for Equation 4 with \(f_{1}(x_{1},y)\) taken to be the AND function and \(f_{2}(x_{1},x_{2})\) taken to be the OR function is shown in Table 3. As the truth table shows, none of the output triples is uniquely determined by a single input triple. As such, one ancilla is required for each function. For example, if \(f_{1}(x_{1},x_{2})\) is the AND function and \(f_{2}(x_{1},x_{2})\) is the OR function,
\[(x_{1},x_{2},y_{1},y_{2})\mapsto(x_{1},x_{2},y_{1}\oplus f_{1}(x_{1},x_{2}),y_ {2}\oplus f_{2}(x_{1},x_{2})) \tag{5}\]
_is_ reversible.
A similar process can be used to show that for any single irreversible classical Boolean function of \(k\) inputs and \(m\) outputs (where I am assuming that \(m\leq k\) - the problem is more complicated when \(m>k\)) one ancilla is required for each output in order to implement it reversibly. The total number of elements in any such string must then be \(k+m\). Thus, for a system of \(n\) such irreversible Boolean functions to be implemented reversibly, a total of \(n\times m\) ancillas are required and the total length of the input and output strings must be \(n(k+m)\). But there is an additional problem that must be considered.
The point of reversible representations is to allow for unitary implementations by quantum systems. But consider a simple system of just two logical qubits and two ancillas to which we apply two (unitary) functions. We might naively expect that the bit oracle allows us to write
\[\left|x_{1},x_{2}\right\rangle\left|y_{1},y_{2}\right\rangle\mapsto U_{f_{1}} U_{f_{2}}\left|x_{1},x_{2}\right\rangle\left|y_{1},y_{2}\right\rangle.\]
But there's nothing privileging the order of the operators. We could just as easily have written
\[\left|x_{1},x_{2}\right\rangle\left|y_{1},y_{2}\right\rangle\mapsto U_{f_{2}} U_{f_{1}}\left|x_{1},x_{2}\right\rangle\left|y_{1},y_{2}\right\rangle\]
But for these to produce the same result, the unitary representations of our functions would have to be commutative, i.e. \(U_{f_{1}}U_{f_{2}}\) would have to be equal to \(U_{f_{2}}U_{f_{1}}\). Of course, not all unitary operators are commutative. The Pauli operators, for example, which are valid _quantum_ single-input Boolean functions, are not commutative. However, I am primarily concerned here with those unitary operators that represent _classical_ Boolean functions, and it remains an open question whether all such operators are commutative. If not, the number of qubits required to implement some of these systems could be quite large. In this article I will keep things simple and focus on Boolean functions for which \(k=2\) and \(m=1\) and will implement each with its own, unique set of qubits.
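Whether the order of two such oracles matters can at least be checked numerically for any concrete pair. The sketch below builds the two oracles of Equation 5 (AND and OR, each writing to its own ancilla) as permutation matrices and tests whether they commute; the helper name `oracle` is mine:

```python
import numpy as np
from itertools import product

def oracle(update, n):
    """Permutation matrix on n bits implementing the reversible map `update`."""
    U = np.zeros((2**n, 2**n))
    for b in product((0, 1), repeat=n):
        U[int("".join(map(str, update(b))), 2),
          int("".join(map(str, b)), 2)] = 1.0
    return U

# Equation 5: qubits ordered (x1, x2, y1, y2), separate ancillas for AND and OR
U_and = oracle(lambda b: (b[0], b[1], b[2] ^ (b[0] & b[1]), b[3]), 4)
U_or  = oracle(lambda b: (b[0], b[1], b[2], b[3] ^ (b[0] | b[1])), 4)

# Does the application order matter for this particular pair?
print(np.allclose(U_and @ U_or, U_or @ U_and))
```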
### A generalized bit oracle
We can generalize the notion of a bit oracle to include both pure and mixed states by defining
\[\rho(\mathbf{x},\mathbf{y})\equiv\left|\mathbf{x}\right\rangle\left|\mathbf{ y}\right\rangle\left\langle\mathbf{y}\right|\left\langle\mathbf{x}\right| \tag{6}\]
The action of a Boolean function on this state is then a unitary transformation
\[\rho(\mathbf{x},\mathbf{y}\oplus f(\mathbf{x}))=U_{f}\rho(\mathbf{x},\mathbf{ y})U_{f}^{\dagger}. \tag{7}\]
This is a generalization of the bit oracle to a broader class of states. Now consider a system of \(n\) Boolean functions and define
\[\rho(\mathbf{X},\mathbf{Y}) \equiv\rho(\mathbf{x}_{1},\mathbf{y}_{1})\otimes\cdots\otimes \rho(\mathbf{x}_{n},\mathbf{y}_{n})\] \[U_{F} \equiv U_{f_{1}}\otimes\cdots\otimes U_{f_{n}}\] \[F(\mathbf{X}) \equiv f_{1}(\mathbf{x}_{1})\otimes\cdots\otimes f_{n}(\mathbf{x}_{ n}). \tag{8}\]
The evolution of the full system is then
\[\rho(\mathbf{X},\mathbf{Y}\oplus F(\mathbf{X}))=U_{F}\left[\rho(\mathbf{X}, \mathbf{Y})\right]U_{F}^{\dagger}. \tag{9}\]
This generalizes the bit oracle to a network of \(n\) classical Boolean functions and serves as the governing equation for the evolution of the network.
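As a minimal illustration of Equation 7, the following sketch builds \(U_{f}\) for the OR function as a QuTiP operator and conjugates a basis product state \(\rho(\mathbf{x},\mathbf{y})\) with it; QuTiP is used only for its state and operator containers, and the helper name `bit_oracle_qobj` is mine:

```python
import numpy as np
from itertools import product
from qutip import basis, ket2dm, tensor, Qobj

def bit_oracle_qobj(f, k, m):
    """U_f |x>|y> = |x>|y XOR f(x)> as a QuTiP operator on k+m qubits."""
    n = k + m
    U = np.zeros((2**n, 2**n))
    for bits in product((0, 1), repeat=n):
        x, y = bits[:k], bits[k:]
        y_out = tuple(yi ^ fi for yi, fi in zip(y, f(x)))
        U[int("".join(map(str, x + y_out)), 2),
          int("".join(map(str, bits)), 2)] = 1.0
    return Qobj(U, dims=[[2] * n, [2] * n])

# rho(x, y) for x = 11, y = 0, evolved under the OR oracle (k = 2, m = 1)
rho = tensor([ket2dm(basis(2, b)) for b in (1, 1, 0)])
U_or = bit_oracle_qobj(lambda x: (x[0] | x[1],), 2, 1)
rho_out = U_or * rho * U_or.dag()   # Eq. (7): rho(x, y XOR f(x))
```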
## III Autonomous Boolean Networks
An autonomous Boolean network (ABN) is a network of \(n\) Boolean variables (i.e. their values are on the Boolean domain) whose state at some time \(t\) fully determines the state of the network at time \(t+1\) via a set of Boolean functions acting on the set of variables. As such these networks are fully deterministic in their evolution and there are no additional external variables introduced at any point in the evolution of the network.
\begin{table}
\begin{tabular}{c c c|c c c} & & & \multicolumn{2}{c}{AND} & OR \\ \(x_{1}\) & \(x_{2}\) & \(y\) & \(x_{1}\) & \(x_{2}\oplus f_{1}(x_{1},y)\) & \(y\oplus f_{2}(x_{1},x_{2})\) \\ \hline
0 & 0 & 0 & 0 & 0 & 0 \\
1 & 0 & 0 & 1 & 0 & 1 \\
0 & 1 & 0 & 0 & 0 & 1 \\
0 & 0 & 1 & 0 & 0 & 0 \\
1 & 0 & 1 & 1 & 1 & 1 \\
1 & 1 & 0 & 1 & 0 & 1 \\
0 & 1 & 1 & 0 & 0 & 1 \\
1 & 1 & 1 & 1 & 1 & 1 \\ \end{tabular}
\end{table}
Table 3: If we attempt to simultaneously implement both the AND and the OR function on the same set of three inputs, we find that the process is not reversible since every unique output string could have arisen from one of two possible input strings.
If each variable is updated simultaneously, the network is said to be _synchronous_[3]. Only synchronous networks are considered in this article.
The networks developed by Kauffman were described as both autonomous and _random_. But this latter description is misleading. It refers to how the connections between the nodes in the network are _initially_ set. But thereafter the connections remain fixed. Since the point of these networks is to characterize the general behavior of all such networks and their connections by studying some subset of them, there's really nothing random about it. It's simply a way to choose which network to study at a given moment. As such I will refrain from referring to these networks as random.
### Classical networks
Consider a simple network of three variables, \(x_{1},x_{2},x_{3}\in\{0,1\}\), each of which receives inputs from the other two. Assume that the dynamical evolution of the first is governed by the AND function and the dynamical evolution of the other two are governed by the OR function. The truth table for this simple network is shown in Table 4 where, for simplicity, I have set \(y=x_{3}\). This, of course, is not reversible but serves as a useful example for demonstrating the basic properties of these networks.
The first thing to notice is that, since there are a finite number of states and the evolution is entirely deterministic, the system will eventually pass through a given state more than once. In fact it will continue to cycle through the same set of states, referred to as a _state cycle_, ad infinitum. The state cycles themselves are referred to as the _dynamical attractors_ of the network and the set of all states leading into or lying on a given cycle are said to constitute the _basin of attraction_. The _length_ of a given state cycle is the number of states on that cycle and can range from unity for a steady state up to \(2^{n}\) depending, in part, on the number of inputs \(k\) to each function. The basins of attraction partition the \(2^{n}\) state space of the network.
Figure 1 shows all the state cycles and basins of attraction for the network whose truth table is shown in Table 4. In general, the length of the state cycle and the number of attractors are both functions of the number of inputs per function \(k\) and the number of functions \(n\). For example, for functions with \(k=2\) inputs such as the AND and OR functions, the expected state cycle length and the mean attractor length are on the order of \(\sqrt{n}\), though some systems show a power law relation [4; 9]. This is a rather remarkable result. A system of 10,000 binary variables with \(2^{10,000}\) possible states will settle and cycle through a mere 100 of these states. Additionally, each state cycle (in the \(k=2\) case) is stable to almost all minimal perturbations including the deletion of elements [3]. For the network in Figure 1, the second state cycle is actually the longest since, in the third basin of attraction, once the state settles into the \((1,1,1)\) state, it remains there.
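Because the dynamics are deterministic on a finite state space, the attractors of a small classical network can be enumerated by brute force. The sketch below does this for the three-variable network of Table 4, reading the next state directly off the truth table (so that, for instance, \((1,1,1)\) is a fixed point, consistent with Figure 1):

```python
from itertools import product

# Next-state map of the Table 4 network:
# x1' = AND(x2, x3), x2' = OR(x1, x3), x3' = OR(x1, x2)
def step(s):
    x1, x2, x3 = s
    return (x2 & x3, x1 | x3, x1 | x2)

# Iterate each of the 2^3 states until a state repeats; the repeated tail
# is the state cycle (dynamical attractor) reached from that start state.
for start in product((0, 1), repeat=3):
    seen, s = [], start
    while s not in seen:
        seen.append(s)
        s = step(s)
    print(start, "->", seen[seen.index(s):])
```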
Now consider a more complicated network of seven variables, each of which dynamically evolves according to some Boolean function. The variables are connected to one another such that each variable's state at time \(t+1\) is determined by the states of two other _not necessarily neighboring_ variables at time \(t\). Suppose the evolution of the first seven elements of the network over ten time steps proceeds as shown in Table 5. Notice that the fourth variable never changes. We refer to this variable as a _frozen core_; it separates all the variables to its left from those to its right. The two sides of the frozen core form _functionally isolated_ islands separated by a _percolating wall_. Note that this does _not_ mean that there aren't connections between the islands. For example, it could be that the second variable's state is determined by variables one and six, i.e. \(x_{2}\equiv x_{2}\oplus f(x_{1},x_{6})\). Nevertheless, the islands are said to be functionally isolated because _perturbations to variables in one island have no effect on variables in other islands even though the islands may be
Figure 1: The state cycles and basins of attraction are shown for an autonomous Boolean network corresponding to the truth table in Table 4. Note that the length of the third state cycle is just unity since, once it settles into the \((1,1,1)\) state, it remains there.
\begin{table}
\begin{tabular}{c c c|c c c} & & & AND & OR & OR \\ \(x_{1}\) & \(x_{2}\) & \(x_{3}\) & \(x_{1}\oplus f_{1}(x_{2},x_{3})\) & \(x_{2}\oplus f_{2}(x_{1},x_{3})\) & \(x_{3}\oplus f_{3}(x_{1},x_{2})\) \\ \hline
0 & 0 & 0 & 0 & 0 & 0 \\
1 & 0 & 0 & 0 & 1 & 1 \\
0 & 1 & 0 & 0 & 0 & 1 \\
0 & 0 & 1 & 0 & 1 & 0 \\
1 & 0 & 1 & 0 & 1 & 1 \\
1 & 1 & 0 & 0 & 1 & 1 \\
0 & 1 & 1 & 1 & 1 & 1 \\
1 & 1 & 1 & 1 & 1 & 1 \\ \end{tabular}
\end{table}
Table 4: The truth table for a classical Boolean network of three variables whose values are governed by the AND, OR, and OR functions respectively is shown here.
connected_. As I will show, this is _not_ true in the quantum case. I refer the interested reader back to Kauffman for additional details on these structures [2; 3].
### Circuit implementations
In circuit implementations of these networks there are two things worthy of note. These are best seen by considering the idealized circuit diagram for the network described in Table 4 as shown in Figure 2. The first thing to notice is that the bits that do not represent the logical output are discarded in the sense that they are not used to compute the network's next state. Only the logical output bits are used to compute the next step. But that, then, necessitates the second noteworthy attribute of these networks. In order to ensure the correct number of inputs on subsequent steps, the logical outputs must be copied.
This presents a problem for any attempt to directly implement such a network on a quantum system. Since the system is quantum its evolution should be unitary. But it is well-known that no single universal unitary gate can copy (clone) an arbitrary quantum state [29; 30; 31]. As such, we can't directly implement such a network on a quantum system without additional inputs.
Consider a network consisting of the same functions as those just described in Figure 2 but with each function implemented unitarily. Neither the AND nor the OR function is naturally reversible and so each requires an ancilla qubit. The network is thus composed of nine qubits. Since we can't arbitrarily clone any of the qubits, there must be a one-to-one correspondence between the outputs at one step and the inputs at the next step. But as long as that constraint is satisfied, we can connect (wire) them in any manner we like. An arbitrary wiring diagram for this network is shown in Figure 3. The arbitrary manner in which a given connection (wiring) was chosen is the reason Kauffman referred to them as random. The randomness was simply how a given connection was selected for study at any given time.
Notice in the circuit of Figure 3 that the output \(y_{2}\oplus f_{2}(x_{2}^{(a)},x_{2}^{(b)})\) from the first step acts as the input to \(y_{3}\) at the next step. That is, if we indicate the input to the second step with a prime, then \(y_{3}^{\prime}\equiv y_{2}\oplus f_{2}(x_{2}^{(a)},x_{2}^{(b)})\). But \(y_{3}\) is an ancilla that encodes the logical output of
\begin{table}
\begin{tabular}{c|c c c c c c c} \(t\) & \(x_{1}\) & \(x_{2}\) & \(x_{3}\) & \(x_{4}\) & \(x_{5}\) & \(x_{6}\) & \(x_{7}\) \\ \hline
1 & 0 & 1 & 0 & 1 & 0 & 0 & 1 \\
2 & 1 & 0 & 1 & 1 & 1 & 1 & 0 \\
3 & 0 & 1 & 1 & 1 & 1 & 0 & 1 \\
4 & 1 & 1 & 1 & 1 & 0 & 1 & 0 \\
5 & 0 & 1 & 0 & 1 & 1 & 1 & 1 \\
6 & 1 & 0 & 0 & 1 & 1 & 0 & 0 \\
7 & 0 & 0 & 1 & 1 & 0 & 1 & 1 \\
8 & 1 & 0 & 1 & 1 & 0 & 1 & 0 \\
9 & 1 & 1 & 0 & 1 & 1 & 1 & 1 \\
10 & 0 & 1 & 0 & 1 & 1 & 0 & 1 \\ \end{tabular}
\end{table}
Table 5: The evolution of the first seven variables in a hypothetical seven-variable network shows a frozen core corresponding to the fourth element which gives two functionally isolated islands consisting of \((x_{1},x_{2},x_{3})\) and \((x_{5},x_{6},x_{7})\) respectively.
Figure 2: The classical circuit diagram for a network consisting of an AND gate and two OR gates highlights two important facts: (i) the non-logical outputs \((x_{1},x_{2},x_{3})\) are discarded after the first step and (ii) each logical output \((y_{1}\oplus f_{1}(x_{1}),y_{2}\oplus f_{2}(x_{2}),y_{3}\oplus f_{3}(x_{3}))\) is copied so it can be used as the input to more than one function at the next step. Note that this is only one of thirty-six possible wiring diagrams for a network consisting of these three functions.
\(f_{3}(x_{3}^{\prime(a)},x_{3}^{\prime(b)})\). This means that the output of the second step is \(y_{3}^{\prime}\oplus f_{3}(x_{3}^{\prime(a)},x_{3}^{\prime(b)})\equiv y_{2}\oplus f _{2}(x_{2}^{(a)},x_{2}^{(b)})\oplus f_{3}(x_{3}^{\prime(a)},x_{3}^{\prime(b)})\). It is usually convention to set ancillas to \(0\) but since there is no action external to the network here (since it is autonomous) it is entirely possible that \(y_{3}^{\prime}\equiv y_{2}\oplus f_{2}(x_{2}^{(a)},x_{2}^{(b)})\) will be \(1\). Since the action of the function is modulo \(2\) the logical output of the second step could produce a \(0\) when a \(1\) is expected, and vice-versa.
As a simple example of this, consider a single OR function implemented unitarily and assume the initial input state is \((x_{1}=1,x_{2}=1,y=0)\) (corresponding to the bottom line of Table 2). Also assume that this network has the simplest wiring diagram such that, for example, \(x_{1}^{\prime(a)}\equiv x_{1}^{(a)}\), \(x_{1}^{\prime(b)}\equiv x_{1}^{(b)}\), and \(y_{1}^{\prime}\equiv y_{1}\). Since the function itself is mapped to the ancilla, i.e. \(y_{1}=f(x_{1}^{(a)},x_{1}^{(b)})\), the output state is \((1,1,1)\). But if we implement this function again with \((1,1,1)\) now serving as the input, the function's action on the ancilla (which is now in the state \(1\)) is modulo \(2\) and thus the output becomes \((1,1,0)\). That is, for a network consisting of just the OR function and three qubits, if at any time the state of the system is either \((1,1,0)\) or \((1,1,1)\), then the system will simply cycle between these two states, i.e. \(\cdots(1,1,0)\mapsto(1,1,1)\mapsto(1,1,0)\mapsto(1,1,1)\mapsto(1,1,0)\cdots\). We could choose to perform error correction at each step (that is, immediately following the output) and reset the ancilla to \(0\) if need be, but that would either require external intervention, in which case the network wouldn't truly be autonomous, or would consist of more than just the functions that defined the network, i.e. it would require additional operations. Since I am primarily interested here in the autonomous behavior of sets of Boolean functions, I will not execute any error correction on the ancilla qubits.
## IV Quantum Networks
I am now in a position to define a quantum autonomous Boolean network (qABN) in direct analogy to classical autonomous Boolean networks.
**Definition 1**: A quantum autonomous Boolean network is a network of \(n\) Boolean functions of \(k\) logical qubits whose state at time \(t\) determines, through unitary evolution via a generalized bit oracle, its state at time \(t+1\) and for which each output qubit at time \(t\) maps to some input qubit (not necessarily itself) at time \(t+1\), i.e. there is a one-to-one correspondence between the number of outputs at one step and the number of inputs at the next step.
In order to ensure unitarity, each function can act on up to \(k+m\)_total_ qubits where \(m\) is the maximum number of ancillas required. These networks thus consist of up to a maximum of \(n(k+m)\) qubits (see Section II.1). The number of qubits in the network is conserved and, as such, there is a one-to-one correspondence between the number of outputs at one step and the number of inputs at the next step. The connections between the outputs and inputs are arbitrary. As such, for a system of \(n(k+m)\) qubits there are \((n(k+m))!\) possible connections (referred to as _wirings_) between the outputs of one step and the inputs of the next. Evolution of the network is via the generalized bit oracle defined in Equation 9.
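A single synchronous step of such a network can therefore be written as the generalized bit oracle followed by a fixed permutation of the qubits. The following QuTiP sketch illustrates this for a hypothetical six-qubit network (an XNOR and an OR, each with one ancilla); the particular functions, the wiring \([3,0,5,4,1,2]\), and the helper name `oracle` are illustrative choices of mine, not examples from the text:

```python
import numpy as np
from itertools import product
from qutip import basis, ket2dm, tensor, Qobj

def oracle(update, n):
    """Permutation matrix on n qubits implementing a reversible bit map."""
    U = np.zeros((2**n, 2**n))
    for b in product((0, 1), repeat=n):
        U[int("".join(map(str, update(b))), 2),
          int("".join(map(str, b)), 2)] = 1.0
    return Qobj(U, dims=[[2] * n, [2] * n])

n = 6  # qubits ordered (x1a, x1b, y1, x2a, x2b, y2)
# U_F: y1 <- y1 XOR XNOR(x1a, x1b) and y2 <- y2 XOR OR(x2a, x2b)
U_F = oracle(lambda b: (b[0], b[1], b[2] ^ (1 - (b[0] ^ b[1])),
                        b[3], b[4], b[5] ^ (b[3] | b[4])), n)
wiring = [3, 0, 5, 4, 1, 2]                     # input j <- output wiring[j]
P = oracle(lambda b: tuple(b[w] for w in wiring), n)

# One synchronous step: apply the generalized bit oracle, then rewire.
rho = tensor([ket2dm(basis(2, b)) for b in (0, 1, 0, 1, 1, 0)])
rho_next = P * (U_F * rho * U_F.dag()) * P.dag()
```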
Consider again a network consisting of an AND and two OR gates. If this network is implemented on a classical system as in Figure 2, there are thirty-six possible wiring diagrams and thus thirty-six possible input combinations. However, since each output from the previous step is copied, the number of _unique_ input combinations is only eighteen. But in the quantum case as in Figure 3, there are eighty-one possible combinations.
In this article I focus on networks such as these that implement _classical_ Boolean functions on a _quantum_ network, i.e. a network that allows for quantum inputs and is implemented as described in Section III.2, but I emphasize that the definition introduced here is far more broad.
As an example, I now describe the properties of qABNs of classical functions for the case in which \(k=2\). In this case the total number of ancillas \(m\) never exceeds the number of functions \(n\) since there is never more than one ancilla per function.
### qABNs of classical functions for \(k=2\) inputs
Classical Boolean functions for which \(k=2\) and \(m=1\) consist of a set of sixteen functions that include the familiar AND, OR, and NOT functions. The most remarkable feature of these networks is the spontaneous order that they exhibit through short state cycles, multiple basins of attraction, and frozen cores separating isolated islands. The most conspicuous differences in \(k=2\)_quantum_ networks are that the state cycles are not always short (in fact they can be exceptionally long) and, despite the existence of frozen cores and thus percolating walls, the islands are no longer isolated to perturbations.
Simulations of qABNs of these functions in Python utilizing the QuTIP package [32; 33] have been carried out and a GitHub repository of the code is available ([https://github.com/iantdurham/quantum_boolean_networks](https://github.com/iantdurham/quantum_boolean_networks)). In these simulations the allowed input states were
\[\rho(0)=\left|0\right\rangle\left\langle 0\right|,\qquad\rho(1)=\left|1\right\rangle\left\langle 1\right|,\qquad\rho(+)=\left|+\right\rangle\left\langle+\right|,\qquad\rho(-)=\left|-\right\rangle\left\langle-\right|.\]
Either a set of functions is manually entered or the code randomly chooses a set. Since there are a large number of possible wiring diagrams for any given set of functions, the code is designed to select one by randomly permuting the outputs from the first step. That is, an initial state is set (either randomly chosen or manually entered) and the generalized bit oracle is applied to get an output state which is then randomly permuted before the next
application of the oracle. The same permutation is then applied after each subsequent output so that the same wiring occurs at every step.
In order to keep track of a specific wiring diagram, I assign an index to the output 'wires' for a network starting with \(0\) at the top. Each wire can then be traced to its input at the next step which shuffles the indices. For the network shown in Figure 3, there are nine output wires that are labeled \([0,1,2,3,4,5,6,7,8]\) top-to-bottom in the figure. The order in which these wires connect to the next iteration of the network is \([3,0,6,4,1,8,7,2,5]\) as shown in Figure 4 (e.g. the circuit maps \(x_{1,t}^{(a)}\mapsto x_{1,t+1}^{(b)}\), etc.).
Tracking the state cycles for quantum networks can be particularly tricky given the sheer number of possible states that could exist in a given cycle. The addition of ancillas adds to the complexity level. We can, however, get some sense of the behavior of the system as a whole by tracking various properties of the system. For example, one measure that provides a sense of how these networks evolve is the multipartite mutual information which, for a network whose overall state is \(\rho\), is given as [34, 35]
\[I_{m}(A_{1}:\cdots:A_{n}) =\sum_{n}S(\rho^{A_{n}})-S(\rho) \tag{10}\] \[=S(\rho||\rho^{A_{1}}\otimes\cdots\otimes\rho^{A_{n}})\] \[\geq 0\]
where \(S(\rho)\) is the von Neumann entropy of the entire network and \(S(\rho^{A_{n}})\) is the von Neumann entropy of the \(n\)th qubit in the network. It expresses the difference between the sum of the von Neumann entropies of the individual qubits in the system, and the von Neumann entropy of the system considered as a whole. Since the network evolves deterministically in the absence of measurement, the value \(I_{m}\) should vary periodically. Note, however, that this measure does not depend on the relative locations of the qubits in the network. As such it cannot distinguish between, say, the state \(\rho(+)\otimes\rho(-)\otimes\rho(0)\) and the state \(\rho(-)\otimes\rho(+)\otimes\rho(0)\). But there exists a \(k=2\) Boolean function for which \(f(\rho(+)\otimes\rho(-)\otimes\rho(0))\neq f(\rho(-)\otimes\rho(+)\otimes \rho(0))\) and thus the order of the qubits matters. As such, the length of any cycles for \(I_{m}\) would represent a _minimum_ length for any associated _state_ cycles. That is, if, for example, \(I_{m}\) for a given network varies over five time steps, the state cycle for the network could be no shorter than five but might be longer. Developing a more robust measure is an important aim of future work. However, a fair amount of useful information can be obtained about a network from the behavior of multipartite mutual information as I will show.
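In QuTiP, for instance, \(I_{m}\) can be evaluated directly from partial traces and von Neumann entropies. A minimal sketch (the function name is mine):

```python
from qutip import basis, ket2dm, tensor, entropy_vn

def multipartite_mutual_info(rho):
    """I_m = sum_i S(rho^{A_i}) - S(rho) for a multi-qubit state rho (Eq. (10))."""
    n = len(rho.dims[0])
    return sum(entropy_vn(rho.ptrace([i])) for i in range(n)) - entropy_vn(rho)

# A product state gives I_m = 0, while a Bell pair gives a strictly
# positive value (2 ln 2 with the natural-logarithm convention).
bell = (tensor(basis(2, 0), basis(2, 0)) + tensor(basis(2, 1), basis(2, 1))).unit()
print(multipartite_mutual_info(tensor(ket2dm(basis(2, 0)), ket2dm(basis(2, 1)))))
print(multipartite_mutual_info(ket2dm(bell)))
```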
As a simple example of the use of the multipartite mutual information for analyzing these networks, consider a network consisting of XNOR and NOR functions with a wiring of \([0,3,2,1,4]\) and an initial input state of \(\rho(\mathbf{X},\mathbf{Y})=\rho(\mathbf{x}_{1},\mathbf{y}_{1})\otimes\rho(\mathbf{x}_{2},\mathbf{y}_{2})=\rho(-,+)\otimes\rho(10,0)\). In this case the multipartite mutual information, \(I_{m}\), varies symmetrically with a cycle length of six steps as shown in Figure 5. While the actual state cycle length may be longer, the cycle length for \(I_{m}\) nevertheless is useful in that it suggests that the network evolves through the states in the cycle in a symmetric way. In contrast, a network implementing the same functions but with input state \(\rho(\mathbf{X},\mathbf{Y})=\rho(\mathbf{x}_{1},\mathbf{y}_{1})\otimes\rho(\mathbf{x}_{2},\mathbf{y}_{2})=\rho(-,+)\otimes\rho(+0,0)\) and wiring \([4,1,0,3,2]\) exhibits asymmetric behavior, as shown in Figure 6.
One might expect the complexity of the behavior exhibited by these networks to increase rapidly as the number of functions increases. But suppose we add a single NAND function to the simple network we just considered. That is, suppose we implement a network consisting of
Figure 5: A quantum network implementing an XNOR function and a NOR function with an input state of \(\rho(\mathbf{X},\mathbf{Y})=\rho(\mathbf{x}_{1},\mathbf{y}_{1})\otimes\rho( \mathbf{x}_{2},\mathbf{y}_{2})=\rho(+,1)\otimes\rho(1-,0)\) and wiring \([2,1,0,4,3]\) has a multipartite mutual information that displays symmetric oscillatory behavior with a cycle length of six steps.
XNOR, NOR, and NAND functions with an input state of \(\rho(\mathbf{X},\mathbf{Y})=\rho(\mathbf{x}_{1},\mathbf{y}_{1})\otimes\rho(\mathbf{x}_{2},\mathbf{y}_{2})\otimes\rho(\mathbf{x}_{3},\mathbf{y}_{3})=\rho(0,+)\otimes\rho(-+,0)\otimes\rho(0-,0)\) and wiring \([6,1,3,2,0,5,4,7]\). The cycle length for \(I_{m}\), shown in Figure 7, is just one step longer than the associated cycle for the network in Figure 5. By contrast, the same set of functions with an input state of \(\rho(\mathbf{X},\mathbf{Y})=\rho(\mathbf{x}_{1},\mathbf{y}_{1})\otimes\rho(\mathbf{x}_{2},\mathbf{y}_{2})\otimes\rho(\mathbf{x}_{3},\mathbf{y}_{3})=\rho(-,+)\otimes\rho(-0,0)\otimes\rho(1+,0)\) and wiring \([6,5,7,4,0,3,1,2]\) has a cycle length for \(I_{m}\) of nearly nine million steps, as shown in Figure 8. This actually tells us something about the states in the cycle. We know that the state cycle must be as long if not longer than the cycle length of \(I_{m}\) and, given that there are eight qubits in the system, if those qubits could only ever be in one of the four allowed input states \(\rho(0)\), \(\rho(1)\), \(\rho(+)\), and \(\rho(-)\), there would only be a total of \(4^{8}\) possible system states. Since the system is deterministic in that each state uniquely leads to another state, \(4^{8}\) would thus be the maximum length for any state cycle. Since the state cycle for the network in Figure 8 far exceeds that, the network _must produce states outside this set of allowed product states_. As such, while \(I_{m}\) cannot distinguish between all of the network's states, it can still tell us something important about what the network is doing and can prompt further action. In this example, when tracing out the individual qubit states, one finds that they are mixed states.
The network described in Figure 7 also exhibits two frozen cores. Since the \(y_{1}\) (index 1), \(x_{3}^{(a)}\) (index 5), and \(y_{3}\) (index 7) qubits feed back into themselves, one might actually expect there to be three frozen cores. But, in fact, only \(y_{1}\) and \(x_{3}^{(a)}\) remain unchanged. It might be unsurprising that \(x_{3}^{(a)}\) is frozen given that no output is mapped to it. But \(y_{1}\) represents the logical output of the XNOR function, i.e. \(y_{1}^{\prime}=y_{1}\oplus f_{1}\). Thus, if \(x_{1}\) changes, one would expect \(y_{1}\) to change. If we look at the states that \(x_{1}\) cycles through -- \(\rho(-)\), \(\rho(0)\), and a mixed state defined as \(0.5\rho(0)+0.5\rho(1)\) -- we might expect that, at the very least, \(y_{1}\) would change to a mixed state of some kind, but the combinations are just such that no change occurs.
The frozen cores \(y_{1}\) and \(x_{3}^{(a)}\) separate the network into three "islands" of states: one island on the left containing \(x_{1}\), one island in the middle containing \(x_{2}^{(a)}\), \(x_{2}^{(b)}\), and \(y_{2}\), and one island on the right containing \(x_{3}^{(b)}\) and \(y_{3}\). In classical Boolean networks, perturbations to bits in
one island have no effect on the bits in the other islands. Hence the islands are said to be _isolated_. The same is not true in the quantum case. As an example, consider that on the eleventh time step, \(x_{1}=\rho(0)\) and on the fifteenth time step, \(y_{2}=\rho(0)\). If we apply the bit flip operator to \(x_{1}\) on the eleventh time step, \(y_{2}\) at the fifteenth time step also flips. In a classical network, \(y_{2}\) would never flip since it is in a different island. One would assume that the ability to cross into another island is a result of non-classical correlations, but a better measure is needed in order to be sure.
But this result is remarkable for another reason: \(y_{2}\) is not affected until the fifteenth step. Perhaps even more remarkably, this perturbation has no effect whatsoever on the multipartite mutual information cycle. While this can be (rightly) taken as a failure of \(I_{m}\) to detect changes to the individual qubit states, it is nevertheless notable that \(I_{m}\) is resilient to perturbations in this situation. In the language of classical Boolean networks, the cycle would be referred to as an _attractor_ since the perturbation ultimately preserved the cycle. Not all cycles (classical or quantum) are impervious to perturbations.
Having set out the basic definition of qABNs and given a few specific examples, it's now worth taking a closer look at some of the pressing questions and how they might be addressed.
## V Discussion and Conclusions
Perhaps the most pressing issue raised by this framework is the need for an alternative measure to the multipartite mutual information that can distinguish states in such a way as to maintain the ordering of the individual qubits since it is what distinguishes the states in the cycle. One potential measure for this is the quantum discord [36; 37; 38] since the discord can be asymmetric [39; 40]. However, existing generalizations of the discord to multipartite systems are symmetric with regard to the exchange of subsystems [41; 42], though the latter can account for different measurement orders. If the latter could be further extended to account for the ordering of the qubits, it might be useful, particularly since it tells us something about the correlations between the various subsystems. However, it likely would also prove to be difficult to compute since computing the discord is known to be NP-complete [43].
Another pressing issue is the nature of the mixed states that are produced in some of these networks. In the network shown in Figure 7, recall that a perturbation to \(x_{1}\) on the eleventh time step led to a change in the value of \(y_{2}\) on the fifteenth time step. One would expect that a perturbation at one time step would lead to changes at the very next time step rather than several steps later. It seems possible that the propagation of the perturbation occurs through some combination of correlations between the states as well as the nature of the mixed states. That is, the mixed states somehow mask the perturbation. A better understanding of these mixed states, particularly in conjunction with a better measure than the multipartite mutual information, might shed more light on this issue.
Given that these networks exhibit oscillatory behavior, as evidenced by their finite state cycles, it might also be of interest to explore any cycle measure such as the multipartite mutual information or discord in the frequency domain. In fact the code written for this and available on GitHub includes a Fourier transform of the multipartite mutual information to the frequency domain. The transform for the cycle shown in Figure 5 is given in Figure 9. The trouble is that it's not entirely clear how to interpret the results. One would expect that it would give the Fourier decomposition of a given cycle into component cycles, but it's not clear what those component cycles would represent. Are they, for instance, the cycles of individual subsystems within the larger network? If so, which subsystems would they represent?
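Concretely, such an analysis amounts to taking a discrete Fourier transform of the recorded \(I_{m}\) time series, e.g. with NumPy. The series below is a hypothetical placeholder with period six, not data from the networks above:

```python
import numpy as np

# I_m recorded over successive network steps (hypothetical, period-6 values)
im_series = np.array([0.0, 0.7, 1.2, 0.7, 0.0, 0.3] * 20)

freqs = np.fft.rfftfreq(len(im_series), d=1.0)            # cycles per step
amps = np.abs(np.fft.rfft(im_series - im_series.mean()))  # drop the DC term
dominant = freqs[np.argmax(amps)]
print(f"dominant period ~ {1 / dominant:.1f} steps")      # -> 6.0
```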
In addition to these particular questions, I have only explored networks of \(k=2\) functions. Additional networks should be explored in order to establish general bounds for the mean attractor length and mean attractor number for each class of network.
Nevertheless, the framework presented here captures some of the most salient features of classical Boolean networks in a quantum setting including finite state cycles and frozen cores, while also demonstrating uniquely quantum properties within a rich landscape of behavior ripe for further exploration.
Figure 9: The Fourier transform of \(I_{m}\) for the network shown in Figure 5 likely represents the Fourier decomposition of the cycle, but it’s unclear which subsystems the components would represent.
###### Acknowledgements.
I would like to thank Nana Liu, Peter Rohde, Robert Prentner, and Larissa Albantakis for helpful discussions. This work was partially supported by a grant from FQxI (FQXi-RFP-IPW-1911).
|
2303.16716 | Topological Point Cloud Clustering | We present Topological Point Cloud Clustering (TPCC), a new method to cluster
points in an arbitrary point cloud based on their contribution to global
topological features. TPCC synthesizes desirable features from spectral
clustering and topological data analysis and is based on considering the
spectral properties of a simplicial complex associated to the considered point
cloud. As it is based on considering sparse eigenvector computations, TPCC is
similarly easy to interpret and implement as spectral clustering. However, by
focusing not just on a single matrix associated to a graph created from the
point cloud data, but on a whole set of Hodge-Laplacians associated to an
appropriately constructed simplicial complex, we can leverage a far richer set
of topological features to characterize the data points within the point cloud
and benefit from the relative robustness of topological techniques against
noise. We test the performance of TPCC on both synthetic and real-world data
and compare it with classical spectral clustering. | Vincent P. Grande, Michael T. Schaub | 2023-03-29T14:15:38Z | http://arxiv.org/abs/2303.16716v2 | # Topological Point Cloud Clustering
###### Abstract.
We present Topological Point Cloud Clustering (TPCC), a new method to cluster points in an arbitrary point cloud based on their contribution to global topological features. TPCC synthesizes desirable features from spectral clustering and topological data analysis and is based on considering the spectral properties of a simplicial complex associated to the considered point cloud. As it is based on considering sparse eigenvector computations, TPCC is similarly easy to interpret and implement as spectral clustering. However, by focusing not just on a single matrix associated to a graph created from the point cloud data, but on a whole set of Hodge-Laplacians associated to an appropriately constructed simplicial complex, we can leverage a far richer set of topological features to characterize the data points within the point cloud and benefit from the relative robustness of topological techniques against noise. We test the performance of TPCC on both synthetic and real-world data and compare it with classical spectral clustering.
Hodge Laplacian, Topological Data Analysis, Spectral Clustering
a real-world object or relation. Dimensionality reduction and clustering methods are thus often used as a first step towards extracting a more comprehensible description of the data at hand, and can yield meaningful insights into previously hidden connections between the objects.
The paradigm of most classical clustering algorithms assumes that there are only a few "fundamental types" within the observed data and every data point can be assigned to one of those types. How the notion of type is interpreted varies in different approaches, but in most cases, the data is assumed to be a disjoint union of these types plus noise, and the focus is on identifying an optimal _local_ assignment of the points to the respective types (clusters). For instance, many prototypical clustering algorithms like \(k\)-means clustering [39] or mixture models like Gaussian mixtures [10] aim to group points together that are close according to some local distance measure in \(\mathbb{R}^{n}\). Other variants, like DBSCAN, aim to track dense subsets within the point cloud [14], and subspace clustering aims to find a collection of low-dimensional linear subspaces according to which the points can be grouped [8]. On the other hand, quantifying and utilizing the overall shape of the point cloud, i.e., how it is _globally_ assembled from the different clusters or how to find the best possible cluster types to build up the data is typically not a concern.
In comparison, topological data analysis (tda), which has gained significant interest in the last decades [6] emphasises an opposite perspective. Here the dataset is effectively interpreted as one complex object, a topological space, whose "shape" we try to determine by measuring certain topological features (typically homology) to understand the _global_ make-up of the entire point cloud. Such topological features are virtually omnipresent and are very flexible to describe highly complex shapes. For instance, in medicine, they can measure the topology of vascular networks and can distinguish between tumorous and healthy cells [40]. In public health studies, they have been used to analyse health care delivery network efficiency [18]. In Data Science, the Mapper algorithm uses topological features of data sets to produce a low dimensional representation of high dimensional data sets [37]. In Biochemistry, persistent homology has been used to analyse protein binding behaviour [24].
One key insight that has driven the success of the ideas of tda is that insightful higher-order information is often encoded in the topological features of (some amenable representation of) the data. However, in contrast to classical clustering, the question of how the individual data points contribute to the make-up of the overall topological object is typically not a result of these types of analysis. This can render the interpretation of the results difficult, as often the individual data points have a concrete and meaningful (often physical) interpretation and we would thus like to know how these points relate to the overall measured topological object.
The aim of this paper is to combine the advantages of these two perspectives and to establish a synthesis of traditional clustering algorithms with their easily interpretable output and the powerful notion of topological features of tda. Topological Point Cloud Clustering (TPCC) bridges this gap between the local nature of classical clustering and the global features of tda, by aggregating information gained from multiple levels of a form of generalized spectral clustering on the \(k\)-simplices.
Contributions.We develop a novel topological point cloud clustering method that clusters the points according to what topological features of the point cloud they contribute to. We prove that the clustering algorithm works on a class of synthetic point clouds with an arbitrary number of topological features across arbitrary dimensions. Finally, we verify the accuracy of topological point cloud clustering on a number of synthetic and real-world data and compare it with other approaches on data sets from the literature.
Organisation of the paper.We introduce necessary topological notions in Section 2. In Section 3, we describe the main idea of topological point cloud clustering. A theoretical result on the accuracy of the algorithm on a class of synthetic point clouds is then presented in Section 4. Finally, we show the distinguishing power of topological point cloud clustering on synthetic data, protein data and physically inspired real-world data in Section 5. In particular, we compare the results of our algorithms with other clustering methods and study the robustness of TPCC against noise on synthetic data. Certain technical aspects of our algorithm and our running example are explained in more detail in Appendix A and Appendix B.
Related Work.Our work builds upon several ideas that have been promoted in the literature. In particular, TPCC may be seen as a generalization of spectral clustering [44]. Spectral clustering starts with the construction of a graph from an observed point cloud, by identifying each data point with a vertex and connecting close-by points with an edge. Vertices are then clustered according to their spectral embedding, i.e., the dominant eigenvectors of the graph representation considered (typically in terms of an associated Laplacian matrix). However, these eigenvectors used by spectral clustering are merely related to connectivity properties (0-homology), and the produced clustering is thus restricted in terms of the topological features it considers. Topological Mode Analysis [7] clusters point clouds using persistent homology. However, because only 0-dimensional homology is considered the approach cannot cluster according to higher-order topological features like holes, loops and voids.
Our work does not just build a graph from the point cloud data but employs a simplicial complex to describe the observed point cloud (similar to how it is done in persistent homology) and embeds and clusters all \(k\)-simplices into the 0-eigenvector space of the \(k\)-th Hodge Laplacian. Related ideas of using embeddings based on the Hodge-Laplacian can be found in [9; 12; 34]: The idea of defining a harmonic embedding to extract meaningful information about a simplicial complex has been discussed in the context of trajectory classification [16; 34]. In [9], the authors study how this embedding is affected by constructing more complex manifolds from simpler building blocks. However, they do not study
how to decompose the underlying points based on this embedding. In [12], the authors develop a notion of harmonic clustering on the simplices of a simplicial complex. We use an extended version of this clustering as one step in TPCC. [25] have as well considered harmonic clustering of simplices but only use it to detect large numbers of communities in small simplicial complexes. In [31], the author uses a smoothed version of cohomology generators to quantify homology flows and build circular coordinates. From a certain point of view, this is surprisingly similar to considering zero eigenvectors of Hodge Laplace operators. Some related ideas to our work are also investigated in [41], where the authors provide a tool for detecting anomalous points of intersecting manifolds. As we will see, our algorithm is able to detect not only these points but can also provide additional information about all remaining points. There has been some work on surface and manifold detection in point clouds [22, 28]. In contrast to TPCC, these algorithms don't provide any clustering or additional information on the points and are confined to manifold-like data, which is usually assumed to be a 2-dimensional surface in 3-dimensional space.
The Hodge-Laplacian has also featured in a number of works from graph signal processing and geometric deep learning. A homology-aware simplicial neural network is constructed in [23], extending previous models [5, 33] on simplices of dimension two [2, 11]. However, these approaches focus on a scenario where the higher-order simplices have some real-world meaning, e.g., 1-simplices can be identified by streets, neural links, or pairs of co-authors. In contrast, here our primary focus is on a scenario in which we are only given a point cloud to start with and thus only the points have a real-world meaning, whereas the higher dimensional features are added via some artificial simplicial complex simply to extract further information about the shape of the data. This is the case in most standard application scenarios.
## 2. A Topological Notion of Features
A main goal of topology is to capture the essence of spaces. Topological tools try to describe globally meaningful features of spaces that are indifferent to local perturbations and deformations. This indifference of topological features to local perturbations can be a crucial asset when analysing large-scale datasets, which are often high-dimensional and noisy. To leverage these ideas, we need to explain what we mean by _topological features_ throughout the paper. A key assumption in this context is that high dimensional data sets may be seen as samplings from topological spaces -- most of the time, even low-dimensional manifolds [15]. Rather than providing a complete technical account, in the following, we try to emphasize the relevant conceptual ideas and refer the interested reader to [3, 21, 43] for further details.
_Simplicial Complexes._ The prototypical topological space is a subset of \(\mathbb{R}^{n}\) and hence continuous. Storing the infinite number of points in such a space individually is impossible. On the other hand, our observed point cloud will always be discrete and non-connected. _Simplicial complexes_ (SC) bridge this gap between the continuous spaces of topology, and the discrete nature of our point cloud. They offer a way to build topological spaces from easy-to-define building blocks. Indeed, a well-known theorem in topology [32] asserts that any topological space with the homotopy type of a CW complex can be approximated by a simplicial complex.
**Definition 2.1** (Abstract simplicial complex).: An abstract simplicial complex \(\mathcal{S}\) consists of a set of vertices \(X\) and a set of finite non-empty subsets of \(X\), called simplices \(S\), such that **(i)** \(S\) is closed under taking non-empty subsets and **(ii)** the union over all simplices, \(\bigcup_{\sigma\in S}\sigma\), is \(X\). For simplicity, we often identify \(\mathcal{S}\) with its set of simplices and use \(\mathcal{S}_{n}\) to denote the subset of simplices with \(n+1\) elements.
Intuitively, in order to build a simplicial complex \(\mathcal{S}\), we first start with a set of vertices \(V\). These are called the 0-simplices. We can then add building blocks of increasing dimension. The 1-simplices represent edges between 2 vertices, the 2-simplices are triangles between 3 vertices that are already connected by edges. An \(n\)-simplex resembles an \(n\)-dimensional polyhedron. An \(n\)-simplex \(\sigma_{n}\) connects \((n+1)\) vertices, given that they are already connected by all possible \((n-1)\)-simplices. These \((n-1)\)-simplices are then called the faces of \(\sigma_{n}\). We call two \((n-1)\)-simplices _upper-adjacent_ if they are faces of the same \(n\)-simplex. Correspondingly, we call two \(n\)-simplices _lower-adjacent_ if they share a common \((n-1)\)-simplex as a face.
_Vietoris-Rips complex._ Building the Vietoris-Rips complex is a method of turning a point cloud into a simplicial complex, approximating the topological features of the space it was sampled from. The Vietoris-Rips complex takes two arguments as input: the point cloud \(X\) and a distance threshold \(\varepsilon\). It then builds a simplicial complex \(\mathcal{S}\) by taking \(X\) as the set of vertices (and thus of 0-simplices) of \(\mathcal{S}\). Between every two vertices of distance \(d<\varepsilon\) it adds an edge, i.e., a 1-simplex. Inductively, it then adds an \(n\)-simplex for each set of \((n+1)\) vertices in \(X\) with pair-wise distance smaller than \(\varepsilon\). In practice, one often restricts this process to simplices of dimension \(n\leq N\) for some finite number \(N\).
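To make this construction concrete, the following is a minimal Python sketch (assuming numpy is available); it is a naive enumeration that mirrors the definition above rather than an efficient implementation, and the function and variable names are illustrative only.

```python
import itertools
import numpy as np

def vietoris_rips(points, eps, max_dim):
    """Vietoris-Rips complex of `points` (an (n, d) array) up to dimension `max_dim`.

    Returns a dict mapping each dimension k to the list of k-simplices,
    where a simplex is a sorted tuple of point indices.
    """
    n_pts = len(points)
    dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    close = dist < eps                       # pairwise "distance below eps" relation

    simplices = {0: [(i,) for i in range(n_pts)]}
    for dim in range(1, max_dim + 1):
        simplices[dim] = [
            s for s in itertools.combinations(range(n_pts), dim + 1)
            # add the simplex iff all of its vertices are pairwise eps-close
            if all(close[i, j] for i, j in itertools.combinations(s, 2))
        ]
    return simplices
```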
_Boundary matrices and the Hodge-Laplacians._ All topological information of a simplicial complex \(\mathcal{S}\) can be encoded in its _boundary matrices_\(\mathcal{B}_{n}\). The rows of \(\mathcal{B}_{n}\) are indexed by the \(n\)-simplices of \(\mathcal{S}\), the columns are indexed by the \((n+1)\)-simplices.
**Definition 2.2**.: Let \(\mathcal{S}=(S,X)\) be a simplicial complex and \(\preceq\) a total order on its set of vertices \(X\). For \(n\geq 1\) and \(0\leq i\leq n\), we define the \(i\)-th face map \(f_{i}^{n}:\mathcal{S}_{n}\to\mathcal{S}_{n-1}\) by
\[f_{i}^{n}:\{x_{0},x_{1},\ldots,x_{n}\}\mapsto\{x_{0},x_{1},\ldots,\tilde{x}_{i },\ldots,x_{n}\}\]
where we have that \(x_{0}\preceq x_{1}\preceq\cdots\preceq x_{n}\) and \(\tilde{x}_{i}\) denotes the omission of \(x_{i}\). Then we define the \(n\)-th boundary operator \(\mathcal{B}_{n}\colon\mathbb{R}[\mathcal{S}_{n+1}]\to\mathbb{R}[\mathcal{S}_{n}]\) by
\[\mathcal{B}_{n}\colon\sigma\mapsto\sum_{i=0}^{n+1}(-1)^{i}f_{i}^{n+1}(\sigma).\]
We identify \(\mathcal{B}_{n}\) with its matrix representation in lexicographic ordering of the simplex basis.
Note that with this definition, \(\mathcal{B}_{0}\) is simply the familiar vertex-edge-incidence matrix of the associated graph built from the \(0\)- and \(1\)-simplices of \(\mathcal{S}\).
**Definition 2.3**.: The \(n\)-th _Hodge-Laplacian_ \(L_{n}\) of \(\mathcal{S}\) is a square matrix indexed by the \(n\)-simplices of \(\mathcal{S}\):
\[L_{n}\coloneqq\mathcal{B}_{n-1}^{\top}\mathcal{B}_{n-1}+\mathcal{B}_{n} \mathcal{B}_{n}^{\top} \tag{1}\]
where we take \(\mathcal{B}_{-1}\) to be the empty matrix.
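The following sketch, continuing the representation of simplices as sorted tuples of vertex indices from the snippet above, builds the boundary matrices and Hodge-Laplacians exactly as in Definitions 2.2 and 2.3; it is illustrative only and makes no attempt at sparse-matrix efficiency.

```python
import numpy as np

def boundary_matrix(simplices, n):
    """Boundary matrix B_n: rows indexed by n-simplices, columns by (n+1)-simplices.

    Since simplices are stored with vertices in increasing order, the i-th face
    map of Definition 2.2 simply omits the i-th vertex.
    """
    row = {s: r for r, s in enumerate(simplices[n])}
    cols = simplices.get(n + 1, [])
    B = np.zeros((len(row), len(cols)))
    for c, sigma in enumerate(cols):
        for i in range(len(sigma)):                 # faces f_i^{n+1}(sigma)
            face = sigma[:i] + sigma[i + 1:]
            B[row[face], c] = (-1) ** i             # alternating signs
    return B

def hodge_laplacian(simplices, n):
    """L_n = B_{n-1}^T B_{n-1} + B_n B_n^T, with B_{-1} taken to be empty."""
    up = boundary_matrix(simplices, n)
    L = up @ up.T
    if n > 0:
        down = boundary_matrix(simplices, n - 1)
        L = L + down.T @ down
    return L
```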
The key insight about the \(\mathcal{B}_{n}\) is the following lemma:
**Lemma 2.4**.: _For a simplicial complex \(\mathcal{S}\) with boundary matrices \(\mathcal{B}_{i}\) we have that \(\mathcal{B}_{n}\circ\mathcal{B}_{n+1}=0\) for \(n\geq 0\)._
_Topological features: Homology and Betti numbers._ One of the main topological concepts is _homology_. The \(k\)-th _homology module_ \(H_{k}(X)\) of a space \(X\) encodes the presence and behaviour of \(k\)-dimensional loops, enclosing generalised \((k+1)\)-dimensional voids/cavities. The \(k\)-th _Betti number_ \(B_{k}(X)\) of \(X\) denotes the rank \(\operatorname{rk}H_{k}(X)\) of the corresponding homology module. The \(0\)-th Betti number \(B_{0}(X)\) is the number of connected components of \(X\), \(B_{1}(X)\) counts the number of loops, and \(B_{2}(X)\) counts how many \(3\)-dimensional cavities with \(2\)-dimensional borders are enclosed in \(X\), and so on.
The following connection between the homology of an SC and its Hodge Laplacian will prove essential to us:
**Lemma 2.5** ([13, 17]).: _For a simplicial complex \(\mathcal{S}\), let \(L_{n}\) be the Hodge Laplacians and \(B_{n}\) be the Betti numbers of \(\mathcal{S}\). Then we have that \(\operatorname{rk}\ker L_{n}=B_{n}\)._
The dimension of the kernel of the Hodge-Laplacian is equal to the number of orthogonal zero eigenvectors of \(L_{n}\) over \(\mathbb{R}\). Hence the Hodge-Laplacian provides a gateway for accessing topological features by computing eigenvectors.
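Numerically, Lemma 2.5 can be exploited by diagonalising \(L_{n}\) and treating eigenvalues below a small tolerance as zero. A minimal sketch, continuing the code above:

```python
import numpy as np

def zero_eigenvectors(L, tol=1e-8):
    """Orthonormal basis of the (numerical) kernel of L; its width is the Betti number."""
    eigvals, eigvecs = np.linalg.eigh(L)        # L is symmetric positive semi-definite
    kernel = eigvecs[:, eigvals < tol]          # columns spanning the zero eigenspace
    betti = kernel.shape[1]
    return kernel, betti
```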
## 3. TPCC: Algorithm and Main Ideas
In this section, we will describe Topological Point Cloud Clustering and its main ideas. A pseudocode version can be found in Algorithm 1.
_Running example._ To illustrate our approach, we use the example displayed in Figure 1 consisting of two \(4\)-dimensional tori, depicted here in their projection to \(3\)d space. We connected the tori with two lines, which are again connected by a line. Additionally, the point cloud includes two separate connected components without higher dimensional topological features. Our point cloud has thus \(11\) topological features across \(3\) dimensions. In terms of Betti numbers, we have \(B_{0}=3\), \(B_{1}=6\), and \(B_{2}=2\). For an in-depth discussion of the topology and construction of the running example, see Appendix B.
_Step 1: Approximating the space._ To characterize our point cloud in terms of topological information, we suggest using the framework of simplicial complexes and the Vietoris-Rips complex due to their straightforward definitions. The goal of this paper is to show that even with this naive approach of constructing a simplicial complex, a topologically meaningful clustering can be achieved. However, we note that TPCC is agnostic to the method by which the simplicial complex was constructed. In low dimensions, the \(\alpha\)-complex provides a computationally efficient alternative with a lower number of simplices. The assumption is that the points of the point cloud are in some general sense sampled, potentially with some additional noise, from a geometrical space. We would like to retrieve the topology of this original geometrical space from the information provided by the sampled points. Hence, following common ideas within TDA, we construct a computationally accessible topological space in terms of a simplicial complex on top of the point cloud, approximating the ground-truth space. We denote the simplicial complex associated to our toy point cloud by \(\mathcal{S}\). We note that the TPCC framework works both with simplicial and with cellular complexes. For simplicity, however, we stick with simplicial complexes throughout this paper.
_Step 2A: Extracting topological features._ Having built the simplicial complex \(\mathcal{S}\), we need to extract its topological features. However, standard measures from topological data analysis only provide global topological features: for instance, Betti numbers are global features of a space, and persistence landscapes measure all features at once [4]. In contrast, we are interested in how individual simplices and points are related to the topological features of the space. It is possible to extract a homology generator for a homology class in persistent homology [29]. This approach is, however, not suitable for us, because the choice of a generator is arbitrary, and only the contribution of a small number of simplices can be considered.
TPCC utilises a connection between the simplicial Hodge-Laplace operators and the topology of the underlying SC. The dimension of the zero eigenspace of the \(k\)-th Hodge-Laplacian \(L_{k}\) is equal to the \(k\)-th Betti number \(B_{k}\) [13, 17]. Furthermore, the rows and columns of the Hodge-Laplacian \(L_{k}\) are indexed by the \(k\)-simplices of \(\mathcal{S}\) and describe how simplices relate to each other, and in particular how they contribute to homology in terms of the null space of \(L_{k}\).
Let us now consider a concrete loop/boundary \(\mathcal{F}\) of a \((k+1)\)-dimensional void. We can then pick a collection \(S\) of edges/\(k\)-simplices that represents this loop/boundary. By assigning each simplex in \(S\) the entry \(\pm 1\) based on the orientation of the simplex, and every other simplex the entry \(0\), we obtain a corresponding vector \(e_{S}\). The Hodge Laplace operator \(L_{k}=\mathcal{B}_{k-1}^{\top}\mathcal{B}_{k-1}+\mathcal{B}_{k}\mathcal{B}_{k}^{\top}\) consists of two parts. The kernel of the down-part, \(\mathcal{B}_{k-1}^{\top}\mathcal{B}_{k-1}\), is spanned by representations of the boundaries of \((k+1)\)-dimensional voids. Hence, \(e_{S}\) lies in this kernel: \(\mathcal{B}_{k-1}^{\top}\mathcal{B}_{k-1}e_{S}=0\). The kernel of the up-part of the Hodge Laplacian, \(\mathcal{B}_{k}\mathcal{B}_{k}^{\top}\), is spanned by vectors that represent smooth flows along the \(k\)-simplices. Thus, by smoothing along the \(k\)-simplices, one can turn \(e_{S}\) into an eigenvector \(\widehat{e}_{S}\) of the entire Hodge Laplace operator \(L_{k}\):
\[L_{k}\widehat{e}_{S}=\mathcal{B}_{k-1}^{\top}\mathcal{B}_{k-1}\widehat{e}_{S }+\mathcal{B}_{k}\mathcal{B}_{k}^{\top}\widehat{e}_{S}=0. \tag{2}\]
We call \(\widehat{e}_{\mathcal{F}}:=\widehat{e}_{S}\) the _characteristic eigenvector_ associated to the loop/void \(\mathcal{F}\).
For simplicity, let us first consider the case where the \(k\)-th Betti number \(B_{k}(\mathcal{S})\) is \(1\). Then the zero-eigenvector \(v_{0}\) of \(L_{k}\) has one entry for every \(k\)-simplex and is the characteristic eigenvector \(\widehat{e}_{\mathcal{F}}\) for the single topological feature \(\mathcal{F}\) in dimension \(k\). The entries of \(v_{0}\) measure the contribution of the corresponding simplices to \(\mathcal{F}\). Intuitively, we can visualise the homology 'flowing' through the simplices of the simplicial complex. The entries of the eigenvector correspond to the intensity of the flow in the different \(k\)-simplices. Because of the way we constructed \(\widehat{e}_{\mathcal{F}}\), the homology flow is then concentrated along the \(k\)-dimensional boundary of a hole/void in the space. In the \(1\)-dimensional setting, this corresponds to harmonic flows along edges around the holes of an SC [35]. The case of a Betti number larger than one, \(B_{k}>1\), will be discussed in more detail in the following paragraph.
_Step 2B: Clustering the \(n\)-simplices._ Extending ideas from [12, 34] we use the obtained coordinates for each simplex to cluster the simplices. In the case where \(L_{k}\) has a single \(0\)-eigenvalue, we can easily cluster the simplices by simply looking at the entries of the \(0\)-eigenvector \(e\): We can ignore the sign of the entry \(e_{\sigma}\) of \(e\) corresponding to a simplex \(\sigma\) because this only reflects whether the arbitrarily chosen orientation of \(\sigma\) aligns with the direction of the 'homology flow'. Then, we assign all simplices \(\sigma\) with absolute value of \(e_{\sigma}\) above a certain threshold \(|e_{\sigma}|>\varepsilon\) to the cluster of homologically significant simplices. The remaining simplices are assigned to a separate cluster.
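A minimal sketch of this thresholding step for the \(B_{k}=1\) case (the threshold is a user-chosen parameter); the multi-feature case is handled next.

```python
import numpy as np

def cluster_single_feature(e, threshold):
    """B_k = 1 case: split k-simplices into 'carries homology flow' (1) vs. 'trivial' (0).

    The sign of e_sigma only reflects the arbitrary orientation of sigma,
    so only the absolute value is thresholded.
    """
    return (np.abs(e) > threshold).astype(int)
```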
In the case of multiple boundaries of voids of the same dimension, i.e. \(B_{k}>1\), each boundary \(\mathcal{F}\) again corresponds to a 'homology flow' with an associated characteristic eigenvector \(\widehat{e}_{\mathcal{F}_{i}}\) of \(L_{k}\). The \(\widehat{e}_{\mathcal{F}_{i}}\) span the zero-eigenspace \(E_{k}\) of \(L_{k}\). However, an eigenvector solver will yield an arbitrary orthonormal basis \(e_{1},\ldots,e_{B_{k}}\) of \(E_{k}\) which is only unique up to unitary transformations. For a \(k\)-simplex \(\sigma\in\mathcal{S}_{k}\), let \(e_{i}(\sigma)\) denote the coordinate associated to \(\sigma\) of the \(i\)-th basis vector \(e_{i}\) of \(E_{k}\) obtained by the eigenvector solver. Now we denote by \(\iota\colon\mathcal{S}_{k}\to\mathbb{R}^{B_{k}}\),
\[\iota\colon\sigma\mapsto\big{(}e_{1}(\sigma),e_{2}(\sigma),\ldots,e_{B_{k}}( \sigma)\big{)}\in\mathbb{R}^{B_{k}}\]
the embedding of the simplices into the \(k\)-th _feature space_\(\mathcal{X}_{k}\coloneqq\mathbb{R}^{B_{k}}\). Note that because we could have started with any orthonormal basis of \(E_{k}\) the feature space is only defined up to arbitrary unitary transformations. The points of the feature space \(\mathcal{X}_{k}\) represent different linear combinations of the basis vectors of the zero eigenspace of \(L_{k}\). They also represent linear combinations of the \(\widehat{e}_{\mathcal{F}_{i}}\), and hence intuitively of the topological features.
In the simplest case, the \(\widehat{e}_{\mathcal{F}_{i}}\) are orthogonal to each other. Then they represent orthogonal linear combinations of the original basis of \(E_{k}\) in the feature space \(\mathcal{X}_{k}\). Hence the 'natural' \(\widehat{e}_{\mathcal{F}_{i}}\)-basis can be recovered by subspace clustering the \(k\)-simplices on the feature space \(\mathcal{X}_{k}\), as depicted in the top of Figure 1. For computational reasons, we subsample the simplices used for the subspace clustering. The remaining simplices are then classified using a \(k\)-nearest-neighbour classifier on the feature space \(\mathcal{X}_{k}\). See Section 3 and Appendix C for a discussion of more complicated special cases.
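The following sketch illustrates this embedding-and-clustering step. It assumes scikit-learn is available and uses direction-based \(k\)-means as a simplified stand-in for the subspace clustering described above, which suffices when the characteristic eigenvectors are (close to) orthogonal; the parameters `n_clusters`, `trivial_tol`, and `n_sample` are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier

def cluster_simplices(kernel, n_clusters, trivial_tol=1e-4, n_sample=2000, seed=0):
    """Cluster k-simplices via their embedding iota into the feature space R^{B_k}.

    `kernel` has one row per k-simplex (its coordinates in the zero eigenspace of L_k).
    Simplices with a (near-)zero embedding get cluster 0; the remaining simplices are
    grouped by the direction of their embedding vector, with a nearest-neighbour
    classifier extending the labels from a subsample to all simplices.
    """
    norms = np.linalg.norm(kernel, axis=1)
    labels = np.zeros(len(kernel), dtype=int)
    sig = norms > trivial_tol                       # homologically significant simplices
    if not sig.any():
        return labels
    dirs = kernel[sig] / norms[sig, None]
    dirs = dirs * np.sign(dirs[:, :1] + 1e-12)      # identify v with -v (orientation)
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(dirs), size=min(n_sample, len(dirs)), replace=False)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(dirs[idx])
    knn = KNeighborsClassifier(n_neighbors=1).fit(dirs[idx], km.labels_)
    labels[sig] = knn.predict(dirs) + 1             # significant clusters are 1..n_clusters
    return labels
```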
_Step 3A: Aggregating the information to the point level._ Finally, we can try to relate the information collected so far back to the points. For every point \(x\) and every dimension \(d\), we aggregate the cluster ids of the \(d\)-simplices which contain \(x\). We call the collected information the _topological signature_ of \(x\).
**Definition 3.1** (Topological Signature).: Let \(X\) be a point cloud with associated simplicial complex \(\mathcal{S}\). For a simplex \(\sigma\in\mathcal{S}\), we denote its cluster assignment from the previous step of TPCC by \(C(\sigma)\). Then, the _topological signature_ \(\tau(x)\) of a point \(x\in X\) is the multi-set
\[\tau(x)\coloneqq\{\{C(\sigma):\sigma\in\mathcal{S},x\in\sigma\}\}.\]
After normalising for each \(i\) by the number of \(i\)-simplices containing the point, topologically similar points will have a similar topological signature. Figure 1, Step 3 illustrates how the topological signature is calculated. In Figure 2 we show how the different features of the topological signature
Figure 2. Above we depict the heatmaps for all \(16\) topological features encoded in the topological signature across \(3\) dimensions of our toy example. Note that some of the features are redundant, as both edges and faces can measure membership of a torus.
highlight topologically different areas of the point cloud. Interestingly, we can even retrieve information on the gluing points between two topologically different parts. In Figure 3, the 'gluing points' between the tori and the lines receive their own cluster. This is because roughly half of the simplices adjacent to the gluing points receive their topological clustering information from the torus and the other half from the adjacent lines. Hence the gluing points are characterised by a mixture of different topological signatures.
_Step 3B: Computing the final clustering._ If we apply \(k\)-means or spectral clustering to a normalised form of the topological signatures of the points of our toy example, we arrive at the clustering of Figure 3.
In comparison to standard clustering methods, TPCC can assign the same cluster to similar sets of points consisting of multiple connected components if they share the same topological features. In Figure 3, the two dark blue lines are assigned to the same cluster, because they both lie on the same loop and have no additional topological feature. This highlights the ability of TPCC to take higher-dimensional information into consideration, exceeding the results obtainable by proximity-based information.
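A minimal sketch of Steps 3A and 3B, continuing the snippets above; `cluster_ids` is assumed to map each dimension to the per-simplex cluster ids computed in Step 2B, and the number of final clusters is a user choice.

```python
import numpy as np
from sklearn.cluster import KMeans

def topological_signatures(simplices, cluster_ids, n_points):
    """Normalised topological signature (Definition 3.1) of every point.

    `cluster_ids[d]` assigns a cluster id to every d-simplex; for each point and
    dimension we count how often each id occurs among the d-simplices containing
    the point, normalised by the number of such simplices.
    """
    blocks = []
    for d, ids in sorted(cluster_ids.items()):
        counts = np.zeros((n_points, int(ids.max()) + 1))
        per_point = np.zeros(n_points)
        for simplex, cid in zip(simplices[d], ids):
            for v in simplex:
                counts[v, cid] += 1
                per_point[v] += 1
        blocks.append(counts / np.maximum(per_point, 1)[:, None])
    return np.hstack(blocks)

# Step 3B: ordinary k-means on the signatures gives the final point clustering, e.g.
#   signatures = topological_signatures(simplices, cluster_ids, n_points)
#   point_labels = KMeans(n_clusters=10, n_init=10).fit_predict(signatures)
```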
_Choice of parameters._ TPCC needs two main parameters, \(\varepsilon\) and \(d\). For the choice of the maximum homology degree \(d\) to be considered there are three heuristics listed in decreasing importance:
1. When working with real-world data, we usually know which kind of topological features we are interested in, which will then determine \(d\). E.g., if we are interested in the loops of protein chains, we only need 1-dimensional homology and thus choose \(d=1\). When interested in voids and cavities in 3d tissue data, we need 2-dimensional homology and thus choose \(d=2\), and so on.
2. There are no closed \(n\)-dimensional submanifolds of \(\mathbb{R}^{n}\). This means that if the point cloud lives in an ambient space of low dimension \(n\), the maximum homological features of interest will live in dimension \(n-1\) and hence we can choose \(d=n-1\).
3. In practice, data sets rarely have non-vanishing highly persistent homology in degree above 2 and considering the dimensions 0-2 usually suffices. Otherwise, one can calculate persistent homology up to the maximum computationally feasible degree to identify dimensions with sufficiently persistent homology classes, and then take \(d\) as the maximum of these dimensions.
Picking the correct value of \(\varepsilon\) means choosing the correct scale. For the experiments in Figure 7, we have implemented a heuristic which computes the persistence diagram of the point cloud, and then picks the \(\varepsilon\) maximizing the number of topological features with high persistence and minimizing the number of features with low persistence for this value. As can be seen, this method performs comparatively well for considerable noise.
_Technical considerations I: Linear combinations of features._ In practice, topological features of the same dimension are not always separated in space. A bubble of soap may consist of two individual compartments divided by a thin layer of soap. This middle layer then contributes to the boundaries of the two voids, i.e., to two topological features of dimension 2. How is this reflected in the \(\widehat{e}_{\mathcal{F}_{i}}\)?
This time, the characteristic eigenvectors \(\widehat{e}_{\mathcal{F}_{i}}\) corresponding to boundaries \(\mathcal{F}_{i}\) of voids of the same dimension are not orthogonal anymore. The supports of the \(\widehat{e}_{\mathcal{F}_{i}}\) overlap in the same simplices in which the corresponding boundaries \(\mathcal{F}_{i}\) overlap. In the feature space \(\mathcal{X}_{1}\) of the example in Figure 4, this is represented by the red, the green and the orange line having an approximate angle of \(60^{\circ}\) to each other. The left loop is represented by an eigenvector \(\widehat{e}_{\mathcal{F}_{1}}\) with support on the green and orange edges, and vice-versa the right loop by \(\widehat{e}_{\mathcal{F}_{2}}\) with support on the green and red edges. The homology flow on
Figure 4: The circle is divided into two parts by a vertical line. This gives the corresponding SC two generating loops in dimension \(1\), corresponding to a 2-dimensional \(0\)-eigenspace of the Hodge-Laplacian \(L_{1}\) and a 2-dimensional \(1^{\text{st}}\) feature space \(\mathcal{X}_{1}\). However, now there are three linear subspaces corresponding to linear combinations of the two generating loops. TPCC is able to detect three different clusters of topologically significant edges.
Figure 3: The final clustering obtained with TPCC. There are \(10\) clusters in total. Two clusters identify the two tori (turquoise and ochre), two disconnected cubes (red and lime), dark blue and salmon for the connecting lines of the tori to the middle, azure for the middle line, yellow for the intersection of the lines, and fuchsia and brown for the gluing points of the points to the tori. Note that there are virtually no outliers.
the middle line on the green edges is a linear combination of the homology flows of both generating loops.
## 4. Theoretical guarantees for synthetic data
In this section, we show that the algorithm works on a class of synthetic point clouds with an arbitrary number of topological features in arbitrary dimensions. The proof utilises the core ideas of the previous section. An easy way to realise a flexible class of topological spaces is to work with the wedge sum operator \(\vee\), which glues two spaces together at a fixed base point. For \(k>0\) and two topological spaces \(X\) and \(Y\) we have that \(B_{k}(X\lor Y)=B_{k}(X)+B_{k}(Y)\). Hence the wedge sum combines topological features.
**Theorem 4.1**.: _Let \(\mathbf{P}\subset\mathbb{R}^{n}\) be a finite point cloud in \(\mathbb{R}^{n}\) that is sampled from a space \(X\). Furthermore, let \(X=\bigvee_{i\in\mathcal{I}}\mathcal{S}_{i}^{d_{i}}\) be a bouquet of spheres with finite indexing set \(\mathcal{I}\), \(|\mathcal{I}|>1\), and \(0<d_{i}\in\mathbb{N}\). We assume that the geometric realisation of the simplicial approximation \(\mathcal{S}\) is homotopy-equivalent to \(X\), and furthermore that the simplicial subcomplexes for the \(\mathcal{S}_{i}^{d_{i}}\) only overlap in the base-point, and that each sphere \(\mathcal{S}_{i}^{d_{i}}\) is triangulated by \(d_{i}\)-simplices._
_Then topological point cloud clustering recovers the different spheres and the base point accurately._
Proof.: The \(k\)-th Betti number of \(\mathcal{S}\) is equal to the number of \(i\in\mathcal{I}\) with \(d_{i}=k\) (Cor. 2.25 [21]). Because spheres are orientable, we can simply assume that the \(d_{i}\)-simplices in \(\mathcal{S}_{i}^{d_{i}}\) are oriented such that each two adjacent \(d_{i}\)-simplices induce opposite orientations on the shared \((d_{i}-1)\)-simplex. We now claim that for each \(i\in\mathcal{I}\) the indicator vector \(e_{i}\) on the \(d_{i}\)-simplices in \(\mathcal{S}_{i}^{d_{i}}\) is an eigenvector of the \(d_{i}\)-th Hodge Laplacian \(L_{d_{i}}\) of \(\mathcal{S}\). Because of our assumption on \(\mathcal{S}\), there are no \((d_{i}+1)\)-simplices upper-adjacent to the \(d_{i}\)-simplices of \(\mathcal{S}_{i}^{d_{i}}\). Hence, we obtain the first half of our claim, \(\mathcal{B}_{d_{i}}^{\top}e_{i}=0\). We have assumed that \(\mathcal{S}\) was constructed in such a way that each \((d_{i}-1)\)-simplex \(\sigma_{d_{i}-1}\) of \(\mathcal{S}_{i}^{d_{i}}\) has exactly two upper-adjacent neighbours \(\sigma_{d_{i}}^{1}\) and \(\sigma_{d_{i}}^{2}\). Because \(\sigma_{d_{i}}^{1}\) and \(\sigma_{d_{i}}^{2}\) induce the opposite orientation on \(\sigma_{d_{i}-1}\), the corresponding entries of the \((d_{i}-1)\)-th boundary matrix \(\mathcal{B}_{d_{i}-1}\) of \(\mathcal{S}\) are \(1\) and \(-1\). Thus we also have \(\mathcal{B}_{d_{i}-1}e_{i}=0\) and finally \(L_{d_{i}}e_{i}=\mathcal{B}_{d_{i}}\mathcal{B}_{d_{i}}^{\top}e_{i}+\mathcal{B}_{d_{i}-1}^{\top}\mathcal{B}_{d_{i}-1}e_{i}=0\). This proves the claim.
The eigenvectors \(e_{i}\) of the same dimension are orthogonal and match in number with the corresponding Betti number of \(\mathcal{S}\). Hence the \(e_{i}\) span the eigenspaces of the Hodge Laplace operators of \(\mathcal{S}\). For all \(i\in\mathcal{I}\) the entries of the \(d_{i}\)-simplices in \(\mathcal{S}_{i}^{d_{i}}\) in the matching zero eigenvectors \(e_{j}\) are \(1\) for \(j=i\), and \(0\) else. All other \(d\)-simplices for \(d>0\) have trivial eigenvector entries. Thus, subspace clustering recovers the top-level simplices in each of the spheres and assigns every other simplex to the trivial homology cluster. The topological signature of the points in the sphere \(\mathcal{S}_{i}^{d_{i}}\) in dimension \(d_{i}\) will then feature a characteristic cluster of \((d_{i})\)-simplices and a trivial signature across the other dimensions. Finally, the topological signatures of the base point will feature all characteristic clusters. Hence \(k\)-means on the topological signatures can distinguish the points on the different spheres and the base point.
## 5. Numerical experiments
_Comparison with \(k\)-means and spectral clustering._ We validated the effectiveness of TPCC on a number of synthetic examples. In Figure 5, we have clustered points sampled randomly from two spheres and two circles. The algorithm recovers the spheres and circles. Normal (zero-dimensional) Spectral Clustering and \(k\)-means fail in choosing the right notion of feature, as the figure shows. For a visual comparison of TPCC with other clustering algorithms on various datasets see Figure 9 in the appendix.
_Comparison to Manifold Anomaly Detection._ In [41], the authors propose a topological method for detecting anomalous points on manifolds. In Figure 6 we use TPCC on the same datasets [1, 28] to show that our approach is also able to detect the anomalous points. Additionally, our method can classify the remaining points based on topological features.
_Experiments with Synthetic Data._ As we make use of topological features, TPCC is robust against noise by design. We compare the accuracy of the clustering algorithm against \(k\)-means and spectral clustering on a point cloud consisting of a sphere, a circle, and a connecting line in Figure 7.
On low to medium noise levels, TPCC significantly outperforms all other clustering methods. On higher noise levels, the topological features of the point cloud degenerate to features
Figure 5. TPCC is the only approach correctly distinguishing the spheres and circles.
Figure 6. _Left:_ Energy landscape of cyclo-octane clustered by topological point cloud clustering. We have four different clusters, with the green one being the anomalous points. _Right:_ Clustering of the Henneberg surface.
that can be measured by ordinary spectral clustering. Then, TPCC and spectral clustering achieve similar accuracy scores. In Figure 7 we see that already a noise setting of noise \(=0.3\) distorts the point cloud significantly, yet TPCC still performs well.
_Proteins._ Proteins are molecules that consist of long strings of amino acid residues. They play an integral role in almost every cellular process, from metabolism and DNA replication to intra-cell logistics. Their diverse functions are hugely influenced by their complex 3d geometry, which arises by folding the chains of amino acid residues. The available data on protein sequences and 3d structure has increased dramatically over the last decades. However, functional annotations of the sequences, providing a gateway for understanding protein behaviour, are missing for most of the proteins. It has been shown (Srivastava et al., 2017) that harnessing structural information on the atoms can significantly increase the prediction accuracy of ML pipelines for functional annotations. Thus, being able to extract topological information on individual atoms of proteins is very desirable for applications in drug discovery, medicine, and biology.
We tested TPCC on NALCN channelosome, a protein found in the membranes of human neurons (Srivastava et al., 2017; Wang et al., 2018). The NALCN channel regulates the membrane potential, enabling neurons to modulate respiration, circadian rhythm, locomotion and pain sensitivity. It has a complex topological structure enclosing 3 holes that are linked to its function as a membrane protein. The core idea is that when biological and topological roles correlate, TPCC offers a way to better understand _both_.
## 6. Discussion
_Limitations._ TPCC can only cluster according to features that are visible to homology, e.g., connected components, loops,
\begin{table}
\begin{tabular}{l r r r r r r r r r} \hline \hline & TPCC & SpC & \(k\)-means & OPTICS & DBSCAN & AC & Mean Shift & AP & ToMATo \\ \hline
2 spheres, 2 circles (Figure 5) & **0.97** & 0.70 & 0.48 & 0.01 & 0.00 & 0.66 & 0.84 & 0.01 & 0.90 \\ Toy example (Figure 3) & **0.98** & 0.33 & 0.28 & 0.19 & 0.11 & 0.33 & 0.81 & 0.00 & 0.91 \\ Circle with line (Figure 4) & **0.85** & 0.23 & 0.16 & 0.11 & 0.00 & 0.25 & 0.00 & 0.23 & 0.09 \\ Sphere in circle, noise \(=0\) (Figure 7 top) & **1.00** & 0.34 & 0.02 & 0.19 & 0.00 & 0.29 & 0.00 & 0.12 & 0.06 \\ Sphere in circle, noise \(=0.3\) (Figure 7 bottom) & **0.53** & 0.28 & 0.01 & 0.22 & 0.30 & 0.27 & 0.00 & 0.13 & 0.46 \\ Energy landscape (Figure 6 left) & **0.88** & 0.01 & 0.01 & 0.00 & 0.00 & 0.13 & 0.00 & 0.01 & \(-0.02\) \\ \hline \hline \end{tabular}
\end{table}
Table 4. Quantitative performance comparison of TPCC with popular clustering algorithms. We show the Adjusted Rand Index of TPCC, Spectral Clustering (SpC), \(k\)-means, OPTICS, DBSCAN, Agglomerative Clustering (AC), Mean Shift Clustering, Affinity Propagation (AP), and Topological Mode Analysis Tool clustering (ToMATo) evaluated on six data sets. On every data set TPCC performs best, indicating that the other algorithms are not designed for clustering points according to higher-order topological features.
Figure 8. Clustered atoms of NALCN channelosome. Points that border one of the holes are coloured red, blue, and green. The points without contribution to a loop are marked in yellow.
holes, and cavities. For example, TPCC cannot distinguish differently curved parts of lines or general manifolds. TPCC constructs a simplicial complex (SC) to extract topological information and thus needs to pick a single scale for every SC. If the topological information of the point cloud lies at different scales, TPCC thus needs to do multiple feature aggregation steps for SCs of different scales. Finally, the points can be clustered according to the combined features. However, for each different scale the entire zero-eigenspace of the Hodge-Laplacian needs to be considered. Future work will focus on a method to cluster points based on the most persistent topological features across all scales.
Persistent homology and the calculation of the zero eigenvectors of the Hodge Laplacian are computationally expensive, and thus running TPCC directly is not feasible on large data sets. However, usually the topological information can already be encoded in small subsets of the entire point cloud. In Table 2 we show that TPCC in combination with landmark sampling scales well for larger data sets while achieving high clustering performance. In addition, we believe that the main advantage of TPCC is that it can do something no other existing point cloud clustering algorithm can do or was designed for, namely clustering points according to higher-order topological features. Future work will focus on additionally improving efficiency by removing the need to compute the entire zero-eigenspace of the Hodge-Laplace operators.
Because TPCC uses persistent homology, it is robust against small perturbations by design. In Figure 7 we analysed its clustering performance under varying levels of noise. However, with high noise levels, topological features vanish from persistent homology and thus TPCC cannot detect them anymore. In future work, we plan to take near-zero eigenvectors of the Hodge Laplacian into account, which represent topological features contaminated by noise. This is similar to Spectral Clustering, where the near-zero eigenvectors represent almost-disconnected components of the graph.
_Conclusion._ TPCC is a novel clustering algorithm respecting topological features of the point cloud. We have shown that it performs well both on synthetic data and real-world data and provided certain theoretical guarantees for its accuracy. TPCC produces meaningful clusterings across various levels of noise, outperforming \(k\)-means and classical spectral clustering on several tasks and incorporating higher-order information.
Due to its theoretical flexibility, TPCC can be built on top of various simplicial or cellular representations of point clouds. Interesting future research might explore combinations with the Mapper algorithm or cellular complexes. In particular, applications in large-scale analysis of protein data constitute a possible next step for TPCC. TPCC or one of its intermediate steps also has potential as a pre-processing step for deep learning techniques, making topological information about points accessible to ML pipelines.
|
2307.15930 | Tailoring Stateless Model Checking for Event-Driven Multi-Threaded
Programs | Event-driven multi-threaded programming is an important idiom for structuring
concurrent computations. Stateless Model Checking (SMC) is an effective
verification technique for multi-threaded programs, especially when coupled
with Dynamic Partial Order Reduction (DPOR). Existing SMC techniques are often
ineffective in handling event-driven programs, since they will typically
explore all possible orderings of event processing, even when events do not
conflict. We present Event-DPOR , a DPOR algorithm tailored to event-driven
multi-threaded programs. It is based on Optimal-DPOR, an optimal DPOR algorithm
for multi-threaded programs; we show how it can be extended for event-driven
programs. We prove correctness of Event-DPOR for all programs, and optimality
for a large subclass. One complication is that an operation in Event-DPOR,
which checks for redundancy of new executions, is NP-hard, as we show in this
paper; we address this by a sequence of inexpensive (but incomplete) tests
which check for redundancy efficiently. Our implementation and experimental
evaluation show that, in comparison with other tools in which handler threads
are simulated using locks, Event-DPOR can be exponentially faster than other
state-of-the-art DPOR algorithms on a variety of programs and manages to
completely avoid unnecessary exploration of executions. | Parosh Aziz Abdulla, Mohamed Faouzi Atig, Frederik Meyer Bønneland, Sarbojit Das, Bengt Jonsson, Magnus Lång, Konstantinos Sagonas | 2023-07-29T08:43:49Z | http://arxiv.org/abs/2307.15930v1 | # Tailoring Stateless Model Checking for Event-Driven Multi-Threaded Programs
###### Abstract
Event-driven multi-threaded programming is an important idiom for structuring concurrent computations. Stateless Model Checking (SMC) is an effective verification technique for multi-threaded programs, especially when coupled with Dynamic Partial Order Reduction (DPOR). Existing SMC techniques are often ineffective in handling event-driven programs, since they will typically explore all possible orderings of event processing, even when events do not conflict. We present Event-DPOR, a DPOR algorithm tailored to event-driven multi-threaded programs. It is based on Optimal-DPOR, an optimal DPOR algorithm for multi-threaded programs; we show how it can be extended for event-driven programs. We prove correctness of Event-DPOR for all programs, and optimality for a large subclass. One complication is that an operation in Event-DPOR, which checks for redundancy of new executions, is NP-hard, as we show in this paper; we address this by a sequence of inexpensive (but incomplete) tests which check for redundancy efficiently. Our implementation and experimental evaluation show that, in comparison with other tools in which handler threads are simulated using locks, Event-DPOR can be exponentially faster than other state-of-the-art DPOR algorithms on a variety of programs and manages to completely avoid unnecessary exploration of executions.
## 1 Introduction
Event-driven multi-threaded programming is an important idiom for structuring concurrent computations in distributed message-passing applications, file systems [31], high-performance servers [10], systems programming [11], smartphone applications [33], and many other domains. In this idiom, multiple threads execute concurrently and can communicate through shared objects. In addition, some threads, called _handler threads_, have an associated event pool to which all threads can post events. Each handler thread executes an event processing loop in which events from its pool are processed sequentially, one after the other, interleaved with the execution of other threads. An event is processed by invoking an appropriate handler, which can be, e.g., a callback function.
Testing and verification of event-driven multi-threaded programming faces all the usual challenges of testing and verification for multi-threaded programs, and
furthermore suffers from additional complexity, since the order of event execution is determined dynamically and non-deterministically. A successful and fully automatic technique for finding concurrency bugs in multithreaded programs (i.e., defects that arise only under some thread schedulings) and for verifying their absence is _stateless model checking_ (SMC) [15]. Given a terminating program and fixed input data, SMC systematically explores the set of all thread schedulings that are possible during program runs. A special runtime scheduler drives the SMC exploration by making decisions on scheduling whenever such choices may affect the interaction between threads. SMC has been implemented in many tools (e.g., VeriSoft [16], Chess[34], Concuerror [9], Nidhugg[2], rInspect [42], CDSChecker[35], RCMC [22], and GenMC [26]), and successfully applied to realistic programs (e.g., [17] and [25]). To reduce the number of explored executions, SMC tools typically employ _dynamic partial order reduction_ (DPOR) [12, 1]. DPOR defines an equivalence relation on executions, which preserves relevant correctness properties, such as reachability of local states and assertion violations, and explores at least one execution in each equivalence class.
Existing DPOR techniques for multi-threaded programs lack effectiveness in handling the complications brought by event-driven programming, as has been observed by e.g., Jensen et al. [20] and Maiya et al. [28]. A naive way to handle such a program is to consider all pairs of events as conflicting, implying that different orderings of event executions by a handler thread will be considered inequivalent. A major drawback is then that a DPOR algorithm cannot exploit the fact that different orderings of event executions by a single handler thread can be considered equivalent in the case that events are non-conflicting. In this way, a program in which \(n\) non-conflicting events are posted to a handler thread by \(n\) concurrent threads can give rise to \(n!\) explorations by a standard DPOR algorithm, whereas all of them are in fact equivalent. On the other hand, some events may be conflicting, so a DPOR algorithm for event-driven programs should explore only the necessary inequivalent orderings between conflicting events. This can be achieved by defining an equivalence on executions, which respects only the ordering of conflicting accesses to shared variables, irrespective of the order in which events are executed. For plain multi-threaded programs, this equivalence is the basis for several effective DPOR algorithms [12, 1]. The challenge is to develop an effective DPOR algorithm also for event-driven programs.
In this paper, we present Event-DPOR, a DPOR algorithm for event-driven multi-threaded programs where handlers can execute events from their event pool in arbitrary order (i.e., the event pool is viewed as a multiset). The multiset semantics is used in many works [21, 37, 20], often with the significant restriction that there is only one handler thread; we consider the more general situation with an arbitrary number of handler threads. Event-DPOR is based on Optimal-DPOR [1, 3], a DPOR algorithm for multi-threaded programs. The basic working mode of Optimal-DPOR is similar to several other DPOR algorithms: Given a terminating program, one of its executions is explored and then analyzed to construct initial fragments of new executions; each fragment that is not redundant (i.e., which can be extended to an execution that is not equivalent
to a previously explored execution), is subsequently extended to a maximal execution, which is analyzed to construct initial fragments of new executions, and so on. Event-DPOR employs the same basic mode of operation as Optimal-DPOR, but must be extended to cope with the event-driven execution model. One complication is that the constructed initial fragments must satisfy the constraints imposed by the fact that event executions on a handler are serialized; this may necessitate reordering of several events when constructing new executions from an already explored one. Another complication is that the check whether a new fragment is redundant is NP-hard in the event-driven setting, as we prove in this paper. We alleviate this by defining a sequence of inexpensive but incomplete redundancy checks, using a complete decision procedure only as a last resort.
We prove that the Event-DPOR algorithm is _correct_ (explores at least one execution in each equivalence class) for event-driven programs. We also prove that it is _optimal_ (explores exactly one execution in each equivalence class) for the class of so-called _non-branching_ programs, in which the possible sequences of shared variable accesses that can be performed during execution of an event, whose handler also executes other events, does not depend on how its execution is interleaved with other threads.
We have implemented Event-DPOR in an extension of the Nidhugg tool [2]. Our experimental evaluation shows that, when compared with other SMC tools in which event handlers are simulated using locks, Event-DPOR incurs only a moderate constant overhead, but can be exponentially faster than other state-of-the-art DPOR algorithms. The same evaluation also shows that, unlike other algorithms that can achieve analogous reduction, Event-DPOR manages to completely avoid unnecessary exploration of executions that cannot be serialized. Moreover, in all the programs we tried, also those that are not non-branching, Event-DPOR explored the optimal number of traces, suggesting that Event-DPOR is optimal not only for non-branching programs but also for a good number of branching ones. Also, our sequence of inexpensive checks for redundancy was sufficient in all tried programs, i.e., we never had to invoke the decision procedure for this NP-hard problem.
## 2 Related Work
Stateless model checking has been implemented in many tools for analysis of multithreaded programs (e.g., [16, 34, 9, 2, 42, 35, 22, 26]). It often employs DPOR, introduced by Flanagan and Godefroid [12] to reduce the number of schedulings that must be explored. Further developments of DPOR reduce this number further, by being optimal (i.e., exploring only one scheduling in each equivalence class) [1, 3, 6, 23] or by weakening the equivalence [6, 5, 8, 4].
DPOR has been adapted to event-driven multi-threaded programs. Jensen et al. [20] consider an execution model in which events are processed in arbitrary order (multiset semantics) and apply it to JavaScript programs. Maiya et al. [28] consider a model where events are processed in the order they are received (FIFO semantics), and develop a tool, EM-Explorer, for analyzing Android applications
which, given a particular sequence of event executions, produces a set of reorderings of its events which reverse its conflicts. The above works are based on the algorithm of Flanagan and Godefroid [12], implying that they do not take advantage of subsequent improvements in DPOR algorithms [1, 3, 23], nor do they employ techniques such as sleep sets for avoiding redundant explorations. It is known [3] that even with sleep sets, the algorithm of Flanagan and Godefroid [12] can explore an exponential number of redundant executions compared to the algorithms of [1, 3, 23]. Without sleep sets, the amount of redundant exploration increases further. Recently, Trimananda et al. [39] have proposed an adaptation of stateful DPOR [41, 40] to non-terminating event-driven programs, which has been implemented in Java PathFinder. For reasons analogous to those for [20, 28], this approach also does not avoid performing redundant explorations.
For actor-based programs, in which processes communicate by message-passing, Aronis et al. [6] have presented an improvement of Optimal-DPOR in which two postings of messages to a mailbox are considered as conflicting only if their order affects the subsequent behavior of the receiver. Better reduction can then be achieved if the receiver selects messages from its mailbox based on some criterion, such as by pattern matching on the structure of the message. However, this execution model differs from the one we consider.
Event-driven programs where handlers select messages in arbitrary order from their mailbox can be analyzed by modeling messages as (mini-)threads that compete for handler threads by taking locks, and applying any SMC algorithm for shared-variable programs with locks. Since typical SMC algorithms always consider different lock-protected code sections as conflicting, this approach has the drawback of exploring all possible orderings of events on a handler. There exists a technique to avoid exploring all these orderings in programs with locks, in which lock sections can be considered non-conflicting if they do not perform conflicting accesses to shared variables. This LAPOR technique [24] is based on optimistically executing lock-protected code regions in parallel, and aborting executions in which lock-protected regions cannot be serialized. This can lead to significant useless exploration, as also shown in our evaluation in Section 8.
The problem of detecting potentially harmful data races in single executions of event-driven programs has been addressed by several works. The main challenge for data race detection is to capture the often hidden dependencies for applications on Android [18, 30, 7, 19] or on other platforms [36, 37, 38, 29]. Detecting data races is a different problem than exploring all possible executions of a program, in that it considers only one (possibly long) execution, but tries to detect whether it (or some other similar execution) exhibits data races.
## 3 Main Concepts and Challenges
In this section, we informally present core concepts of our approach by examples.
### Review of Optimal-DPOR
Our DPOR algorithm for event-driven programs is an extension of Optimal-DPOR [1]. Let us illustrate Optimal-DPOR on the program snippet shown in Fig. 1. In this code, three threads \(s\), \(t\), and \(u\) access three shared variables x, y, and z,5 whereas a, b, c, and d are thread-local registers. Optimal-DPOR first explores a maximal execution, which it inspects to detect races. From each race, it constructs an initial fragment of an alternative execution which reverses the race and branches off from the explored execution just before the race. Let us illustrate with the program in Fig. 1. Assume that the first execution is \(E_{1}\) (cf. the tree in Fig. 1). The DPOR algorithm first computes its happens-before order, denoted \(\xrightarrow{\texttt{hb}}_{E_{1}}\), which is the transitive closure of the union of: (i) the _program order_, which totally orders the events in each thread (small blue arrows to the left of \(E_{1}\)), and (ii) the _conflict order_ which orders conflicting events: two events are conflicting if they access a common shared variable and at least one is a write (red arcs left of \(E_{1}\)). A _race_ consists of two conflicting events in different threads that are adjacent in the \(\xrightarrow{\texttt{hb}}_{E_{1}}\)-order. The execution \(E_{1}\) contains two races (red arcs in Fig. 1). Let us consider the first race, in which the first event is \(s\): x=1 and the second event is \(t\): b=x. The alternative execution is generated by concatenating the sequence of events in \(E_{1}\) that do not succeed the first event in the \(\xrightarrow{\texttt{hb}}_{E_{1}}\) order (i.e., \(t\): a=y;\(u\): c=z) with the second event of the race \(t\): b=x. This forms a _wakeup sequence_, which branches off from \(E_{1}\) just before the race, i.e., at the beginning of the exploration (green in Fig. 1). The second race, between \(s\): x=1 and \(u\): d=x induces the wakeup sequence \(t.u.u\) formed from the sequence \(t\): a=y;\(u\): c=z and the second event \(u\): d=x, also branching off at the beginning (note that \(t.u.u\) does not contain the second event \(t\): b=x of \(t\) since it succeeds \(s\): x=1 in the \(\xrightarrow{\texttt{hb}}_{E_{1}}\)-ordering). When attempting to insert \(t.u.u\), the algorithm will discover that this sequence is _redundant_, since its events are
Figure 1: A program and its execution tree with the four executions that Optimal-DPOR will explore. In \(E_{1}\), the red arcs show the conflict order; the blue arrows the program order. The first wakeup sequence is shown in green; the remaining two continue with blue.
consistently contained in a continuation \((t.u.t.u)\) of the already inserted wakeup sequence \(t.u.t\), and it will therefore not insert \(t.u.u\). After this, the algorithm will reclaim the space for \(E_{1}\), extend \(t.u.t\) into a maximal execution \(E_{2}\), in which races are detected that generate two new wakeup sequences (which start in green and continue in blue), which are extended to two additional executions (cf. Fig. 1).
### Challenges for Event-driven Programs
A naive way in which existing DPOR algorithms can handle event-driven programs is to consider all pairs of messages as conflicting. However, such an approach is _not_ effective, since it will lead to exploration of all different serialization orders of the messages, even if they are non-conflicting, as is the case for the top left program of Fig. 2 in which two threads \(s\) and \(t\) post two messages \(p_{1}\) and \(p_{2}\) to a handler thread \(h\). (We show messages labeled by the message identifier and wrapped in brackets.) Since the events of \(p_{1}\) and \(p_{2}\) are non-conflicting, exploring only one execution suffices. In general, some messages of a program may be conflicting and some may not be, so a DPOR algorithm for event-driven programs should explore only the necessary inequivalent orderings between conflicting messages. Event-DPOR achieves this by extending Optimal-DPOR's technique for reversing races between events in different threads to a mechanism for reversing races between events in different messages.
We illustrate this mechanism on the program at the bottom left of Fig. 2. Assume that the first explored execution is \(E_{1}\). It contains two races between events in the two messages, one on x and one on y. According to Optimal-DPOR's principle for race reversal, the race on x should induce an alternative execution composed of the sequence of events that do not happen-after the first event (i.e., \(h\): \(p_{1}\): u = 1 \(h\): \(p_{2}\): v = 2) and the second event \(h\): \(p_{2}\): a = x (for brevity, we do not show the two post events). However, since message execution is serialized, these events cannot form an execution. Therefore, Event-DPOR forms the alternative
Figure 2: An event-driven program with non-conflicting messages (top left). A program with non-atomic conflicting messages (bottom left) and its tree of executions (right).
execution (shown in blue) by appending the second event \(h\): \(p_{2}\): a = x to a maximal subset of the events of \(E_{1}\) which is closed under \(\xrightarrow{\mathtt{hb}}_{E_{1}}\) and compatible with the serialized execution of messages on the handler. The race on y is reversed analogously. The resulting wakeup sequences are inserted into the exploration tree (right part of Fig. 2); one of them is later extended into the maximal execution \(E_{2}\), and another into the maximal
execution \(E_{3}\). Execution \(E_{3}\) has a race on x. Its reversal produces the wakeup sequence \(s\): x = 1, which is a tentative branch next to \(p_{2}\): a = x. However, this wakeup sequence is not in conflict with the left branch labeled \(p_{1}\): b = y, which means that it will not be inserted for the reason that it is equivalent to a subsequence of an execution starting with \(p_{1}\): b = y, namely \(E_{2}\).
_Reordering Messages when Reversing Races._ Event-DPOR's principles for reversing races may necessitate reordering of messages on handlers that are not involved in the race. Consider the program in Fig. 4. Assume that the first explored execution is \(E_{1}\), where we have omitted the initial sequence of post events of thread \(t\) for succinctness. In \(E_{1}\), message \(p_{1}\) is processed before \(p_{2}\), and \(q_{1}\) is processed before \(q_{2}\). There are three races in \(E_{1}\), one on each of the shared variables x, y, z. Let us consider the race on x, shown by the red arrow. A wakeup sequence which reverses this race must include all events of \(q_{2}\), since these are the \(\xrightarrow{\mathtt{hb}}_{E_{1}}\)-predecessors of \(q_{2}\): c = x. It must also include the write to z by \(p_{2}\) since it is a \(\xrightarrow{\mathtt{hb}}_{E_{1}}\)-predecessor of events in \(q_{2}\). On the other hand, it cannot include any part of the message \(q_{1}\), since \(q_{1}\) must now occur after \(q_{2}\), and therefore it also cannot include the read of y by \(p_{1}\) since its predecessor in \(q_{1}\) is missing. In summary, the wakeup sequence contains two fully processed messages \(p_{2}\) and \(q_{2}\), the event \(h\): \(p_{1}\): d = 1 of \(p_{1}\), but no events from \(q_{1}\). Such a wakeup sequence must branch off after the post events of \(t\), i.e., from the root of the tree to the right in Fig. 4. Later, this wakeup sequence is extended to a full execution \(E_{2}\). In total, the program of Fig. 4 has eight inequivalent executions (the other six are not shown).
## 4 Computation Model
### Programs
We consider programs consisting of a finite set of _threads_ that interact via a finite set of _(shared) variables_. Each thread is either a _normal thread_ or a _handler thread_.
Figure 4: A program in which a reversal of the race on x will reorder messages on the handler \(k\), and two executions that will be explored.
A normal thread has a finite set of local registers and runs a deterministic code, built in a standard way from expressions and atomic statements, using standard control flow constructs (sequential composition, selection and bounded iteration). Atomic statements read or write to shared variables and local registers, including read-modify-write operations, such as compare-and-swap. A handler thread has a _mailbox_ to which all threads (also handler threads) can post messages. A mailbox has unbounded capacity, implying that the posting of a message to a mailbox can never block. A message consists of a deterministic code, built in the same way as the code of a thread. We let \(\mathsf{post}(p,h)\) denote the statement which posts the message \(p\) into the mailbox of handler thread \(h\). A handler thread repeatedly extracts a message from its mailbox, executes the code of the message to completion, then extracts a next message and executes its code, and so on. Messages are extracted from the mailbox in arbitrary order. The execution of a message is interleaved with the statements of other threads.
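To make the execution model concrete, the following is a minimal Python sketch (ours, purely illustrative and not part of the formal model): a handler thread owns an unbounded mailbox, posting never blocks, messages are extracted in arbitrary order, and each message runs to completion before the next one is extracted.

```python
import random
import threading

class Handler(threading.Thread):
    """Handler thread: unbounded mailbox, messages run to completion,
    extracted in arbitrary order (here: a random choice)."""
    def __init__(self):
        super().__init__()
        self.mailbox, self.cv, self.stopping = [], threading.Condition(), False

    def post(self, message):                      # post(p, h): never blocks
        with self.cv:
            self.mailbox.append(message)
            self.cv.notify()

    def stop(self):                               # shut down once drained
        with self.cv:
            self.stopping = True
            self.cv.notify()

    def run(self):
        while True:
            with self.cv:
                while not self.mailbox and not self.stopping:
                    self.cv.wait()
                if not self.mailbox:              # stopping and drained
                    return
                msg = self.mailbox.pop(random.randrange(len(self.mailbox)))
            msg()                                 # run message code to completion

shared = {"x": 0, "a": 0}
h = Handler(); h.start()
h.post(lambda: shared.update(x=1))                # message p1: x = 1
h.post(lambda: shared.update(a=shared["x"]))      # message p2: a = x
h.stop(); h.join()
print(shared)   # {'x': 1, 'a': 1} or {'x': 1, 'a': 0}, depending on the order chosen
```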
The local state of a thread is a valuation of its local registers together with the contents of its mailbox. A global state of a program consists of a local state of each thread together with a valuation of the shared variables. The program has a unique initial state, in which mailboxes are empty.
Recall that we use _message_ to denote what is called _event_ in Section 1.
### Events, Executions, Happens-before Ordering, and Equivalence
We use \(s,t,\ldots\) for threads, \(p,q,\ldots\) for messages and non-handler threads, x, y, z for shared variables, and a, b, c, d for local registers. We assume, wlog, that the first event of a message does not access a shared variable, but only performs a local action, e.g., related to initialization of message execution. In order to simplify the presentation, we henceforth extend the term _message_ to refer not only to a message but also to a non-handler thread.
The execution of a program statement is an _event_, which affects the global state of the program. An event is denoted by a pair \(\langle p,i\rangle\), where \(p\) denotes the message containing the event and \(i\) is a positive integer, denoting that the event results from the \(i\)-th execution step in message \(p\). An _execution sequence_\(E\) is a finite sequence of events, starting from the initial state of the program. Since thread and message codes are deterministic, an execution sequence \(E\) can be uniquely characterized by the sequence of messages (and non-handler threads) that perform execution steps in \(E\), where we use \(\mathrm{dot}(.)\) as concatenation operator. Thus \(p.p.q\) denotes the execution sequence consisting first of two events of \(p\), followed by an event of \(q\).
We let \(enabled(E)\) denote the set of messages that can perform a next event in the state to which \(E\) leads. A sequence \(E\) is _maximal_ if \(enabled(E)=\emptyset\). We use \(u,v,w,\ldots\) to range over sequences of events. We introduce the following notation, where \(E\) is an execution sequence and \(w\) is a sequence of events.
* \(\langle\rangle\) denotes the empty sequence.
* \(E\vdash w\) denotes that \(E.w\) is an execution sequence.
* \(w\backslash p\) denotes the sequence \(w\) with its first occurrence of \(p\) (if any) removed.
* \(dom(E)\) denotes the set of events \(\langle p,i\rangle\) in \(E\), that is, \(\langle p,i\rangle\in dom(E)\) iff \(E\) contains at least \(i\) events of \(p\). We also write \(e\in E\) to denote \(e\in dom(E)\).
* \(next_{[E]}(p)\) denotes the next event to be performed by the message \(p\) after the execution \(E\) if \(p\in enabled(E)\), otherwise \(next_{[E]}(p)\) is undefined.
* \(\widehat{e}\) denotes the message that performs \(e\), i.e., \(e\) is of form \(e=\langle\widehat{e},i\rangle\) for some \(i\).
* \(E^{\prime}\leq E\) denotes that \(E^{\prime}\) is a (not necessarily strict) prefix of \(E\).
We say that \(p\)_starts after \(E\)_ if \(p\) has been posted in \(E\), but not yet performed any events in \(E\). We say that \(p\)_is active after \(E\)_ if \(p\) has been posted in \(E\), but not finished its execution in \(E\).
Definition 1 (Happens-before): Given an execution sequence \(E\), we define the _happens-before relation_ on \(E\), denoted \(\xrightarrow{\texttt{hb}}_{E}\), as the smallest irreflexive partial order on \(dom(E)\) such that \(e\xrightarrow{\texttt{hb}}_{E}e^{\prime}\) if \(e\) occurs before \(e^{\prime}\) in \(E\) and either
* \(e\) and \(e^{\prime}\) are performed by the same message \(p\),
* \(e\) and \(e^{\prime}\) access a common shared variable \(x\) and at least one writes to \(x\), or
* \(\widehat{e^{\prime}}\) is the message that is posted by \(e\) and \(e^{\prime}\) is the first event of \(\widehat{e^{\prime}}\).
The _hb-trace_ (or _trace_ for short) of \(E\) is the directed graph (\(dom(E),\ \xrightarrow{\texttt{hb}}_{E}\)).
Definition 2 (Equivalence): Two execution sequences \(E\) and \(E^{\prime}\) are _equivalent_, denoted \(E\simeq E^{\prime}\), if they have the same trace. We let \([E]_{\simeq}\) denote the equivalence class of \(E\).
Note that for programs that do not post or process messages, \(\simeq\) is the standard Mazurkiewicz trace equivalence for multi-threaded programs [32, 14, 12, 1]. We say that two sequences of events, \(w\) and \(w^{\prime}\), with \(E\!\vdash\!w\) and \(E\!\vdash\!w^{\prime}\), are _equivalent after \(E\)_, denoted \(w\simeq_{[E]}w^{\prime}\) if \(E.w\simeq E.w^{\prime}\).
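As a concrete illustration of Definitions 1 and 2, the sketch below uses a simple event encoding of our own (dictionaries with msg, idx, kind and var/target fields) and computes the generating edges of \(\xrightarrow{\mathtt{hb}}_{E}\); the happens-before relation itself is the transitive closure of these edges, and for executions over the same events comparing the generating edges is enough to compare traces in this sketch.

```python
from itertools import combinations

def hb_edges(execution):
    """Generating edges of the happens-before relation (Definition 1).
    An event is a dict: msg, idx (position within its message), kind in
    {'read','write','post','local'}, and var (shared variable) or target
    (posted message).  <hb is the transitive closure of these edges."""
    edges = set()
    for i, j in combinations(range(len(execution)), 2):
        e, f = execution[i], execution[j]
        program_order = e["msg"] == f["msg"]
        conflict = (e.get("var") is not None
                    and e.get("var") == f.get("var")
                    and "write" in (e["kind"], f["kind"]))
        post_edge = (e["kind"] == "post"
                     and e.get("target") == f["msg"] and f["idx"] == 1)
        if program_order or conflict or post_edge:
            edges.add(((e["msg"], e["idx"]), (f["msg"], f["idx"])))
    return edges

def equivalent(E1, E2):
    """Definition 2: same events and the same trace."""
    ids = lambda E: {(e["msg"], e["idx"]) for e in E}
    return ids(E1) == ids(E2) and hb_edges(E1) == hb_edges(E2)

# Two interleavings of writes to *different* variables are equivalent;
# flipping the order of conflicting accesses would not be.
E1 = [{"msg": "p", "idx": 1, "kind": "write", "var": "x"},
      {"msg": "q", "idx": 1, "kind": "write", "var": "y"}]
E2 = list(reversed(E1))
print(equivalent(E1, E2))   # True
```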
## 5 The Event-DPOR Algorithm
In this section, we present _Event-DPOR_, a DPOR algorithm for event-driven programs. Given a terminating program on given input, the algorithm explores different maximal executions resulting from different thread interleavings.
### Central Concepts in Event-DPOR
Definition 3 (Happens-before Prefix): Let \(E\) and \(E^{\prime}\) be execution sequences. We say that \(E^{\prime}\) is a _happens-before prefix_ of \(E\), denoted \(E^{\prime}\sqsubseteq E\), if (i) \(dom(E^{\prime})\subseteq dom(E)\), (ii) \(\xrightarrow{\texttt{hb}}_{E^{\prime}}\) is the restriction of \(\xrightarrow{\texttt{hb}}_{E}\) to \(E^{\prime}\), and (iii) whenever \(e\xrightarrow{\texttt{hb}}_{E}e^{\prime}\) for some \(e^{\prime}\in dom(E^{\prime})\), then \(e\in dom(E^{\prime})\). We let \(w^{\prime}\sqsubseteq_{[E]}w\) denote that \(E.w^{\prime}\sqsubseteq E.w\).
Intuitively, \(E^{\prime}\sqsubseteq E\) denotes that the execution \(E^{\prime}\) is "contained" in the execution \(E\) in such a way that it is not affected by the events in \(E\) that are not in \(E^{\prime}\). 6 To illustrate, for the top left program of Fig. 2, the execution \(E^{\prime}\) consisting of \(t\): post(\(p_{2}\),\(h\)) \(h\): \(p_{2}\): y = 2 is a happens-before prefix of any maximal execution of the program, since the event of \(p_{2}\) cannot happen-after any other event than the event that posts \(p_{2}\), which is already in \(E^{\prime}\).
Footnote 6: The relation \(w^{\prime}\sqsubseteq_{[E]}w\) is also introduced in [28], as “\(w\) is a dependence-covering sequence of \(w^{\prime}\).”
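Conditions (i) and (iii) of Definition 3 translate directly into a check over event sets; a minimal sketch is given below (condition (ii) is taken for granted here, since it holds when \(E^{\prime}\) is itself an execution whose ordering agrees with \(E\); the event identifiers are ours).

```python
def is_hb_prefix(dom_Eprime, dom_E, hb_E):
    """Sketch of Definition 3: dom_Eprime and dom_E are sets of event ids,
    hb_E is the happens-before order of E as a set of (e, e') pairs.
    E' must be contained in E and be downward closed under hb_E."""
    contained = dom_Eprime <= dom_E
    downward_closed = all(e in dom_Eprime
                          for (e, f) in hb_E if f in dom_Eprime)
    return contained and downward_closed

# A tiny hand-built example: t posts p2, whose only event writes y; that
# two-event set is an hb-prefix of a larger execution containing it.
hb = {("t.post_p2", "p2.write_y")}
print(is_hb_prefix({"t.post_p2", "p2.write_y"},
                   {"t.post_p2", "p2.write_y", "p1.read_x"}, hb))   # True
```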
Definition 4 (Weak Initials): Let \(E\) be an execution sequence, and \(w\) be a sequence with \(E\vdash w\). The set \(WI_{[E]}(w)\) of _weak initials of \(w\) after \(E\)_ is the set of messages \(p\) such that \(E\vdash p.w^{\prime}\) for some \(w^{\prime}\) with \(w\sqsubseteq_{[E]}p.w^{\prime}\).
Intuitively, \(p\) is in \(WI_{[E]}(w)\) if \(p\) can execute the first event in a continuation of \(E\) which "contains" \(w\), in the sense of \(\sqsubseteq\). In Event-DPOR, the concept of weak initials is used to test whether a new sequence is redundant, i.e., is "contained in" an execution that has been explored or in a wakeup sequence that is scheduled for exploration. Note that in Definition 4, we can generally not choose \(w^{\prime}\) as \(w\backslash p\). This happens, e.g., if \(p\) does not occur in \(w\) but instead \(w\) contains another message \(p^{\prime}\) which executes on the same handler as \(p\) and does not conflict with \(p\); in this case \(w^{\prime}\) must contain a completed execution of \(p\) inserted before \(p^{\prime}\).
We illustrate using the program of Fig. 5. If we let \(E\) be the execution \(s.t\) and \(w\) be the sequence \(p_{1}\), we have \(p_{2}\in WI_{[E]}(w)\), since \(w\sqsubseteq_{[E]}p_{2}.p_{2}.p_{1}\). This illustration shows that in order to determine whether \(p\in WI_{[E]}(w)\) for a message \(p\), one must know which shared-variable access will be performed by \(next_{[E]}(p)\), and, in case \(p\) starts after \(E\) but will execute after some other message on its handler, also the sequences of shared-variable accesses that \(p\) will perform when executing to completion.
The weak initial check problem consists in checking whether \(p\in WI_{[E]}(w)\).
Theorem 4.1: _The weak initial check problem is NP-hard._
The proof of the above theorem can be found in Appendix 0.B.1. In Appendix 0.A.3, we propose a sequence of inexpensive redundancy checks, which have proven sufficient for all our benchmarks.
Definition 5 (Races): Let \(E\) be a maximal execution sequence. Two events \(e\) and \(e^{\prime}\) in different messages are in a _race_, denoted \(e\lesssim_{E}e^{\prime}\), if \(e\xrightarrow{\mathtt{hb}}_{E}e^{\prime}\) and
1. \(e\) and \(e^{\prime}\) access a common shared variable and at least one is a write, and
2. there is no event \(e^{\prime\prime}\) with \(e\xrightarrow{\mathtt{hb}}_{E}e^{\prime\prime}\) and \(e^{\prime\prime}\xrightarrow{\mathtt{hb}}_{E}e^{\prime}\).
Intuitively, a race arises between conflicting accesses to a shared variable, by events which are in different messages but adjacent in the \(\xrightarrow{\mathtt{hb}}_{E}\) order.
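Definition 5 amounts to searching for hb-adjacent conflicting accesses in different messages; a small self-contained sketch, assuming the transitively closed happens-before order is given as a set of ordered pairs (the event encoding is ours):

```python
def races(events, hb):
    """events: event id -> {'msg', 'kind', 'var'};  hb: transitively closed
    happens-before order as a set of (e, e') pairs.  Returns the pairs that
    satisfy both clauses of Definition 5."""
    found = []
    for (a, b) in hb:
        ea, eb = events[a], events[b]
        conflict = (ea["msg"] != eb["msg"]
                    and ea["var"] is not None and ea["var"] == eb["var"]
                    and "write" in (ea["kind"], eb["kind"]))
        adjacent = not any((a, c) in hb and (c, b) in hb
                           for c in events if c not in (a, b))
        if conflict and adjacent:
            found.append((a, b))
    return found

# p writes x and, later in the execution, q reads x: one race.
events = {"p1": {"msg": "p", "kind": "write", "var": "x"},
          "q1": {"msg": "q", "kind": "read",  "var": "x"}}
print(races(events, hb={("p1", "q1")}))   # [('p1', 'q1')]
```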
Figure 5: Illustrating weak initials
### The Event-DPOR Algorithm
The Event-DPOR algorithm, shown as pseudocode in Algorithm 1, performs a depth-first exploration of executions using the recursive procedure \(Explore(E)\), where \(E\) is the currently explored execution, which also serves as the stack of the exploration. In addition the algorithm maintains three mappings from prefixes of \(E\), named \(done\), \(wut\), and \(\textit{parkedWuS}\). For each prefix \(E^{\prime}\) of \(E\),
* \(done(E^{\prime})\) is a mapping whose domain is the set of messages \(p\) for which the call \(Explore(E^{\prime}.p)\) has returned. If \(p\) does not start after \(E^{\prime}\), then \(done(E^{\prime})(p)\) is the shared variable-access performed by \(next_{[E^{\prime}]}(p)\). If \(p\) starts after \(E^{\prime}\), then \(done(E^{\prime})(p)\) is the set of sequences of shared variable-accesses that can be performed in a completed execution of \(p\) after \(E^{\prime}\). The information in \(done(E^{\prime})(p)\) is collected during the call \(Explore(E^{\prime}.p)\) (Lines 22 to 31).
* \(wut(E^{\prime})\) is a _wakeup tree_, i.e., an ordered tree \(\langle B,\prec\rangle\) where \(B\) is a prefix-closed set of sequences, whose leaves are wakeup sequences. For each sequence \(u\in B\), the order \(\prec\) orders its children (of form \(u.p\)) by the order in which they were added to \(wut(E^{\prime})\). This is also the order in which the sequences of form \(E^{\prime}.u.p\) will be visited in the recursive exploration.
* \(\textit{parkedWuS}(E^{\prime})\) is a set of wakeup sequences \(v\) that were previously being inserted into some wakeup tree \(wut(E^{\prime\prime})\), but were "parked" at the sequence \(E^{\prime}\) because at that time there was not enough information to determine where in \(wut(E^{\prime\prime})\) to place \(v\). Later, when a branch of \(wut(E^{\prime\prime})\) has been extended to a maximal execution, it should be possible to determine where to insert \(v\).
Each call to \(Explore(E)\) first initializes \(done(E)\) and \(\textit{parkedWuS}(E)\) (\(wut(E)\) was initialized before the call), and thereafter enters one of two phases: _race detection_ (Lines 4 to 11) or _exploration_ (Lines 13 to 31). The race detection phase is invoked when \(E\) is a maximal execution sequence. First, for each wakeup sequence \(v\) parked at a prefix \(E^{\prime}\) of \(E\) it invokes \(\textit{InsertParkedWuS}(v,E^{\prime})\) to insert \(v\) into the appropriate wakeup tree (Lines 5 to 7). Thereafter, each race (of form \(e\lesssim_{E}e^{\prime}\)) in \(E\) is analyzed by \(ReverseRace(E,e,e^{\prime})\), which returns a set of executions that reverse the race. Each such execution \(E^{\prime}.v\) is returned as a pair \(\langle E^{\prime},v\rangle\), where \(v\) is a wakeup sequence that should be considered for insertion in the wakeup tree at \(E^{\prime}\). Each wakeup sequence \(v\) is checked for redundancy (Line 10), using the information in \(done\). If \(v\) is not redundant, it is inserted into the wakeup tree at \(E^{\prime}\) for future exploration (Line 11).
The exploration phase (Lines 13 to 33) is entered if exploration has not reached the end of a maximal execution sequence. First, if \(wut(E)\) only contains the empty sequence, then an arbitrary enabled message is entered into \(wut(E)\) (Lines 14 and 15). Thereafter, each sequence in \(wut(E)\) is subject to recursive exploration. We find the \(\prec\)-minimal child \(p\) of the root of \(wut(E)\) (Line 19), and make the recursive call \(Explore(E.p)\) (Line 21). Before the call, \(wut(E.p)\) is initialized (Line 20). During the call \(Explore(E)\), information is also collected about the sequences of shared-variable accesses that can be performed by each message that is active after \(E\), and subsequently stored in the mapping \(done\)
The information is collected in the variable \(msgAccesses\), which is initialized at Line 17. Each recursive call \(Explore(E.p)\) returns the sets of access sequences performed by messages that are active after \(E.p\) (Line 21). After prepending the access performed by \(next_{[E]}(p)\) to the sets of access sequences performed by \(p\) (Line 25), the sets returned by \(Explore(E.p)\) are added to the corresponding sets in \(msgAccesses\) (Line 27). Finally, \(p\) is added to the domain of \(done(E)\) (Line 28). If \(p\) starts a message after \(E\), then \(done(E)(p)\) is assigned the set of access sequences performed by \(p\) (Line 30), otherwise only the access of \(next_{[E]}(p)\). Thereafter, the subtree rooted at \(p\) is removed from \(wut(E)\) (Line 33). When
all recursive calls of form \(Explore(E.p)\) have returned, the accumulated sets of access sequences are returned (Line 33).
Event-DPOR calls functions that are briefly described in the following paragraphs. More elaborate descriptions (with pseudocode) are in Appendix 0.A.
\(ReverseRace(E,e,e^{\prime})\) is given a race \(e\lesssim_{E}e^{\prime}\) in the execution \(E\) (Line 8), and returns a set of executions that reverse the race in the sense that they perform the second event \(e^{\prime}\) of the race without performing the first one, and (except for \(e^{\prime}\)) only contain events that are not affected by the race. More precisely, it returns a set of pairs of form \(\langle E^{\prime},u.e^{\prime}\rangle\), such that (i) \(E^{\prime}.u\) is a maximal happens-before prefix of \(E\) such that \(E^{\prime}.u.e^{\prime}\) is an execution, and (ii) \(dom(E^{\prime})\) is a maximal subset of \(dom(E^{\prime}.u)\) such that \(E^{\prime}\leq E\). An illustration of the \(ReverseRace\) function was given for the race on x in the program of Fig. 4.
\(\mathit{Insert}(v,E^{\prime},\langle\rangle)\) inserts the wakeup sequence \(v\) into the wakeup tree \(wut(E^{\prime})\). If there is already some sequence \(u\) in \(wut(E^{\prime})\) such that \(u\sqsubseteq_{[E^{\prime}]}v\) or \(v\sqsubseteq_{[E^{\prime}]}u\), then the insertion leaves \(wut(E^{\prime})\) unaffected. Otherwise \(\mathit{Insert}(v,E^{\prime},\langle\rangle)\) attempts to find the \(\prec\)-minimal non-leaf sequence \(u\) in \(wut(E^{\prime})\) with \(u\sqsubseteq_{[E^{\prime}]}v\), and insert a new leaf of form \(u.v^{\prime}\) into \(wut(E^{\prime})\), such that \(v\sqsubseteq_{[E^{\prime}]}u.v^{\prime}\), which is ordered after all existing descendants of \(u\) in \(wut(E^{\prime})\). The function finds such a \(u\) by descending into \(wut(E^{\prime})\) one event at a time; from each node \(u^{\prime}\) it finds a next node \(u^{\prime}.p\) as the \(\prec\)-minimal child with \(u^{\prime}.p\sqsubseteq_{[E^{\prime}]}v\). If, during this search, the message \(p\) starts after \(E^{\prime}.u^{\prime}\) it may happen that the wakeup tree does not contain enough subsequent events to determine whether \(u^{\prime}.p\sqsubseteq_{[E^{\prime}]}v\); in this case the sequence \(v\) is "parked" at the node \(u^{\prime}.p\): the insertion of \(v\) will be resumed when \(E^{\prime}.u^{\prime}.p\) is extended to a maximal execution (at Line 7 with \(E^{\prime}\) being \(E^{\prime}.u^{\prime}\)).
\(\mathit{InsertParkedWuS}(v,E^{\prime})\) inserts a wakeup sequence \(v\), which is parked after a prefix \(E^{\prime}\) of the execution \(E\), into an appropriate wakeup tree. The function first decomposes \(E^{\prime}\) as \(E^{\prime\prime}.p\), and checks whether \(p\in WI_{[E^{\prime\prime}]}(v)\), using information about the accesses of \(p\) that can be found in \(E\). If the check succeeds, then insertion proceeds recursively one step further in the execution \(E\), otherwise \(v\) conflicts with \(p\) and should be inserted into the wakeup tree after \(E^{\prime\prime}\).
_Checking for Redundancy._ Tests of form \(p\in WI_{[E]}(w)\) for a message \(p\) and an execution \(E.w\) appear at Line 10 and in the functions \(\mathit{InsertWuS}\) and \(\mathit{InsertParkedWuS}\). If \(p\) does not start after \(E\), then the check can be straightforwardly performed using sleep sets [14]. If \(p\) starts after \(E\), then checking whether \(p\in WI_{[E]}(w)\) is NP-hard in the general case (see Theorem 4.1). To avoid expensive calls to a decision procedure, Event-DPOR employs a sequence of incomplete checks, starting with simple ones, and proceeding with a next test only if the preceding one was not conclusive. These tests are, in order: 1) If \(p\) is the first message (if any) on its handler in \(w\), then \(p\in WI_{[E]}(w)\) is trivially true. 2) If the happens-before relation precludes \(p\) from executing first on its handler, then \(p\in WI_{[E]}(w)\) is false; checking this may require \(w\) to be extended so that \(p\) (and possibly other messages) are executed to completion. 3) An attempt is made to construct an actual execution in which \(p\) is the first message on its handler, which respects the happens-before ordering. 4) If all previous tests were inconclusive, a decision procedure is invoked as a final step.
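The layered strategy just described is essentially a cascade of increasingly expensive tests; the schematic sketch below (ours, not Nidhugg's actual code) shows only the control flow, with each cheap test returning True, False, or None when inconclusive.

```python
def weak_initial_check(p, E, w, cheap_tests, decision_procedure):
    """Run the inexpensive, possibly inconclusive tests in order and fall
    back to a complete (but expensive) decision procedure only if none of
    them settles whether p is a weak initial of w after E."""
    for test in cheap_tests:
        verdict = test(p, E, w)        # True / False / None (inconclusive)
        if verdict is not None:
            return verdict
    return decision_procedure(p, E, w)

# A toy version of test 1: if p is the first message in w it is trivially
# a weak initial; anything else is left to later stages here.
first_in_w = lambda p, E, w: True if w and w[0] == p else None
print(weak_initial_check("p", [], ["p", "q"], [first_in_w],
                         lambda *args: False))   # True
```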
## 6 Correctness and Optimality
A program is defined to be _non-branching_ if each message, which executes on the same handler as some other message, performs the same sequence of accesses (reads or writes) to shared variables during its execution, regardless of how its execution is interleaved with other threads and messages. Note that the "non-branching" restriction does not apply to non-handler threads nor to messages that are the only ones executing on their handler.
The following theorems state that Event-DPOR is _correct_ (explores at least one execution in each equivalence class) for _all_ event-driven programs and _optimal_ (explores exactly one execution in each equivalence class) for non-branching programs. Proofs can be found in Appendix 0.C.
Theorem 6.1 (Correctness): _Whenever the call to \(Explore(\langle\rangle)\) returns during Algorithm 1, then for all maximal execution sequences \(E\), the algorithm has explored some execution sequence in \([E]_{\simeq}\)._
Theorem 6.2 (Optimality): _When applied to a non-branching program, Algorithm 1 never explores two maximal execution sequences which are equivalent._
## 7 Implementation
Event-DPOR was implemented on top of Nidhugg. Nidhugg[2] is a state-of-the-art stateless model checker for C/C++ programs with Pthreads, which works at the level of the LLVM Intermediate Representation. Nidhugg comes with a selection of DPOR algorithms. One of them is Optimal-DPOR, which we have used as a basis for Event-DPOR's implementation.
We have extended the data structures of Nidhugg with the information needed by Event-DPOR. For instance, nodes in wakeup trees contain new information, such as the set of parked wakeup sequences, and events in executions include the information in \(tmpAccesses\), used to compute the \(done\) set as shown in Lines 23 to 30 of Algorithm 1. The relation \(\xrightarrow{\texttt{hb}}_{E}\) is represented by a vector clock per event, containing the set of preceding events. When reversing races (in \(ReverseRace\)) and checking for redundancy (Line 10 of Algorithm 1), the relation \(\xrightarrow{\texttt{hb}}_{E}\) is extended by a saturation operation (Definition 6 in Appendix 0.A) that captures ordering constraints induced by serialized message execution.
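The vector-clock representation mentioned above can be illustrated in a few lines; the sketch below is our own simplification, covering only the program-order and shared-variable clauses of the happens-before relation (post edges would be merged in the same way), and assigns one clock per event so that componentwise comparison decides the ordering.

```python
def assign_clocks(execution):
    """execution: list of (msg, kind, var) in execution order, kind in
    {'read', 'write'}.  Returns one clock (dict: msg -> count) per event;
    e happens-before-or-equals f  iff  clock(e) <= clock(f) componentwise."""
    clocks, last_of_msg, accesses = [], {}, {}
    for msg, kind, var in execution:
        clk = dict(last_of_msg.get(msg, {}))            # program order
        for prev_pos, prev_kind in accesses.get(var, []):
            if "write" in (kind, prev_kind):            # conflicting access
                for m, c in clocks[prev_pos].items():
                    clk[m] = max(clk.get(m, 0), c)
        clk[msg] = clk.get(msg, 0) + 1                  # this event itself
        clocks.append(clk)
        last_of_msg[msg] = clk
        accesses.setdefault(var, []).append((len(clocks) - 1, kind))
    return clocks

# p writes x, q reads x, r writes y: the read of x inherits p's clock,
# while r's event stays unordered with respect to both.
E = [("p", "write", "x"), ("q", "read", "x"), ("r", "write", "y")]
print(assign_clocks(E))
# [{'p': 1}, {'p': 1, 'q': 1}, {'r': 1}]
```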
Concerning race reversal, instead of reversing multiple races between messages executed on the same handler, our implementation detects and reverses only the race induced by the first conflict, since other races cannot be reversed, as explained using the example in Fig. 2. Moreover, in cases where \(ReverseRace\) would return several maximal executions that reverse a race, our implementation instead returns their union, even though it may not form an execution (e.g., since it may contain several incomplete executed messages on a handler). From this union, events will be removed adaptively during wakeup tree insertion to extract only those maximal executions that generate new leaves in a wakeup tree.
## 8 Evaluation
In this section, we evaluate the performance of our implementation and put it into context. Since currently there is no other SMC tool for event-driven programs to compare against,7 we have created an API, in the form of a C header file, that implements event handlers as pthread mutexes (locks) and simulates messages as threads that wait for their event handler to be free. This API allows us to use plain C/pthread programs to compare Event-DPOR with the Optimal-DPOR algorithm implemented in Nidhugg as baseline, but also with the _Lock-Aware Partial Order Reduction_ (_LAPOR_) algorithm [24], implemented in GenMC. The LAPOR algorithm is often analogous to Event-DPOR w.r.t. the amount of reduction that it can achieve when event handlers are modeled as global locks. We also include in our comparison the baseline DPOR algorithm of GenMC that tracks the modification order (-mo) of shared variables. For Nidhugg, we used its master branch at the end of 2022; for GenMC, we used version 0.6.1.8 We have run all benchmarks on a Ryzen 5950X desktop running Arch Linux.
Footnote 7: All our attempts to use \(R^{4}\) failed miserably; the tool has not been updated since 2016.
Footnote 8: GenMC v0.6.1 (released July 2021) warns that LAPOR usage with -mo is experimental; in fact, LAPOR support has been dropped in more recent GenMC versions.
We will compare implementations of different DPOR algorithms based on the number of executions that they explore, as well as the time that this takes. For some programs, LAPOR also examines a fair amount of _blocked_ executions (i.e., executions that cannot be serialized and need to be aborted), which naturally affects its time performance. In Table 1, we show the number of executions explored by an entry of the form \(T\)+\(B\), where \(T\) is the number of complete traces and \(B\) is the number of blocked executions. (We omit the \(B\) part when it is zero.)
All the benchmark programs we use are parametric, typically on the number of threads used (and thus messages posted); their parameters are shown inside parentheses. In the first program (posters), each thread posts to a single event handler two messages containing stores to some atomic global variable, and then the value of this variable is checked by an assertion. This simple program allows us to establish the baseline speed of all implementations. We can see that GenMC -mo is the fastest here. The reason is that it does not perform any checks whether the explored executions are sequentially consistent, which allows it to be five times faster than LAPOR, and seven to nine times faster than Nidhugg's algorithm implementations. We can also notice that Event-DPOR incurs a small but noticeable overhead over Optimal-DPOR for the extra machinery that its implementation requires.
The next two benchmarks were taken from a paper by Kragl et al. [27]. In buyers, \(n\) "buyer" threads coordinate the purchase of an item from a "seller" as follows: one buyer requests a quote for the item from the seller, then the buyers coordinate their individual contribution, and finally if the contributions are enough to buy the item, the order is placed. In ping-pong, the "pong" handler
thread receives messages with increasing numbers from the "ping" thread, which are then acknowledged back to the "ping" event handler.
Looking at Table 1, we notice that, in both buyers and ping-pong, all algorithms explore the same number of traces, but LAPOR also explores a significant number of executions that cannot be serialized and need to be aborted. In fact, for both benchmarks, the aborted executions significantly outnumber the traces explored. This affects negatively the time that LAPOR takes, and GenMC -lapor becomes the slowest implementation. In contrast, Event-DPOR does not suffer from this problem and shows similar scalability as baseline GenMC and Optimal-DPOR.
With the four remaining benchmarks, we evaluate all implementations in programs where algorithms tailored to event-driven programming, either natively (Event-DPOR) or which are lock-aware (when handlers are implemented as locks), have an advantage. The first program (consensus), again from the paper by Kragl et al. [27], is a simple _broadcast consensus_ protocol for \(n\) nodes to agree on a common value. For each node \(i\), two threads are created: one thread executes a broadcast method that sends the value of node \(i\) to every other node, and the other thread is an event handler that executes a collect method which receives \(n\) values and stores the maximum as its decision. Since every node receives the values of all other nodes, after the protocol finishes, all nodes have decided on the same value. The next program (prolific) is synthetic: \(n\) threads send \(n\) messages with an increasing number of stores to and loads from an atomic global variable to one event handler. The sparse-mat program
\begin{table}
\begin{tabular}{l r r r r r r r r} \hline \hline & \multicolumn{3}{c}{Executions (Traces+Blocked)} & \multicolumn{3}{c}{Time (secs)} \\ \cline{2-9} & \multicolumn{2}{c}{GenMC} & \multicolumn{2}{c}{Nidhugg} & \multicolumn{2}{c}{GenMC} & \multicolumn{2}{c}{Nidhugg} \\ Benchmark & -mo & -lapor & -optimal & -event & -mo & -lapor & -optimal & -event \\ \hline posters(3) & 90 & 90 & 90 & 90 & 0.02 & 0.03 & 0.09 & 0.09 \\ posters(4) & 2520 & 2520 & 2520 & 2520 & 0.18 & 0.81 & 0.94 & 1.42 \\ posters(5) & 113400 & 113400 & 113400 & 113400 & 9.43 & 47.11 & 50.87 & 84.64 \\ \hline buyers(6) & 720 & 720+2383 & 720 & 720 & 0.08 & 2.51 & 0.36 & 0.51 \\ buyers(7) & 5040 & 5040+20301 & 5040 & 5040 & 0.56 & 25.80 & 2.53 & 3.96 \\ buyers(8) & 40320 & 40320+191369 & 40320 & 40320 & 5.03 & 306.95 & 23.59 & 37.70 \\ \hline ping-pong(6) & 3276 & 3276+8271 & 3276 & 3276 & 0.23 & 3.99 & 1.45 & 2.61 \\ ping-pong(7) & 27252 & 27252+79435 & 27252 & 27252 & 2.01 & 44.51 & 13.78 & 26.42 \\ ping-pong(8) & 253296 & 253296+835509 & 253296 & 253296 & 20.63 & 572.07 & 149.26 & 299.12 \\ \hline consensus(2) & 4 & 4+4 & 4 & 4 & 0.01 & 0.01 & 0.06 & 0.06 \\ consensus(3) & 216 & 125+347 & 216 & 125 & 0.04 & 0.29 & 0.20 & 0.20 \\ consensus(4) & 331776 & 50625+242828 & 331776 & 50625 & 75.43 & 293.91 & 419.90 & 177.63 \\ \hline prolific(5) & 120 & 30+26 & 120 & 30 & 0.17 & 5.34 & 0.21 & 0.18 \\ prolific(7) & 5040 & 126+120 & 5040 & 126 & 16.12 & 98.14 & 11.79 & 2.12 \\ prolific(9) & 362880 & 510+502 & 362880 & 510 & 2462.83 & 1132.65 & 1363.31 & 26.28 \\ \hline sparse-mat(4,3) & 204 & 34 & 204 & 34 & 0.16 & 0.06 & 0.16 & 0.09 \\ sparse-mat(4,5) & 185520 & 1546 & 185520 & 1546 & 212.51 & 3.56 & 126.06 & 1.66 \\ sparse-mat(4,7) & \(\odot\) & 130922 & \(\odot\) & 130922 & \(\odot\) & 603.31 & \(\odot\) & 234.27 \\ \hline plb(4) & 105 & 1 & 105 & 1 & 0.02 & 0.01 & 0.10 & 0.06 \\ plb(6) & 10395 & 1 & 10395 & 1 & 1.99 & 0.02 & 6.61 & 0.06 \\ plb(8) & 2027025 & 1 & 2027025 & 1 & 556.46 & 0.02 & 1808.24 & 0.06 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Performance of different DPOR algorithm implementations.
computes the number of non-zero elements of a sparse matrix of dimension \(m\times n\), by dividing the work into \(n\) tasks sent as messages to different handlers, which compute and join their results. The last benchmark (\(\mathsf{plb}\)) is taken from a paper by Jhala and Majumdar [21]. A fixed sequence of task requests is received by the main thread. Upon receiving a task, the main thread allocates a space in memory and posts a message with the pointer to the allocated memory that will be served by a thread in the future.
Refer again to Table 1. In consensus, all algorithms start with the same number of traces, but LAPOR and Event-DPOR need to explore fewer and fewer traces than the other two algorithms, as the number of nodes (and threads) increases. Here too, LAPOR explores a significant number of executions that need to be aborted, which hurts its time performance. On the other hand, Event-DPOR's handling of events is optimal here. The \(\mathsf{prolific}\) program shows a case where algorithms not tailored to events (or locks) explore \((n-1)!\) traces, while LAPOR and Event-DPOR explore only \(2^{n}-2\) consistent executions, when running the benchmark with parameter \(n\). It can also be noted that Event-DPOR scales _much_ better than LAPOR here in terms of time, due to the extra work that LAPOR needs to perform in order to check consistency of executions (and abort some of them). The \(\mathsf{sparse\mbox{-}mat}\) program shows another case where algorithms that are not tailored to events explore a large number of executions unnecessarily (\(\triangleright\) denotes timeout). This program also shows that Event-DPOR beats LAPOR time-wise even when LAPOR does not explore executions that need to be aborted. Finally, \(\mathsf{plb}\) shows a case on which Event-DPOR and LAPOR really shine. These algorithms need to explore only one trace, independently of the size of the matrices and messages exchanged, while DPOR algorithms not tailored to event-driven programs explore a number of executions which increases exponentially and fast.
We remark that, in all benchmarks, the inexpensive checks for redundancy were sufficient, and Event-DPOR explored the optimal number of traces. Results from an extended set of benchmarks appear in Appendix 0.D.
## 9 Concluding Remarks
In this paper, we presented a novel SMC algorithm, Event-DPOR, tailored to the characteristics of event-driven multi-threaded programs running under the SC semantics. The algorithm was proven correct and optimal for event-driven programs in which the variable accesses of events do not depend on how their execution is interleaved with other threads.
We have implemented Event-DPOR in the Nidhugg tool, and we will open-source our implementation. With a wide range of event-driven programs, we have shown that Event-DPOR incurs only a moderate constant overhead over its baseline implementation (Optimal-DPOR), it is exponentially faster than existing state-of-the-art SMC algorithms in time and number of traces examined on programs where events' actions do not conflict, and does not suffer from performance degradation caused by having to examine non-serializable executions.
Event-DPOR assumes that handlers can process their events in arbitrary order. Directions for future work include to retarget Event-DPOR for event-driven programs with other policies (e.g., FIFO), and for specific event-driven execution models.
## 10 Reproducible Artifact
An anonymous artifact containing the benchmarks and all the tools used in the evaluation, including our Nidhugg with Event DPOR, is available at [https://doi.org/10.5281/zenodo.7929004](https://doi.org/10.5281/zenodo.7929004).
|
2302.01143 | Constraining rare B decays by $μ^+μ^-\to tc$ at future lepton
colliders | Motivated by the recent rare B decays measurements, we study the matching
procedure of operators $O_9, O_{10}$ in the low energy effective Hamiltonian
and operators in the Standard Model effective theory (SMEFT). It is noticed
that there are more related operators in the SMEFT whose coefficients can not
be determined only from the low-energy data from B physics. We demonstrate how
to determine these coefficients with some new physics models, like $Z^\prime$
model and leptoquark models, and then consider how to probe these operators of
SMEFT at high energy by using the process $\mu^+\mu^-\to tc$ at future muon
colliders, which can provide complementary information except for $\mu^+ \mu^-
\to b s$ on the underlying models which lead to rare B decay processes. We
perform a Monte Carlo study (a hadron level analysis) to show how to separate
the signal events from the SM background events and estimate the sensitivity to
the Wilson coefficients for different models. | Sichun Sun, Qi-Shu Yan, Xiaoran Zhao, Zhijie Zhao | 2023-02-02T14:57:18Z | http://arxiv.org/abs/2302.01143v3 | # Constraining rare B decays by \(\mu^{+}\mu^{-}\to tc\) at future lepton colliders
###### Abstract
Motivated by the recent rare B decays measurements, we study the matching procedure of operators \(O_{9},O_{10}\) in the low energy effective Hamiltonian and operators in the Standard Model effective theory (SMEFT). It is noticed that there are more related operators in the SMEFT whose coefficients can not be determined only from the low-energy data from B physics. We demonstrate how to determine these coefficients with some new physics models, like \(Z^{\prime}\) model and leptoquark models, and then consider how to probe these operators of SMEFT at high energy by using the process \(\mu^{+}\mu^{-}\to tc\) at future muon colliders, which can provide complementary information except for \(\mu^{+}\mu^{-}\to bs\) on the underlying models which lead to rare B decay processes. We perform a Monte Carlo study (a hadron level analysis) to show how to separate the signal events from the SM background events and estimate the sensitivity to the Wilson coefficients for different models.
## 1 Introduction
Searching for new physics is the prime target of both the high energy frontier and high precision frontier. In the rare decay of B mesons, long-standing discrepancies were reported between the Standard Model predictions and experimental measurements, with a hint of non-lepton flavor universality(LFU), especially in the muon-related final states. These hints are observed in a \(B\to K\mu^{+}\mu^{-}\), \(B_{s}\to\phi\mu^{+}\mu^{-}\), \(B_{s}\to\mu^{+}\mu^{-}\) and angular distribution of \(B\to K^{*}\mu^{+}\mu^{-}\)[1; 2; 3; 4; 5]. For the LFU violation, the hints were reported by LHCb[6; 7; 8; 9; 10; 11] in the ratio
\[R_{K}=\frac{BR(B\to K\mu^{+}\mu^{-})}{BR(B\to Ke^{+}e^{-})},R_{K^{*}}=\frac{ BR(B\to K^{*}\mu^{+}\mu^{-})}{BR(B\to K^{*}e^{+}e^{-})} \tag{1}\]
Although large hadronic uncertainties can enter in some of these absolute branching fractions and angular observables for the Standard Model predictions, \(R_{K},R_{K^{*}},B_{s}\to\mu^{+}\mu^{-}\) are considered relatively theoretically clean. The deviation in those measurements might lead to indirect evidence for new physics [12; 13; 14; 15; 16; 17; 18; 19; 20; 21]. This picture has suddenly changed with the very recent experimental updates from CMS collaboration on BR(\(B_{(d,s)}\to\mu^{+}\mu^{-}\)) with the full Run 2 data [22], and LHCb analysis of \(R_{K}\) and \(R_{K*}\) with the full Run 1 and 2 dataset [23; 24]. The newly reported measurements are in agreement with the Standard Model values, however, the new physics effects can still come into play due to both theoretical and experimental uncertainties [25; 26; 27; 28; 29; 30; 31]. While that measurement provides
hints of new physics, the exact mechanisms (models) behind those hints are still unknown, and low-energy measurements alone cannot fully reveal the nature behind them.
On the other hand, by scattering high-energy particles, collider experiments provide unique opportunities to access underlying UV theories. Among various current and future colliders[32], a multi-TeV muon collider[33, 34] is ideal for such studies. Being fundamental particles, the entire energy of incoming muons is available to produce short-distance scattering rather than being spread among partons of hadrons, and thus a 14 TeV muon collider can be as effective as a 100 TeV proton-proton collider[35]. Such high energy reach strongly benefits new heavy particles searches, such as minimal dark matter models[36, 37] searches, as well as indirect measurement at high energies[38]. Moreover, vector boson fusion processes are found to be important at muon colliders[39], and enable access to difficult parameters such as the Higgs quartic self-coupling[40]. More importantly, muon colliders have a special feature: the initial states are muons, directly related to those low-energy physics. The muon \(g-2\) anomaly can be probed directly at muon colliders[41]. In the context of muon \(g-2\) anomaly, studies have been performed on testing it under the SMEFT formalism[41], and model-exhaustive analysis[41, 42, 43, 44, 45].
The multi-TeV reach and precision advantages of muon colliders make them a perfect place to probe muon-related B physics at low energy. There are already proposals studying \(\mu^{+}\mu^{-}\to bs\) (here \(bs\) denotes both \(b\bar{s}\) and \(\bar{b}s\)) at the multi-TeV scale to discuss the impact of low-energy rare B decay processes [46; 47; 48]. At the current stage, some rare B decay processes can be nicely parameterized by effective four-fermion operators at the B-physics scale (\(\mu=4.8\) GeV):
\[O_{9} = (\bar{s}\gamma_{\mu}P_{L}b)(\bar{\ell}\gamma^{\mu}\ell), \tag{2}\] \[O_{10} = (\bar{s}\gamma_{\mu}P_{L}b)(\bar{\ell}\gamma^{\mu}\gamma_{5}\ell), \tag{3}\]
For the new physics effects described by operators \(O_{9}\) and \(O_{10}\), when we go above the weak scale (\(\mu=M_{Z}\) for instance), the Wilson coefficients of these two operators depend on the specific models (e.g. leptoquarks, scalars, \(Z^{\prime}\), etc.).
In this work, we adopt the same assumption that the new physics scale is around \(\mu=35\) TeV and that it is challenging to discover the new physics signature at the LHC. We also assume that the new physics above the weak scale (\(\mu=M_{Z}\)) can be described by the framework of the standard model effective field theory (SMEFT). Under appropriate assumptions, it is noteworthy that at least one more operator is needed in order to match the SMEFT to the low energy operators \(O_{9}\) and \(O_{10}\). These three operators in the SMEFT are given in Eqs. (12)-(14). Different new physics models can lead to different matching conditions across the weak scale.
Different from the proposal presented in [46], where the polarization of muon beams and charge tagging of jets in the final state are assumed, in this work, in order to reveal the nature of new physics related to the rare B decays, we propose to measure the process \(\mu^{+}\mu^{-}\to tc\) (as with \(bs\), \(tc\) denotes both \(t\bar{c}\) and \(\bar{t}c\)). The same final state has been studied for \(e^{+}e^{-}\) colliders in Refs. [49; 50; 51], and the \(tc\ell\ell\) interaction has been studied through the \(t\to c\ell\ell\) decay in [52]. We extend these works to the muon collider, with a detailed Monte Carlo simulation at the hadron level.
This study shows that the measurement of \(tc\) final state can provide complementary information to the process \(\mu^{+}\mu^{-}\to bs\). The four fermion operators for \(\mu^{+}\mu^{-}\to tc\) naturally arise from the operator matching conditions from the low energy effective field theory to the SMEFT, since left-handed top and charm quarks form electroweak SU(2) doublets respectively as the partners of the left-handed bottom and strange quarks in SMEFT operators. The process \(\mu^{+}\mu^{-}\to tc\) can also help to distinguish different new physics models, which yield different operator matching conditions. We will consider one \(Z^{\prime}\) model and three leptoquark models.
The leptoquark particles are predicted in grand unification models and they can either be scalar or vector bosons. Usually, these particles are superheavy (e.g. \(10^{13}\) GeV), as required by the experimental data on proton decays. Very light leptoquarks (e.g. 1 TeV or a few tens of TeV) are consistent with experimental data if their couplings to the first generation of matter fields are weak. Light leptoquarks are also predicted in the Pati-Salam model, where lepton number is treated as the fourth color quantum number. These light leptoquarks can be accessible even at the LHC and future collider projects. Recently, leptoquarks have attracted much attention in order to interpret the previously claimed B anomalies. A comprehensive review of the phenomenology of leptoquarks can be found in [53].
Our new findings in this work include: 1) The dominant SM background events for the process \(\mu^{+}\mu^{-}\to tc\) are different from the background of the process \(\mu^{+}\mu^{-}\to bs\). Due to the highly boosted top quark in the final states, jet substructure analysis is crucial to distinguish signal and background events. 2) In the case that no new resonance is found at TeV muon colliders, measurement of the process \(\mu^{+}\mu^{-}\to tc\) can provide crucial information on the potential nature of the new physics which leads to the low energy rare B decay data. 3) It is noticed that the final state \(W^{\pm}jj\) can have a very large cross section (e.g. 100 fb at collision energy \(\sqrt{s}=10\) TeV), and such a final state is mainly from weak final state radiation processes. Suppressing the weak final state radiation might be important for the signal search.
This paper is organized as follows. In section 2, we demonstrate the relations between the Wilson coefficients of the low-energy effective Hamiltonian and those of the SMEFT. In section 3, we present the values of Wilson coefficients of the SMEFT derived from different new physics models. These new physics models can accommodate the rare B decay data. In section 4, we perform a Monte Carlo simulation to explore the sensitivity of future muon colliders to these new physics scenarios. We end this work with a few discussions and conclusions. In the Appendix, we present the renormalization group equations of the Wilson coefficients in the SMEFT and the effective Hamiltonian, respectively.
## 2 Matching and running of different operator bases
Interestingly, these rare B decay processes can be simultaneously explained in a model-independent way with the effective four-fermion operators. In many B-physics studies, new physics effects strongly prefer an effective Hamiltonian with \(O_{9}\) and \(O_{10}\) with Wilson
coefficients of dimension 6 interactions at the renormalization scale \(\mu=4.8\) GeV,
\[\mathcal{H}_{eff} = \mathcal{H}_{eff}^{SM}-\mathcal{N}\sum_{\ell=e,\mu}\sum_{i=9,10} \left(c_{i}O_{i}^{bs\ell\ell}+c_{i}^{\prime}O_{i}^{bs\ell\ell}\right)+h.c., \tag{1}\]
where the normalization factor \(\mathcal{N}\) is
\[\mathcal{N} = \frac{4G_{F}}{\sqrt{2}}V_{tb}V_{ts}^{*}\frac{e^{2}}{16\pi^{2}}. \tag{2}\]
The operators \(O_{9}\) and \(O_{10}\) are
\[O_{9}^{bs\ell\ell} = (\bar{s}\gamma_{\mu}P_{L}b)(\bar{\ell}\gamma^{\mu}\ell), \tag{3}\] \[O_{10}^{bs\ell\ell} = (\bar{s}\gamma_{\mu}P_{L}b)(\bar{\ell}\gamma^{\mu}\gamma_{5}\ell), \tag{4}\]
where \(P_{L}=(1-\gamma_{5})/2\) is the left-handed projection operator. For \(O_{i}^{\prime}\), \(P_{L}\) is replaced by right-handed projection operator \(P_{R}=(1+\gamma_{5})/2\).
For the purpose of this paper, to study the related operator \(O_{9}\) and \(O_{10}\) defined in the low energy B physics scale in higher colliding energy scale, we need to treat the running and matching of the operators carefully. Especially the subtleties that arise when the energy runs across the weak scale. Our study finds that in the energy scale above the weak scale, the impact of \(O_{9}\) and \(O_{10}\) operators needs to be reparametrized by three different operators, rather than two, in the so-called Warsaw basis, known as the Standard model effective theory (SMEFT). Here we introduce different bases and coefficient matching as below.
At the scale below the weak scale, potential new physics effects are described by a low energy effective field theory (LEFT). The LEFT Lagrangian with dimension 6 operators can be written as
\[\mathcal{L}_{LEFT} = \mathcal{L}_{QCD+QED}+\frac{1}{v^{2}}\sum_{i}L_{i}Q_{i},\]
where \(L_{i}\) are the Wilson Coefficients of LEFT, and \(v=246\) GeV is the vacuum expectation value.
In this paper, we use the convention of Ref. [54]. The most relevant operators are
\[Q_{ed}^{V,LL}(p,r,s,t) = (\bar{e}_{Lp}\gamma^{\mu}e_{Lr})(\bar{d}_{Ls}\gamma_{\mu}d_{Lt}), \tag{5}\] \[Q_{de}^{V,LR}(p,r,s,t) = (\bar{d}_{Lp}\gamma^{\mu}d_{Lr})(\bar{e}_{Rs}\gamma_{\mu}e_{Rt}), \tag{6}\]
where \(p,r,s,t\) are generation indices of quark or lepton. Since the EW symmetry is broken, here \(e\) and \(d\) are the lepton field and down-type quark field. \(L\) and \(R\) are the chiral indices of fermions.
One can derive the relations between \(Q_{i}\) and \(O_{i}\) easily:
\[Q_{ed}^{V,LL} = \frac{1}{2}\left(O_{9}-O_{10}\right), \tag{7}\] \[Q_{de}^{V,LR} = \frac{1}{2}\left(O_{9}+O_{10}\right). \tag{8}\]
So we have
\[L_{ed}^{V,LL} = \frac{\mathcal{N}v^{2}}{2}(c_{9}-c_{10}), \tag{9}\] \[L_{de}^{V,LR} = \frac{\mathcal{N}v^{2}}{2}(c_{9}+c_{10}). \tag{10}\]
The constraints of \(c_{9}\) and \(c_{10}\) can be found in Ref. [55].
Assuming the new physics appears at a scale above the weak scale \(\Lambda\), the SMEFT Lagrangian with dimension 6 operators (\(\mathcal{O}_{i}\)) is defined as
\[\mathcal{L}_{SMEFT} = \mathcal{L}_{SM}+\frac{1}{\Lambda^{2}}\sum_{i}C_{i}\mathcal{O}_{i}, \tag{11}\]
where \(C_{i}\) are called Wilson Coefficients.
A complete set of non-redundant dimension 6 operators has been derived in Ref. [56], so-called Warsaw basis. The most relevant operators in this paper are
\[\mathcal{O}_{lq}^{(1)}(p,r,s,t) = (\bar{l}_{p}\gamma^{\mu}l_{r})(\bar{q}_{s}\gamma_{\mu}q_{t}), \tag{12}\] \[\mathcal{O}_{lq}^{(3)}(p,r,s,t) = (\bar{l}_{p}\gamma^{\mu}\tau^{I}l_{r})(\bar{q}_{s}\gamma_{\mu} \tau^{I}q_{t}),\] (13) \[\mathcal{O}_{qe}(p,r,s,t) = (\bar{q}_{p}\gamma^{\mu}q_{r})(\bar{e}_{s}\gamma_{\mu}e_{t}), \tag{14}\]
where \(q\), \(l\), \(e\) are the left-handed quark doublet, left-handed lepton doublet, and right-handed lepton singlet, respectively. \(p,r,s,t\) are generation indices of quark or lepton.
When the electroweak symmetry breaking occurs, the SM heavy particles (top, Higgs, \(W^{\pm}\), \(Z\)) are integrated out, and the SMEFT should be matched to LEFT. The full matching conditions at the tree level have been derived by Ref. [54]. In this paper, we only consider the operators with flavor indices \((p,r,s,t)=(2,2,2,3)\) or \((p,r,s,t)=(2,3,2,2)\), so the matching conditions are simplified to
\[L_{ed}^{V,LL}(2,2,2,3) = \frac{v^{2}}{\Lambda^{2}}\left[C_{lq}^{(1)}(2,2,2,3)+C_{lq}^{(3)} (2,2,2,3)\right], \tag{15}\] \[L_{de}^{V,LR}(2,3,2,2) = \frac{v^{2}}{\Lambda^{2}}C_{qe}(2,3,2,2). \tag{16}\]
The related renormalization group equations for the SMEFT and LEFT can be found in Appendix A and Appendix B, respectively.
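As a quick numerical illustration of Eqs. (9)-(10) and the tree-level matching of Eqs. (15)-(16), the short script below converts a low-energy benchmark point into the relevant combinations of SMEFT coefficients at a cutoff \(\Lambda=10\) TeV, neglecting the RGE running between \(M_{Z}\) and \(\Lambda\); the inputs (\(G_{F}\), \(|V_{tb}V_{ts}^{*}|\), \(\alpha_{em}\)) are rough reference values chosen by us for this estimate, not fit inputs.

```python
import math

# Rough reference inputs (GeV units); signs and the precise value of
# alpha_em(mu) are ignored for this order-of-magnitude estimate.
GF, Vtb_Vts, alpha_em, v, Lam = 1.1664e-5, 0.0405, 1.0 / 133.0, 246.0, 1.0e4

def smeft_from_c9_c10(c9, c10):
    norm = (4 * GF / math.sqrt(2)) * Vtb_Vts * alpha_em / (4 * math.pi)  # N of Eq. (2)
    L_VLL = 0.5 * norm * v**2 * (c9 - c10)      # Eq. (9)
    L_VLR = 0.5 * norm * v**2 * (c9 + c10)      # Eq. (10)
    C_lq1_plus_lq3 = L_VLL * Lam**2 / v**2      # Eq. (15), tree level
    C_qe = L_VLR * Lam**2 / v**2                # Eq. (16), tree level
    return C_lq1_plus_lq3, C_qe

print(smeft_from_c9_c10(-0.39, 0.39))   # BP3: roughly (-0.03, 0.0)
```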
In Table 1, we list the running of three benchmark points: 1) BP1, \(c_{9}=-1.0,c_{10}=-0.1\), as an example of a general new physics scenario; 2) BP2, \(c_{9}=c_{10}=0.25\), as an example in which \(C_{qe}\) is the main contribution at scales beyond \(M_{Z}\); 3) BP3, \(c_{9}=-c_{10}=-0.39\), as an example in which \(C_{lq}^{(1)}\) and \(C_{lq}^{(3)}\) are the main contributions at scales beyond \(M_{Z}\). These benchmark points are allowed in the analysis of Ref. [25], which has considered the newest LHCb data. The BP3 is the best fit in Ref. [55]. It is observed that the operator mixings induced by the RGE running have no large effects on the size of the Wilson coefficients of the SMEFT at high energy machines.
## 3 The Matching Conditions of New Physics
At muon colliders, the operators we input are those of Eqs. (12) to (14). In the massless limit,1 the differential cross sections for \(\mu^{+}\mu^{-}\to tc\) and \(\mu^{+}\mu^{-}\to bs\) are:
Footnote 1: For simplicity, in this section we work under the massless limit. Nevertheless, for the numerical results discussed in later sections, full mass dependence is always included.
\[\frac{\mathrm{d}\sigma(\mu^{+}\mu^{-}\to X)}{\mathrm{d}\cos \theta}=\frac{3s}{256\pi\Lambda^{4}}\left[|C_{LL}^{X}|^{2}(1+\cos\theta)^{2} +|C_{qe}|^{2}(1-\cos\theta)^{2}\right] \tag{15}\]
where
\[C_{LL}^{bs}=C_{lq}^{(1)}+C_{lq}^{(3)},C_{LL}^{tc}=C_{lq}^{(1)}-C_ {lq}^{(3)} \tag{16}\]
We note that the left-handed operators(\(C_{LL}^{X}\)) and right-handed operator(\(C_{qe}\)) have different \(\theta\) dependence, hence we expect that with flavor tagging and charge identification, they can be distinguished with differential cross section. For simplicity, in this work, we consider only inclusive cross section, which can be obtained by integrating over \(\cos\theta\), given by:
\[\sigma(\mu^{+}\mu^{-}\to X)=\frac{1}{32\pi\Lambda^{4}}s(|C_{LL}^{X}|^{2}+|C_{ qe}|^{2}) \tag{17}\]
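Integrating the \((1\pm\cos\theta)^{2}\) factors of the differential distribution over \(\cos\theta\in[-1,1]\) gives \(8/3\) each, which reproduces the prefactor \(s/(32\pi\Lambda^{4})\) in Eq. (17). The short sketch below evaluates Eq. (17) numerically; the conversion \(1~\mathrm{GeV}^{-2}\simeq 3.894\times 10^{11}\) fb is standard, while the coefficient values are only illustrative examples of the size obtained in the matching estimate above.

```python
import math

GEV2_TO_FB = 3.894e11          # 1 GeV^-2 in femtobarn

def sigma_fb(sqrt_s_TeV, C_LL, C_qe, Lam_TeV=10.0):
    """Eq. (17): inclusive cross section of mu+ mu- -> X, in fb."""
    s = (sqrt_s_TeV * 1e3) ** 2               # GeV^2
    Lam = Lam_TeV * 1e3                       # GeV
    return s * (C_LL**2 + C_qe**2) / (32 * math.pi * Lam**4) * GEV2_TO_FB

for E in (3, 10, 30):                          # collider energies in TeV
    print(E, "TeV:", round(sigma_fb(E, C_LL=0.03, C_qe=0.0), 4), "fb")
# roughly 0.003 fb at 3 TeV, 0.03 fb at 10 TeV, 0.3 fb at 30 TeV
```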
In a general new physics model, all three operators \(C_{lq}^{(1)}\), \(C_{lq}^{(3)}\) and \(C_{qe}\) can be non-zero, and thus both processes \(\mu^{+}\mu^{-}\to tc\) and \(\mu^{+}\mu^{-}\to bs\) receive new physics contributions, to be measured at future muon colliders. We note that for \(\mu^{+}\mu^{-}\to bs\), the corresponding Wilson coefficients \(C_{LL}^{bs}\) and \(C_{qe}\) directly govern the \(b\to s\mu^{+}\mu^{-}\) transition in \(B\) physics, and are hence constrained by those measurements. On the other hand, \(C_{LL}^{tc}\) is unconstrained. As a general argument, the underlying new physics which induces \(C_{LL}^{bs}\), or equivalently \(C_{lq}^{(1)}\) and \(C_{lq}^{(3)}\), would also induce \(C_{LL}^{tc}\) with a size of similar order.
Therefore, we expect that the cross-section of \(\mu^{+}\mu^{-}\to tc\) is comparable to \(\mu^{+}\mu^{-}\to bs\), and as we will show in Section 4, due to the presence of top quark in the final state of \(\mu^{+}\mu^{-}\to tc\), it is easier to measure than \(\mu^{+}\mu^{-}\to bs\). To be more specific, below we discuss four types of new physics models labeled as Model I-IV, where the relations between \(C_{LL}^{tc}\) and \(C_{LL}^{bs}\) are given, and later in Section 4 we will show how to distinguish them by measuring both \(\mu^{+}\mu^{-}\to tc\) and \(\mu^{+}\mu^{-}\to bs\).
* A \(Z^{\prime}\) model with flavor symmetry \(U(1)_{L_{\mu}-L_{\tau}}\). In this model, the \(L_{\mu}-L_{\tau}\) is promoted into a \(U(1)\) gauge symmetry, with a massive gauge boson \(Z^{\prime}\). Originally it was proposed for muon \(g-2\) anomaly [12; 57] and neutrino mixing [58]. Later it is realized that the coupling to quarks can be described by high dimensional operators, which can be generated through new heavy quarks [59]. Such interaction can induce flavor violation [60; 61; 62]. Since the effective interaction between \(bs\mu\mu(tc\mu\mu)\) is mediated by an \(s\)-channel \(Z^{\prime}\), an electroweak singlet, clearly we have \(C_{lq}^{(3)}=0\), hence \(C_{LL}^{bs}=C_{LL}^{tc}=C_{lq}^{(1)}\). On the other hand, in this kind of models \(C_{qe}\) is independent from \(C_{lq}^{(1)}\). Regardless of the actual value of \(C_{qe}\) and \(C_{lq}^{(1)}\), we have \(\sigma(tc)\sim\sigma(bs)\).
* A scalar triplet leptoquark \(S_{3}\) model. In this model, the new physics is mediated by a heavy scalar leptoquark, which belongs to \(SU(2)_{L}\) triplet. The corresponding Lagrangian can be written as [53] \[\mathcal{L}_{NP}=\sum_{i,j}\lambda_{ij}\bar{Q}_{i}^{c}(i\tau^{2}) \tau^{I}L_{j}S_{3}^{I}+\text{h.c.}\] (10) In this model, both \(C_{lq}^{(1)}\) and \(C_{lq}^{(3)}\) can be generated at tree-level, which is given by [63] \[\frac{1}{\Lambda^{2}}[C_{lq}^{(1)}]_{prst}= 3\frac{\lambda_{sp}^{*}\lambda_{tr}}{4M^{2}}\] (11) \[\frac{1}{\Lambda^{2}}[C_{lq}^{(3)}]_{prst}= \frac{\lambda_{sp}^{*}\lambda_{tr}}{4M^{2}}.\] (12) where \(M\) is the mass of the leptoquark \(S_{3}\). On the other hand, \(C_{qe}\) is zero at tree-level, and it can only be generated through loop corrections, and hence we expect \(C_{qe}\ll C_{lq}^{(1)},C_{lq}^{(3)}\). Therefore, we have \(C_{LL}^{bs}\sim 2C_{LL}^{tc}\), and \(\sigma(tc)\sim\frac{1}{4}\sigma(bs)\)
* A scalar singlet leptoquark \(S_{1}\) model. In this model, new physics is mediated by a heavy scalar leptoquark, which belongs to \(SU(2)_{L}\) singlet. There are three possible hypercharge assignments, and we consider the case \(Y=\frac{1}{3}\) here. The corresponding Lagrangian is \[\mathcal{L}_{NP}=\sum_{i,j}\lambda_{ij}\bar{Q}_{i}^{c}(i\tau_{2})L_{j}S_{1}+ \text{h.c.}\] (13) At the tree level, the relevant Wilson coefficients are \[\frac{1}{\Lambda^{2}}[C_{lq}^{(1),\text{tree}}]_{prst}=\frac{ \lambda_{sp}^{L*}\lambda_{tr}^{L}}{4M^{2}}\] (14) \[\frac{1}{\Lambda^{2}}[C_{lq}^{(3),\text{tree}}]_{prst}=-\frac{ \lambda_{sp}^{L*}\lambda_{tr}^{L}}{4M^{2}}\] (15)
Interestingly, we can see that at tree level \(C_{LL}^{bs}=0\). The leading contribution to \(C_{LL}^{bs}\) starts at the one-loop level [64]. Consequently, we expect that \(C_{LL}^{bs}\ll C_{LL}^{tc}\), and hence \(\sigma(tc)\gg\sigma(bs)\). For the experimental bounds (\(C_{9}=-1\) benchmark point), we get
\[\sum_{i}|\lambda_{u_{i}\mu}|^{2}\text{Re}\frac{(\lambda\lambda^{ \dagger})_{bs}}{V_{tb}V_{ts}^{*}}-1.74|\lambda_{\mu}|^{2}\sim 12.5\hat{M}^{2} \tag{21}\]
where \(\hat{M}\) is the mass of the leptoquark in terms of unit TeV, and from \(B_{s}-\bar{B}_{s}\) mixing we have
\[\frac{(\lambda\lambda^{\dagger})_{bs}}{V_{tb}V_{ts}^{*}}\sim(1.87+0.45i)\hat{M} \tag{22}\]
For perturbativity, we have \(|\lambda_{u_{i}\mu}|^{2}<4\pi\), which yields
\[M\lesssim 1.9~\text{TeV} \tag{23}\]
Consequently, we expect that in the energy range of future muon colliders, a pair of the singlet scalars \(S_{1}\) can be produced and observed directly.
* A \(U_{1}\) vector leptoquark model. It was considered as one possible UV completion of the EFT in [65], and examined in more detail in [66; 67; 68; 69]. The Lagrangian is given by \[\mathcal{L}_{NP}=-\frac{1}{2}U_{1,\mu\nu}^{\dagger}U^{1,\mu\nu}+M_{U}^{2}U_{1,\mu}^{\dagger}U_{1}^{\mu}+g_{U}U_{1,\mu}\lambda_{ij}\bar{Q}_{i}\gamma^{\mu}L_{j}+\text{h.c.}\] (24) The relevant Wilson coefficients are \[\frac{1}{\Lambda^{2}}[C_{lq}^{(1),\text{tree}}]_{prst}=g_{U}^{2}\frac{\lambda_{sp}^{L*}\lambda_{tr}^{L}}{2M^{2}}\] (25) \[\frac{1}{\Lambda^{2}}[C_{lq}^{(3),\text{tree}}]_{prst}=g_{U}^{2}\frac{\lambda_{sp}^{L*}\lambda_{tr}^{L}}{2M^{2}}\] (26) Clearly, in this case we have \(C_{LL}^{tc}\sim 0,C_{LL}^{bs}\gg C_{LL}^{tc}\). Moreover, in this minimal setup we have \(C_{qe}=0\), though it can be introduced through adding other particles and/or interactions. We'd like to note that the above are examples of the simplest new physics models, where only one type of mediator is introduced. Nature may be more complicated and contain several types of mediators, e.g. both scalar singlet and triplet leptoquarks may be present [70]. The resulting patterns of \(\sigma(tc)/\sigma(bs)\) for Models I-IV are summarized in the short sketch below.
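Up to the (model-dependent) contribution of \(C_{qe}\), the tree-level structures quoted above already fix the ratio \(\sigma(tc)/\sigma(bs)\). A minimal sketch, with \(C_{qe}\) set to zero (an extra assumption, in particular for Model I, where \(C_{qe}\) is an independent parameter) and overall normalizations dropped since they cancel in the ratio:

```python
# (C_lq^(1), C_lq^(3)) up to a common positive factor, per the list above.
models = {
    "I   (Z')":         (1.0, 0.0),
    "II  (S3 triplet)":  (3.0, 1.0),
    "III (S1 singlet)":  (1.0, -1.0),
    "IV  (U1 vector)":   (1.0, 1.0),
}
for name, (c1, c3) in models.items():
    c_bs, c_tc = c1 + c3, c1 - c3            # C_LL^bs and C_LL^tc
    if c_bs == 0.0:
        ratio = "-> infinity (sigma(bs) vanishes at tree level)"
    else:
        ratio = f"{(c_tc / c_bs) ** 2:.2f}"
    print(f"Model {name}: sigma(tc)/sigma(bs) = {ratio}")
# I: 1.00,  II: 0.25,  III: tree-level sigma(bs) vanishes,  IV: 0.00
```

Measuring both final states therefore helps separate the four scenarios even when they are tuned to reproduce the same \(b\to s\mu^{+}\mu^{-}\) data.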
## 4 Signature of \(\mu^{+}\mu^{-}\to tc\) at future muon colliders
### Cross sections of signal and main background processes
To study the \(\mu^{+}\mu^{-}\to tc\) process at a future muon collider, we generate a UFO model with the relevant operators using Feynrules [71], and import it into Madgraph5_aMC@NLO [72]. We calculate the cross sections of signals and backgrounds from \(E_{cm}=1\) TeV to 30 TeV. The energy dependencies of the cross sections are shown in Fig. 1.
For the \(\mu^{+}\mu^{-}\to tc\) and \(bs\) processes, we consider the BP3 as an example where \(c_{9}=-c_{10}=-0.39\). The cross section of the main signal process \(\mu^{+}\mu^{-}\to tc\) derived from the 4-fermion interactions is proportional to the square of the collision energy (\(\sigma\propto s\)), and it increases with the collision energy, as shown in Fig. 1. For the Wilson coefficients \(C_{lq}^{(1)}\) and \(C_{lq}^{(3)}\), we consider their relations in Models I and II. As expected, the cross sections satisfy \(\sigma(tc)\sim\sigma(bs)\) for Model I, and \(\sigma(tc)\sim\frac{1}{4}\sigma(bs)\) for Model II.
The main background processes include \(b\bar{b}\), \(c\bar{c}\), \(q\bar{q}\) (\(q=u,d,s\)), \(W^{+}W^{-}\), \(ZZ\), \(t\bar{t}\), and \(Wjj\) (\(j=u,d,s,c\)). These processes proceed through the \(s\)-channel, and their cross sections decrease with increasing collision energy. It is noteworthy that the cross section of the diboson process \(W^{+}W^{-}\) can be much larger than those of the di-quark final states. The process \(W^{\pm}jj\), which describes weak final-state radiation, is also included. Remarkably, its cross section is much larger than those of the other processes: it is almost \(10^{2}\) fb and remains nearly constant as the collision energy increases. Therefore, suppressing the background events of \(\mu^{+}\mu^{-}\to W^{\pm}jj\) will be crucial.
Near the threshold, the cross sections of all background processes are huge compared to that of the signal process. As the collision energy increases, the signal cross section can even exceed those of the background processes if the collision energy is large enough (say \(\sqrt{s}=30\) TeV) for some new physics models. Nonetheless, at \(\sqrt{s}=10\) TeV it may still be challenging to discover the signal events, since the cross sections of the background processes are several orders of magnitude larger than that of the signal process.
Figure 1: The energy dependencies of the cross sections of \(\mu^{+}\mu^{-}\to X\) are displayed. For the signal, \(X=tc/bs\). In other cases, \(X\) is the final state of the background. The BP3 with \(c_{9}=-c_{10}=-0.39\) is shown here. Their values are evaluated below the cutoff \(\Lambda=10\) TeV by our RGEs and matching conditions in the Appendix. Two models are considered: (a) Model I, \(C_{lq}^{(1)}=L_{ed}^{V,LL}\times\Lambda^{2}/v^{2}\) and \(C_{lq}^{(3)}=0\), and (b) Model II, \(C_{lq}^{(1)}=3C_{lq}^{(3)}=\frac{3}{4}L_{ed}^{V,LL}\times\Lambda^{2}/v^{2}\).
As we show in Fig. 1, the process \(\mu^{+}\mu^{-}\to tc\) can only be observed when \(E_{cm}>10\) TeV. At such high energy, we can expect two energetic jets to be observed at the detector for either signal or background events.
In Fig. 1, the BP3 with \(c_{9}=-c_{10}=-0.39\) given in Table 1 is considered. Their values are evaluated at the EW scale by our RGEs, Eq. 1 and Eq. 2, and then matched to the SMEFT by Eq. 15 and Eq. 16. From Eq. 15, we also know that the constraint from B physics applies only to the sum \(C_{lq}^{(1)}+C_{lq}^{(3)}\). So any new physics scenario with \(C_{lq}^{(1)}\) and \(C_{lq}^{(3)}\) as free parameters cannot be individually constrained by the data from B physics experiments. In Fig. 2, we show the running of \(C_{lq}^{(1)}+C_{lq}^{(3)}\) and \(C_{lq}^{(1)}-C_{lq}^{(3)}\) from the EW scale to the new physics scale \(\Lambda=10\) TeV for the BP3 case with \(c_{9}=-c_{10}=-0.39\). We have considered two simple cases at the EW scale (\(M_{Z}\)): 1) Model I with \(C_{lq}^{(1)}=L_{ed}^{V,LL}\frac{\Lambda^{2}}{v^{2}}\) and \(C_{lq}^{(3)}=0\); 2) Model II with \(C_{lq}^{(1)}=3C_{lq}^{(3)}=\frac{3}{4}L_{ed}^{V,LL}\frac{\Lambda^{2}}{v^{2}}\). Considering the experimental uncertainties, it is challenging to distinguish these two cases from the effects of the RGE running of \(C_{lq}^{(1)}+C_{lq}^{(3)}\) alone.
It is noteworthy that although these two NP cases, i.e. Model I and Model II, produce the same \(c_{9}\) and \(c_{10}\) at \(\mu=M_{B}\), their high energy behaviors are different from each other.
### Jet Level Analysis
It should be mentioned that for a muon collider the collision environment is relatively clean: there are no serious pileup or underlying events of the kind that occur at a hadron collider,
but there are beam-induced backgrounds arising from muon decay. By using jet grooming techniques, it is possible to suppress such beam-induced backgrounds [73]. We neglect the beam-induced backgrounds in our study.
Meanwhile, it is becoming increasingly realistic to use the particle-flow method to measure the jet energy, which can reduce the jet energy uncertainty to \(5\%-20\%\) [74]. The particle-flow method can also help to distinguish a light jet, a W/Z boson jet, and a top jet: a W/Z/top jet has more charged tracks in the detector than a light jet.
To further investigate the kinematic features of jets from signal processes, we input events generated with BP3 and Model I to PYTHIA8 [75] for parton shower and hadronization, and FastJet [76] to reconstruct jets. It is expected that jet algorithms developed for electron-positron colliders can also be applied well to muon colliders. Therefore, we use the generalized \(k_{t}\) algorithm for \(e^{+}e^{-}\) collisions, which is extended from a simple \(k_{t}\) algorithm [77]. This algorithm defines two distances:
\[d_{ij} = \min(E_{i}^{2p},E_{j}^{2p})\frac{1-\cos\theta_{ij}}{1-\cos R}, \tag{10}\] \[d_{iB} = E_{i}^{2p}, \tag{11}\]
where \(p\) and \(R\) are chosen by the user. If a \(d_{ij}\) is the smallest distance, particles \(i\) and \(j\) are recombined; if a \(d_{iB}\) is the smallest, particle \(i\) is declared an "inclusive jet". In this context, we choose \(p=1\).
It should be pointed out that the denominator \((1-\cos R)\) is replaced by \((3+\cos R)\) while choosing \(\pi<R<3\pi\) in FastJet. In this case, \(d_{iB}\) is always larger than \(d_{ij}\) so that only one inclusive jet can be found. If we also choose \(p=1\), the generalized \(k_{t}\) algorithm is identical to the original \(k_{t}\) algorithm [77], which only has a single distance:
\[d_{ij} = \min(E_{i}^{2},E_{j}^{2})\left(1-\cos\theta_{ij}\right), \tag{12}\]
and one can extract "exclusive jets" only. At a high-energy muon collider, the muon beams may radiate energetic particles at large angles to the beams. Such particles would be included in the exclusive jets, so the original algorithm is not sufficient in our context. A minimal sketch of the generalized \(e^{+}e^{-}\) \(k_{t}\) distances is given below.
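As an illustration of the distance measures in Eqs. 10-12, the following Python sketch clusters a toy list of four-momenta with the generalized \(e^{+}e^{-}\) \(k_{t}\) algorithm at \(p=1\). The toy particles and the E-scheme recombination are our own illustrative choices; the actual analysis uses FastJet as described above.

```python
import itertools
import numpy as np

def cluster_ee_genkt(particles, R=0.10, p=1):
    """Generalized e+e- kt clustering (Eqs. 10-11) on four-momenta (E, px, py, pz).
    Returns the list of inclusive jets."""
    parts = [np.asarray(v, dtype=float) for v in particles]
    jets = []
    denom = 1.0 - np.cos(R)

    def cos_theta(a, b):
        pa, pb = a[1:], b[1:]
        return np.dot(pa, pb) / (np.linalg.norm(pa) * np.linalg.norm(pb))

    while parts:
        # Beam distances d_iB = E_i^{2p}
        diB = [(parts[i][0] ** (2 * p), ("B", i)) for i in range(len(parts))]
        # Pair distances d_ij = min(E_i^{2p}, E_j^{2p}) (1 - cos theta_ij) / (1 - cos R)
        dij = [
            (min(parts[i][0] ** (2 * p), parts[j][0] ** (2 * p))
             * (1.0 - cos_theta(parts[i], parts[j])) / denom, ("P", i, j))
            for i, j in itertools.combinations(range(len(parts)), 2)
        ]
        dmin, tag = min(diB + dij, key=lambda t: t[0])
        if tag[0] == "B":
            # Smallest distance is a beam distance: declare an inclusive jet.
            jets.append(parts.pop(tag[1]))
        else:
            # Smallest distance is a pair distance: recombine i and j (E-scheme).
            _, i, j = tag
            merged = parts[i] + parts[j]
            parts = [v for k, v in enumerate(parts) if k not in (i, j)] + [merged]
    return jets

# Toy example: two back-to-back hard particles plus one soft, collinear particle.
# The soft particle is merged into the nearby hard one first, leaving two jets.
toy = [(5000.0, 0.0, 0.0, 5000.0), (4800.0, 0.0, 0.0, -4800.0), (50.0, 1.0, 0.0, 49.99)]
for jet in cluster_ee_genkt(toy):
    print(jet)
```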
Below we demonstrate a jet-level case study at a collision energy of \(\sqrt{s}=10\) TeV, with BP3 and Model I as the theoretical input. We demand that all signal and background events decay hadronically and neglect the semi-leptonic and purely leptonic final states.
Since the dominant background processes \(WW(ZZ)\), \(t\bar{t}\), and \(jj\) can have more than two energetic jets in the final state, it is better to have fewer jets in the final state in order to avoid combinatoric issues. For example, when \(R=0.05\) and \(E_{j}>100\) GeV, the number of jets in a signal event can be well above 10, which is difficult to analyze. Appropriate jet parameters, such as the cone parameter and the energy cut, are therefore crucial for the jet multiplicity and the preselection of signal events. Because both the \(W/Z\) bosons and the top quarks are highly boosted in the final state (each carrying an energy of about 5 TeV), it is crucial to set the jet parameter so as to capture
all the decay products of the \(W/Z\) bosons and top quarks. Therefore, in this study we focus on boosted events in our analysis. Provided the future ECAL (HCAL) sub-detectors have sufficiently fine granularity, a future muon collider detector should be able to resolve the substructure of such highly boosted top jets. In principle, a future detector of a muon collider should be very similar to the detectors of CEPC and ILC.
The optimization of the parameters \(p\) and \(R\) is a complicated task. As a first attempt, we choose \(p=1\) so that the algorithm works similarly to a \(k_{t}\) algorithm at a hadron collider [78, 79]. For the \(t\bar{c}(\bar{t}c)\) final state, we look for an \(R\) parameter that reconstructs a heavy jet around the top mass together with a light jet. We have scanned the \(R\) parameter from \(0.05\) to \(0.15\) and found that \(R=0.10\) satisfies this requirement.
In Fig. 3(a), we display the energy distribution of the leading three jets in \(t\bar{c}(\bar{t}c)\) final states with \(R=0.10\) and \(p=1\). These jets are sorted by energy. Obviously, the first two jets have energies around \(E_{cm}/2\), which means that they most likely originate from the hard process. We also observe that the energy of the third jet can reach several hundred GeV or even the TeV level. In such a high-energy collider, the parton shower can radiate high-energy particles that are detected as hard jets. To reduce these radiations, we implement a cut \(E(j)>500\) GeV on the jets of each event. After this cut, we plot the number of jets in Fig. 3(b). As we can see, the peak is at \(N_{jets}=2\) for the two-body final-state processes, while the peak for the \(W^{\pm}jj\) background is at 3. Therefore, demanding \(N_{j}=2\) jets for each event can heavily suppress the background events of \(\mu^{+}\mu^{-}\to W^{\pm}jj\).
Fig. 4 shows the invariant masses of the two most energetic jets of signal and background events. Here, we label the heavier jet as HJ in Fig. 4(a) and the lighter one as LJ in Fig. 4(b). As we expect, for the signal HJ has a peak around the top mass and the distribution of LJ is flat. For the backgrounds with heavy particles (\(t\), \(W\), and \(Z\)), we also observe
Figure 3: (a) The jet energy distributions of process \(\mu^{+}\mu^{-}\to tc\) are displayed, where jets are sorted by energy. (b) The number of jets after implementing cut \(E(j)>500\) GeV.
peaks around their masses. This is because these particles are highly boosted in such a high-energy machine.
\(b\)-tagging and \(c\)-tagging techniques can help separate signal and background. As shown in [80], the larger the transverse momentum, the better the \(b\)-tagging efficiency; therefore we expect a higher \(b\)-tagging efficiency at a 10 TeV collider. Although B mesons and D mesons have similar proper lifetimes, of order \(10^{-12}\) seconds, their masses are different. If \(c\)-tagging techniques [81] can be applied in our analysis, we expect better results to be obtained.
Since we only consider hadronic decays, the top quark decays to a \(b\) quark and a \(W\) boson, and the \(W\) further decays to light quarks. So the HJ of a signal event should also be tagged as a \(b\)-jet in a future detector. To account for \(b\)-tagging effects, we track all decay products of \(b\)-hadrons after hadronization. If the constituents of a jet include a \(b\)-hadron decay product, we label this jet as a true \(b\)-jet. The same procedure is applied to label a \(c\)-jet.
Fig. 5(a) and Fig. 5(b) show the number of true \(b\)-jets and \(c\)-jets for the signal and for the backgrounds with quarks. Obviously, one \(b\)-jet and one \(c\)-jet are found in the \(t\bar{c}(\bar{t}c)\) signal. For backgrounds with two \(b\) quarks (\(t\bar{t}\) and \(b\bar{b}\)), two \(b\)-jets are found in most events. For the \(c\bar{c}\) background, most events include two \(c\)-jets. Most of the \(q\bar{q}\) events have neither a \(b\)-jet nor a \(c\)-jet, but a small fraction of such events can have \(b\)- or \(c\)-jets in the final state. For example, in the parton shower a quark has a certain probability to radiate a gluon, and this gluon may subsequently split into heavy quarks such as \(b\bar{b}\) and \(c\bar{c}\). Such processes occur more readily for an energetic light quark, which may increase the mistagging rate of light quarks. For the backgrounds with gauge bosons, as plotted in Fig. 5(c) and Fig. 5(d), \(ZZ\), \(W^{+}W^{-}\), or \(W^{\pm}jj\) can decay to \(b\) or \(c\) quarks, so we also observe some \(b\)- or \(c\)-jets in these events.
Figure 4: The invariant mass of (a) the heavier jet and (b) the lighter jet in signal and backgrounds are displayed.
With this flavor-tagging information, we can implement a \(b\)-tagging cut. In our analysis, the \(b\)-tagging efficiency is \(\epsilon_{b}=0.7\), while the mistagging rates are \(\epsilon_{c}=0.1\) and \(\epsilon_{q}=0.01\) for \(c\)-jets and light jets, respectively. A sketch of how such flavor-tagging weights can be applied event by event is given below.
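To illustrate how these efficiencies enter the event selection statistically, the sketch below weights an event by the probability that its heavy jet is \(b\)-tagged while its light jet is untagged (the tagging requirement used later in the cut flow); the helper names and event representation are our own illustrative choices.

```python
# Probability that a jet of a given true flavor is b-tagged, as used in the analysis.
EFF = {"b": 0.7, "c": 0.1, "q": 0.01}

def tagging_weight(heavy_jet_flavor, light_jet_flavor):
    """Probability that the heavy jet is b-tagged AND the light jet is not tagged,
    given the true flavors ('b', 'c', or 'q') of the two leading jets."""
    p_tag_heavy = EFF[heavy_jet_flavor]
    p_untag_light = 1.0 - EFF[light_jet_flavor]
    return p_tag_heavy * p_untag_light

# Example: a signal-like event (heavy jet from the top's b quark, light c-jet)
# versus a light-jet background event.
print(tagging_weight("b", "c"))  # 0.7 * 0.9  = 0.63
print(tagging_weight("q", "q"))  # 0.01 * 0.99 ~ 0.0099
```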
Below we introduce some simple cuts to separate signal and background events:
* Cut1: \(N_{jet}=2\), \(N_{lepton}=0\) and \(M_{jj}>8\) TeV,
* Cut2: 150 GeV\(<M(HJ)<200\) GeV,
* Cut3: \(M(LJ)<75\) GeV,
* Cut4: the heavy jet is b-tagged and the light jet is not tagged.
Figure 5: The number of (a) true \(b\)-jets and (b) true \(c\)-jets in signals and backgrounds with quarks are displayed. As a reference, the number of (c) true \(b\)-jet and (d) true \(c\)-jet in the background with gauge boson are displayed.
The results of the cut flow are listed in Table 2. As we can see, the cuts on the invariant masses of the heavier and lighter jets efficiently reduce the backgrounds with heavy particles, and the \(b\)-tagging cut efficiently reduces the backgrounds with a \(W\) boson. After these cuts, the huge backgrounds are successfully reduced and the signal events can be observed with remarkable significance; a simple estimate of the significance from the post-cut yields is sketched below.
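As a cross-check of the last two columns of Tables 2 and 3, which are left blank above, the snippet below computes \(S/B\) and \(\sigma=S/\sqrt{S+B}\) directly from the post-cut yields quoted in those tables.

```python
import math

# Post-cut yields read off Tables 2 and 3 (10 ab^-1 at a 10 TeV muon collider).
tc_signal, tc_backgrounds = 101, [35, 75, 70, 53, 0, 35, 0]        # after Cut4, Table 2
bs_signal, bs_backgrounds = 214, [1, 356, 735, 673, 462, 373, 63]  # after Cut3, Table 3

for label, S, bkgs in [("tc", tc_signal, tc_backgrounds), ("bs", bs_signal, bs_backgrounds)]:
    B = sum(bkgs)
    sigma = S / math.sqrt(S + B)
    print(f"{label}: S = {S}, B = {B}, S/B = {S / B:.2f}, sigma = {sigma:.2f}")
```

With these yields, the \(tc\) selection reaches a significance of roughly 5 with \(S/B\approx 0.4\), while the \(bs\) selection discussed next reaches roughly 4 with a much smaller \(S/B\).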
For comparison, we also perform an analysis of the process \(\mu^{+}\mu^{-}\to b\bar{s}(\bar{b}s)\). We introduce the following cuts to separate signal and background events:
* Cut1: \(N_{jet}=2\), \(N_{lepton}=0\), and \(M_{jj}>8\) TeV,
* Cut2: \(M(HJ)<75\) GeV,
* Cut3: One jet is b-tagged.
The results of the cut flow are listed in Table 3.
As discussed above, there should be no massive jets in such signal events, so we can veto heavy jets by demanding that the heavier jet not have too large a jet mass. The jet-mass cut on the heavier jet rejects backgrounds such as \(t\bar{t}\), \(WW\), and \(ZZ\). The remaining backgrounds with a \(W\) boson can be further reduced by applying a \(b\)-tagging cut. After these cuts, heavy-flavor final states become the dominant background, which leads to a small \(S/B\).
In Fig. 6, we demonstrate the \(2\sigma\) and \(3\sigma\) bounds on \(C_{LL}^{X}\) from the measurement of total cross sections. Based on our analysis, \(C_{LL}^{tc}\) can be constrained to \([-1.8\times 10^{-2},+1.8\times 10^{-2}]\) (\([-2.2\times 10^{-2},+2.2\times 10^{-2}]\)) at the \(2\sigma\) (\(3\sigma\)) level at a 10 TeV muon collider with luminosity \(\mathcal{L}=10\) ab\({}^{-1}\), while \(C_{LL}^{bs}\) can be constrained to \([-2.3\times 10^{-2},+2.3\times 10^{-2}]\) (\([-2.8\times 10^{-2},+2.8\times 10^{-2}]\)) at the \(2\sigma\) (\(3\sigma\)) level.
| Process | No Cuts | Cut1 | Cut2 | Cut3 | Cut4 | \(S/B\) | \(\sigma\) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| \(\mu^{+}\mu^{-}\to tc\) | 847 | 389 | 247 | 166 | 101 | | |
| \(\mu^{+}\mu^{-}\to t\bar{t}\) | \(7.15\times 10^{3}\) | 3323 | 2534 | 90 | 35 | | |
| \(\mu^{+}\mu^{-}\to q\bar{q}\) | \(3.56\times 10^{4}\) | \(1.65\times 10^{4}\) | 1696 | 1167 | 75 | | |
| \(\mu^{+}\mu^{-}\to c\bar{c}\) | \(1.73\times 10^{4}\) | 7663 | 788 | 559 | 70 | | |
| \(\mu^{+}\mu^{-}\to b\bar{b}\) | 9137 | 3744 | 386 | 276 | 53 | | |
| \(\mu^{+}\mu^{-}\to W^{+}W^{-}\) | \(2.51\times 10^{5}\) | \(2.39\times 10^{5}\) | 1181 | 186 | 0 | | |
| \(\mu^{+}\mu^{-}\to Wjj\) | \(3.55\times 10^{5}\) | \(1.11\times 10^{4}\) | 976 | 603 | 35 | | |
| \(\mu^{+}\mu^{-}\to ZZ\) | \(1.52\times 10^{4}\) | \(1.50\times 10^{4}\) | 0 | 0 | 0 | | |

Table 2: The number of events before and after each cut, where the significance is defined as \(\sigma=S/\sqrt{S+B}\). We use BP3 and Model I as theoretical inputs for the signal. We assume a luminosity of 10 ab\({}^{-1}\) at a 10 TeV muon collider. The details of these cuts are described in the text.
Similarly, we implement the same cuts to analyze the case where only \(C_{qe}\) is switched on. In Fig. 7, we show the \(2\sigma\) and \(3\sigma\) bounds on \(C_{qe}\). It is no surprise that these bounds are close to those we obtain for \(C_{LL}^{X}\) when only the information from the total cross section is utilized.
In Fig. 8, we show the 2D constraints from \(\mu^{+}\mu^{-}\to tc\) (blue area), \(\mu^{+}\mu^{-}\to bs\) (red area), and their combination (magenta area). In the top left panel of Fig. 8, we show the constraints under the assumption that \(C_{qe}=0\). We note that only BP3 satisfies this assumption among our benchmark points, while the underlying models are unknown. Constraints from \(\mu^{+}\mu^{-}\to bs\) can probe whether the four-fermion operators are responsible for the beyond-the-standard-model rare B decay processes, while \(\mu^{+}\mu^{-}\to tc\) is essential to distinguish different underlying models. In the top right panel of Fig. 8, we show the constraints
| Process | No Cuts | Cut1 | Cut2 | Cut3 | \(S/B\) | \(\sigma\) |
| --- | --- | --- | --- | --- | --- | --- |
| \(\mu^{+}\mu^{-}\to bs\) | 1533 | 693 | 309 | 214 | | |
| \(\mu^{+}\mu^{-}\to t\bar{t}\) | 7152 | 3323 | 1 | 1 | | |
| \(\mu^{+}\mu^{-}\to q\bar{q}\) | \(3.56\times 10^{4}\) | \(1.65\times 10^{4}\) | 7128 | 356 | | |
| \(\mu^{+}\mu^{-}\to c\bar{c}\) | \(1.73\times 10^{4}\) | 7663 | 3368 | 735 | | |
| \(\mu^{+}\mu^{-}\to b\bar{b}\) | 9137 | 3744 | 1662 | 673 | | |
| \(\mu^{+}\mu^{-}\to W^{+}W^{-}\) | \(2.51\times 10^{5}\) | \(2.39\times 10^{5}\) | 4151 | 462 | | |
| \(\mu^{+}\mu^{-}\to Wjj\) | \(3.55\times 10^{5}\) | \(1.11\times 10^{4}\) | 3957 | 373 | | |
| \(\mu^{+}\mu^{-}\to ZZ\) | \(1.52\times 10^{4}\) | \(1.50\times 10^{4}\) | 149 | 63 | | |

Table 3: The number of events before and after each cut. We use BP3 and Model I as theoretical inputs for the signal. We assume a luminosity of 10 ab\({}^{-1}\) at a 10 TeV muon collider. The details of these cuts are described in the text.
Figure 7: The \(2\sigma\) and \(3\sigma\) bounds of \(C_{qe}\) obtained by measuring (a) \(tc\) and (b) \(bs\) final state at 10 TeV muon collider with luminosity \(\mathcal{L}=10\) ab\({}^{-1}\) are shown.
under the assumption that \(C_{lq}^{(3)}=0\), i.e. in Model I. We can see that in this case both \(\mu^{+}\mu^{-}\to tc\) and \(\mu^{+}\mu^{-}\to bs\) have similar dependence on \(C_{qe}\) and \(C_{lq}^{(1)}\), though the former provides stronger constraints. Combining both channels we can distinguish BP1, BP2, and BP3. In the bottom panel of Fig. 8, we show the constraints under the assumption that \(C_{lq}^{(1)}=3C_{lq}^{(3)}\), i.e. Model II. We can see that in this case \(\mu^{+}\mu^{-}\to tc\) is more sensitive to \(C_{qe}\), while \(\mu^{+}\mu^{-}\to bs\) is more sensitive to \(C_{lq}^{(1)}\). Both channels are complementary and combining them we can distinguish different benchmark points.
## 5 Summary and Discussion
For the signal process \(\mu^{+}\mu^{-}\to bs\) [46], it has been shown that \(b\) tagging is crucial to reject the dominant background events from \(jj\) final states. In order to extract information on the Wilson coefficients \(C_{9}\) and \(C_{10}\), polarized muon beams and a measurement of the forward-backward asymmetry of the final state are needed.
In this work, instead of assuming polarized muon beams and charge tagging of the final state, we propose to measure the signal process \(\mu^{+}\mu^{-}\to tc\). The major task is then to reject \(t\bar{t}\) and \(WW\) events, which makes it easier to pick out the signal events. It is also found that the weak final-state radiation process \(jjW\) is large and can be a relevant background.
It is also found that detectors with high granularity are needed to resolve the signal events, since the top quarks and W bosons in the hadronic final states carry around 5 TeV and are highly boosted objects. In order to capture the substructure of these massive jets, the cone parameter should be set to around \(0.09-0.1\) when the collision energy is \(\sqrt{s}=10\) TeV, which is much smaller than the \(R=0.4\) or 0.5 typically adopted at the LHC. To extract signal events from the large backgrounds, a refined analysis of TeV-scale jet substructure [82; 83; 84] should be applied to achieve much better performance. It is expected that modern top-tagging techniques can improve the top-jet identification and reject W/Z jets.
To distinguish new physics models, it is found that the measurement of \(\mu^{+}\mu^{-}\to tc\) can pinpoint Model I and Model II. Model III can be discovered through its resonance in the few-TeV region. Model IV would be favored if no signal of the process \(\mu^{+}\mu^{-}\to tc\) is measured.
Z.J. Zhao has been partially supported by a China and Germany Postdoctoral Exchange Program between the Office of China Postdoctoral Council (OCPC) and DESY, and partially supported by the Natural Science Foundation of China under grant No. 11875260. S.C. Sun is supported by the National Natural Science Foundation of China under grant No. 12105013. Q.S. Yan is supported by the Natural Science Foundation of China under grants No. 11475180 and No. 11875260. X.R. Zhao is supported by the Italian Ministry of Research (MUR) under grant PRIN 20172LNEEZ.
## Appendix A SMEFT Renormalization Group Equation
The RGE of SMEFT Wilson coefficients can be written as
\[\frac{dC_{i}}{d\ln\mu} = \frac{1}{16\pi^{2}}\beta_{i}. \tag{101}\]
All 1-loop \(\beta\) functions of operators in Warsaw basis have been derived in Ref. [85].
The \(\beta\) functions of operators 2.12\(\sim\)2.14 are
\[\left[\beta_{lq}^{(1)}\right]_{prst} = \frac{2}{3}{g^{\prime}}^{2}\left([C_{lq}^{(1)}]_{wwst}+[C_{qe}]_{ stww}\right)\delta_{pr}-{g^{\prime}}^{2}[C_{lq}^{(1)}]_{prst} \tag{102}\] \[+9g^{2}[C_{lq}^{(3)}]_{prst}+\frac{1}{2}[\Gamma_{u}^{\dagger}\Gamma _{u}]_{vt}[C_{lq}^{(1)}]_{prsv},\] \[\left[\beta_{lq}^{(3)}\right]_{prst} = \frac{2}{3}g^{2}[C_{lq}^{(3)}]_{wwst}\delta_{pr}+3g^{2}[C_{lq}^{(1 )}]_{prst}-(6g^{2}+{g^{\prime}}^{2})[C_{lq}^{(3)}]_{prst}\] (103) \[+\frac{1}{2}[\Gamma_{u}^{\dagger}\Gamma_{u}]_{vt}[C_{lq}^{(3)}]_ {prsv},\] \[\left[\beta_{qe}\right]_{prst} = \frac{4}{3}{g^{\prime}}^{2}\left([C_{lq}^{(1)}]_{wwpr}+[C_{qe}]_{ prww}\right)\delta_{st}+2{g^{\prime}}^{2}[C_{qe}]_{prst}\] (104) \[+\frac{1}{2}[\Gamma_{u}^{\dagger}\Gamma_{u}]_{vr}[C_{lq}^{(1)}]_ {pvst},\]
where \(g\) and \(g^{\prime}\) are the gauge couplings of \(SU(2)\) and \(U(1)\), respectively, and \(\Gamma_{u}\) is the \(3\times 3\) Yukawa matrix of the up-type quarks. For simplicity, we consider only the top quark to be massive, so that only the element \(\Gamma_{u}(3,3)=1\) is non-zero. Note that Eqs. 102\(\sim\)104 contain only the most important contributions and mixings of the corresponding operators; the mixing of the full set of operators is beyond the scope of this work.
The 1-loop RGE running of SM parameters is given by these \(\beta\) functions:
\[\beta_{g} = -\frac{19}{6}g^{3}, \tag{105}\] \[\beta_{g^{\prime}} = \frac{41}{6}{g^{\prime}}^{3},\] (106) \[\beta_{g_{s}} = -7g_{s}^{3},\] (107) \[\left[\beta_{\Gamma_{u}}\right]_{33} = \frac{9}{4}g^{2}\Gamma_{u}(3,3)-\frac{17}{12}{g^{\prime}}^{2}\Gamma_{u}(3,3)-8g_{s}^{2}\Gamma_{u}(3,3)+\frac{9}{2}\Gamma_{u}^{3}(3,3). \tag{108}\]
Our simplified RGE running has been compared with two tools: DSixTools [85; 86] and Wilson [87]. The differences between our results and these tools are below 1%.
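As an illustration of how the simplified running defined by Eq. 101 and Eqs. 105-107 can be solved numerically, the sketch below integrates the one-loop gauge-coupling \(\beta\) functions from \(M_{Z}\) up to \(\Lambda=10\) TeV with SciPy. The electroweak-scale input values of \(g\), \(g^{\prime}\), and \(g_{s}\) are approximate numbers inserted purely for illustration; they are not quoted from the text.

```python
import numpy as np
from scipy.integrate import solve_ivp

def beta(lnmu, couplings):
    """One-loop SM gauge beta functions of Eqs. 105-107,
    with d g_i / d ln(mu) = beta_i / (16 pi^2) as in Eq. 101."""
    g, gp, gs = couplings
    pref = 1.0 / (16.0 * np.pi ** 2)
    return [pref * (-19.0 / 6.0) * g ** 3,
            pref * (41.0 / 6.0) * gp ** 3,
            pref * (-7.0) * gs ** 3]

MZ, LAMBDA = 91.19, 10_000.0   # scales in GeV
g0 = [0.65, 0.36, 1.22]        # approximate couplings at M_Z (illustrative inputs)

sol = solve_ivp(beta, [np.log(MZ), np.log(LAMBDA)], g0)
g_hi, gp_hi, gs_hi = sol.y[:, -1]
print(f"g(10 TeV)  ~ {g_hi:.3f}")
print(f"g'(10 TeV) ~ {gp_hi:.3f}")
print(f"gs(10 TeV) ~ {gs_hi:.3f}")
```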
## Appendix B LEFT Renormalization Group Equation
The RGE of the LEFT takes the same form as Eq. 101, but with \(C_{i}\) replaced by \(L_{i}\). The \(\beta\) functions of operators 2.5 and 2.6 are
\[\left[\beta_{ed}^{V,LL}\right]_{prst} = \frac{4}{3}e^{2}q_{e}q_{d}\delta_{pr}\left([L_{ed}^{V,LL}]_{wwst} +[L_{ed}^{V,LL}]_{stww}\right)+12e^{2}q_{e}q_{d}[L_{ed}^{V,LL}]_{prst}, \tag{109}\] \[\left[\beta_{de}^{V,LR}\right]_{prst} = \frac{4}{3}e^{2}q_{e}^{2}\delta_{st}\left([L_{ed}^{V,LL}]_{wwpr}+[ L_{de}^{V,LR}]_{prww}\right)-12e^{2}q_{e}q_{d}[L_{de}^{V,LR}]_{prst}, \tag{110}\]
where \(e\) is the coupling constant of QED. \(q_{e}=-1\) and \(q_{d}=-1/3\) are the charges of lepton and d-type quark, respectively.
At the low energy scale, the 1-loop running of QCD and QED coupling is given by the following \(\beta\) functions:
\[\beta_{g_{s}} = -\frac{23}{3}g_{s}^{3}, \tag{112}\] \[\beta_{e} = \frac{80}{9}e^{3}, \tag{113}\]
|
2310.01352 | RA-DIT: Retrieval-Augmented Dual Instruction Tuning | Retrieval-augmented language models (RALMs) improve performance by accessing
long-tail and up-to-date knowledge from external data stores, but are
challenging to build. Existing approaches require either expensive
retrieval-specific modifications to LM pre-training or use post-hoc integration
of the data store that leads to suboptimal performance. We introduce
Retrieval-Augmented Dual Instruction Tuning (RA-DIT), a lightweight fine-tuning
methodology that provides a third option by retrofitting any LLM with retrieval
capabilities. Our approach operates in two distinct fine-tuning steps: (1) one
updates a pre-trained LM to better use retrieved information, while (2) the
other updates the retriever to return more relevant results, as preferred by
the LM. By fine-tuning over tasks that require both knowledge utilization and
contextual awareness, we demonstrate that each stage yields significant
performance improvements, and using both leads to additional gains. Our best
model, RA-DIT 65B, achieves state-of-the-art performance across a range of
knowledge-intensive zero- and few-shot learning benchmarks, significantly
outperforming existing in-context RALM approaches by up to +8.9% in 0-shot
setting and +1.4% in 5-shot setting on average. | Xi Victoria Lin, Xilun Chen, Mingda Chen, Weijia Shi, Maria Lomeli, Rich James, Pedro Rodriguez, Jacob Kahn, Gergely Szilvasy, Mike Lewis, Luke Zettlemoyer, Scott Yih | 2023-10-02T17:16:26Z | http://arxiv.org/abs/2310.01352v4 | # RA-DIT: Retrieval-Augmented Dual Instruction Tuning
###### Abstract
Retrieval-augmented language models (RALMs) improve performance by accessing long-tail and up-to-date knowledge from external data stores, but are challenging to build. Existing approaches require either expensive retrieval-specific modifications to LM pre-training or use post-hoc integration of the data store that leads to suboptimal performance. We introduce **R**etrieval-**A**ugmented **D**ual **I**nstruction **T**uning (RA-DIT), a lightweight fine-tuning methodology that provides a third option by retrofitting any large language model (LLM) with retrieval capabilities. Our approach operates in two distinct fine-tuning steps: (1) one updates a pre-trained LM to better use retrieved information, while (2) the other updates the retriever to return more relevant results, as preferred by the LM. By fine-tuning over tasks that require both knowledge utilization and contextual awareness, we demonstrate that each stage yields significant performance improvements, and using both leads to additional gains. Our best model, RA-DIT 65B, achieves state-of-the-art performance across a range of knowledge-intensive zero- and few-shot learning benchmarks, significantly outperforming existing in-context RALM approaches by up to +8.9% in 0-shot setting and +1.4% in 5-shot setting on average.
## 1 Introduction
Large language models (LLMs) excel as zero- and few-shot learners across various tasks (Brown et al., 2020; Chowdhery et al., 2022; Touvron et al., 2023a; Neil et al., 2023; OpenAI, 2023). However, because knowledge is represented only in the model parameters, they struggle to capture long-tail knowledge (Tirumala et al., 2022; Sun et al., 2023) and require substantial resources to be kept up-to-date (Miller, 2023). Retrieval-Augmented Language Modeling (RALM) integrates LLMs with non-parametric information retrieval to overcome these limitations (Guu et al., 2020; Borgeaud et al., 2022; Izacard et al., 2022b; Shi et al., 2023b; Ram et al., 2023). By explicitly decoupling knowledge retrieval from the backbone language model, such architectures have exhibited superior performance on knowledge intensive tasks such as open-domain question answering (Lewis et al., 2020; Izacard et al., 2022b) and live chat interactions (Liu, 2022).
Existing efforts in RALM development primarily focus on two high-level challenges: (i) enhancing the LLM's capability to incorporate retrieved knowledge (Lewis et al., 2020; Izacard et al., 2022b; Luo et al., 2023) and (ii) refining the retrieval component to return more relevant content (Shi et al., 2023b; Izacard et al., 2022b). Retrieval capabilities have also been introduced at different stages of the model training process. REALM (Guu et al., 2020) and RETRO (Borgeaud et al., 2022) opt for _end-to-end pre-training_, incorporating the retrieval component from the outset. Atlas(Izacard et al., 2022b) builds upon the T5 language model (Raffel et al., 2020), and _continuously pre-trains_ the framework over unsupervised text. RePlug(Shi et al., 2023b) and In-Context RALM (Ram et al., 2023) combine _off-the-shelf_ LLMs with general-purpose retrievers, showing that LLMs and retrievers, even when optimized independently, can be effectively fused through the emergent in-context learning capabilities of LLMs. However, extensive pre-training of such architectures incurs high computational costs, and the off-the-shelf fusion approach also has limitations, particularly as the LLMs are not inherently trained to incorporate retrieved content.
In this work, we show lightweight instruction tuning (Chung et al., 2022b; Iyer et al., 2022; Zhou et al., 2023) alone can significantly boost the performance of RALMs, especially in scenarios that require access to large, external knowledge sources. We propose **R**etrieval-**A**ugmented **D**ual Instruction **T**uning (RA-DIT), an approach that retrofits any LLM with retrieval capabilities via fine-tuning over a set of tasks selected to cultivate knowledge utilization and contextual awareness in the language model predictions. We initialize the framework using pre-trained Llama (Touvron et al., 2023a) and a state-of-the-art dual-encoder based dense retriever, Dragon+ (Lin et al., 2023). Following Shi et al. (2023b), we retrieve relevant text chunks based on the language model prompt. Each retrieved chunk is prepended to the prompt, and the predictions from multiple chunks are computed in parallel and ensembled to produce the final output.
We perform instruction-tuning in two separate steps. For _language model fine-tuning_ (LM-ft), we adopt the supervised fine-tuning objective (Chung et al., 2022b; Iyer et al., 2022) while augmenting each fine-tuning prompt with a retrieved "background" field prepended to the instructions (Figure 1). We also leverage the design of existing NLP tasks and populate this field with the ground truth context for tasks such as reading comprehension and summarization. By incorporating the background text during fine-tuning, we guide the LLM to optimally utilize the retrieved information and ignore distracting content (Shi et al., 2023a). For _retriever fine-tuning_ (R-ft), we update the query encoder using a generalized _LM-Supervised Retrieval_ (LSR, Shi et al., 2023b) training objective computed over a combination of supervised tasks and unsupervised text completion. This way we enable the retriever to yield more contextually relevant results, aligned with the preferences of the LLM.
We demonstrate that each fine-tuning step offers significant performance gains, and that the fine-tuned LLM and retriever can be combined to achieve further improvements. Our largest model, RA-DIT 65B, attains state-of-the-art performance in zero- and few-shot settings on knowledge intensive benchmarks, notably surpassing the un-tuned in-context RALM approach on datasets including MMLU (Hendrycks et al., 2021b) (+8.2% 0-shot; +0.7% 5-shot) and Natural Questions (Kwiatkowski et al., 2019) (+22% 0-shot; +3.8% 5-shot). In addition, RA-DIT 65B also substantially outperforms Atlas 11B on 8 knowledge-intensive tasks (+7.2% on average in the 64-shot fine-tuning setting). This suggests that language models and retrievers, when optimized independently and then fused through instruction-tuning, can compete effectively with RALMs that have undergone extensive continuous pre-training. We further conduct a comprehensive model analysis, showing the effectiveness of our approach across LLMs of varying sizes, as well as evaluating the influence of different fine-tuning strategies and retriever configurations.
## 2 Method
### Architecture
**Language Model.** We focus on retrieval-augmenting pre-trained auto-regressive language models (Brown et al., 2020). In particular, we use Llama (Touvron et al., 2023a), a family of open-sourced language models pre-trained on trillions of tokens.
Figure 1: The RA-DIT approach separately fine-tunes the LLM and the retriever. For a given example, the LM-ft component updates the LLM to maximize the likelihood of the correct answer given the retrieval-augmented instructions (§2.3); the R-ft component updates the retriever to minimize the KL-Divergence between the retriever score distribution and the LLM preference (§2.4)
**Retriever.** We adopt a dual-encoder based retriever architecture, since it can be easily fine-tuned and is efficient at the inference stage (Lewis et al., 2020; Izacard et al., 2022; Shi et al., 2023). Given a corpus \(\mathcal{C}\) and a query \(q\), the document encoder maps each _text chunk_ \(c\in\mathcal{C}\) to an embedding \(\mathbf{E}_{d}(c)\) and the query encoder maps \(q\) to an embedding \(\mathbf{E}_{q}(q)\). The top-\(k\) relevant text chunks for \(q\) are retrieved based on the query-document embedding similarity, which is often computed via dot product:
\[s(q,c)=\mathbf{E}_{q}(q)\cdot\mathbf{E}_{d}(c). \tag{1}\]
We initialize the retriever using Dragon+ (Lin et al., 2023), a state-of-the-art dual-encoder model trained with a contrastive learning objective and large-scale data augmentation.
**Parallel In-Context Retrieval-Augmentation.** Following Shi et al. (2023), for a given language model prompt \(x\), we retrieve the top-\(k\) relevant text chunks \(\mathcal{C}^{\prime}\subset\mathcal{C},|\mathcal{C}^{\prime}|=k\). To stay within the context window size limit, each retrieved chunk is prepended individually to the prompt1, and the language model predictions from multiple augmented prompts are computed in parallel. The final output probability is a mixture of the probability from each augmented prompt weighted by the chunk relevance score
Footnote 1: We use a pair of start (“Background:”) and end (“\n\n”) tokens to demarcate the retrieved segment in the augmented prompt. The complete set of our instruction-tuning templates is shown in Appendix C.
\[p_{LM}(y|x,\mathcal{C}^{\prime})=\sum_{c\in\mathcal{C}^{\prime}}p_{LM}(y|c \circ x)\cdot p_{R}(c|x), \tag{2}\]
where \(\circ\) denotes sequence concatenation, and \(p_{R}(c|x)=\frac{\exp s(x,c)}{\sum_{c^{\prime}\in\mathcal{C}^{\prime}}\exp s(x, c^{\prime})}\) are the retriever scores re-normalized among top-\(k\) relevant chunks.
### Fine-tuning Datasets
We choose a set of fine-tuning tasks aimed at boosting the language model's ability to utilize knowledge effectively and improving its contextual awareness in generating predictions. As shown in Table 1, our _language model fine-tuning_ datasets (\(\mathcal{D}_{L}\)) consists of 20 datasets across 5 distinct categories: dialogue, open-domain QA, reading comprehension2, summarization and chain-of-thought
Table 1: Overview of the fine-tuning datasets, listing for each task category (dialogue, open-domain QA, reading comprehension, summarization, and chain-of-thought reasoning) the HF identifier, the dataset name, membership in \(\mathcal{D}_{L}\) and \(\mathcal{D}_{R}\), and the number of training examples; for example, the dialogue category includes oasst1, the OpenAssistant Conversations Dataset (Kopf et al., 2023), which is used for both \(\mathcal{D}_{L}\) and \(\mathcal{D}_{R}\).
reasoning. For _retriever fine-tuning_ datasets \(\mathcal{D}_{R}\), we opt for the QA datasets in our collection featuring standalone questions, and we additionally include two QA datasets, FreebaseQA (Jiang et al., 2019) and MS-MARCO (Nguyen et al., 2016). The examples of each dataset are serialized for instruction tuning using manually compiled templates (Table 10). For tasks in \(\mathcal{D}_{L}\cap\mathcal{D}_{R}\), we use the same template for both fine-tuning steps. In addition, we observe that supplementing the instruction-tuning data with unsupervised text leads to additional performance gains for both language model and retriever fine-tuning, and we detail data mixture used in Appendix B.
(denoted as _corpus data_) for LSR training, we show that LSR can be generalized to incorporate the multi-task instruction data introduced in SS2.2 (denoted as _MTI data_). The MTI data provide direct supervision to the retriever to return relevant information that enhances the language model in various downstream tasks. As shown in SS5.1, combining both types of data yields the best results and outperforms using either source alone.
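A sketch of the generalized LSR objective described above is given below, following the description in Figure 1 (the retriever distribution is pushed toward the LM preference by minimizing a KL divergence); the temperature and the exact direction of the KL term are our simplifications rather than details quoted from the text.

```python
import torch
import torch.nn.functional as F

def lsr_loss(retriever_scores, lm_log_likelihoods, temperature=1.0):
    """Generalized LM-Supervised Retrieval loss (sketch).

    retriever_scores:   tensor [k], s(x, c) for the top-k retrieved chunks
                        (gradients flow back through the query encoder).
    lm_log_likelihoods: tensor [k], log p_LM(y | c o x) of the correct output y
                        for each chunk, computed with the frozen language model.
    """
    log_p_r = F.log_softmax(retriever_scores, dim=-1)                    # log p_R(c | x)
    with torch.no_grad():
        p_lsr = F.softmax(lm_log_likelihoods / temperature, dim=-1)      # LM preference
    # KL(p_lsr || p_R): F.kl_div takes log-probabilities as input and probabilities as target.
    return F.kl_div(log_p_r, p_lsr, reduction="sum")
```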
## 3 Experiment Setup
### Retriever
We initialize the retriever in our framework with Dragon+ (Lin et al., 2023) and also use it to study various retriever configurations. To build the retrieval corpus, we combine the text chunks (37M) from the Dec. 20, 2021 Wikipedia dump released by Izacard et al. (2022b) with additional ones (362M) from the 2017-2020 CommonCrawl dumps. We detail the corpus pre-processing and indexing in Appendix A. Our final retrieval data store, with the two data sources combined, contains 399M text chunks with a maximum length of 200 words. In §5.3, we conduct an analysis of the impact of the retrieval corpus using various subsets of our retrieval index, as well as different Wikipedia snapshots. We obtain the retrieval queries used for our fine-tuning and evaluation tasks using manually constructed templates5 (Tables 10 and 12).
Footnote 5: We leave automatically generating task-specific retrieval queries to future work.
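The corpus pre-processing and indexing details are deferred to Appendix A; purely as an illustration of dense dot-product retrieval over such a store, a flat inner-product FAISS index over pre-computed chunk embeddings could be built and queried as sketched below. The embedding dimension and the random arrays are stand-ins, and a realistic 399M-chunk index would require compression and sharding rather than a flat index.

```python
import faiss
import numpy as np

d = 768                                                     # embedding dimension (assumed)
chunk_embs = np.random.rand(10_000, d).astype("float32")    # stand-in for E_d(c) of each chunk
query_embs = np.random.rand(4, d).astype("float32")         # stand-in for E_q(q) of each query

index = faiss.IndexFlatIP(d)      # exact inner-product (dot-product) search, as in Eq. 1
index.add(chunk_embs)             # add all chunk embeddings to the index

scores, ids = index.search(query_embs, 10)   # top-10 chunk ids and scores per query
print(ids.shape, scores.shape)               # (4, 10) each
```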
### Baselines
We focus on comparing our approach to the base Llama models (Touvron et al., 2023a) and RePlug(Shi et al., 2023b), a state-of-the-art approach that integrates off-the-shelf LLMs and retrievers, in the zero-shot and in-context few-shot learning settings. We instantiate RePlug using Llama and Dragon+. In addition, we also compare RA-DIT to Atlas(Izacard et al., 2022b) in a 64-shot fine-tuning setting (SS4).
### Evaluation
We primarily conduct evaluation on knowledge-intensive tasks that are not included in our fine-tuning datasets, including MMLU (Hendrycks et al., 2021a), Natural Questions (NQ; Kwiatkowski et al., 2019), TriviaQA (TQA; Joshi et al., 2017), and a subset6 of the tasks in the KILT benchmark (Petroni et al., 2021). We use the development split of six of the KILT tasks (excluding ELI5) to determine fine-tuning hyperparameters (Appendix B). This enables us to report genuine few-shot evaluation results for four of the ten evaluation tasks. For the remaining tasks, we report few-shot results assuming access to in-domain development data. We randomly select few-shot examples from the official training splits of the KILT tasks, except for FEV, NQ and TQA, where we use the 64-shot examples released by Izacard et al. (2022b). For these three datasets, we also ensure that the 5-shot examples are subsets of the 64 examples. In our retrieval augmented models, we use the top-\(1\) most relevant chunk for the in-context few-shot examples. In addition, we also evaluate models on commonsense reasoning tasks to evaluate the impact of retrieval-augmented instruction tuning on the LLM's parametric knowledge and reasoning capabilities. Here we report results on the entire development sets. Details of our evaluation datasets, including the evaluation metrics, template and the scoring functions used, can be found in in Appendix D.
Footnote 6: The subset consists of seven tasks: HotpotQA (Yang et al., 2018), FEVER (Thorne et al., 2018), AIDA CoNLL-YAGO (Hoffart et al., 2011), Zero-Shot RE (Levy et al., 2017), T-REx (Elsahar et al., 2018), Wizard of Wikipedia (Dinan et al., 2019) and ELI5 (Fan et al., 2019).
## 4 Main Results
Knowledge-Intensive TasksWe report the main results in Table 2. In particular, RA-DIT is compared to Llama(Touvron et al., 2023a) as well as RePlug(Shi et al., 2023b), in both 0-shot and 5-shot settings. We first observe that RePlug works much better than the base Llama 65B, confirming the benefits of RALMs on knowledge-intensive tasks. Furthermore, RA-DIT significantly outperforms RePlug (+8.9% in 0-shot and +1.4% in 5-shot on average over MMLU, NQ, TQA
and ELI5) and achieves the best performance on most datasets. This supports our claim that combining off-the-shelf LLMs and retrievers is sub-optimal, and our dual instruction tuning approach is an effective way of retrofitting LLMs with retrieval capabilities.7
Footnote 7: In comparison to Touvron et al. (2023a), we report lower 0-shot performance for Llama 65B on NQ and TQA. By examining the model generation, we think Touvron et al. (2023a) reported the ratio of responses that contain the ground truth answer string in the 0-shot setting. We do not post-process the model predictions and report exact match instead.
We also compare with Atlas, a state-of-the-art encoder-decoder based RALM that jointly pre-trains the language model and the retriever. Here we adopt a 64-shot setting similar to Izacard et al. (2022b) with the following differences. While Atlas conducts 64-shot fine-tuning for each individual task and reports the performance of task-specific models, we continuously fine-tune the RA-DIT checkpoint using the 64-shot examples from all tasks combined, and report the performance of a single model across tasks. As shown in Table 2, despite using a single model, RA-DIT outperforms Atlas by an average of 4.1 points, achieving higher performance on 6 out of the 8 datasets.
### Fine-tuning Strategies
Language Model Fine-tuningWe compare Llama instruction-tuned with retrieval-augmentation (RA-IT 65B) to the base language model, as well as Llama that is instruction-tuned conventionally8 (IT 65B) on the same set of tasks. We evaluate all models with in-context retrieval augmentation using the Dragon+ retriever, adjusting the number of retrieved chunks to 0, 1 or 10. As shown in Table 4, while both instruction tuning methods substantially enhance the 0-shot performance, they offers marginal improvements or even hurt the model performance in the 5-shot setting for most tasks except for HotpotQA9. When in-context retrieval-augmentation is applied, all models show substantial gains in both settings, even when limited to the top-1 chunk. The model performance consistently improves as we include more retrieved chunks. In the 0-shot setting with top-10 retrieved chunks, the RA-IT 65B model outperforms the IT 65B model by a large margin (51.0% vs. 47.7%). Under this setting, we observe that retrieval-augmented instruction tuning significantly enhances the LLM's ability to integrate information from the retrieved text chunks. The model is able to extract the correct answers from relevant chunks with greater confidence, while effectively leaning on its parametric knowledge for prediction when an irrelevant text chunk is present (Appendix F). In Appendix E.1, we also discuss the performance of RA-IT models when applied to smaller Llama models (7B and 13B), showing that it offers even larger performance boost in those cases.
Footnote 8: Since our instruction tuning datasets include reading comprehension and summarization, the IT models are also exposed to problem types that depend on background knowledge.
Footnote 9: This observation aligns with the findings from previous instruction-tuning literature (Iyer et al., 2022). HotpotQA is an exception likely because it is from a task category covered in our instruction-tuning data.
Retriever Fine-tuningIn Table 5, we study different retriever fine-tuning strategies. As mentioned in SS2.4, we explore two types of retriever fine-tuning data, the _multi-task instruction (MTI) data_ and the _corpus data_. We observe that fine-tuning the retriever with the corpus data alone improves over the base Dragon+ model by an average of 0.4 points, whereas fine-tuning using only the MTI data improves by a smaller margin of 0.1 points. While fine-tuning with the MTI data yields good performance on certain datasets such as NQ (possibly due to its similarity to the MTI data),
\begin{table}
\begin{tabular}{l c c c c c c c c c} \hline _0/ 5-shot_ & HoPo & FEV & AIDA & zsRE & T-REx & WoW & Avg \\ \hline _No retrieval_ & & & & & & & & & \\ Llama 65B & 12.5 / 23.8 & 59.6 / 83.7 & 0.9 / 64.1 & 9.7 / 36.0 & 1.2 / 52.3 & 15.7 / 17.4 & 16.6 / **46.2** \\ IT 65B & 20.0 / 30.0 & 67.8 / 83.2 & 8.9 / 58.5 & 19.0 / 35.4 & 17.3 / 53.5 & 16.4 / 16.5 & 24.9 / **46.2** \\ RA-IT 65B & 26.8 / 29.9 & 65.2 / 84.8 & 10.7 / 52.9 & 30.9 / 35.2 & 24.1 / 52.9 & 16.5 / 16.5 & **29.0** / 45.4 \\ \hline _top-1 chunk_ & & & & & & & & \\ Llama 65B + Dragon+ & 25.8 / 39.4 & 72.8 / 89.8 & 39.1 / 50.7 & 48.8 / 59.6 & 31.4 / 69.1 & 15.8 / 17.1 & 39.0 / **54.3** \\ IT 65B + Dragon+ & 33.3 / 38.8 & 84.0 / 90.1 & 43.9 / 50.3 & 56.8 / 58.2 & 44.7 / 66.4 & 15.7 / 15.6 & 46.4 / 53.2 \\ RA-IT 65B + Dragon+ & 37.6 / 39.1 & 81.0 / 90.4 & 41.6 / 52.3 & 59.6 / 57.9 & 49.6 / 65.8 & 16.6 / 16.6 & **47.7** / 53.7 \\ \hline _top-10 chunks_ & & & & & & & & \\ Llama 65B + Dragon+ & 31.0 / 41.6 & 75.4 / 90.8 & 44.8 / 54.0 & 58.6 / 63.7 & 40.2 / 71.9 & 16.0 / 17.8 & 44.3 / **56.6** \\ IT 65B + Dragon+ & 33.9 / 40.6 & 87.0 / 91.8 & 50.5 / 53.8 & 53.9 / 62.5 & 45.7 / 69.4 & 15.6 / 15.7 & 47.8 / 55.6 \\ RA-IT 65B + Dragon+ & 40.0 / 41.2 & 82.8 / 92.1 & 47.2 / 53.5 & 65.0 / 62.3 & 54.3 / 69.0 & 16.5 / 16.6 & **51.0** / 55.8 \\ \hline \end{tabular}
\end{table}
Table 4: Ablation of language model fine-tuning strategies. All rows report dev set performance.
\begin{table}
\begin{tabular}{l c c c c c c c c c c} \hline _5-shot_ & MMLU & NQ & TQA & HoPo & FEV & AIDA & zsRE & T-REx & WoW & Avg\({}^{\circ}\) & Avg \\ \hline Dragon+ & 62.6 & 41.8 & 72.9 & 41.5 & 90.6 & 54.1 & 63.7 & 72.1 & 17.5 & 56.6 & 57.4 \\ \hline Multi-task instruction (MTI) data & 61.1 & 43.6 & 74.0 & 36.5 & 91.4 & 64.6 & 56.7 & 72.1 & 17.1 & 56.4 & 57.5 \\ corpus data (FT both encoders) & 61.7 & 43.2 & 73.8 & 37.5 & 88.2 & 69.8 & 53.5 & 57.2 & 17.5 & 54.0 & 55.8 \\ corpus data & 62.9 & 43.0 & 74.3 & 41.1 & 91.6 & 54.4 & 63.4 & 71.8 & 17.4 & 56.6 & 57.8 \\ \(95\%\) corpus + 5\% MTI data & 63.0 & 42.1 & 74.9 & 41.2 & 91.6 & 54.9 & 65.2 & 71.6 & 17.5 & **57.0** & **58.0** \\ \hline \multicolumn{10}{l}{\({}^{\circ}\) Average over the 6 KILT development tasks.} \\ \end{tabular}
\end{table}
Table 5: Ablation of retriever fine-tuning strategies. All rows use the Llama 65B model and report 5-shot performance on the dev sets.
fine-tuning with the corpus data appears to generalize better and leads to stronger overall performance. Furthermore, we experiment with fine-tuning using both the MTI and corpus data. Table 5 shows that fine-tuning with "95% corpus data + 5% MTL data" achieves the best accuracy across all models, outperforming the non-finetuned baseline by 0.6 points on average.10
Footnote 10: In early experiments, we also tested other mixtures and found that using 5% or 10% MTI data worked the best. (They perform similarly to each other.)
Finally, we also compare jointly fine-tuning both the query and document encoders with only fine-tuning the query encoder while freezing the document encoder. Table 5 shows this experiment conducted using the corpus data, where freezing the document encoder produces significantly better performance. As a result, we only fine-tune the query encoder in this work.
### Dual Instruction Tuning Ablation
We isolate the impact of the language model fine-tuning from retriever fine-tuning in our RA-DIT method, and illustrate the benefit of each. 11 According to Table 6, both LM-ft and R-ft are beneficial when used alone, and outperform the RePlug using Llama 65B and the Dragon+ retriever. On the other hand, the most gain can be achieved when combining LM-ft and R-ft in our RA-DIT method, which outperforms the RePlug baseline by 0.8 points on average. In our preliminary experiments, we also attempted iterative dual instruction tuning by fine-tuning the retriever using LSR scores from the RA-IT LM or conduct the RA-IT step using passages returned by the fine-tuned retriever, for one or two such iterations, but did not observe further gains. We leave the exploration of multi-step RA-DIT to future work.
Footnote 11: Minor performance differences may be observed for the Llama 65B + Dragon+ model in different ablations due to the differences in few-shot example truncation in long prompts. We ensure all rows within each table are comparable.
### Retriever Settings
\begin{table}
\begin{tabular}{l c c c c c c c c c c c} \hline _5-shot_ & MMLU & NQ & TQA & ELI5 & HoPo & FEV & AIDA & zsRE & T-REx & WoW & Avg \\ \hline Llama 65B + Dragon+ & 61.7 & 41.7 & 73.0 & 22.1 & 41.6 & 90.8 & 54.0 & 63.7 & 71.9 & 17.2 & 53.8 \\ \hline Llama 65B + FTed Dragon+ & 63.0 & 42.2 & 74.9 & 22.2 & 41.4 & 91.6 & 54.9 & 65.2 & 71.4 & 17.4 & 54.4 \\ RTF 65B + Dragon+ & 64.8 & 42.8 & 73.1 & 23.6 & 41.2 & 92.1 & 53.5 & 62.3 & 69.0 & 16.6 & 53.9 \\ RTF 65B + FTed Dragon+ & 64.3 & 43.8 & 75.0 & 23.3 & 42.0 & 92.3 & 52.8 & 65.2 & 70.1 & 17.3 & **54.6** \\ \hline \end{tabular}
\end{table}
Table 6: The impact of LM and Retriever fine-tuning in our RA-DIT method, comparing the RePlug baseline, LM-ft only, R-ft only, and RA-DIT. 5-shot dev set performance is reported.
\begin{table}
\begin{tabular}{l c c c c c c c c c c} \hline _5-shot_ & MMLU & NQ & TQA & HoPo & FEV & AIDA & zsRE & T-REx & WoW & ELI5 & Avg \\ \hline Llama 65B & 61.3 & 30.9 & 70.6 & 23.8 & 83.7 & 50.2 & 36.0 & 52.3 & 17.4 & 23.4 & 45.0 \\ \hline _Retriever ablation using_ & Llama _65B and the_ _399M_ & _CC + Wiki corpus_ & & & & & & & & \\ Contiver & 59.3 & 41.2 & 73.0 & 32.4 & 88.1 & 45.0 & 40.8 & 56.1 & 17.2 & 21.6 & 47.5 \\ Contiver-msmarco & 62.0 & 42.1 & 74.1 & 38.7 & 89.3 & 49.3 & 60.2 & 62.9 & 17.4 & 21.8 & 51.8 \\ Dragon+ & 61.7 & 41.7 & 73.0 & 40.8 & 90.8 & 48.8 & 63.7 & 71.9 & 17.8 & 23.8 & 53.4 \\ \hline _Retriever corpus ablation using_ & Llama _65B and the_ Dragon+ _retriever_ & & & & & & & & & \\ CC only & 62.8 & 39.6 & 72.6 & 34.4 & 89.5 & 54.8 & 30.3 & 46.2 & 17.1 & 22.9 & 47.0 \\ Wiki 2021 + infobox & 62.2 & 42.0 & 71.2 & 41.8 & 89.8 & 62.2 & 65.3 & 73.1 & 17.7 & 22.2 & 54.8 \\ Wiki 2021 & 62.2 & 41.8 & 71.0 & 41.7 & 89.7 & 62.1 & 65.2 & 73.3 & 17.6 & 22.2 & 54.7 \\ Wiki 2018 & 61.5 & 42.6 & 70.7 & 40.4 & 90.8 & 62.1 & 51.3 & 59.8 & 17.6 & 22.5 & 51.9 \\ \hline \end{tabular}
\begin{tabular}{l c c c c c c c c c} \hline _Number of retrieved chunks ablation using_ & Llama _65B and the_ Dragon+ _retriever_ & & & & & & & & \\ top-1 chunks & 60.5 & 36.6 & 69.2 & 39.4 & 89.8 & 48.6 & 59.6 & 69.1 & 17.1 & 22.2 & 51.2 \\ top-3 chunks & 62.1 & 39.6 & 71.3 & 40.8 & 90.3 & 49.8 & 62.9 & 70.8 & 17.2 & 22.7 & 52.8 \\ top-10 chunks & 61.7 & 41.7 & 73.0 & 40.8 & 90.8 & 48.8 & 63.7 & 71.9 & 17.8 & 23.8 & 53.4 \\ \hline \end{tabular}
\end{table}
Table 7: Retriever settings: We report 5-shot dev set performance using Llama 65B and various retrievers in the RePlug setting.
In this section, we study the impact of various retriever choices in our framework. We use Llama 65B as the language model and combine it with different retrievers. Table 7 first compares Dragon+ (Lin et al., 2023) with other state-of-the-art retrievers such as Contriever (Izacard et al., 2022a). All retrieval-augmented models substantially improve over the Llama baseline, and Dragon+ significantly outperforms both Contriever and Contriever-MSMARCO. We hence adopt Dragon+ as our base retriever in all experiments.
The middle section in Table 7 shows the impact of varying the retrieval corpora. In particular, we consider several subsets of our 399M retrieval corpus, namely CommonCrawl only (362M) and Wikipedia only (with and without infoboxes). We further compare with another Wikipedia snapshot (Wiki 2018) commonly used in the literature (Karpukhin et al., 2020). We observe that retrieving from Wikipedia only is beneficial for a number of KILT tasks such as AIDA and zsRE, as Wikipedia was the intended corpus for KILT tasks. We find that Wiki 2018 works better for NQ since the corpus is closer to the date of its data collection, similar to the observations by Izacard et al. (2022b). This indicates that our retrieval-augmented LM is faithful to the supplied retrieval corpus, and up-to-date information can be provided by updating the retrieval index at test time.
Finally, we experiment with the number of retrieved passages supplied to Llama during generation. Table 7 shows that even retrieving the top-1 passage significantly improves Llama's average performance from 45.0 to 51.2, and it continues to increase as more retrieved passages are used. Due to diminishing return and inference cost, we adopt 10 retrieved passages by default in our experiments.
## 6 Related Work
Retrieval-Augmented Language ModelsRALMs fuse language models (LMs) with a retrieval module that explicitly augments the LM with information retrieved from external knowledge stores (Guu et al., 2020; Lewis et al., 2020). One mainstream type of RALM follows the "retrieve-and-read" paradigm, where the retrieval module supplies external knowledge as additional context which the LM (reader) leverages to produce the final output (Izacard et al., 2022b; Borgeaud et al., 2022; Shi et al., 2023b; Ram et al., 2023). Some existing work focuses on pre-training the LM to better utilize retrieved knowledge. For example, REALM (Guu et al., 2020) and RETRO (Borgeaud et al., 2022) incorporate retrieval from the beginning and conduct end-to-end retrieval-augmented pre-training, whereas Atlas(Izacard et al., 2022b) continuously pre-trains a T5 LM (Raffel et al., 2020) jointly with a retriever. Others assume black-box access to an LM and combine it with either off-the-shelf or fine-tuned retrievers (Shi et al., 2023b; Ram et al., 2023). Our approach adopts lightweight fine-tuning to effectively retrofit any pre-trained LLM with retrieval capacity. This approach offers efficiency compared to methods involving extensive pre-training and demonstrates superior effectiveness compared to the off-the-shelf fusion approach.
Independent to our work, Luo et al. (2023) proposes SAIL, an approach that fine-tunes the LM with instructions augmented with retrieved content, and examines it on public instruction following datasets (Taori et al., 2023; Chiang et al., 2023) using a moderately sized model (7B parameters). In comparison, RA-DIT conducts parallel retrieval-augmentation by generating distinct prompts for each retrieved passage and subsequently aggregating the outcomes; SAIL, on the other hand, concatenates the top retrieved passages in the augmentation. Furthermore, RA-DIT adopts a holistic view of the RALM architecture, employing a learnable neural retriever and proposing a dual optimization framework. SAIL, in comparison, leans on commercial search engines and BM25 and focuses on the LM-side enhancement (e.g. it proposes an in-context retrieval selection technique to guide the model focus towards informative content).
Another family of RALMs incorporate retrieval in the output distribution of the LM (Khandelwal et al., 2020; Zhong et al., 2022). Such models retrieve a set of \(k\) nearest-neighbor tokens using the LM context representation, and interpolate this distribution of retrieved tokens with the LM output distribution to generate the next token at inference time. Alternatively, the retrieved token distribution can be used alone to make a non-parametric LM (Min et al., 2023).
Instruction TuningInstruction fine-tuning has been proposed to align pre-trained LLMs to follow natural language instructions and avoid extensive prompt engineering (Ouyang et al., 2022; Wei et al., 2022; Chung et al., 2022a; Wang et al., 2022; Iyer et al., 2022). We propose retrieval
augmented instruction tuning (RA-IT) as part of our _dual instruction tuning_ framework to improve the LM's ability to leverage retrieved information.
Information RetrievalRetrieval methods include _sparse retrievers_ that does matching over a sparse bag-of-words representation (Robertson and Zaragoza, 2009; Formal et al., 2021), _dense retrievers_ that embed queries and documents into a fixed-size dense vector for nearest-neighbor search (Karpukhin et al., 2020; Xiong et al., 2021), and _multi-vector retrievers_ which uses multiple vectors as the representation and more complex search algorithms for increased accuracy (Khattab and Zaharia, 2020; Li et al., 2023). We adopt a state-of-the-art dense retriever, Dragon(Lin et al., 2023), as our base retriever, because of its simplicity, state-of-the-art accuracy, high retrieval efficiency on GPUs, and the ease of further fine-tuning.
## 7 Conclusion
In this paper, we propose RA-DIT, a lightweight Retrieval-Augmented Dual Instruction Tuning framework that can effectively retrofit any pre-trained LLM with retrieval capabilities. RA-DIT updates the LLM with _retrieval-augmented instruction tuning_ to make better use of retrieved knowledge and ignore irrelevant or distracting information. It also fine-tunes the retriever with supervision from the LLM to retrieve texts that can better help the LLM generate correct outputs. RA-DIT achieves state-of-the-art performance in zero- and few-shot evaluations on knowledge intensive benchmarks, surpassing un-tuned in-context RALM approaches such as RePlug and compete effectively against methods that require extensive pre-training such as Atlas.
|
2305.05382 | Feasibility of Passive Sounding of Uranian Moons using Uranian
Kilometric Radiation | We present a feasibility study for passive sounding of Uranian icy moons
using Uranian Kilometric Radio (UKR) emissions in the 100 - 900 kHz band. We
provide a summary description of the observation geometry, the UKR
characteristics, and estimate the sensitivity for an instrument analogous to
the Cassini Radio Plasma Wave Science (RPWS) but with a modified receiver
digitizer and signal processing chain. We show that the concept has the
potential to directly and unambiguously detect cold oceans within Uranian
satellites and provide strong constraints on the interior structure in the
presence of warm or no oceans. As part of a geophysical payload, the concept
could therefore have a key role in the detection of oceans within the Uranian
satellites. The main limitation of the concept is coherence losses attributed
to the extended source size of the UKR and dependence on the illumination
geometry. These factors represent constraints on the tour design of a future
Uranus mission in terms of flyby altitudes and encounter timing. | Andrew Romero-Wolf, Gregor Steinbruegge, Julie Castillo-Rogez, Corey J. Cochrane, Tom A. Nordheim, Karl L. Mitchell, Natalie S. Wolfenbarger, Dustin M. Schroeder, Sean T. Peters | 2023-05-06T00:12:00Z | http://arxiv.org/abs/2305.05382v1 | # Feasibility of Passive Sounding of Uranian Moons using Uranian Kilometric Radiation
###### Abstract
We present a feasibility study for passive sounding of Uranian icy moons using Uranian Kilometric Radio (UKR) emissions in the 100 - 900 kHz band. We provide a summary description of the observation geometry, the UKR characteristics, and estimate the sensitivity for an instrument analogous to the Cassini Radio Plasma Wave Science (RPWS) but with a modified receiver digitizer and signal processing chain. We show that the concept has the potential to directly and unambiguously detect cold oceans within Uranian satellites and provide strong constraints on the interior structure in the presence of warm or no oceans. As part of a geophysical payload, the concept could therefore have a key role in the detection of oceans within the Uranian satellites. The main limitation of the concept is coherence losses attributed to the extended source size of the UKR and dependence on the illumination geometry. These factors represent constraints on the tour design of a future Uranus mission in terms of flyby altitudes and encounter timing.
## Plain Language Summary
The large moons of Uranus are hypothesized to have subsurface oceans beneath their icy crust. This paper analyzes the possibility to use natural radio emissions originating from Uranian auroras to probe for these oceans. Cold ice is transparent to radio waves allowing reflections from liquid water to be readily observed. Monitoring the radio noise patterns from Uranus and searching for the reflections could constitute a direct way to detect the subsurface oceans.
## 1 Introduction
The Uranian system consists of the ice giant Uranus and its 27 known moons. Among these moons, the five largest (Miranda, Ariel, Umbriel, Titania, and Oberon) are of particular interest due to their potential for subsurface oceans (Hussmann et al., 2006; Hendrix et al., 2019; Bierson & Nimmo, 2022; Castillo-Rogez et al., 2023). Such oceans are of great interest in the search for potentially habitable environments in the Solar System and could provide insight into the thermal and evolutionary history of the moons. The _Origins, Worlds, and Life_ decadal survey prioritized the Uranus Orbiter and Probe mission as the next Flagship to be started this decade.
Miranda, the innermost of the five moons, is known for its relatively young surface and extensive tectonic features, including cliffs, canyons, and grooves, which have been interpreted as evidence of a recent tidal heating event (C. Beddingfield et al., 2015; C. B. Beddingfield, Leonard, et al., 2022). Ariel also exhibits signs of past activity, most prominently the chasmata, canyons likely formed by extension (C. B. Beddingfield, Cartwright, et al., 2022). Umbriel has a cratered surface with little evidence of tectonic activity (Schenk & Moore, 2020). Titania, the second outermost moon, exhibits a mixture of cratered and smooth regions and is less heavily cratered than the surfaces of either Oberon or Umbriel, implying a younger surface (Kirchoff et al., 2022). Oberon's surface is the most heavily cratered of all the Uranian moons and might therefore have the most ancient surface of the Uranian satellites (Kirchoff et al., 2022).
A proven technique for detecting ice-ocean interfaces is magnetic sounding which has been used to discover subsurface liquid water oceans within Europa, Callisto, and Ganymede (Kivelson et al., 1999, 2002) as well as a putative magma ocean beneath the volcanically active surface of Io (Khurana et al., 2011). Magnetic sounding of the Jovian moons is achieved through magnetic induction, which is facilitated by the time varying Jovian magnetic environment in which they are immersed. The two upcoming missions - NASA's Europa Clipper and ESA's JUICE - will further use magnetic sounding to characterize the oceans within Europa, Ganymede, and Callisto (Grasset et al., 2013; Phillips & Pappalardo, 2014). The strong and highly dynamic magnetic environment of Uranus' magnetosphere also provides a fortuitous laboratory to perform magnetic induction investigation of the Uranian moons. Several recent studies have demonstrated the feasibility to detect induced magnetic field signatures from sub-surface oceans on Uranus' major moons for a wide range of possible ocean configurations (Arridge & Eggington, 2021; Cochrane et al., 2021; Weiss et al., 2021). However, (Castillo-Rogez et al., 2023) showed that sub-surface oceans on these moons, if they exist, could be cold, only a few tens of kilometers thick, and enriched in ammonia. At these conditions, the electrical conductivity of these residual oceans could be very low, which would make them difficult to detect via magnetic induction.
Passive radar sounding using Uranian Kilometric Radio (UKR) emissions has the potential to provide information about the internal structure of these moons, including the thickness of the ice shell and the presence and depth of subsurface oceans, thus making it a complementary technique to magnetic sounding. Kilometric radio emissions range from 1 to 10 kilometers in wavelength, and are emitted by all planets with substantial atmospheres
and magnetic fields (Zarka, 2004). Kilometric emissions have been observed to originate from Uranus and are hypothesized to be generated by cyclotron maser instability (Gulkis and Carr, 1987). The use of passive radar sounding techniques involves detecting and analyzing the reflection of naturally occurring radio waves off of the surface or subsurface of a geophysical target (Romero-Wolf et al., 2015). By analyzing the reflection of radio waves off of the surface or subsurface of Miranda, Ariel, Umbriel, Titania, and Oberon, it may be possible to determine the presence of subsurface oceans and the thickness of the ice shell.
In this paper we establish the feasibility of passively radar sounding oceans in the subsurface of Uranian moons using the UKR emissions in the 100 - 900 kHz frequency band (wavelengths 0.33-3 km) as a source. This study assumes an instrument with specifications similar to the Cassini Radio and Plasma Wave Science (Gurnett et al., 2004) but with a modified digitizer and signal processing chain to perform the cross-correlation of the data needed for passive sounding. The study is analogous to the passive sounding feasibility studies done for Jovian moons using Jovian radio bursts (Romero-Wolf et al., 2015; Schroeder et al., 2016; Steinbrugge et al., 2021). In Section 2 we provide a concept overview and then summarize the current knowledge of the radio source properties. As the sensitivity for passive sounding depends on the spatial extent of the source (which results in coherence losses), the time-bandwidth product available to the instrument, the availability of the source, and the radio losses of the medium being probed, Sections 3, 4, and 5 present our analysis of the source properties, the receiver model, and the target properties, respectively, to provide an initial validation of the concept.
## 2 Concept Overview
The Uranian passive sounder concept is based on prior concepts for sounding of Jupiter's Galilean moons using Jovian radio bursts (Romero-Wolf et al., 2015; Schroeder et al., 2016; Steinbrugge et al., 2021). Passive sounding has been demonstrated experimentally on Earth using reflections of the Sun's quiescent radio emissions reflected off the ocean (Peters et al., 2018), sand (Peters et al., 2021), and water beneath 1 km of ice in Greenland (Peters et al., 2021). Importantly, the authors demonstrated that synthetic aperture radar (SAR) processing was possible in passive radar sounding, enabling additional gain to be recovered (Peters et al., 2021).
The passive sounding concept is summarized in Figure 1. The three main components are the UKR source, whose direct emission is recorded by the receiver, and the emission reflected by the Uranian icy moon target which is recorded by the same receiver.
The source properties relevant for estimating the passive sounding sensitivity are the spatial extent, beam pattern, flux, and instantaneous bandwidth of the radio emissions. These components and their impact on sensitivity will be treated in detail in §3. The spectral structure and its temporal variation can also induce undesired effects on passive sounding (Carrer et al., 2021). However, Roberts et al. (2022) demonstrated a signal conditioning process that removes the undesired effects of spectral variability by flattening the spectral amplitude modulations at "ripple periods" sufficiently to remove those from the expected echoes. This technique works best in the strong signal regime relevant to this concept and will not be treated further.
The receiver point-model used for this study is similar to Cassini (Gurnett et al., 2004) but with a different back-end digitizing at higher instantaneous bandwidth and capable of performing the correlation between the direct and reflected emissions. In the case of a receiver orbiting Uranus, the parameters dominating sensitivity are the duration of the data capture, which is limited by the moon flyby speed and altitude, the receiver's instantaneous bandwidth and center frequency, and the background noise, which we will demonstrate is negligible compared to the UKR itself. The receiver is described in more detail in §4.
The key target properties for sensitivity estimates are the moon's ice shell and subsurface reflector properties. The ice shell thickness and attenuation are based on geophysical
Figure 1: Flow chart outlining the passive sounding concept for Uranian icy moons using Uranian kilometric radio (UKR) radio emission.
models for each icy moon of interest (Miranda, Ariel, Umbriel, Titania, and Oberon). The reflected signal strength is also determined by the dielectric contrast between the ice shell and the subsurface reflector (e.g. liquid water or bedrock) at their interface. The icy moon models will be treated in §5.
Other radio frequency measurements that could aid the geophysical interpretation of the data are goniopolarimetric localization (Cecconi et al., 2009) as performed by Cassini on Saturn, and occultations (Cecconi et al., 2021). Goniopolarimetric localization, where the direction of a circularly polarized wave is identified using correlations between co-located antennas with different orientations, allows for the identification of the source position, which is important for estimating the depth of the subsurface reflector. Occultation of the UKR source by the thick ice shells of Uranian icy moons (\(\sim 100\) km) could potentially be used to estimate the attenuation profiles of the ice. These will be discussed in §7.
The models described above will be combined to produce sensitivity estimates and predictions of what the data might look like for a variety of icy moon geophysical scenario point models. The models and predictions are treated in §6.
## 3 Source Properties
The geometric model of the UKR sources with properties relevant for passive sounding is shown in Figure 2. The UKR sources are located around the northern and southern magnetic poles, which are not aligned with the spin axis of Uranus. The radio emission regions are highly extended with cone-shaped beams emanating along the magnetic field lines. The key properties for passive sounding are the UKR flux (§3.1), the angular extent of the source emitting region \(\Delta\theta\), which limits the coherence of the correlation used for passive sounding (§3.2), and the beam pattern, which limits the view angles \(\theta_{\rm view}\) for which the source illuminates the icy moon (§3.3).
### Flux Density
Studies of the UKR source are all based on the encounter by Voyager-2 in January of 1986 (Stone, 1987). In the vicinity of the Uranian icy moons, the UKR is the brightest source in the sky by far in the 25 \(\rm kHz\) - 900 \(\rm kHz\) band. In Figure 3 we show the average UKR flux from Zarka (1998) normalized to the locations of the Uranian icy moons. Miranda, the icy moon closest to Uranus, is shown with a dashed line to indicate that it is uncertain whether the
UKR illuminates this moon or not (see §3.3). The fluxes incident on the other four moons are several orders of magnitude stronger than the Galactic sky background radiation. In §4.2 we provide a detailed analysis of background noise sources to show that the limiting background for sounding is the UKR itself.
### Extent and Coherence Losses
The size of the UKR source is a key parameter for estimating the feasibility of passive sounding. If the spatial extent of an incoherent source is too large, the different emission regions can interfere with each other to the point of removing all coherence in the cross-correlation between the direct and reflected radiation, making passive sounding less effective (Peters et al., 2022).
Figure 2: Geometry of the UKR sources and icy moon. The location and size of the southern source, labeled by a blue “S”, are based on Voyager-2 observations (Menietti et al., 1990). The figure is drawn to scale for Uranus, the source extent, and the icy moon Miranda. The northern UKR source, labeled with a red “N”, was not observed by Voyager-2 and we model it as an antipodal clone of the southern source. Ultraviolet images of Uranus taken with the Hubble Space Telescope reveal morphological differences between the northern and southern aurorae (Lamy et al., 2017), which is indicative of differences between their corresponding radio sources. Detailed modeling of the radio source is left to future work (see §7) and observational constraints could eventually be obtained directly by a spacecraft. The vector \(\mathbf{r}_{src}\) corresponds to a location in the UKR source region, and \(\theta_{\rm view}\) is the view angle of that location as seen from the icy moon at position \(\mathbf{r}_{\rm M}\). In this illustration, the icy moon is located at the same longitude as the UKR southern source, although this is not necessarily the case. The angle \(\Delta\theta\) represents the source extent as seen from the icy moon.
Menietti et al. (1990) performed a ray tracing study to determine the southern source region of the smooth high-frequency nightside Uranus kilometric radiation. Their results show that the relevant altitude of the radio source is about 1.5 \(R_{U}\) at 700 kHz. Figure 3 of that paper bounds the spatial extent of the source between 0.47 - 0.53 \(R_{U}\). Here we adopt a conservative bound by assuming all regions radiate isotropically. We know this is conservative because the radiation follows a conical beam pattern with opening angle spanning from 90\({}^{\circ}\) to at least 120\({}^{\circ}\) but possibly as large as 160\({}^{\circ}\) (Menietti et al., 1990). Including this more detailed model would relax the coherence limitations for the icy moons, except possibly for Miranda, since it could reduce its overall illumination.
The source extent as seen from the observer corresponds to an angular extent denoted by \(\Delta\theta\) (see Figure 2). The estimates below follow Peters et al. (2022). At a given wavelength \(\lambda\), this angular extent determines the maximum altitude \(h_{\rm max}\) at which a receiver can correlate the direct and reflected signals without significant losses
\[h_{\rm max}=\frac{\lambda}{2(1-\cos\Delta\theta)}. \tag{1}\]
Figure 3: The flux density of the Uranian Kilometric Radio (UKR) source (Zarka, 1998) normalized to the distances of the Uranian icy moons. The flux curve for Miranda is dashed because it is currently uncertain whether the beam pattern illuminates it or not. The sky background noise flux (data and parametrization from Cane (1979)) is included for comparison.
The value of \(\Delta\theta\simeq\Delta S/D\), where \(\Delta S\) is the spatial extent of the source projected in the direction of the icy moon and \(D\) is the distance between the icy moon and the UKR source. Figure 4 shows estimates of the maximum altitude below which the passive sounding technique will not suffer from coherence losses. The shaded region corresponds to the estimated source size in units of the Uranian radius. The results indicate that fairly low-altitude flybys are required for the closest icy moons (\(<\) 50 km for Miranda, \(<\) 110 km for Ariel, \(<\) 210 km for Umbriel) while higher-altitude flybys can be tolerated for Titania and Oberon (\(<\) 580 km and \(<\) 1000 km, respectively).
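As a sanity check on Equation 1, the short Python sketch below evaluates \(h_{\rm max}\) at 700 kHz using a projected source extent of 0.5 \(R_{U}\) and nominal orbital radii as a proxy for the source-moon distance \(D\); the orbital radii listed in the code are approximate values assumed for illustration, so the results only roughly reproduce the altitude limits quoted above.

```python
import numpy as np

# Hedged sketch of Eq. (1): maximum altitude before coherence losses set in.
R_U = 25362e3                      # Uranus radius [m]
lam = 3.0e8 / 700e3                # wavelength at 700 kHz [m]
dS = 0.5 * R_U                     # assumed projected source extent (0.47-0.53 R_U range)

orbit_radius_m = {"Miranda": 129.9e6, "Ariel": 190.9e6, "Umbriel": 266.0e6,
                  "Titania": 436.3e6, "Oberon": 583.5e6}   # approximate values

for moon, D in orbit_radius_m.items():
    dtheta = dS / D                                 # angular extent seen from the moon
    h_max = lam / (2.0 * (1.0 - np.cos(dtheta)))    # Eq. (1)
    print(f"{moon}: h_max ~ {h_max / 1e3:.0f} km")
```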
One limitation of this estimate is that it assumes the source is directly overhead; the coherence degrades away from that geometry. Given that the source is on for a significant fraction of time, it may be possible to coordinate such a flyby. Note that we also assumed the entire region in Figure 4 is contributing to the radiation at any given instant (i.e. the emission at each point is isotropic). This is an overestimation since the sources are extended but beamed, which
Figure 4: Maximum sounding altitude for a reference frequency of 700 kHz. The lines corresponding to each icy moon estimate the maximum altitude at which sounding is viable before coherence losses begin to take place, as a function of source size (in units of Uranian radius \(R_{U}\)). At 700 kHz, the upper bound on the source size is shown by the gray shaded region. Maximum altitudes range from \(\sim 50\) km for Miranda to as high as \(\sim 1000\) km for Oberon.
results in a smaller effective source size. More detailed estimates including these effects will be the subject of future work.
### Beam Pattern and Target Illumination
The beam pattern of the UKR determines the spatio-temporal illumination characteristics of the icy moons. The analysis of Menietti et al. (1990) shows the southern source extends from 30\({}^{\circ}\) - 60\({}^{\circ}\) in latitude and has a hollow cone beam pattern with opening angle spanning from 90\({}^{\circ}\) to at least 120\({}^{\circ}\) but possibly as large as 160\({}^{\circ}\). The view angles \(\theta_{\rm view}\) (see Figure 2) at which the beam pattern illuminates the moons range from 45\({}^{\circ}\) (corresponding to the 90\({}^{\circ}\) cone opening angle) up to at least 60\({}^{\circ}\) and possibly as high as 80\({}^{\circ}\).
In Figure 5 we show the southern UKR source view angle \(\theta_{\rm view}\) with respect to the icy moons Miranda and Oberon. The source will illuminate the icy moon when it is within \(\pm\)80\({}^{\circ}\) of the Uranian longitude of the centroid of the southern UKR source at \(\sim\) 235\({}^{\circ}\). The northern radio source was not observable by Voyager-2 so we do not have information on its size and beaming properties. As a proxy, we have also included the northern source as a clone of the southern source rotated to the antipodal point. While it is expected that there are differences between the northern and southern sources, this is meant to show the expected level of source availability. See §7 for a more detailed discussion.
## 4 Receiver Model
### General Properties
We use the Cassini Radio Plasma Wave Science (RPWS) instrument (Gurnett et al., 2004) with a modified digitizer (1 MHz instantaneous bandwidth) and signal processing chain as a baseline for this study. The key properties are the antenna sensitivity and noise contributions in the environment of the Uranian icy moons.
The sensitivity of the instrument is determined by a combination of antenna effective length and instrument noise. At the frequencies of interest (\(<\) 1 MHz), the electrically short antenna approximation is valid. The dipole has an effective length \(L_{\rm eff}\simeq 3.1\) m including stray capacitance losses (Zarka et al., 2004) but a physical length of 7.3 m (tip-to-tip). The noise contributions (internal and external) are covered in the next subsection.
### Radio Frequency Noise
A noise calibration of the Cassini RPWS is provided by Zarka et al. (2004). The internal receiver noise is estimated by taking power spectral density measurements with the antennas stowed prior to deployment. Using the effective length of the dipole antennas, we have converted these data to spectral equivalent flux density (SEFD) as shown in Figure 6. The conversion from power at the receiver (in units of V\({}^{2}\) Hz\({}^{-1}\)) to flux (in units of W m\({}^{-2}\) Hz\({}^{-1}\)) is given by \(K=Z_{0}L_{\rm eff}^{2}\simeq 3530\;{\rm m}^{2}\Omega\), where \(Z_{0}\) is the impedance of free space and \(L_{\rm eff}\) is the effective length of the antenna referenced at the receiver including stray capacitance losses (Zarka et al., 2004). The figure also includes the fluxes of the UKR emissions at Miranda and Oberon, which are more than three orders of magnitude greater than the receiver noise. The Galactic background noise from Manning and Dulk (2001) is also shown in Figure 6 and is below the receiver noise except between 600 kHz - 1 MHz where it becomes comparable to the receiver noise.
Figure 5: Source view angle (\(\theta_{\rm view}\) as defined in Figure 2) as a function of the icy moon’s Uranian Longitude. The traces correspond to points \({\bf r}_{\rm src}\) sampled over the source extent of the southern source (blue traces) and for a northern source (red traces). The southern UKR source is modelled based on (Menietti et al., 1990). The northern source was not observable by Voyager-2 and, as a proxy, we have included it as a copy of the southern source model mapped to the antipodal region. Modelling the radio emission of the northern source is future work (see §7 for further discussion). The solid horizontal lines corresponds to the view angles where the southern UKR source would illuminate the icy moon. The dashed black line corresponds to the theoretical maximum cone opening angle from Menietti et al. (1990). The closest (Miranda) and farthest (Oberon) of the icy moons of interest are shown to illustrate the extremes.
The plasma noise dominating at lower frequencies (Figure 6) is due to the currents induced on the antenna by the random motion of free electrons in its immediate vicinity. The plasma noise induced at the terminals of the antenna depends on the half-length of the dipole \(L_{1/2}\), the number density of electrons \(n_{e}\), and their temperature \(T_{e}\). The equation below is adapted from Meyer-Vernet and Perche (1989) with scale factors relevant to this concept:
Figure 6: Noise backgrounds in spectral equivalent flux density (SEFD) compared to expected flux densities of UKR at the icy moons. The fluxes at each icy moon of interest are shown using solid colored lines. The receiver noise of the Cassini low and high frequency band receivers, measured prior to antenna deployment as reported in Zarka et al. (2004), is shown with dash-dotted lines. The lower bound on plasma noise corresponding to an electron density of \(n_{e}=1.0\) cm\({}^{-3}\) and temperature \(T_{e}=3\times 10^{3}\) Kelvin is shown with a yellow dashed line and the upper bound corresponding to \(n_{e}=2500\) cm\({}^{-3}\) and temperature \(T_{e}=10^{3}\) Kelvin is shown with a gray dashed line (see text for details on the choice of parameters). The Galactic background flux is shown as a dashed black line.
\[\langle V_{\rm plasma}^{2}\rangle\simeq 4.1\times 10^{-17}\ {{\rm V^{2}}\over{\rm Hz}}\ \left({n_{e}\over 1\ {\rm cm^{-3}}}\right)\left({T_{e}\over 3 \times 10^{3}\ {\rm K}}\right)\left({f\over 100\ {\rm kHz}}\right)^{-3}\left({L_{1/2} \over 3.65\ {\rm m}}\right)^{-1} \tag{2}\]
In terms of system-equivalent flux density (SEFD), the plasma noise is given by
\[\langle{\rm SEFD_{plasma}}\rangle\simeq 1.2\times 10^{-20}\ {{\rm W}\over{\rm m ^{2}\ Hz}}\ \left({n_{e}\over 1\ {\rm cm^{-3}}}\right)\left({T_{e}\over 3\times 10^{3} \ {\rm K}}\right)\left({f\over 100\ {\rm kHz}}\right)^{-3}\left({L_{1/2}\over 3.65\ {\rm m}}\right)^{-1} \tag{3}\]
Since no data are available on the electron density and temperature near the surface of Uranian icy moons we provide a lower and upper bound. For the lower bound, we use measurements of plasma in the vicinity of Uranus, but far from any moons, taken with Voyager-2 (Sittler et al., 1987). The closest approach of Voyager-2 to Miranda, Ariel, Umbriel, Titania, and Oberon was 29,000 km, 127,000 km, 325,000 km, 365,200 km, and 470,600 km, respectively (Stone, 1987), while ionospheric scale heights are expected to be \(<1,000\) km. The plasma electron temperature during this pass was typically \(T_{e}\simeq 3\times 10^{3}\) eV while the plasma electron density was typically \(n_{e}\simeq 10^{-3}\) cm\({}^{-3}\) but could go as high as \(n_{e}\simeq 1\) cm\({}^{-3}\). The expected lower bound plasma noise level shown in Figure 6 uses \(T_{e}=3\times 10^{3}\) eV and \(n_{e}=1.0\) cm\({}^{-3}\). These plasma electron density and temperature values do not result in a significant source of noise for most of the band of interest.
For the upper bound, we can estimate the electron plasma density \(n_{e}\) assuming its ratio to surface gravity \(g_{\rm surf}\) is approximately constant. The peak electron density of Europa's ionosphere during daytime conditions is \(n_{e,Eu}\simeq 10^{4}\) cm\({}^{-3}\) and drops to levels consistent with zero during nighttime (Kliore et al., 1997). The surface gravity of Uranian icy moons ranges from \(7.9\times 10^{-3}g\) (Miranda) to \(3.7\times 10^{-2}g\) (Oberon), where \(g\) is the surface gravity of Earth, compared to Europa with \(1.3\times 10^{-1}g\). The scaled peak ionospheric density of Uranian icy moons is obtained by assuming the ratio of peak electron density to surface gravity \(n_{\rm e,peak}/g_{\rm surf}\) is constant. The upper bounds on peak plasma density derived in this manner are shown in Figure 7. The resulting plasma noise profile, assuming an electron temperature \(T_{e}\sim 10^{3}\) K which bounds the atmospheric temperature of Europa, typically assumed to be in the hundreds of Kelvin (Kliore et al., 1997), is shown in Figure 6 with the curve labeled Plasma Noise (\(n_{\rm e}\ =\ 2500\) cm\({}^{-3}\)). Note that this bound is aggressively pessimistic since the icy moons of Uranus, unlike Europa, do not reside in a plasma torus and are not expected to be active.
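The sketch below illustrates, under the assumptions just stated, how the gravity scaling and Equation 3 combine to give an upper bound on the plasma-noise flux density; it uses only the two surface gravities quoted in the text (Miranda and Oberon) and \(T_{e}=10^{3}\) K, so the resulting peak densities are close to, but may differ slightly in rounding from, the values plotted in Figure 7.

```python
# Hedged sketch of the gravity scaling for n_e,peak and the plasma-noise bound of Eq. (3).
n_europa, g_europa = 1.0e4, 0.13                 # Europa peak density [cm^-3], gravity [g]
g_moon = {"Miranda": 7.9e-3, "Oberon": 3.7e-2}   # surface gravities quoted in the text [g]

def sefd_plasma(n_e, T_e=1.0e3, f=100e3, L_half=3.65):
    """Eq. (3): plasma-noise equivalent flux density [W m^-2 Hz^-1]."""
    return 1.2e-20 * n_e * (T_e / 3.0e3) * (f / 100e3) ** -3 / (L_half / 3.65)

for moon, g in g_moon.items():
    n_peak = n_europa * g / g_europa             # assumes n_e,peak / g_surf is constant
    print(f"{moon}: n_peak ~ {n_peak:.0f} cm^-3, "
          f"SEFD(100 kHz) ~ {sefd_plasma(n_peak):.1e} W m^-2 Hz^-1")
```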
Figure 7: Range of possible values for the peak ionospheric electron density of the Uranian icy moons. The lower bounds are from Voyager-2 measurements of the plasma density in the Uranian system. The upper bound is obtained by scaling to the peak electron density and surface gravity of Europa. These upper limits are aggressively pessimistic given that, unlike Europa, Uranian icy moons do not reside in a plasma torus nor are they expected to be active.
## 5 Target Properties
We consider the interior structure and composition models by Castillo-Rogez et al. (2023) to evaluate the potential to reveal the interior structure of the Uranian satellites using passive radar. The ice shells of the Uranian satellites differ significantly from the ice shells that have been previously studied for radar sounding, e.g., Europa (Kalousova et al., 2017) and Enceladus (Soucek et al., 2023). The ice shells of all major Uranian satellites are generally assumed to be too cold for convective heat transfer to be operating at present (Bierson & Nimmo, 2022; Hussmann et al., 2006) and with thicknesses on the order of 120 to 300 km (Castillo-Rogez et al., 2023). Based on carbonaceous chondrite composition supported by ground based infrared spectroscopy, the satellites could be rich in nitrogen-bearing species (Cartwright et al., 2020, 2023). Furthermore, the presence of subsurface oceans could imply high porosity in the upper crust providing increased thermal insulation. Porosity might have two origins: primordial microporosity (accreted material) and macroporosity introduced by large impacts.
While cold ice tends to be very transparent to radio waves, attenuation increases with temperature and is also affected by impurities, specifically those that are soluble in the ice lattice (e.g., Cl\({}^{-}\), NH\({}_{4}^{+}\), H\({}^{+}\)). Importantly, this implies that attenuation increases as an ice-ocean interface is approached. In addition, the porous crust could lead to volume scattering. However, due to the long wavelength, surface roughness losses are expected to be negligible. Therefore, only attenuation and volume scattering are investigated in the following. For this purpose we consider the following end-member models derived from Castillo-Rogez et al. (2023) for Ariel/Umbriel and Titania/Oberon. Both pairs of moons are expected to be similar enough in structure and composition to be treated together. A sharp ice-ocean interface is likely to be highly reflective. Using sea ice brines as an analog (Stogryn & Desargant, 1985), we predict a reflection coefficient of \(>-0.1\) dB for an ice-ocean interface at frequencies between 10 kHz and 1 MHz. Miranda is not considered here as it is not expected to have an ocean, but we will discuss the potential detection of an ice-rock interface in Section 6.
### Ice Shell Model and Radio Frequency Attenuation
To model the attenuation in ice we assume a conductive temperature profile with a surface temperature of \(T_{s}=70\) K and two ocean cases: one with a thin ocean, highly enriched in ammonia, with an equilibrium temperature at the ice-ocean interface (at depth \(b\)) of \(T_{b}\) = 180 K, and a second case with a thick ocean and a temperature of \(T_{b}\) = 268 K. The structural and compositional parameters are summarized in Table 1. The temperature as a function of depth \(z\) is represented by the equilibrium profile for a thermally conductive ice shell:
\[T(z)=T_{s}\exp\left(z\frac{\ln(T_{b}/T_{s})}{b}\right) \tag{4}\]
The attenuation in ice depends on the electrical conductivity of the material which, in addition to the temperature, further depends on the concentration of lattice soluble impurities. Using the model and the parameters of MacGregor et al. (2015), the conductivity of pure ice as a function of frequency can be approximated by
\[\sigma_{p}=\omega\epsilon_{0}\mathfrak{Im}\left(\frac{\Delta\epsilon^{\prime }}{1+(i\omega\tau)^{1-\alpha}}\right)\,, \tag{5}\]
with the angular frequency \(\omega\), the permittivity in vacuum \(\epsilon_{0}\), the dielectric susceptibility \(\Delta\epsilon\), the relaxation time \(\tau\), and the Cole-Cole distribution parameter \(\alpha\)=0.1 (MacGregor et al., 2015). In the presence of impurities, the conductivity becomes
\[\sigma=\sigma_{p}\exp\left[\frac{E_{ice}}{k_{b}}\left(\frac{1}{T_{r}}-\frac{ 1}{T}\right)\right]+\sum_{i}^{N}\mu_{i}M_{i}\exp\left[\frac{E_{i}}{k_{b}} \left(\frac{1}{T_{r,i}}-\frac{1}{T}\right)\right]\,. \tag{6}\]
The in-ice 2-way attenuation as a function of depth is then given by
\[A_{2}=2\frac{10\log_{10}(e)}{10000\epsilon_{0}\sqrt{\epsilon_{ice}}c}\int_{0}^ {b}\sigma(z)dz\,. \tag{7}\]
We derived the ice shell composition from the ocean composition assuming that the impurities in the ice follow a partition coefficient of 0.137 for Cl in presence of ammonium (Gross et al., 1977) for equilibrium freezing.
The resulting 2-way attenuation as a function of depth is shown in Figure 8, calculated for a center frequency of 100 kHz. The frequency dependence of ice conductivity is relatively flat, so the changes in the results for frequencies between 100 kHz and 1 MHz are rather subtle for temperatures above -55 \({}^{\circ}\)C but tend to decrease with lower temperatures (Fujino, 1967). Due to the similarities in interior structure, we grouped the parameters and results for Ariel and Umbriel, and for Titania and Oberon, considering a thin ocean and a thick ocean case for each moon pair as described by the parameters given in Table 1. The best case scenario in terms of direct ocean detection would be an ocean at the eutectic point, which would lead to a very cold ice-ocean interface and therefore an equally cold ice shell. In such a scenario the attenuation would be effectively negligible.
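A minimal numerical sketch of Equations 4 and 7 is given below: it integrates an assumed conductivity law over the conductive temperature profile to obtain a two-way attenuation. The Arrhenius parameters of the placeholder `sigma_ice` function are illustrative stand-ins, not the MacGregor et al. (2015) impurity parameters used to produce Figure 8, so the printed numbers should not be compared against the figure.

```python
import numpy as np

eps0, c_light, eps_ice = 8.854e-12, 3.0e8, 3.2

def temperature(z, Ts=70.0, Tb=180.0, b=190e3):
    """Eq. (4): equilibrium profile of a thermally conductive ice shell [K]."""
    return Ts * np.exp(z * np.log(Tb / Ts) / b)

def sigma_ice(T, sigma_ref=1.0e-5, E_over_k=7.0e3, T_ref=251.0):
    """Placeholder Arrhenius-type conductivity [S/m], standing in for Eqs. (5)-(6)."""
    return sigma_ref * np.exp(E_over_k * (1.0 / T_ref - 1.0 / T))

def two_way_attenuation_dB(b=190e3, Tb=180.0):
    """Eq. (7), with the prefactor written exactly as in the text."""
    z = np.linspace(0.0, b, 4000)
    integral = np.trapz(sigma_ice(temperature(z, Tb=Tb, b=b)), z)
    return 2.0 * 10.0 * np.log10(np.e) / (1.0e4 * eps0 * np.sqrt(eps_ice) * c_light) * integral

print(two_way_attenuation_dB(Tb=180.0), two_way_attenuation_dB(Tb=268.0))
```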
In the case of a thick ocean, the warm ice close to the ice-ocean interface, in combination with the elevated concentration of impurities within the shell, would lead to high attenuation within the ice. This situation would make it unlikely to directly detect the ocean. However, attenuation only becomes significant below the depth where the temperature is above the NH\({}_{3}\) eutectic temperature, referred to here as the eutectic interface. Below the eutectic interface, the ice is partially molten, where the amount of stable melt is governed by the temperature and concentration of impurities in the ice (Wolfenbarger et al., 2022). The detection of a eutectic interface would provide a constraint on the temperature profile of the ice shell and, if the composition is known, on the location of the putative subsurface ocean. Similar hypotheses have been formulated for the use of active radar sounding in the context of Europa (Culha et al., 2020) and Enceladus (Soucek et al., 2023).
### Volume Scattering
While large porosities are unlikely for larger moons as the porosity significantly decreases above pressures of 25 MPa within the Uranian satellites (Castillo-Rogez et al., 2023), there could still be a porous outer crust resulting from primordial microporosity and fracturing events. Increased porosity values can lead to significant scattering losses if the pore sizes are large compared to the radar wavelength (see discussion within Eluszkiewicz (2004) and Aglyamov et al. (2017) for Europa). In the case of the kilometric radiation from Uranus
| | Ariel/Umbriel (Thin Ocean) | Ariel/Umbriel (Thick Ocean) | Titania/Oberon (Thin Ocean) | Titania/Oberon (Thick Ocean) |
| --- | --- | --- | --- | --- |
| Moon Radius [km] | 580 | 580 | 770 | 770 |
| H\({}_{2}\)O Layer [km] | 190 | 190 | 240 | 240 |
| Ocean Thickness [km] | 2 | 25 | 4 | 50 |
| Ocean Cl [Mol/kg] | 4 | 0.5 | 3 | 0.1 |
| Ocean NH\({}_{3}\) [Mol/kg] | 20 | 5 | 9 | 1 |
| Ocean NH\({}_{4}\) [Mol/kg] | 4 | 0.75 | 3 | 0.9 |

Table 1: Structural and composition models for the attenuation model. For each moon pair we consider a thin ocean case and a thick ocean case. Parameters derived from Castillo-Rogez et al. (2023).
however, the long wavelength significantly reduces the susceptibility to volume scattering. The effect of Mie scattering can be estimated using the anomalous diffraction approximation of van de Hulst (1981). Note that this approximation assumes spheres that are large compared to the wavelength and tends to overestimate scattering losses at lower frequencies, so it can be considered conservative in our case. The scattering efficiency factor in this approximation is given by
\[Q=2-\frac{4}{p}\sin(p)+\frac{4}{p^{2}}(1-\cos(p))\,, \tag{8}\]
with
\[p=4\pi r\frac{(n-1)}{\lambda}\,. \tag{9}\]
In the equation above, \(r\) is the radius of the spheres, \(\lambda\) the radio wavelength, and \(n\) the ratio of refractive indices. With the efficiency factor \(Q\) we can calculate the optical depth of the ice with total thickness \(d\) and porosity \(\phi\) in the same way as Aglyamov et al. (2017) as
\[\tau=\frac{3\phi d}{4r}Q\,, \tag{10}\]
and the two-way scattering losses by \(L=\exp{(-2\tau)}\). Using the extremely conservative case of a porosity of 30% (\(\phi=0.3\)) for the entire ice shell of \(d=180\) km thickness with sphere radii of \(r=5\) m, \(n=1.75\), \(\lambda=3\) km, we find scattering losses of less than 9 dB for the
Figure 8: Radar attenuation as a function of ice shell depth for Ariel and Titania. The results for Ariel are assumed to be identical to Umbriel, and Titania to Oberon, respectively, due to the similar interior structure of the two moons. Shown are the results for a thick ocean case and a thin ocean case, with the respective locations of the ice-ocean interfaces and the eutectic temperatures for compositionally relevant aqueous solutions.
entire ice shell. Therefore, we conclude that volume scattering is not an obstacle for the proposed technique.
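The anomalous-diffraction estimate is simple enough to verify directly; the sketch below evaluates Equations 8-10 for the conservative parameter set quoted above and reproduces a two-way scattering loss just under 9 dB.

```python
import numpy as np

# Eqs. (8)-(10) for the conservative case: phi = 0.3, d = 180 km, r = 5 m, n = 1.75, lambda = 3 km.
r, n, lam = 5.0, 1.75, 3.0e3          # void radius [m], refractive index ratio, wavelength [m]
phi, d = 0.3, 180.0e3                 # porosity, ice-shell thickness [m]

p = 4.0 * np.pi * r * (n - 1.0) / lam
Q = 2.0 - (4.0 / p) * np.sin(p) + (4.0 / p**2) * (1.0 - np.cos(p))
tau = 3.0 * phi * d / (4.0 * r) * Q
loss_dB = -10.0 * np.log10(np.exp(-2.0 * tau))
print(f"Q = {Q:.2e}, tau = {tau:.2f}, two-way scattering loss ~ {loss_dB:.1f} dB")
```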
### Passive Signal-to-Noise Ratio
For the calculation of the passive sounding Signal-to-Noise Ratio (SNR) we follow the approach of Schroeder et al. (2016). In the context of passive sounding, this term can be ambiguous as, by definition, the noise is the signal. Therefore, this value should be understood as the strength of the auto-correlated signal versus the UKR background. Other sources such as the Galactic background are not included in the following calculation. Further, we only calculate the surface SNR for a perfectly reflecting interface. This number should be compared against the estimated attenuation and bulk scattering losses described in Sections 5.1 and 5.2. When the source being used for passive sounding is significantly larger than other backgrounds, the passive SNR generally depends on how much of the noise from the source can be integrated. Therefore, not only the altitude \(h\) but also the flyby speed \(v\) affect the SNR. Further, higher bandwidths \(\beta\) are favorable. Here, we assume that the bandwidth is half the center frequency, which will lead to higher SNRs for higher frequencies (Schroeder et al., 2016).
\[\text{SNR}=\frac{2\sqrt{h\lambda}\beta}{v\left(1+\sqrt{\frac{h}{\lambda}} \tan(\sigma_{s})\right)^{2}} \tag{11}\]
Here, \(\sigma_{\text{s}}\) is the surface slope at the wavelength scale. Assuming a fractal surface, the slope at these scales is expected to be small, and therefore the associated term is negligible. For the flyby groundspeed we consider two end-members, with 3 km/s on the lower end and 10 km/s on the upper end. Based on the maximum altitudes inferred in Section 3.2, we consider altitudes of 10, 100, and 1000 km. The results are shown in Figure 9 and suggest 55 - 70 dB for 100 kHz and 65 - 80 dB for 1 MHz.
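For reference, the sketch below evaluates Equation 11 directly, assuming a negligible surface-slope term and \(\beta=f/2\) as stated above; the printed values are indicative only, since the exact levels plotted in Figure 9 depend on the detailed parameter choices behind that figure.

```python
import numpy as np

def passive_snr_dB(f, h, v, sigma_s_deg=0.0):
    """Eq. (11) in decibels; f [Hz], altitude h [m], groundspeed v [m/s]."""
    lam = 3.0e8 / f
    beta = 0.5 * f                                   # assumed bandwidth: half the center frequency
    denom = v * (1.0 + np.sqrt(h / lam) * np.tan(np.radians(sigma_s_deg))) ** 2
    return 10.0 * np.log10(2.0 * np.sqrt(h * lam) * beta / denom)

for f in (100e3, 1.0e6):
    for h, v in ((10e3, 10e3), (1000e3, 3e3)):       # end-member altitude/speed combinations
        print(f"f = {f/1e3:.0f} kHz, h = {h/1e3:.0f} km, v = {v/1e3:.0f} km/s: "
              f"SNR ~ {passive_snr_dB(f, h, v):.0f} dB")
```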
## 6 Expected Return Signal Characteristics
Based on the discussion of the target properties in Section 5, we can hypothesize a set of interior structure scenarios and their predicted signature from passive radar sounding.
A passive radar operating at kilometric wavelength has to compromise in terms of vertical resolution. Further, integration times over the groundtrack have to be balanced against the horizontal resolution, especially when compared to actively pulsed radars operating at
MHz frequencies. We estimate that over the course of a flyby, only a few range lines can be recorded. Ultimately the number will be a trade-off between the horizontal resolution and the SNR.
Given the dominant effect of ocean temperature on the attenuation and the similarities of the results for the individual moons, we can expect three plausible cases: the presence of no ocean, the presence of a cold ocean, or the presence of a warm ocean. In all three scenarios, some interface will likely be detected but the characteristics of the return signal would be different. The cold (\(<200\) K) ocean case should return a signal from the ice-ocean interface exceeding the strength of the surface return. This is due to the low scattering and attenuation losses on one hand, and the strong reflection coefficient of liquid water on the other hand. This scenario would enable a direct, unambiguous ocean detection and simultaneously determine the thickness of the overlying ice shell. Further, the ratio of the amplitudes of the surface return and ocean return informs the attenuation and therefore constrains the temperature and composition of the ice shell.
In case of a warm (\(>200\) K) ocean, the attenuation is likely too strong to allow the direct detection of an ice-ocean interface. As the ocean extent is assumed to be small, the
Figure 9: Passive signal to noise ratio for the surface reflection as a function of center frequency, altitude and flyby speed. In all scenarios we expect to obtain around 55 - 80 dB.
high concentration of impurities in the lower ice layers will make ice probing by radio waves challenging. However, even in that scenario the NH\({}_{3}\) eutectic interface could be probed with less than 50 dB of attenuation on Titania and Oberon and less than 20 dB of attenuation on Ariel and Umbriel. The eutectic would constitute the first liquid interface, and the presence of liquids would likely shadow the structure beneath. In this scenario, the use of passive radar would therefore be most powerful in combination with a magnetometer, which could detect an induction signal in the warm ocean case (Cochrane et al., 2021).
In the case that no ocean is present, passive radar would likely still detect the ice-mantle interface as the ice shell is expected to be cold in this case (Castillo-Rogez et al., 2023). As the return signal is expected to be less strong than in the cold ocean case, there is some ambiguity from one return alone, as a dim return could also originate from a somewhat warm ocean (due to the enhanced attenuation as the ice-ocean interface is approached). Having multiple range lines distributed over the ground track could characterize the interface and help discriminate an ocean return from a mantle return. Further ways to discriminate between the two cases would be to test if an induced magnetic field is absent or to obtain constraints on the shell temperature, for example by performing UKR occultation measurements to probe the attenuation profile of the ice shell (see §7).
## 7 Discussion
This study is focused on a first evaluation of the feasibility of passive sounding for subsurface oceans in the icy moons of Uranus using UKR emissions. Passive radar sounding presents a complementary technique to magnetic induction; the low electrical conductivity of a cold, ammonia-rich ocean that challenges magnetic induction measurements is favorable for sounding the ice-ocean interface, while extended source size and radio beam patterns limit access to the closer moons. Although this technique is promising, there are a number of modeling aspects that need to be refined in order to minimize the risks of a future implementation. In this section, we discuss some of the developments needed. Their quantification falls outside the scope of this paper and will be the subject of future work.
_Northern UKR source:_ The Voyager-2 flyby of Uranus only partially observed the southern source and none of the northern source. While the northern and southern kilometric radio sources in well-studied gas giants (Jupiter and Saturn) are similar, they do show differences in frequency cutoff and potentially also in size. The uncertainties in source
size and radio emission beam impact source availability and maximum altitude for passive sounding, which are key parameters for planning flybys. These uncertainties can be further characterized and potentially reduced by using forward-modelling computational tools such as the Exoplanetary and Planetary Radio Emission Simulator (ExPRES) (Louis et al., 2019). This simulation can take magnetic field models of Uranus, of which there are many possibilities (see Podolak et al., 1991), along with a plasma density model to predict the visibility of radio emissions. These models can be tested against Voyager-2 data for the southern UKR source and applied to characterize the uncertainties in the northern UKR source. ExPRES also allows an auroral oval model as input to predict visibility of radio emissions. Ultraviolet observations of the Uranian aurorae with the Hubble space telescope (Balcerak, 2012) could be applied as additional input for these predictions.
_Solar radio bursts:_ Solar radio bursts could interfere with a passive sounding flyby. We can bound the probability that this occurs via Equation 12.
\[P_{SB}<0.015\left(\frac{R_{\rm III}}{6.6\ {\rm day}^{-1}}\right)\left(\frac{T_{100 \rm kHz}}{1\ {\rm hr}}\right)\left(\frac{P(>10^{-18}\ {\rm W\ m^{-2}\ Hz^{-1}})}{0.055}\right) \tag{12}\]
The rate of Type III bursts is \(R_{\rm III}\sim 6.6\) per day at solar maximum and decreases by approximately an order of magnitude at solar minimum (Ndacyayisenga et al., 2021). We do not consider type II bursts since they are more than an order of magnitude less frequent than type III bursts at frequencies \(<1\) MHz, and generally much weaker in signal strength (Krupar and Szabo, 2018). An icy moon flyby lasts on the order of minutes to tens of minutes (see §5.3) compared to the \(\sim 1\) hour duration of type III bursts at 100 kHz, so we scale by the duration of the radio burst \(T_{100\rm kHz}\simeq 1\) hr. Finally, we weigh in the probability that the burst exceeds a flux density of \(10^{-18}\ {\rm W\ m^{-2}\ Hz^{-1}}\) at the Uranian system, which is conservatively chosen to be roughly an order of magnitude below the UKR flux at Oberon. The probability of this, \(P(>10^{-18}\ {\rm W\ m^{-2}\ Hz^{-1}})\simeq 0.055\), is based on Krupar and Szabo (2018), where we have scaled by the square of the distance between Earth and Uranus. These conservative estimates result in a probability of a type III solar radio burst \(P_{SB}\) smaller than 1.5%, making it a negligible concern.
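Equation 12 is a straightforward product of the three factors, as the short sketch below makes explicit.

```python
# Eq. (12): bound on the probability of a strong type III burst during a flyby-scale window.
R_III = 6.6 / 24.0      # burst rate at solar maximum [hr^-1]
T_burst = 1.0           # type III burst duration at 100 kHz [hr]
p_strong = 0.055        # P(flux > 1e-18 W m^-2 Hz^-1 at Uranus), from Krupar and Szabo (2018)

P_SB = R_III * T_burst * p_strong
print(f"P_SB < {P_SB:.3f}")   # ~0.015, i.e. below 1.5 %
```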
_Icy moon ionospheres:_ The ionospheres of Uranian icy moons are not well constrained and can limit the minimum usable frequency for sounding, affect the signal shape via frequency-dependent dispersion, and result in additional losses due to Faraday rotation induced by interaction with the Uranian magnetic field.
The peak electron density of the ionosphere sets the cutoff frequency below which radio emissions will not propagate into the surface or subsurface of the icy moon. This ionospheric cutoff is determined by the plasma frequency
\[f_{\rm plasma}\simeq 9\ {\rm kHz}\left(\frac{n_{e}}{{\rm cm}^{-3}}\right)^{1/2}, \tag{13}\]
where \(n_{e}\) is the electron density. While the Voyager-2 flyby of the Uranian system was not close enough to measure the electron density near its icy moons, we can bound the cutoff frequency by scaling electron density and surface gravity to other icy moons such as Europa. Using the upper bounds on peak plasma density derived in §4.2 (see Figure 7) and plugging them into Equation 13 we obtain an upper limit to the peak plasma frequency. Figure 10 shows the usable frequency band below 900 kHz and above the plasma frequency upper limit (green bars), the band that could potentially be used between the plasma frequency upper limit and the ambient plasma frequency in the Uranian system (\(n_{e}=1\ {\rm cm}^{-3}\), \(f_{\rm plasma}=9\ {\rm kHz}\)) (yellow bars), and the frequency band definitely not usable in the Uranian system with \(f<9\ {\rm kHz}\) (red bars). In the worst case, the ionospheric cutoff frequency could be as high as 450 kHz (for Titania), which still allows for a significant part of the UKR spectrum to penetrate into the icy moon. As discussed in §4.2 these upper limits are aggressively pessimistic. Even with these upper bounds, a significant portion of the spectrum of UKR emissions will penetrate through the ionospheres, enabling passive sounding.
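Equation 13 makes the connection between the bounding densities and the quoted cutoff frequencies explicit, as in the sketch below.

```python
import numpy as np

def f_plasma_kHz(n_e_cm3):
    """Eq. (13): plasma (cutoff) frequency in kHz for n_e in cm^-3."""
    return 9.0 * np.sqrt(n_e_cm3)

for n_e in (1.0, 2500.0):   # ambient Uranian value and the pessimistic upper bound
    print(f"n_e = {n_e:6.0f} cm^-3  ->  f_plasma ~ {f_plasma_kHz(n_e):5.0f} kHz")
```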
We also estimate upper bounds on the impact of the icy moon ionospheres on radio signal propagation (Figure 11). Following Grima et al. (2015) we estimate the ionospheric phase delay due to dispersion for 2-way propagation according to
\[\Delta T_{\rm 2way}=\frac{2.69\times 10^{-7}}{f^{2}}TEC. \tag{14}\]
This equation is valid for frequencies above the plasma frequency and the gyrofrequency \(f_{g}=2.8\times 10^{10}B\), which is below 10 kHz for the Uranian icy moons. The dispersion delay, including only frequencies above the plasma frequency, is shown in the left panel of Figure 11. The dispersion allows for the use of 10 kHz sub-bands (corresponding to a time resolution of \(10^{-4}\ {\rm s}\)). For the purposes of estimating an upper bound, we use the total electron content (TEC) of Europa integrated up to an altitude of 1000 km, \(TEC_{Eu}\simeq 4\times 10^{15}\ {\rm m}^{-2}\), and scale it with the square of the surface gravity of the Uranian icy moon to obtain \(TEC_{M}\simeq TEC_{Eu}(g_{M}/g_{Eu})^{2}\). One factor of the surface gravity comes from the scaling to the peak electron density and a second one comes from the modification of ionospheric scale
Figure 10: Frequency band available for passive sounding for each icy moon. The upper value of 900 kHz is due to the cutoff frequency of the UKR source. The green bar extends down to the plasma frequency upper limit derived from scaling to Europa’s peak ionospheric electron densities ( Figure 7) and scale heights. These upper limits are aggressively pessimistic given that, unlike Europa, Uranian icy moons do not reside in a plasma torus nor are they expected to be active. The yellow bars cover the uncertain range between the plasma frequency upper limit and the ambient plasma frequency in the Uranian system (\(f\simeq 9\ kHz\)). Even with these pessimistic upper bounds, a significant portion of the spectrum of UKR emissions penetrate through the icy moon ionospheres and allows for passive sounding.
height. Note that the surface return signal is bright with predictable delays allowing for dispersion effects to be deconvolved and corrected. This same deconvolution would apply to subsurface return signals.
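The two-way dispersion delay of Equation 14 can be evaluated for the gravity-scaled TEC bounds described above; the sketch below uses only the two surface gravities quoted earlier in the text, so it brackets rather than reproduces the full set of curves in Figure 11.

```python
# Eq. (14) with TEC_M = TEC_Eu * (g_M / g_Eu)^2, using the Miranda and Oberon gravities.
TEC_europa = 4.0e15                                   # [m^-2], integrated to 1000 km altitude
g_ratio = {"Miranda": 7.9e-3 / 0.13, "Oberon": 3.7e-2 / 0.13}

def delay_2way_s(f_Hz, TEC):
    return 2.69e-7 / f_Hz**2 * TEC                    # two-way dispersion delay [s]

for moon, gr in g_ratio.items():
    TEC = TEC_europa * gr**2
    print(f"{moon}: TEC ~ {TEC:.1e} m^-2, "
          f"delay at 100 kHz ~ {delay_2way_s(100e3, TEC) * 1e3:.2f} ms")
```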
The 2-way Faraday fading, as defined in (Grima et al., 2015), provides a measure of the signal loss due to Faraday rotation. The right panel of Figure 11 shows expected losses due to this effect, again, assuming pessimistic parameters. The solid lines correspond to the magnetic field being aligned with the direction of propagation while the dashed lines are offset by \(80^{\circ}\) from the direction of propagation.
_Goniopolarimetry:_ The goniopolarimetric technique enables direction finding of circularly polarized radio waves using the correlation between multiple co-located antennas (Cecconi & Zarka, 2005). This technique has been applied successfully to the localization of Saturn's kilometric radio source with the RPWS instrument on Cassini (Cecconi et al., 2009). This technique could be applied to localizing not only the UKR but also potentially the reflected signal from an icy moon. The analysis presented in Cecconi et al. (2009) did require fairly high signal-to-noise ratio cuts, but in this proposed passive radar system there would be significant increases in time-bandwidth product integration that could enable application of the technique to reflected signals. The localization and polarization vector of the reflected signal could provide further insight into the moon ice shells, particularly for the
Figure 11: Left: The ionospheric dispersion delay assuming pessimistic ionospheric total electron content (TEC) obtained by scaling Europa’s \(\mathrm{TEC_{Eu}}\sim 4\times 10^{15}\) m\({}^{-2}\) to the square of surface gravity of Uranian icy moons (see text for details). Right: the 2-way Faraday fading vs frequency, as defined in (Grima et al., 2015), which provides a loss of signal due to the birefringence of the ionosphere under the influence of the Uranian magnetic field.
case of bistatic reflections. The feasibility of goniopolarimetric localization in the context of passive sounding should be further explored in simulations.
_Occultations:_ Observing the transmitted power through an icy moon during a UKR occultation pass could potentially serve as an additional characterization of the ice shell attenuation profile. The power levels prior to ingress and following egress would serve as reference levels. Ray-propagation studies would be needed to investigate the sensitivity to various attenuation profile scenarios covered in this paper, including the effects of a potential subsurface reflecting ocean. Studies of Jovian moon occultations with Galileo (Cecconi et al., 2021) have shown that these measurements can be applied to constraining the source location. While this could be accomplished with goniopolarimetry, as mentioned above, occultations may provide additional constraints on the attenuation profile of the ice shells by measuring the transmission of the UKR. The combination of UKR occultations and goniopolarimetric localization with transmission through the ice could prove a powerful technique, although feasibility needs to be demonstrated via detailed simulations.
## 8 Conclusions
This initial feasibility assessment of passively sounding Uranian icy moon cryospheres using Uranian Kilometric Radio emissions is promising. We have reached this conclusion after evaluating the source properties, receiver model, target properties, and a range of possible physical models of the Uranian icy moon cryospheres.
The flux density of the UKR source in the vicinity of Uranian icy moons is orders of magnitude higher than the background identified, meaning the performance for passive sounding is limited only by the integration time-bandwidth product. The source extent is sufficiently compact for a passive sounder to maintain coherence at reasonable flyby altitudes. The beam pattern and source extent make the UKR source availability predictable with values of at least \(\sim 55\%\) and possibly as high as \(\sim 87\%\) if the beam pattern is wider than what was available to the Voyager-2 flyby of Uranus.
The receiver used for this study is modeled after the RPWS instrument on NASA's Cassini mission but with a modified back end consisting of a 1-MHz instantaneous bandwidth digitizer and signal processing chain. The measured receiver noise floor is significantly below the UKR flux in the vicinity of Uranian icy moons so that it is not necessary to improve on it for passive sounding. The plasma noise will not significantly impact the frequency
band of interest provided that the electron density in the plasma surrounding the receiver is \(n_{e}<2500\) cm\({}^{-3}\), which is an aggressively pessimistic estimate based on scaling Europa's ionosphere and surface gravity, and Voyager-2 measurements in the Uranian system were \(n_{e}\leq 1\) cm\({}^{-3}\). Galactic noise is also a negligible contributor to background noise at the frequencies of interest (\(<1\) MHz).
For cold oceans, which challenge magnetic induction techniques, passive sounding can directly probe the ice-ocean interface. We predict that losses due to attenuation and scattering due to porosity will be small. If the oceans are warm such that the attenuation prohibits direct ocean detection, brine expected in the lower ice shell, where the ice temperature exceeds the NH\({}_{3}\) eutectic temperature, will still be detectable, allowing constraint of the thermal profile of the ice shell. Under these circumstances this method would complement magnetic induction techniques by directly measuring the ice shell thickness, thus enhancing the ability to characterize ocean properties.
Given this is an initial estimate, we have identified key modeling refinements needed to further develop this concept. Radio emission simulations and the ionospheric density profile expectations are important to understand the uncertainties and provide more accurate estimates of source availability. Future studies of the goniopolarimetric capabilities and UKR occultation by the ice shells would further enrich the understanding of the Uranian icy moon cryospheres.
## Data Availability Statement
This work uses publicly available data from a variety of sources. Figure 3 uses the UKR average flux density spectrum from Zarka (1998) and the sky background noise spectral density from Cane (1979). Figure 4 uses icy moon radii and orbital distances from [https://ssd.jpl.nasa.gov/sats/phys_par/](https://ssd.jpl.nasa.gov/sats/phys_par/) and [https://ssd.jpl.nasa.gov/sats/ephem/](https://ssd.jpl.nasa.gov/sats/ephem/), respectively; the source size is obtained from Figure 3 of Menietti et al. (1990) along with the maximum altitude limit provided in Peters et al. (2022). Figure 5 samples points in Figure 3 of Menietti et al. (1990) along with view angles derived from the same geometric parameters used in Figure 4. Figure 6 uses UKR fluxes from Zarka (1998) along with Cassini RPWS noise and calibration data from Zarka et al. (2004). The Galactic flux curve is obtained from data in Manning and Dulk (2001). Electron density and temperature parameters are provided in the text and are based on representative values from Sittler |
2302.11350 | Ductile Breakup of Tracer Aggregates in Homogenous Isotropic Turbulence | In this paper we study the ductile breakup of tracer aggregates in an
incompressible, homogeneous, and isotropic three-dimensional turbulent flow.
The flow dynamics is studied by means of a direct numerical simulation, whereas
the Lagrangian velocities and stress statistics along trajectories are obtained
by particle tracking. We investigate the breakup dynamics under the hypothesis
that aggregates are able to deform and accumulate energy. Within this
framework, breakup occurs when the energy transferred to the aggregate by the
flow exceeds a critical value. We contrast our predictions for ductile breakup
with those obtained for brittle breakup. We observe that turbulence
intermittency is crucial for the breakup of brittle aggregates, while it
becomes less relevant for ductile aggregates. In the limit of highly ductile
aggregates the breakup rate is dictated by the mean properties of the flow. We
propose a simple model to capture this behaviour. | Graziano Frungieri, Matthaus U. Babler, Luca Biferale, Alessandra S. Lanotte | 2023-02-22T12:47:22Z | http://arxiv.org/abs/2302.11350v1 | # Ductile Breakup of Tracer Aggregates in Homogenous Isotropic Turbulence
###### Abstract
In this paper we study the ductile breakup of tracer aggregates in an incompressible, homogeneous, and isotropic three-dimensional turbulent flow. The flow dynamics is studied by means of a direct numerical simulation, whereas the Lagrangian velocities and stress statistics along trajectories are obtained by particle tracking. We investigate the breakup dynamics under the hypothesis that aggregates are able to deform and accumulate energy. Within this framework, breakup occurs when the energy transferred to the aggregate by the flow exceeds a critical value. We contrast our predictions for ductile breakup with those obtained for brittle breakup. We observe that turbulence intermittency is crucial for the breakup of brittle aggregates, while it becomes less relevant for ductile aggregates. In the limit of highly ductile aggregates the breakup rate is dictated by the mean properties of the flow. We propose a simple model to capture this behaviour.
## 1 Introduction
The fragmentation of particle aggregates in a fluid flow is a phenomenon of broad interest in physical, chemical and environmental problems, including technological applications such as the processing of materials in the food, pharmaceutical and composite industry (Vasquez et al., 2022; Vasquez et al., 2023) or the formation and destruction of particles in the ocean (Andrady, 2017). For small aggregates, breakup is caused by the hydrodynamical shear stresses due to the flow motion, and traditionally it has been assumed to occur in a brittle manner, i.e., to occur instantaneously as soon as the aggregate happens to experience for the first time a fluid dynamic stress exceeding its internal strength (Frungieri et al., 2022).
However, depending on their internal structure and colloidal particle-particle interactions (Frungieri and Vanni, 2021), aggregates are also expected to undergo ductile breakup, i.e., to store the energy transmitted by the fluid stress in internal deformation and to fail only when the accumulated energy exceeds their toughness limit (Marchioli and Soldati, 2015). Accumulation of energy transmitted to the aggregate structure through the hydrodynamic stress was considered by Saha et al. (2016) for the interpretation of experiments on the breakup of single aggregates in turbulence.
In either case, brittle or ductile breakup, a physical understanding of the process in complex flow conditions, such as those of turbulence, is still lacking, due to the difficulties of having at the same time a detailed description of the aggregate structure - accounting for both hydrodynamic and colloidal interactions between constituent particles - and an accurate description of the flow dynamics (Brandt and Coletti, 2022; De Bona et al., 2014; Breuer and Khalifa, 2019).
In some studies, a simplified approach has been adopted, which consists in considering fully the complex turbulent flow dynamics, while drastically reducing the complexity of the aggregate structure, by considering it as a point-particle (Babler et al., 2012). By such an approach, the breakup of small, brittle aggregates (Babler et al., 2012; Babler et al., 2015) has been investigated in different flow configurations. In particular, the breakup rate was measured at varying strength of the aggregates, showing that the fragmentation mechanism has two distinct regimes. For loose aggregates, the fragmentation rate is high, and it has a universal power-law behaviour governed by the smooth, Gaussian fluctuations of the turbulence. For stronger aggregates, the rate of breakup is instead smaller, and its occurrence is controlled by the intermittent and intense burst of the turbulent stress.
Within an approach similar to the one used by Marchioli and Soldati (2015), in this work, we compute the breakup rate of small, ductile tracer aggregates, i.e., aggregates that follow passively the fluid streamlines and that break only when the accumulated energy overcomes their toughness limit. To do this, we use data from a Direct Numerical Simulation of a three-dimensional isotropic turbulent flow at moderate Reynolds number, and we seed the flow with a large number of tracer aggregates. Two aggregate characteristic parameters (the critical stress for initiating the deformation process and the critical accumulated energy) are deemed crucial, and
their effect on the breakup rates is investigated. Our interest in calculating breakup rates is motivated by the possibility offered by population balance models of accurately and efficiently tracking the evolution of the particle size distribution in process scale simulations (Lins et al., 2022; Frungieri and Briesen, 2023; Schiele et al., 2023). The paper is organised as follows: in Section 2, we report the equations used to describe the particle and flow dynamics and the approach used to track the accumulation of shear stresses on the aggregate structure; in Section 3 we discuss results for ductile breakup and we contrast them with those obtained for brittle aggregates, and with the predictions that can be obtained by simple modeling. Concluding remarks follow.
## 2 Methods
We consider a dilute suspension of aggregates described as point-like tracer particles, which have no feedback on the flow in which they are suspended, and which have no hydrodynamical interactions between them. Aggregates are smaller than the Kolmogorov scale of the flow \(\eta\) and are treated as tracers carried passively by the flow. Their equation of motion thus reads as:
\[\dot{\mathbf{x}}_{p}=\mathbf{u}(\mathbf{x}_{p},t) \tag{1}\]
where \(\mathbf{x}_{p}\) is the particle position and \(\mathbf{u}\) the fluid velocity. The latter was evolved according to the incompressible Navier-Stokes (NS) equations reading as:
\[\frac{\partial\mathbf{u}}{\partial t}+\mathbf{u}\cdot\nabla\mathbf{u}=-\frac{ \nabla p}{\rho_{f}}+\nu\nabla^{2}\mathbf{u}+\mathbf{F},\qquad\nabla\cdot \mathbf{u}=0\,. \tag{2}\]
where \(\rho_{f}\) and \(p\) are the fluid density and pressure, respectively, and where \(\mathbf{F}\) is a forcing term injecting energy in the first low-wave number shells and keeping constant their spectral content (Bec et al., 2010). The NS equations are solved on a 512\({}^{3}\) cubic grid with periodic boundary conditions, and a Taylor-scale Reynolds number \(\mathrm{Re}_{\lambda}\simeq 185\). The kinematic viscosity is chosen in such a way that the Kolmogorov length scale equals the grid spacing \(\eta\simeq\delta x\). In Table 1 the main characteristics of the flow are reported. Further numerical details can be found in the work by Bec et al. (2010). The stress acting on the particles is the one due to shear only, which is computed along trajectories as (Kusters, 1991):
\[\sigma(\mathbf{x}_{p},t)=\mu\sqrt{\frac{2}{15}\frac{\varepsilon(\mathbf{x}_{p })}{\nu}} \tag{3}\]
where \(\varepsilon(\mathbf{x}_{p})\) is the local turbulent energy dissipation rate computed as \(\varepsilon=2\nu e_{ij}e_{ij}\) with \(e_{ij}\) being the rate of deformation tensor, and where \(\nu\) and \(\mu\) are the kinematic and dynamic viscosity of the fluid, respectively.
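The stress evaluation in Eq.(3) can be reproduced directly from the velocity-gradient tensor sampled at the particle position. The following minimal sketch (hypothetical viscosity values and an arbitrary gradient tensor; a real implementation would interpolate \(\nabla\mathbf{u}\) from the DNS grid to \(\mathbf{x}_{p}\)) computes \(\varepsilon=2\nu e_{ij}e_{ij}\) and the corresponding shear stress.

```python
import numpy as np

def shear_stress(grad_u, nu, mu):
    """Shear stress (Eq. 3) from the velocity-gradient tensor at the particle position.

    grad_u : (3, 3) array, grad_u[i, j] = du_i/dx_j interpolated at x_p
    nu, mu : kinematic and dynamic viscosity of the carrier fluid
    """
    e = 0.5 * (grad_u + grad_u.T)          # rate-of-deformation tensor e_ij
    eps = 2.0 * nu * np.sum(e * e)         # local dissipation rate, eps = 2 nu e_ij e_ij
    return mu * np.sqrt(2.0 / 15.0 * eps / nu)

# Example with placeholder values (water-like fluid, illustrative gradient):
nu, mu = 1e-6, 1e-3                        # m^2/s, Pa s
grad_u = np.array([[ 10., 200.,   0.],
                   [  0., -10., 150.],
                   [ 50.,   0.,   0.]])    # 1/s, illustrative only
print(shear_stress(grad_u, nu, mu))        # stress in Pa
```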
We are interested in assessing the occurrence of ductile breakup. We assume that the breakup process has to be first activated (and this occurs when the hydrodynamic stress \(\sigma\) acting on the aggregate exceeds a critical value \(\sigma_{cr}\), that is a characteristic of the aggregate internal strength) and then it proceeds through the accumulation of energy until a critical threshold is reached. As long as the condition \(\sigma>\sigma_{cr}\) is met, the aggregate stores energy as:
\[E\left(\tau\right)=\int_{0}^{\tau}\sigma(\mathbf{x}_{p},t)\ \theta(\sigma-\sigma_{cr})dt \tag{4}\]
where \(\theta\) is the Heaviside step function. Breakup occurs when the accumulated energy exceeds a critical threshold \(E_{cr}\). Hence, an individual aggregate that is released at a random time \(t_{0}\) will break after a time-lag \(\tau\) which is the time at which the accumulated energy, as computed from Eq.(4), assumes a value equal to \(E_{cr}\)(that is a characteristic of the aggregate toughness limit). Figure 1 illustrates the approach just outlined. The breakup frequency follows as the inverse of the average time-lag obtained after tracking many aggregates. Formally, this can be written as:
\[f(\sigma_{cr},E_{cr})\equiv\frac{1}{\left\langle\tau(\sigma_{cr},E_{cr}) \right\rangle},\qquad\tau(\sigma_{cr},E_{cr})\equiv\left\{\tau\ \left|\ E_{cr}\right.=\int_{0}^{\tau}\sigma(\mathbf{x}_{p},t)\ \theta(\sigma-\sigma_{cr})dt\right.\right\} \tag{5}\]
where \(f(\sigma_{cr},E_{cr})\) is the breakup rate of aggregates characterized by \(\sigma_{cr}\) and \(E_{cr}\). Here \(\tau(\sigma_{cr},E_{cr})\) is the time-lag elapsed between the aggregate release in a flow region where \(\sigma<\sigma_{cr}\) and the first time \(E(\tau)=E_{cr}\). The brackets \(\left\langle.\right\rangle\) indicate the ensemble average over the Lagrangian trajectories. We average the results over 128000 trajectories.
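To make the time-lag definition in Eqs.(4)-(5) concrete, the sketch below evaluates it for a single trajectory, assuming the Lagrangian stress history \(\sigma(t)\) is available as a discretized series; the synthetic series and the thresholds are placeholders, not the DNS data used in this work.

```python
import numpy as np

def breakup_time_lag(t, sigma, sigma_cr, E_cr):
    """Return the time-lag tau at which the energy accumulated while
    sigma > sigma_cr (Eq. 4) first reaches E_cr; None if it never does."""
    E = 0.0
    for k in range(1, len(t)):
        dt = t[k] - t[k - 1]
        if sigma[k] > sigma_cr:            # Heaviside factor in Eq. (4)
            E += sigma[k] * dt
        if E >= E_cr:
            return t[k] - t[0]             # ductile breakup occurs here
    return None

# Synthetic stress history for illustration only:
rng = np.random.default_rng(0)
t = np.linspace(0.0, 100.0, 10001)
sigma = rng.lognormal(mean=0.0, sigma=0.8, size=t.size)
tau = breakup_time_lag(t, sigma, sigma_cr=2.0, E_cr=50.0)
print(tau)   # averaging 1/<tau> over many trajectories gives the rate in Eq. (5)
```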
In the limit of \(E_{cr}=0\), i.e., for brittle aggregates, breakup occurs when the aggregate released at \(t_{0}\) experiences for the first time a hydrodynamic stress that exceeds the critical stress \(\sigma_{cr}\). This limiting case was investigated by Babler et al. (2012) who also provided a closed-form approximation of the breakup rate for brittle aggregates, based on an earlier model by Loginov (1986):
\[\tilde{f}=\frac{\int_{0}^{\infty}d\dot{\sigma}\,\dot{\sigma}p_{2}(\sigma_{cr}, \dot{\sigma})}{\int_{0}^{\sigma_{cr}}d\sigma\,p(\sigma)} \tag{6}\]
In this expression, \(p_{2}(\sigma,\dot{\sigma})\) is the joint probability density function (PDF) of the hydrodynamic stress \(\sigma\) and of its time derivative \(\dot{\sigma}\) along the aggregate trajectory, and \(p(\sigma)\) is the marginal PDF of the stress \(\sigma\). Both \(p_{2}(\sigma,\dot{\sigma})\) and \(p(\sigma)\) are computed along aggregate trajectories obtained by DNS.
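For reference, Eq.(6) can also be estimated directly from sampled pairs \((\sigma,\dot{\sigma})\) along trajectories. The sketch below uses simple histogram/Monte-Carlo estimates of the required averages and feeds it synthetic samples in place of the Lagrangian DNS statistics used in the paper.

```python
import numpy as np

def brittle_rate(sigma, sigma_dot, sigma_cr, bin_width):
    """Estimate Eq. (6) from sampled (sigma, sigma_dot) pairs along trajectories."""
    in_bin = np.abs(sigma - sigma_cr) < 0.5 * bin_width
    if not np.any(in_bin):
        return 0.0
    # numerator: integral of sigma_dot * p2(sigma_cr, sigma_dot) over sigma_dot > 0
    p_at_cr = in_bin.mean() / bin_width                     # marginal PDF p(sigma_cr)
    mean_pos_rate = np.mean(np.maximum(sigma_dot[in_bin], 0.0))
    numerator = p_at_cr * mean_pos_rate
    # denominator: probability that sigma < sigma_cr
    denominator = np.mean(sigma < sigma_cr)
    return numerator / denominator

# Synthetic stand-in data for illustration only:
rng = np.random.default_rng(1)
sig = rng.lognormal(0.0, 0.8, 200_000)
sig_dot = rng.normal(0.0, 5.0, sig.size)
print(brittle_rate(sig, sig_dot, sigma_cr=3.0, bin_width=0.1))
```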
## 3 Results
We compute first the probability density function of the energy that is accumulated by the aggregates over the whole length of their trajectories. We do this by assuming the aggregates to start accumulating energy as soon as they experience for the first time a stress larger than a critical threshold \(\sigma_{cr}\), and by considering them as infinitely strong, i.e. resistant to breakup. In Figure 2 the results of the analysis are reported at varying values of the critical threshold \(\sigma_{cr}\). For a low threshold of the critical hydrodynamic stress, all aggregates accumulate energy over nearly the entire length of their trajectory, leading to a relatively narrow PDF (black and orange curves in Figure 2). On the other hand, if the critical threshold is high, energy is accumulated only along the few segments of the trajectory where \(\sigma>\sigma_{cr}\). Due to the turbulent flow variability, the local value of \(\sigma\) and the length of these segments are strongly varying quantities, leading to a wider PDF (grey curve). The inset of Figure 2 shows the average of the accumulated energy, which decreases for increasing thresholds.
Figure 1: Illustration of the approach used to assess the occurrence of ductile breakup. Accumulation starts after the aggregate experiences for the first time a shear stress \(\sigma>\sigma_{cr}\). Breakup occurs when the energy accumulated exceeds the toughness limit \(E_{cr}\). In the above example this happens at the time \(t/\tau_{\eta}\cong 195\) indicated by the vertical dotted line.
Figure 3 shows the breakup rate for various values of the accumulation energy \(E_{cr}\) needed for breakup. The dashed line refers to the case of zero energy, i.e., is the one of brittle aggregates. For these, breakup occurs as soon as they experience the critical stress for the first time. Accordingly, the time lag for breakup is comparably short, and the breakup rate is high. As the threshold for the critical energy increases (i.e., as the aggregates become more tenacious), a longer stage of energy accumulation is necessary, and the breakup rate is lower. However, for both the brittle and ductile cases, when the critical stress is large, events where \(\sigma\) is larger than \(\sigma_{cr}\) become rare; consequently, the breakup rate shows a rapid fall off.
When the breakup energy \(E_{cr}\) is large, the aggregates spend a long time in the flow accumulating energy (long compared to the large eddy turn-over time \(T_{L}\)), and during this phase, they sample the whole stress probability space. We use this consideration as the basis for a simple model of the breakup rate, computing the accumulated energy as:
\[E_{cr}\simeq\langle\tau\rangle\int_{\sigma_{cr}}^{\infty}\sigma^{\prime}p( \sigma^{\prime})d\sigma^{\prime} \tag{7}\]
where \(\langle\tau\rangle\) is the time lag for breakup (much larger than \(T_{L}\) and of comparable duration among the different aggregates), we can solve Eq(7) for \(\langle\tau\rangle\) and evaluate the breakup rate as:
\[f(\sigma_{cr},E_{cr})=\frac{1}{\langle\tau\rangle}\simeq\frac{\int_{\sigma_{ cr}}^{\infty}\sigma^{\prime}p(\sigma^{\prime})d\sigma^{\prime}}{E_{cr}} \tag{8}\]
where \(p(\sigma)\) is the PDF of the shear stress, which is plotted in Figure 3b, whereas the prediction of Eq.(8) is reported for the highest energy level considered in our simulations by the solid line in Figure 3a. The model correctly predicts both the plateau of the breakup rate at small critical stress and the fall-off at larger critical stress. Deviations arise because the trajectories do not sample the whole probability space of the stress.
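Since Eq.(8) only requires the marginal stress PDF, its numerical evaluation is straightforward; the sketch below builds \(p(\sigma)\) from a histogram of sampled stresses (synthetic samples stand in for the DNS statistics).

```python
import numpy as np

def ductile_rate(sigma_samples, sigma_cr, E_cr, nbins=400):
    """Evaluate Eq. (8): f = (integral of sigma*p(sigma) over sigma > sigma_cr) / E_cr."""
    pdf, edges = np.histogram(sigma_samples, bins=nbins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    width = edges[1] - edges[0]
    mask = centers > sigma_cr
    integral = np.sum(centers[mask] * pdf[mask]) * width
    return integral / E_cr

rng = np.random.default_rng(2)
sig = rng.lognormal(0.0, 0.8, 200_000)      # synthetic stand-in for the Lagrangian stress samples
print(ductile_rate(sig, sigma_cr=2.0, E_cr=50.0))
```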
Figure 2: Probability distribution function of the energy \(E\) accumulated by aggregates at varying values of the activation stress \(\sigma_{cr}\). The critical stress has been made dimensionless by the average stress \(\langle\sigma\rangle\). In the inset, the average energy is plotted as a function of the critical stress and normalized by the product between the average stress and the simulation run time.
## 4 Conclusions
In this work we have studied the ductile breakup of small aggregates in homogeneous isotropic turbulence by direct numerical simulations. We have treated aggregates as inertialess, tracer point-particles and we have tracked the history of shear stress they experience in the flow. We are interested in evaluating breakup rates. We have investigated the scenario in which aggregates are ductile, i.e., they undergo breakup only if they experience, at least once, a stress larger than a critical one (which can be thought of as the elastic limit beyond which shear stresses induce irreversible deformation) and if the energy accumulated along the trajectory exceeds a critical energy threshold (which can be thought of as the aggregate toughness limit). Under these modeling conditions, at vanishing energy threshold, the usual mechanism for brittle breakup is recovered.
We have observed that for large activation stresses, the rate of breakup is controlled by the turbulence dynamics, and by the occurrence of the bursts of the turbulent hydrodynamic stress. On the other hand, for small activation stresses, turbulence fluctuations play a minor role: aggregates constantly accumulate stress along their trajectory and the breakup is independent of the dynamics of the stress and of the occurrence of intense turbulent bursts.
We have also observed that when aggregates have a large toughness limit, i.e., when they have to accumulate large energies in order to break, the contribution of the turbulent bursts of hydrodynamic stress becomes less relevant, and the occurrence of breakup can be predicted by simple modeling based on the average properties of the flow (Conchuir and Zaccone, 2013). On the contrary, for brittle aggregates, breakup is controlled by turbulent intermittency and occurs at large rates. Finally, our results confirm what was found by Marchioli and Soldati (2015) for the breakup of ductile aggregates in a bounded flow.
Future efforts could explore the use of breakup rates in population balance models to address the fragmentation dynamics and the evolution of the particle size distribution, also possibly in the presence of concurring aggregation phenomena.
###### Acknowledgements.
M.U.B. acknowledges financial support from the Swedish Energy agency (Project Nr. P2019-90227). L. B. received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 882340).
Figure 3: a) Breakup frequency as a function of the critical activation stress. The breakup frequency is made dimensionless by the Kolmogorov time scale of the flow, whereas the critical stress is normalized with the average stress in the flow. Each data series (symbols) refers to a different value of the critical energy. The dashed curve refers to brittle particles. Eq.(8) is plotted as a solid curve for the largest energy level investigated. b) Probability density function of the shear stress. |
2303.06505 | Two-tier PON virtualisation with scheduler synchronization supporting
application-level ultra-low latency in MEC based Cloud-RAN, using MESH-PON | Ultra-low end-to-end latency is one of the most important requirements in 5G
networks and beyond to support latency-critical applications. Cloud-RAN and MEC
are considered as the key driving technology that can help reduce end-to-end
latency. However, the use of MEC nodes poses radical changes to the access
network architecture. As it brings the processing and the networking services
closer to the edge, it often requires network functions (for example, the CU/DU
stack and the application processing) to be distributed across different MEC
sites. Therefore, a novel transport mechanism is needed to efficiently
coordinate and connect network functions across MEC nodes.
In order to address this challenge, we propose a novel two-tier virtualized
PON transport method with schedulers coordination over a virtualised and sliced
MESH-PON architecture. While a MESH-PON architecture enables direct
communication between MEC nodes that are hosting CU/DU and/or the application
processing, our method provides a two tier virtualised PON transport scheme
with coordinated schedulers. This approach greatly reduces latency incurred in
transporting signals across the different PON tiers, while maintaining the
flexibility of the multi-tier methods. We show that our proposed scheme can
achieve end-to-end application-level latency below 1ms or 2ms, depending on the
network configurations. | Sandip Das, Frank Slyne, Daniel Kilper, Marco Ruffini | 2023-03-11T22:19:06Z | http://arxiv.org/abs/2303.06505v1 | Two-tier PON virtualisation with scheduler synchronization supporting application-level ultra-low latency in MEC based Cloud-RAN, using MESH-PON
###### Abstract
Ultra-low end-to-end latency is one of the most important requirements in 5G networks and beyond to support latency-critical applications. Cloud-RAN and Multi Access Edge Computing (MEC) are considered as the key driving technology that can help reduce end-to-end latency. However, the use of MEC nodes poses radical changes to the access network architecture. As it brings the processing and the networking services closer to the edge, it often requires network functions (for example, the CU/DU stack and the application processing) to be distributed across different MEC sites. Therefore, a novel transport mechanism is needed to efficiently coordinate and connect network functions across MEC nodes.
In order to address this challenge, we propose a novel two-tier virtualized PON transport method with schedulers coordination over a virtualised and sliced MESH-PON architecture. While a MESH-PON architecture enables direct communication between MEC nodes that are hosting CU/DU and/or the application processing, our method provides a two tier virtualised PON transport scheme with coordinated schedulers. This approach greatly reduces latency incurred in transporting signals across the different PON tiers, while maintaining the flexibility of the multi-tier methods. We show that our proposed scheme can achieve end-to-end application-level latency below 1ms or 2ms, depending on the network configurations.
## 1 Introduction
As the deployment of 5G networks has moved past its initial phase, operators are pushing forward to find technological solutions to fine-tune their fronthaul/backhaul networks to address key requirements for 5G and beyond. Among these, the support for ultra-low latency applications (i.e., of the order of 1ms) is important to enable mission-critical applications such as Intelligent Transport Systems (ITS), industry 4.0, public safety, including use of Augmented Reality (AR) technology [1]. Cloud Radio Access Networks (C-RAN) and MEC are rapidly replacing the legacy architecture as they can better support network densification and local data processing and storage. From a networking perspective, when considering end-to-end latency (i.e., from source to destination at the application level), we can break down the network elements contributing to overall latency into three sections: the RAN (due to wireless resource scheduling, transmission distance and stack processing), the fronthaul transmission from Radio Unit (RU) to Distributed Unit/Centralised Unit (DU/CU) (due to transmission distance and data scheduling if operating over a PON) and the transport from the DU/CU towards the MEC node running the application.
In a traditional RAN, the access latency is the amount of time the User Equipment (UE) application traffic needs to wait for allocation of uplink resources before transmission, which are generally assigned to the UE via a set of Downlink Control Information (DCI) messages in 5G New Radio (NR). The latency between the buffer status report by the UE and the corresponding uplink resource grant allocation via DCI messages is 4 time slots (e.g., 2 ms for 0.5 ms slot duration [2]). This is the largest contributing factor to RAN access latency. In order to address this bottleneck, Coordinated Grant Scheduling (CGS) (for uplink) and Semi-Persistent Scheduling (SPS) (in downlink) were proposed [3], which semi-statically pre-allocate uplink resources (typically a group of Physical Resource Blocks (PRBs)) to UEs so that they can send their uplink traffic without making a request, thus avoiding waiting for the scheduling of uplink resources.
The second source of latency, as mentioned above, occurs due to the uplink scheduling of fronthaul, when the RAN is transported over a PON. This latency becomes critical if the RAN is employing a functional split of option 6 (between MAC and PHY) or above [4]. Here, if the PON and RAN schedulers are not coordinated properly, data from the UE will need to queue at the ONU side waiting for the PON upstream grant to be provided by the OLT. This coordination issue was recently solved with the development of cooperative Dynamic Bandwidth Allocation (Co-DBA) [5] implemented over the Cooperative Transport Interface (CTI) [6]. This requires the OLT to fetch prior UE uplink scheduling information from the DU and use it to estimate the fronthaul packet size and arrival time at the ONU that is connected to the RU. Based on this information, the OLT can pre-assign uplink resources to the ONU so that the packets from the UEs undergo minimal queuing once they arrive at the ONU from the RU.
However, this does not solve the issue of latency at the application level. As mentioned above, low latency requires the use of methods like CGS in the RAN, where information about incoming data from the UE is not known in advance and thus cannot be passed to the OLT for CTI coordination. CTI currently does not support a RAN that uses the CGS for ultra-low latency. In this work we propose an updated CTI that can support low-latency RAN operations, thus addressing this issue.
The third source of network latency is the data transmission between the DU/CU and the server running the application. Typically, this is sent over the network to a Central Office (CO) or a MEC node, and can involve multiple layer 2 or layer 3 hops, depending on the network configuration. A second contribution of this work is to address these shortcomings through a new MESH-PON approach, described below.
We base our architecture on virtualised PONs (vPONs) [7] operating over a mesh access topology [8]. A MESH-PON makes use of technology such as wavelength reflectors at splitter locations as in [9] (or other configurations as in [10, 11]) to enable direct communications between end points (i.e., without the need of OEO conversion and packet scheduling by the OLT located at the source of the PON tree in the central office). It should be noticed that a legacy PON that does not support mesh connectivity can only operate as a point to multipoint. Thus the only option to achieve connectivity between end points is to communicate to an OLT located at the source of the PON tree (i.e., at the CO), which accumulates considerable latency over each round trip. In our MESH-PON, a vPON can be dynamically created among a set of nodes that require direct communications. For example a number of small cell RUs can create a vPON that includes an MEC node that hosts the DU/CU servers controlling the small cell RUs. The virtualisation aspect (combined with a flexible and tunable physical layer) enables the connectivity among this set of nodes to be created and modified dynamically (i.e., if due to a change in load or services a set of small cell RUs needs to connect to a different MEC node). In our previous work, [12], we presented a method for coordinating transmission from the RAN to a first MEC node that hosts the DU/CU (first tier) and then from there to another MEC node (second tier) that hosts the application. It is worth noting that it is possible for the same MEC node to host both DU/CU and applications. However, our solution allows for a more general case where a functional chain may be spread across multiple locations. This can for example support multi-tenancy [13]: the owner of the C-RAN and that of the application can be different entities that run their services from different MEC nodes. In this work, we extend our work in [12] in two ways. Firstly, we provide a detailed communication protocol and the management of the vPON slices to facilitate the schedulers synchronization for achieving application-level end-to-end low-latency, and analyze the latencies involved in the various stages of the overall end-to-end transport. Secondly, we examine the impact of various network factors (such as traffic load at the RU and average OLT downlink load in the second-tier transport) on the end-to-end latency of applications in our proposed architecture, and discuss potential solutions. Therefore, the overall contributions of this work can be summarised as follows:
1. In the first tier, we propose an enhanced cooperative DBA which can support CGS, to achieve ultra-low latency both in the RAN and fronthaul PON transport.
2. In the second tier, we propose an uplink-to-downlink switching mechanism between virtual PONs that minimizes latency towards the application MEC node.
3. We have provided an understanding of how different network parameters, such as traffic load at the RU and average OLT downlink load in the second-tier transport, impact the end-to-end latency of the proposed architecture, and possible strategies to overcome any shortcomings
The rest of this article is organized as follows: In Section 2, we provide the system architecture of our proposed two-tier vPON transport method. Here we also discuss the details of transport protocol and coordination between two schedulers (\(\mathrm{1^{st}}\) and \(\mathrm{2^{nd}}\) tier). In Section 3, we provide details of the discrete event simulation to carry out performance evaluation, whose results are then provided in Section 4. Finally, we conclude this article in Section 5.
## 2 System architecture
Fig. 1 presents the proposed system architecture and use case. We consider a fixed-mobile converged access scenario, where a mesh TWDM-PON similar to our proposed architecture in [8] is used for sharing C-RAN fronthaul with residential broadband users (not shown in the figure). We also consider MEC nodes hosted at the macro cell site for providing low-latency RAN and application processing. For the low-latency RAN, we target types of URLLC applications with latency requirements of the order of 1ms [1], for example the scenario of a real-time control application at a remote industrial site, as shown in Fig. 1. In order to meet this tight end-to-end latency, we propose the following coordinated two-tier vPON scheduling method.
The proposed architecture features a number of PON endpoints that serve small cell RU sites, which are equipped with wavelength-tunable ONU capability, and MEC nodes that have tunable OLT capability. This enables the OLTs at the MEC nodes to simultaneously communicate with multiple RUs and other OLTs at other MEC nodes via EAST-WEST PON connectivity (illustrated with red-colored dashed-line in Fig. 1). Such direct connectivity between MEC nodes and multiple RUs is achieved by reflecting back selected wavelengths through Fiber Bragg Gratings (FBGs) at the level-1 splitter locations which is shown as "Splitter with local loopback" in Fig. 1. Residential users (not
shown here) can be served by regular, low-cost, non-tunable ONUs and connect to the central office via NORTH-SOUTH, point-to-multipoint connectivity. In order to transport fronthaul data with low latency, ONUs connected to RUs providing URLLC services (referred to as priority-UE traffic) can create a virtual PON slice with the OLT located at nearby MEC nodes (MEC-1 in this case). We refer to this as the \(1^{st}\)-tier and its path is illustrated with the green-colored dashed line in Fig. 1. Finally, similar to our MESH-PON architecture in [9], the tunable ONUs at the MEC sites provide a connection to the central office, ensuring that a communication channel is always available via NORTH-SOUTH connectivity (illustrated in orange-colored dashed line) for exchanging control information, such as receiving vPON slice configurations from the central office. To fully understand the practical feasibility of the MESH-PON connectivity, including the splitter architecture, wavelength planning, physical lightpath creation via vPON formation, and power budget analysis, we encourage readers to refer to our previous work in [9] and [8] for a more in-depth examination.
1. Communication Protocol: In order to achieve low end-to-end latency in fronthaul, the first tier requires a novel, enhanced Co-DBA mechanism, to efficiently incorporate the CGS resources that can deliver URLLC. The conventional Co-DBA [5] utilizes mobile scheduling information 4-TTI (4 NR slot time for the 5G case) in advance from the CU/DU to derive the uplink grants for efficient fronthaul transport with ultra-low latency. The O-RAN standard for Cooperative Transport Interface [6] outlines the interface definition and messaging protocol between the CU/DU and OLT for achieving this Co-DBA. However, the conventional Co-DBA relies on the fact that the proper mobile scheduling information is available 4-NR slots prior and does not take into account the traffic scheduling at CGS resources, where information is not typically known in advance. To encounter this, our proposed enhancement to the conventional Co-DBA incorporates information about the semi-static allocation of CGS resources from RRC to accurately calculate DBA and account for traffic at the CGS resources. As the Radio Resource Control (RRC) block in the CU semi-statically allocates a set of CGS resources to a specific UE for URLLC services, this can be made available from the CU/DU via CTI interface, and our algorithm passes this information to the OLT, so that it can calculate uplink grants using our proposed enhanced Co-DBA considering the allocation of CGS resource particular to the RU. A conservative approach, implemented in this work, is to consider the allocated CGS resources regardless of how many PRBs the UE is actually using. However, efficiency could be further improved by measuring the current UE traffic and then estimate the percentage of the CGS resources that are occupied in the uplink and use it along with the typical mac scheduling information for calculating the grants.
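As a concrete illustration of the grant-sizing step described above, the sketch below adds the semi-statically reserved CGS PRBs (known from RRC) to the dynamically scheduled PRBs reported over CTI, following the conservative full-CGS assumption stated above; the function name, the bytes-per-PRB conversion factor and the framing overhead are illustrative placeholders and are not part of the CTI specification.

```python
def codba_grant_bytes(scheduled_prbs, cgs_prbs, bytes_per_prb, overhead=1.05):
    """Uplink grant (bytes) for one RU-ONU and one NR slot under the enhanced Co-DBA.

    scheduled_prbs : PRBs granted through normal MAC scheduling, known 4 slots ahead (via CTI)
    cgs_prbs       : PRBs semi-statically reserved as CGS resources (allocation known from RRC)
    bytes_per_prb  : fronthaul payload per occupied PRB for the chosen split (e.g. split 7.2)
    overhead       : margin for eCPRI/Ethernet framing (illustrative value)
    """
    # Conservative choice used in this work: assume all reserved CGS PRBs may carry URLLC traffic.
    prbs = scheduled_prbs + cgs_prbs
    return int(prbs * bytes_per_prb * overhead)

# Example with hypothetical numbers: 60 scheduled PRBs plus 27 reserved CGS PRBs (10% of 270)
print(codba_grant_bytes(scheduled_prbs=60, cgs_prbs=27, bytes_per_prb=1200))
```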
The second-tier vPON transport in our proposed architecture targets ultra-low latency for the connection between the DU/CU and the MEC nodes where the application is hosted (shown as MEC-2 in Fig. 1). In a typical PON based fronthaul/midhaul/backhaul deployment, this connection between two MEC nodes is achieved by transporting the traffic via CO which incurs a significant amount of
Figure 1: System architecture and the use case
latency. However, in our proposed architecture, this can be done by configuring the vPON slice of OLT-1 (at MEC-1) to temporarily include the ONU at MEC-2 (which was originally connected to the control channel with the CO [9]) and send the traffic to MEC-2 over the next downlink period of the same vPON slice. It is worth noting that in this work we assume a worst-case scenario, where the packet waits for the next downlink frame to start, although this could be further optimised in future work. The path of this CU/DU to application traffic via the proposed inter-MEC connectivity is illustrated with a red-colored dashed line in Fig. 1. In this case, an uplink transmission (in the \(1^{st}\)-tier) is followed by a downlink transmission (in the \(2^{nd}\)-tier), thus the packet does not need to wait for a DBA scheduling round or implement a complex inter-PON Co-DBA coordination.
Fig. 2 shows the whole two-tier vPON transport process and illustrates the latencies involved in the various locations of the proposed architecture for achieving application-level low latency. As can be seen in Fig. 2, we consider two separate types of traffic: the URLLC traffic, which requires ultra-low latency of the order of 1-2ms, and the normal traffic that is generally latency tolerant (of the order of tens of milliseconds). The URLLC traffic uses the CGS PRB resources to transmit immediately at the next NR slot, while the normal traffic uses the standard RAN-MAC scheduling method, which requires about 4-NR slot times to acquire the PRB resources to transmit the data.
After the fronthaul reception and the CU/DU processing at the MEC, the normal and URLLC UE traffic processed for that particular NR-slot are separated. At this point, the URLLC traffic is sent to the application hosted in the other MEC (\(MEC_{2}\) in this case) and the normal UE traffic is sent to the application hosted at Cloud-Central office. This is sent uplink over the PON using the standard Status-Report based DBA (SR-DBA) mechanism. Therefore, the total application-level end-to-end latency mainly consists of the latencies incurred in the following interfaces of the overall connection path of this proposed architecture: UE access latency, Fronthaul transport latency (\(1^{st}\)-tier), queuing at CU/DU to MEC-1 OLT downlink interface, and CU-DU to application transport latency (\(2^{nd}\)-tier).
The proposed two-tier vPON scheme can also be applied to the return path, where processed data is sent from MEC-2 to the UE. In this specific scenario, the return path consists of an uplink from MEC-2 to MEC-1, followed by a downlink from MEC-1 to the RU. The proposed scheme is effective in this case as well because the amount of processed application response data for the UE from MEC-2 is typically deterministic; therefore, a fixed bandwidth allocation with priority scheduling can be implemented to achieve low latency on the uplink return path in the 2nd tier. This is followed by the downlink path in the fronthaul to deliver the response to the UE. It is worth noting that other configurations such as a two-tier downlink-downlink path, where the OLT at MEC-2 includes the ONU at MEC-1 in its vPON configuration to transmit traffic from MEC-2 to MEC-1 in a downlink PON, can also be explored. Investigating these alternative configurations presents interesting challenges related to load-balancing and maintaining ultra-low end-to-end application level latency.
2. Control and Management:
One of the key operations of this architecture is the control and management of the slices, in order to ensure coordination between the two transport tiers. In our proposal, the configuration of vPON slices is facilitated by the OLT at the CO, which can send control information to both MECs using downlink PLOAM messages. As the ONUs at both MECs are initially connected to the control channel with the CO OLT, the OLTs co-located at both MEC nodes receive the request for vPON slice configuration. In order to facilitate the \(2^{nd}\)-tier transport (which is downlink from MEC-1 OLT in this example), the ONU at MEC-2 tunes its
Figure 2: Two-tier vPON transport process illustrating the latencies involved in the various stages of the overall end-to-end transport
wavelength to connect to the OLT at MEC-1, while OLT-1 updates its vPON slice to include the ONU at MEC-2 to complete the vPON slice reconfiguration process. At this point, any control channel information intended for the MEC-2 OLT is conveyed through the MEC-1 OLT (via downlink). Once this 2\({}^{nd}\)-tier transport is no longer required (i.e., the connection between the CU/DU at MEC-1 and the application at MEC-2), possibly because the application has migrated to another MEC node, the ONU at MEC-2 goes back to the control channel with the CO-OLT. All control and management information is conveyed through PLOAM messaging, which is implemented in our system simulation.
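The slice-management bookkeeping described above can be summarised with a small sketch; the class and identifier names below are illustrative, and only the membership and wavelength updates are tracked, whereas in the real system the same steps are triggered through PLOAM messaging.

```python
class Onu:
    """Tunable ONU endpoint (e.g. the ONU co-located with a MEC node)."""
    def __init__(self, onu_id):
        self.onu_id = onu_id
        self.wavelength = None
    def tune(self, wavelength):
        self.wavelength = wavelength

class VPonSlice:
    """Membership bookkeeping for one OLT's virtual PON slice."""
    def __init__(self, olt_id, wavelength):
        self.olt_id = olt_id
        self.wavelength = wavelength
        self.onus = set()
    def add_onu(self, onu):
        onu.tune(self.wavelength)          # tunable ONU retunes to this OLT's channel
        self.onus.add(onu.onu_id)

def move_onu(onu, src, dst):
    """Retune an ONU from one vPON slice to another (slice reconfiguration)."""
    src.onus.discard(onu.onu_id)
    dst.add_onu(onu)

# 2nd-tier setup: the ONU at MEC-2 temporarily joins the MEC-1 OLT's slice,
# then returns to the CO control channel once the inter-MEC path is no longer needed.
co_control = VPonSlice("CO-OLT", wavelength=1)
olt1_slice = VPonSlice("MEC1-OLT", wavelength=3)
onu_mec2 = Onu("ONU-MEC2")
co_control.add_onu(onu_mec2)
move_onu(onu_mec2, co_control, olt1_slice)   # enables MEC-1 -> MEC-2 downlink transport
move_onu(onu_mec2, olt1_slice, co_control)   # revert when the application migrates away
```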
## 3 Simulation Set Up
The proposed architecture and the use case described above were simulated using OMNET++. The base architecture for the simulation setup follows a MESH-PON framework (for example as described in detail in [8]), with fibre distances reported in Fig. 1. The following enhancements were carried out on the MESH-PON simulator:
1. _RU and user traffic:_ On the wireless side, two user traffic arrival processes (normal and URLLC) are created following different Poisson processes. The CGS is abstracted as a set of PRBs that are reserved for static pre-allocation whenever URLLC traffic arrives. Upon each user traffic arrival at the RU (normal or URLLC), a group of PRB resources (\(N_{PRB}^{user}\), which is chosen to be 5 in this work) is allocated. Here, each RU uses 4 MIMO antennas and a 7.2 split, operating over 100 MHz of bandwidth, where a certain percentage of the PRBs (for example 10% or 20%) are semi-statically allocated/reserved as CGS resources that UEs can acquire for transmitting URLLC traffic immediately at the current 5G-NR slot. The rest of the PRBs are allocated using the standard PRB allocation process, following the resource request-and-grant process which takes about 4 NR slot-times.
2. Two-tier vPON Transport The transport architecture implements a MESH-PON with two-tier vPON transport, with the enhanced Co-DBA as proposed in Section 2 and the uplink-to-downlink switching scheme. In our simulation, we have incorporated a fiber propagation latency of 4.5 \(\mu\)s/Km. As a result, one-way transmission of traffic from MEC-MEC incurs a fixed latency of approximately \(\approx\) 90 \(\mu\)s, while one-way transmission of traffic from MEC-Central office incurs a fixed latency of approximately \(\approx\) 225 \(\mu\)s. In this work, we have taken into account the variability of DU/CU processing time at the MEC. To account for this type of latency, we have assumed that the processing time is proportional to the slot time. This is based on the assumption that shorter slot durations result in shorter timing windows for the CU/DU stack processing, as stated in [14]. Therefore, we have used the slot time as a representation of the DU/CU processing time.). We consider Split-7.2 for the fronthaul transport in this work. The fronthaul rate per RU-ONU with respect to the RU traffic load can be obtained from equation (1). Here, \(\delta_{i}\) is 1 if the corresponding PRB carries data traffic and 0 otherwise. A more in-depth explanation of this equation can be found in [15]. \[\begin{split} R_{7,2}=&\left(R_{\text{PUSCH,IQ}}+R_{ \text{DMRS,IQ}}\right)+R_{\text{PUCCH,IQ}}\\ &+R_{\text{PRACH,IQ}}+R_{\text{SRS,IQ}}\\ =& 2N_{\text{ant}}\left(\sum_{i=1}^{N_{\text{SR}}}N_{ \text{res,}i}N_{\text{RE,}i}\delta_{i}\frac{1}{T_{slot}^{h}}\right.\\ &+\left.N_{\text{reg}}^{\text{PUCCH}}N_{\text{RE}}^{\text{PUCCH}}N _{\text{res}}^{\text{PUCCH}}\frac{1}{T_{slot}^{h}}\right.\\ &+\left.N_{\text{bins}}^{\text{PRACH}}N_{\text{res, PRACH}}\frac{1}{T_{\text{PRACH}}}\right.\\ &\left.+N_{\text{scr,SRS}}\left.N_{\text{res, SRS}}\frac{1}{T_{\text{SRS}}} \right)\right.\end{split}\] (1) Using equation (1), at the end of each NR-slot, the user traffic at the RU is converted to fronthaul traffic depending on how many users are currently active on the current NR slot (consequently the number of PRBs carrying data). Therefore, given a vPON slice configuration (i.e, the number of RU-ONUs in the vPON slice), we can use this to calculate the PON load (or traffic intensity) as follows: \[\text{Traffic Intensity (\%load)}=\frac{\sum_{i=1}^{N_{\text{SR}}}R_{7,2}^{i}}{T_{cap}}\times 100\] Where \(N_{sl}\) represents the number of RU-ONUs in the vPON slice, \(R_{7,2}^{i}\) denotes the fronthaul rate for \(i^{th}\) RU in the vPON slice obtained using (1), and \(T_{cap}\) represents the capacity per OLT channel (which is considered to be 50 Gbps
\begin{table}
\begin{tabular}{l c|c} \hline \multirow{2}{*}{Parameters} & \multicolumn{2}{c}{values} \\ \cline{2-3} & Config-1 & Config-2 \\ \hline NR Bandwidth (per RU) & \multicolumn{2}{c}{100 MHz} \\ NR numerology (\(\mu\)) & 1 & 2 \\ NR Slot Time (\(T_{slot}^{h}\)) & 0.5 ms & 0.25 ms \\ maximum No. of PRBs (\(N_{PRB}^{BW(i)}\)) & 270 & 135 \\ Num PRBs per user (\(N_{PRB}^{user}\)) & 5 \\ Traffic type & [Normal, URLLC] \\ Percentage of PRBs reserved & [10\%, 20\%] \\ for CGS (URLLC traffic) & [10\%, 20\%] \\ Num MIMO layers per RU (\(\upsilon_{layers}^{(i)}\)) & 4 \\ Num antennas per RU (\(N_{\text{ant}}\)) & 4 \\ Modulation order (\(Q_{in}^{(j)}\)) & 256 QAM \\ Number of component carriers & 1 \\ for carrier aggregation (\(J\)) & \\ Scaling Factor (\(f^{(j)}\)) & 1 \\ \(R_{max}\) & 948/1024 \\ Overhead (\(OH^{(j)}\)) & 0.1 (for Frequency range FR1 and UL) \\ OFDM symbol duration & 35.714 \(\mu\)s & 17.85 \(\mu\)s \\ including CP (in \(\mu\)s) & 35.714 \(\mu\)s & 17.85 \(\mu\)s \\ \(\left(T_{s}^{h}=10^{-3}/\left(14*2^{n}\right)\right)\) & \\ \hline \end{tabular}
\end{table}
Table 1: Simulation parameters for RU and user traffic
in this case).
After the fronthaul reception and the CU/DU processing at MEC, the normal and URLLC UE traffic processed for that particular NR-slot is separated. At this point, the URLLC traffic is sent to the application hosted in the other MEC (\(MEC_{2}\) in this case) and the normal UE traffic is sent to the Cloud-Central office for further application processing of the normal traffic. The amount of DU processed data per NR-Slot that is to be sent to the application processing largely depends on the amount of each traffic carried on that NR-Slot (i.e, the number of PRB resource occupied). For example, for the URLLC traffic, the amount of DU processed data for a particular NR-Slot \(D^{i}_{du}\) to be sent from \(MEC_{1}\) to the application processing at \(MEC_{2}\) depends on the number of PRBs occupied on the corresponding NR-slot, and is calculated using the following equation (2).
\[D^{i}_{du}\;(\text{in Mb})=\left((R_{cell}/N^{\text{BW}(j),\mu}_{PRB})\cdot N^{user}_{PRB}\cdot u^{i}_{slot}\cdot T^{\mu}_{slot}\right) \tag{2}\]
Here, \(N^{user}_{PRB}\) denotes the number of PRB resources allocated per user traffic instance. \(u^{i}_{slot}\) and \(T^{\mu}_{slot}\) denote the number of active users in the slot and NR-slot duration, respectively. \(R_{cell}\) denotes the maximum cell throughput calculated using equation (3) and is based on 3GPP TS 38.306 [16].
\[R_{cell}\;(\text{in Mbps})=10^{-6}\cdot\sum_{j=1}^{J}\left(v^{(j)}_{layers}\cdot Q^{(j)}_{m}\cdot f^{(j)}\cdot R_{max}\cdot\frac{N^{\text{BW}(j),\mu}_{PRB}\cdot 12}{T^{\mu}_{s}}\cdot\left(1-OH^{(j)}\right)\right) \tag{3}\]
Tables 1 and 2 report the parameters used for the simulation and for generating the results.
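A minimal sketch of the throughput and DU-payload calculations in Eqs.(2)-(3) is shown below, using the Config-1 parameters from Table 1 (single component carrier); the per-slot active-user count in the example is a made-up value and the function names are illustrative.

```python
def cell_throughput_mbps(v_layers=4, q_m=8, f=1.0, r_max=948/1024,
                         n_prb=270, t_s_us=35.714, overhead=0.1):
    """Max cell throughput (Eq. 3, single component carrier), Config-1 values by default."""
    re_per_sec = n_prb * 12 / (t_s_us * 1e-6)          # resource elements per second
    return 1e-6 * v_layers * q_m * f * r_max * re_per_sec * (1.0 - overhead)

def du_payload_mb(r_cell_mbps, n_prb_bw, n_prb_user, users_in_slot, t_slot_ms):
    """DU-processed data per NR slot to be sent towards the application (Eq. 2)."""
    return (r_cell_mbps / n_prb_bw) * n_prb_user * users_in_slot * (t_slot_ms * 1e-3)

r_cell = cell_throughput_mbps()
print(round(r_cell, 1), "Mbps")                        # ~2.4 Gbps for Config-1
print(du_payload_mb(r_cell, n_prb_bw=270, n_prb_user=5,
                    users_in_slot=4, t_slot_ms=0.5), "Mb per slot")
```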
## 4 Results
We ran our simulations repeatedly for 30-second intervals (about 240,000 OLT grant cycles). We collected latency metrics for each received packet at various locations of the proposed architecture, counting between 4 million and 3 billion values, depending on the observation location and the traffic load.
Fig. 3 shows end-to-end latency in the first tier of the vPON transport i.e., fronthaul latency between the RU and DU. It also demonstrates the advantage of our enhanced Co-DBA over the conventional Co-DBA. Here, the conventional DBA, which does not take into account CGS allocated resources, allocates bandwidth based on the scheduling information of normal-UE traffic (4-NR slot prior) and adds a fixed-bandwidth corresponding to 5% of the overall RU traffic to accommodate for fluctuations due to URLLC traffic. Our proposed Co-DBA, however, takes into account both the allocation of normal-UE scheduling information (4-NR slot prior) and the allocation of semi-static CGS resources obtained from RRC and passed through the CTI interface to the OLT, resulting in significant improvements in latency, particularly at higher loads where URLLC-traffic at CGS resources increases significantly and improper DBA allocation by conventional DBA causes increased queuing latency at the fronthaul between RU and DU.
Fig. 4 shows the end-to-end latency, measured both at the RU-DU interface and at the application level (i.e., end-to-end), of our proposed mechanism, against the traffic load on the PON uplink. This figure also illustrates the end-to-end latency difference between the URLLC traffic (where we employ the two-tier vPON scheme) and the normal traffic (which uses ordinary RAN scheduling, Co-DBA and a main OLT at a CO that is 50 km away). The normal traffic uses PON fronthaul followed by conventional SR-DBA to transport CU/DU traffic to the application at the CO, and serves as the benchmark in this study.
\begin{table}
\begin{tabular}{l c} \hline Parameters & Values \\ \hline (TWDM-PON) Uplink capacity per & 50 Gbps \\ OLT channel (CO-OLT, MEC-OLT) & \\ Fiber Propagation delay & 4.5 \(\mu\)s/Km \\ OLT grant cycle (GC) duration & 125 \(\mu\)s \\ Ethernet (eCPRI) frame size & 2048 bytes \\ Inter packet gap for eCPRI packets & \(10^{-7}\)s \\ ONU response time & 35 \(\mu\)s \\ CU/DU stack processing delay & 0.5ms, 0.25ms \\ \hline \multirow{8}{*}{eCPRI 7.2 rate calculation parameters} & \multirow{8}{*}{\(N_{\text{RE,SRS}}=\)12} \\ & \\ \cline{1-1} & \\ \cline{1-1} & \\ \cline{1-1} & \\ \cline{1-1} & \\ \cline{1-1} & \\ \cline{1-1} & \\ \cline{1-1} & \\ \cline{1-1} & \\ \cline{1-1} & \\ \cline{1-1} & \\ \cline{1-1} & \\ \cline{1-1} & \\ \cline{1-1} & \\ \hline \end{tabular}
\end{table}
Table 2: Simulation parameters for two-tier vPON transport
Figure 3: Latency performance comparison of proposed enhanced Co-DBA vs the conventional Co-DBA.
As can be seen, our proposed scheme can achieve end-to-end (application-level) average latency just above 1 ms (red bar), with maximum latency around 1.7 ms. This remains unaffected by load until around 95% traffic, when the maximum latency increases above 2.5 ms (although the average remains approximately the same). It is important to note that these results do not take into account the latency generated by the application processing, as that would depend on the specific application being run. The results only show the latency of the communication between the applications.
A way to further reduce latency is to reduce the RAN slot duration from 0.5 ms (used for Fig. 4) to 0.25 ms. This is shown in Figs. 5 and 6. Using a shorter NR slot configuration, we are able to meet sub-millisecond latency (both average and maximum) up to about 95% load. This is because a shorter time slot reduces the waiting time for the user traffic. It also reduces the CU/DU processing time window. Therefore, the overall application-level latency is reduced.
In both the results above, we have considered that sufficient downlink bandwidth is always available at the edge-OLT (at the MEC location) for URLLC traffic from the DU to the application located at the other MEC, via the second-tier vPON. However, in practice, only a fraction of the total OLT downlink bandwidth may be available, since the OLT also serves downlink fronthaul for other RUs in that vPON. This fraction of available bandwidth would depend on various factors, for example the downlink traffic load per RU, the functional split and the number of RUs in the vPON slice. Figs. 7 and 8 show the overall application-level latency when we reduce the available downlink bandwidth for the second tier vPON transport. We report the analysis for values of available bandwidth between 25% and 5% (i.e., 75% to 95% of the OLT downlink bandwidth respectively is occupied by other fronthaul services). Here, we consider that 20% of the overall traffic is URLLC (i.e., having an ultra-low latency requirement). Therefore, an increase in overall RU traffic load also means a proportional increase of URLLC traffic. As a consequence, this also increases the traffic at the second-tier path between CU/DU and application at MEC-2. Therefore, given only a fraction of the OLT downlink bandwidth is available for the second-tier transport, it is important to analyse how much traffic load at the RU can be supported with the required ultra-low application-level end-to-end latency.
Figure 4: Average and Max application-level end-to-end latency for different traffic types and PON load (slot time =0.5ms).
Figure 5: Average and Max application-level end-to-end latency for different traffic types and PON load (slot time =0.25ms).
Figure 6: Comparison of Average and Max application-level end-to-end latency at different traffic load for time slot of 0.25ms and 0.5ms.
Figure 7: Average end-to-end application-level latency at different traffic loads for different fractions of the available OLT bandwidth (20%, 10% and 5%) for 2\({}^{nd}\)-tier transport (slot-time=0.5ms).
As can be seen from these figures, our proposed scheme can achieve maximum end-to-end latency below 2 ms (for a 0.5 ms NR slot) or \(\approx\)1 ms (for a 0.25 ms NR slot) for traffic loads up to about 80%, when at least 10% of the downlink bandwidth is available to the OLT for 2nd-tier PON transport of DU-processed traffic towards the application. However, as this available bandwidth reduces to 5%, only traffic loads below about 50% can achieve ultra-low end-to-end latency. Above this load, although the fronthaul latency is still low (\(\approx\) 450 \(\mu\)s), the queuing latency at the interface between the DU and the OLT downlink (for 2nd-tier vPON transport to the application) increases, and therefore the overall latency exceeds 5 ms.
## 5 Conclusions
In this paper, we have proposed a novel two-tier PON transport method with scheduler coordination over a virtualised MESH-PON to achieve application-level ultra-low latency. Our proposed method addressed three major sources of latency, namely: RAN access latency, fronthaul latency and the CU/DU to application transport latency. While the RAN access latency is significantly reduced by using CGS resources to transport URLLC traffic, we have proposed a modification of the Co-DBA to incorporate the traffic at the CGS resources while calculating the uplink grants for maintaining low fronthaul latency. To address the CU/DU to application path latency (backhaul), we have proposed a second-tier vPON transport to enable direct communication between MEC-nodes without layer 2 switching back at the CO OLT, thus reducing latency significantly. The results show application-level end-to-end transport latency of the order of 2 ms, depending upon fronthaul and RU traffic load when using a 0.5 ms NR slot. A further reduction of the end-to-end latency to \(\approx\)1 ms can be achieved by using an even shorter NR slot duration (e.g., 0.25 ms). Therefore, in conclusion, we have shown how the use of a virtualised MESH-PON with coordinated scheduling can be instrumental in supporting URLLC latency requirements of the order of 1 ms, on a network topology covering distances up to 20 km (while ordinary traffic can be served by OLTs in the CO located more than 50 km away). The use of MESH-PON is key to support low-cost connectivity to enable densification of small cells and MEC nodes, which is a key factor for delivering the high capacity, reliability and coverage required by beyond 5G networks and applications.
###### Acknowledgements.
Financial support from SFI grants 17/CDA/4760, 18/RI/5721 and 13/RC/2077 is gratefully acknowledged.
|
2304.13811 | A Data-Driven Hybrid Automaton Framework to Modeling Complex Dynamical
Systems | In this paper, a computationally efficient data-driven hybrid automaton model
is proposed to capture unknown complex dynamical system behaviors using
multiple neural networks. The sampled data of the system is divided by valid
partitions into groups corresponding to their topologies and based on which,
transition guards are defined. Then, a collection of small-scale neural
networks that are computationally efficient are trained as the local dynamical
description for their corresponding topologies. After modeling the system with
a neural-network-based hybrid automaton, the set-valued reachability analysis
with low computation cost is provided based on interval analysis and a split
and combined process. At last, a numerical example of the limit cycle is
presented to illustrate that the developed models can significantly reduce the
computational cost in reachable set computation without sacrificing any
modeling precision. | Yejiang Yang, Zihao Mo, Weiming Xiang | 2023-04-26T20:18:12Z | http://arxiv.org/abs/2304.13811v1 | # A Data-Driven Hybrid Automaton Framework to Modeling Complex Dynamical Systems
###### Abstract
In this paper, a computationally efficient data-driven hybrid automaton model is proposed to capture unknown complex dynamical system behaviors using multiple neural networks. The sampled data of the system is divided by valid partitions into groups corresponding to their topologies and based on which, transition guards are defined. Then, a collection of small-scale neural networks that are computationally efficient are trained as the local dynamical description for their corresponding topologies. After modeling the system with a neural-network-based hybrid automaton, the set-valued reachability analysis with low computation cost is provided based on interval analysis and a split and combined process. At last, a numerical example of the limit cycle is presented to illustrate that the developed models can significantly reduce the computational cost in reachable set computation without sacrificing any modeling precision.
data-driven modeling, hybrid automata, neural networks
## I Introduction
Data-driven methods such as neural networks are widely used in modeling for their effectiveness without relying on the explicit mathematical model or prior knowledge of the system in a variety of research activities, e.g., modeling nonlinear dynamical systems in the description of Ordinary Differential Equations (ODEs) [1] such as modeling thermal conductivity of water-based nanofluid containing magnetic copper nanoparticles in [2], modeling groundwater-level variation in coastal aquifers in [3], etc. However, due to the high complexity of large-scale neural network models, some computationally expensive tasks such as reachability analysis are difficult to perform on neural-network-based models. Therefore, computationally efficient modeling methods are in critical need for neural-network-based models.
Improving the performance of the model and ensuring that the model matches the characteristics of the system, e.g., robustness and stability, makes modeling a challenging task. The training of neural network models has received particular attention in the machine learning community, for instance, adding Lyapunov constraints in the training of neural networks to enhance the stability of learned nonlinear systems in [4], studying the adversarial robustness of neural networks using robust optimization in [5], utilizing the idea of robust optimization in the training of neural networks in [6], and estimating the Lipschitz constant of neural networks in [7]. Besides training, the verification of neural networks plays a crucial part in the usability and safety of neural-network-based models and has been investigated in works such as providing reachable set estimation and safety verification for multi-layer neural networks in [8], verifying the neural network model by star-set-based set-valued reachability analysis in [9], and providing bound-propagation-based neural network verifiers in [10, 11]. The cost of training and verification grows with the size of the neural network: a single neural network trained with all samples may make both training and verification complex and time-consuming.
Inspired by [12], a dynamical system can be modeled by a hybrid automaton with a finite number of local topologies plus transitions among them. Furthermore, if the dynamical description for each topology of the hybrid automaton is a neural network, and the neural network only needs to approximate the local system dynamics within that topology, which means the size of each neural network can be scaled down compared with using a large-scale neural network approximating the entire system dynamics. As a result, the computational complexity in either training or verification will be reduced and moreover, due to parallel training, the scalability can be further increased. In this paper, a neural-network-based hybrid automaton is proposed to reduce the computation cost in training and verification for modeling the dynamical systems. First, the given region is divided into valid partitions representing topologies of the proposed model, based on which the guards are defined. Sample data are selected into different groups with which the neural networks are trained respectively as the dynamical description of their corresponding topologies. Then, the Mean Square Error (MSE) and analysis of the set-valued reachability of our proposed method are provided. Lastly, a numerical example is given to illustrate the effectiveness of our approach.
The main contributions of this paper lie in the way the dynamical system is modeled using our computationally efficient neural-network-based hybrid automaton and in its set-valued analysis. The neural-network-based hybrid automaton models the system with multiple neural networks, each trained with the sample group corresponding to its topology, which reduces the computational complexity while increasing scalability. The set-valued analysis is given based on interval analysis and a _Split and Combine_ process.
The paper is organized as follows: preliminaries and problem formulation are given in Section II. The main result, modeling with neural-network-based hybrid automaton, and the set-valued analysis are given in Section III. In Section IV, a limit cycle modeling example is provided to evaluate our method. Conclusions are given in Section V.
## II Preliminaries and Problem Formulation
The dynamical systems in the paper are in the general discrete-time form of
\[x(k+1)=f(x(k),u(k)) \tag{1}\]
where \(x(k)\in\mathbb{R}^{n_{x}}\) is the system state, and \(u(k)\in\mathbb{R}^{n_{u}}\) is the system input, respectively. The evolution of system state is governed by \(f:\mathbb{R}^{n_{x}+n_{u}}\rightarrow\mathbb{R}^{n_{x}}\), which is an unknown nonlinear discrete-time process. In this paper, we aim to develop a novel data-driven, i.e., neural-network-based, modeling method to approximate this unknown nonlinear mapping \(f\).
**Assumption 1**: _It is assumed that state \(x(k)\) and input \(u(k)\) are all measurable from unknown nonlinear system (1)._
Under Assumption 1, the training data for modeling is defined as follows.
**Definition 1**: _Sampled data \(\mathcal{W}=\{w_{1},w_{2},\cdots,w_{L}\}\) of an unknown dynamical system (1) is a collection of \(L\) sampled traces obtained by measurement, where each trace \(w_{i}\), \(i=1,\ldots,L\), is a finite sequence of time steps and data \((k_{0,i},d_{0,i}),(k_{1,i},d_{1,i}),\cdots,(k_{M_{i},i},d_{M_{i},i})\) in which_
* \(k_{\ell,i}\in(0,\infty)\) _and_ \(k_{\ell+1,i}=k_{\ell,i}+1\)_,_ \(\forall\ell=0,1,\ldots,M_{i}\)_,_ \(\forall i=1,2,\ldots,L\)_._
* \(d_{\ell,i}=[x_{i}^{\top}(k_{\ell,i}),\;u_{i}^{\top}(k_{\ell,i})]^{\top}\in \mathbb{R}^{n_{x}+n_{u}}\)_,_ \(\forall\ell=0,1,\ldots,M_{i}\)_,_ \(\forall i=1,2,\ldots,L\)_, where_ \(x_{i}(k_{\ell,i}),u_{i}(k_{\ell,i})\) _denote the state and input of the system at_ \(\ell\)_th step for_ \(i\)_th trace, respectively._
In this paper, it is assumed that there exist sufficient measurable input and state traces available in \(\mathcal{W}\) for the data-driven modeling of nonlinear system (1). Instead of using one single large-scale neural network, e.g., a Deep Neural Network (DNN), as in most of the existing results [8, 9, 13], which commonly suffer from overly expensive computations in successive use, especially for safety verification before deployment on safety-critical systems, we aim to develop a novel neural-network-based hybrid automaton consisting of a family of small-scale neural networks, i.e., shallow neural networks, along with inferred transitions, to not only accurately approximate the unknown nonlinear system model \(f\) but also hold great promise for performing computationally efficient verification.
In this work, we consider feedforward neural networks in the form of \(\Phi:\mathbb{R}^{n_{0}}\rightarrow\mathbb{R}^{n_{L}}\) defined by the following recursive equations in the form of
\[\begin{cases}\eta_{\ell}=\phi_{\ell}(W_{\ell}\eta_{\ell-1}+b_{\ell}),\;\ell= 1,\ldots,L\\ \eta_{L}=\Phi(\eta_{0})\end{cases} \tag{2}\]
where \(\eta_{\ell}\) denotes the output of the \(\ell\)-th layer of the neural network, and in particular \(\eta_{0}\in\mathbb{R}^{n_{0}}\) is the input to the neural network and \(\eta_{L}\in\mathbb{R}^{n_{L}}\) is the output produced by the neural network, respectively. \(W_{\ell}\in\mathbb{R}^{n_{\ell}\times n_{\ell-1}}\) and \(b_{\ell}\in\mathbb{R}^{n_{\ell}}\) are weight matrices and bias vectors for the \(\ell\)-th layer. \(\phi_{\ell}=[\psi_{\ell},\cdots,\psi_{\ell}]\) is the concatenation of activation functions of the \(\ell\)-th layer in which \(\psi_{\ell}:\mathbb{R}\rightarrow\mathbb{R}\) is the activation function.
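For concreteness, the recursion in (2) can be transcribed directly into a few lines of NumPy; the layer sizes, random parameters, and tanh activation in the sketch below are illustrative placeholders rather than choices made in this paper.

```python
import numpy as np

def forward(weights, biases, eta0, act=np.tanh):
    """Evaluate Eq. (2): eta_l = phi_l(W_l eta_{l-1} + b_l), layer by layer."""
    eta = eta0
    for W, b in zip(weights, biases):
        eta = act(W @ eta + b)
    return eta

# Example: a shallow network mapping R^3 -> R^2 with random parameters.
rng = np.random.default_rng(0)
Ws = [rng.standard_normal((20, 3)), rng.standard_normal((2, 20))]
bs = [rng.standard_normal(20), rng.standard_normal(2)]
print(forward(Ws, bs, np.ones(3)))
```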
In this paper, we focus on reducing the computational complexity of neural-network-based models in terms of reachability analysis. The reachable set of neural network (2) is given as follows.
**Definition 2**: _Given neural network (2) with a bounded input set \(\mathcal{V}\), the following set_
\[\mathcal{Y}\triangleq\{\eta_{L}\mid\eta_{L}=\Phi(\eta_{0}),\;\eta_{0}\in \mathcal{V}\subset\mathbb{R}^{n}\} \tag{3}\]
_is called the reachable set of neural network (2)._
**Remark 1**: _The computation complexity for reachable set computation of \(\mathcal{Y}\) heavily relies on the number of layers and neurons. To enable computationally efficient neural-network-based models in particular for reachability analysis and successive verification procedures, we have to choose neural networks with fewer layers and neurons, however, it usually leads to low training accuracy for complex nonlinear systems. Our goal in this paper is to overcome this dilemma of computation complexity versus training accuracy._
## III Main Results
### _Neural-Network-Based Hybrid Automata_
The main goal of this paper is to develop an efficiently verifiable data-driven model for nonlinear system (1). Inspired by the hybridization methods for the analysis of nonlinear systems in [14, 15] which enable the efficient verification procedures of hybrid automata with complex, nonlinear dynamics through an abstraction process of multiple local subsystems of simplified forms, we propose a hybridization method to model unknown nonlinear dynamics (1) in data-driven modeling scenarios utilizing a collection of small/moderate-size neural networks characterizing local system behaviors.
A neural-network-based hybrid automaton consists of variable components describing both dynamics and the discrete transition logic of a system, which will be used as the modeling framework in the rest of this paper.
**Definition 3**: _A neural-network-based hybrid automaton is defined by a tuple \(\mathcal{H}\triangleq\langle\mathcal{Q},\mathcal{X},init,\mathcal{U},\mathcal{E },g,\mathcal{G},inv,\mathcal{F}\rangle\) in which the components are defined by:_
* _Topologies_:_ \(\mathcal{Q}\triangleq\{q_{1},q_{2},\ldots,q_{N}\}\) _is a finite set of topologies._
* _State Variables_:_ \(\mathcal{X}\subset\mathbb{R}^{n_{x}}\) _is the set of state variables with the_ _state_ _defined by_ \((q,x)\in\mathcal{Q}\times\mathcal{X}\)_._
* _Initial conditions:_ _init_ \(\subseteq\mathcal{Q}_{0}\times\mathcal{X}_{0}\) _is the initial state set, in which_ \(\mathcal{Q}_{0}\subseteq\mathcal{Q}\) _and_ \(\mathcal{X}_{0}\subseteq\mathcal{X}\)_._
* _Inputs_:_ \(\mathcal{U}\triangleq\{u_{1},u_{2},\ldots,u_{M}\}\) _is the set of inputs for each topology._
* _Transitions_:_ \(\mathcal{E}\subset\mathcal{Q}\times\mathcal{Q}\) _is the set of transitions where a discrete transition from_ \(i\)_th topology to_ \(j\)_th topology is taking place, i.e.,_ \(q_{i}\to q_{j}\)_,_ \(e_{ij}=(q_{i},q_{j})\in\mathcal{E}\)_._
* _Guard functions_: \(g:\mathcal{E}\rightarrow\mathcal{G}\) is the guard function mapping each transition element \(e_{ij}\) to its guard \(g(e_{ij})\in\mathcal{G}\).
* _Guards_: \(\mathcal{G}\subseteq 2^{\mathcal{X}}\) is the guard set which satisfies \(\forall e_{ij}\in\mathcal{E}\), \(g(e_{ij})\in\mathcal{G}\). The _guard_ is satisfied by the state when the hybrid automaton model takes a transition from the current topology to another given topology, i.e., \((q_{k},x_{k})\vDash g(e_{ij})\) if and only if \(q_{k}=q_{i}\) and \(x_{k}\in g(e_{ij})\).
* _Invariants_: \(inv:\mathcal{Q}\rightarrow 2^{\mathcal{X}}\) is a mapping that assigns an invariant \(inv(q)\subseteq\mathcal{X}\) for each topology \(q\in\mathcal{Q}\). An _invariant_ is satisfied by all the states of a hybrid automata model for a given topology, i.e., \((q,x)\vDash inv(q)\) if and only if \(x\in inv(q)\).
* _Set of Dynamical Description_: \(\mathcal{F}\) is the set of dynamical description which describes the dynamical process for each given topology \(q\in\mathcal{Q}\). In this paper, neural network \(\Phi_{q}(x(k),u(k))\in\mathcal{F}\) defines the dynamics for each topology \(q\in\mathcal{Q}\) in a given time step \(k\in[k_{1},k_{2}]\).
Given sampled data set \(\mathcal{W}\) defined by Definition 1, the data-driven modeling problem for unknown dynamical system (1) is to establish a neural-network-based hybrid automaton, i.e., \(\mathcal{H}=\langle\mathcal{Q},\mathcal{X},init,\mathcal{U},\mathcal{E},g,\mathcal{G},inv,\mathcal{F}\rangle\) with \(\mathcal{F}\) represented by a collection of neural networks \(\Phi_{q}\), \(q\in\mathcal{Q}\). In addition, the constructed neural-network-based hybrid automaton model \(\mathcal{H}\) is expected to have low computational complexity without sacrificing training accuracy. Specifically, the modeling and verification challenges are described as follows.
**Problem 1**: _Given sampled data set \(\mathcal{W}\) collected from measurable input and state traces of dynamical system (1), how does one construct a hybrid automaton \(\mathcal{H}\) embedded with neural network in the form of (2) to accurately capture the system dynamical behaviors of system (1)?_
**Problem 2**: _Given neural-network-based hybrid automaton model \(\mathcal{H}\) as defined by Definition 3, how does one perform efficient verification procedures on \(\mathcal{H}\)?_
### _Data-Driven Modeling Processes_
To learn local system behaviors, state space partitioning is required for the segmentation of training data set \(\mathcal{W}\).
**Definition 4**: _Given a compact set \(\mathcal{X}\subset\mathbb{R}^{n_{x}}\). A finite collection of sets \(\mathscr{X}\triangleq\{\mathcal{X}^{(1)},\mathcal{X}^{(2)},\ldots,\mathcal{X} ^{(N)}\}\) is called a partition of \(\mathcal{X}\) if (1) \(\mathcal{X}^{(q)}\subseteq\mathcal{X}\), \(\forall q=1,\ldots,N\); (2) \(\mathrm{int}(\mathcal{X}^{(q)})\cap\mathrm{int}(\mathcal{X}^{(p)})=\emptyset, \forall q\neq p\); (3) \(\mathcal{X}\subseteq\bigcup_{q=1}^{N}\mathcal{X}^{(i)}\). Each elements of \(\mathcal{X}^{(q)}\) of partition \(\mathscr{X}\) is called a cell._
**Remark 2**: _In this paper, we use cells defined by hyper-rectangles which are given as follows: For any bounded state set \(\mathcal{X}\subseteq\mathbb{R}^{n_{x}}\), we define \(\mathcal{X}\subseteq\bar{\mathcal{X}}\), where \(\bar{\mathcal{X}}=\{x\in\mathbb{R}^{n_{x}}\mid\underline{x}\leq x\leq\bar{x}\}\), in which \(\underline{x}\) and \(\bar{x}\) are defined as the lower and upper bounds of state \(x\) in \(\mathcal{X}\) as \(\underline{x}=[\inf_{x\in\mathcal{X}}(x_{1}),\ldots,\inf_{x\in\mathcal{X}}(x_{n_{x}})]^{\top}\) and \(\bar{x}=[\sup_{x\in\mathcal{X}}(x_{1}),\ldots,\sup_{x\in\mathcal{X}}(x_{n_{x}})]^{\top}\), respectively. Then, we are able to partition each interval \(\mathcal{I}_{i}=[\inf_{x\in\mathcal{X}}(x_{i}),\ \sup_{x\in\mathcal{X}}(x_{i})]\), \(i\in\{1,\ldots,n_{x}\}\), into \(N_{i}\) segments as \(\mathcal{I}_{i,1}=[x_{i,0},x_{i,1}]\), \(\mathcal{I}_{i,2}=[x_{i,1},x_{i,2}]\), \(\ldots\), \(\mathcal{I}_{i,N_{i}}=[x_{i,N_{i}-1},x_{i,N_{i}}]\), where \(x_{i,0}=\inf_{x\in\mathcal{X}}(x_{i})\), \(x_{i,N_{i}}=\sup_{x\in\mathcal{X}}(x_{i})\), and \(x_{i,m}=x_{i,0}+\frac{m(x_{i,N_{i}}-x_{i,0})}{N_{i}}\), \(m\in\{0,1,\ldots,N_{i}\}\). The cells can then be constructed as \(\mathcal{X}_{q}=\mathcal{I}_{1,m_{1}}\times\cdots\times\mathcal{I}_{n_{x},m_{n_{x}}}\), \(q\in\{1,2,\ldots,\prod_{s=1}^{n_{x}}N_{s}\}\), \(\{m_{1},\ldots,m_{n_{x}}\}\in\{1,\ldots,N_{1}\}\times\cdots\times\{1,\ldots,N_{n_{x}}\}\). To remove redundant cells, we have to check whether a cell has an empty intersection with \(\mathcal{X}\): cell \(\mathcal{X}_{q}\) is removed if \(\mathcal{X}_{q}\cap\mathcal{X}=\emptyset\), and the remaining cells constitute \(\mathscr{X}=\{\mathcal{X}^{(1)},\mathcal{X}^{(2)},\ldots,\mathcal{X}^{(N)}\}\)._
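A minimal sketch of the cell construction in Remark 2 is given below; the box bounds and segment counts are placeholders (they mirror the 4-by-3 segmentation used later in the evaluation), and the removal of redundant cells is omitted since a box-shaped \(\mathcal{X}\) leaves none to remove.

```python
import itertools
import numpy as np

def make_cells(lower, upper, segments):
    """Split the box [lower, upper] into hyper-rectangular cells X^(q) (Remark 2)."""
    edges = [np.linspace(lo, up, n + 1) for lo, up, n in zip(lower, upper, segments)]
    cells = []
    for idx in itertools.product(*[range(n) for n in segments]):
        lo = np.array([edges[d][i] for d, i in enumerate(idx)])
        up = np.array([edges[d][i + 1] for d, i in enumerate(idx)])
        cells.append((lo, up))
    return cells

# Example: x1 in [-4, 4] with N1 = 4 segments and x2 in [-3, 3] with N2 = 3 segments.
cells = make_cells([-4.0, -3.0], [4.0, 3.0], [4, 3])
print(len(cells))  # 12 cells, i.e., 12 topologies
```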
Based on a collection of sets \(\mathscr{X}=\{\mathcal{X}^{(1)},\mathcal{X}^{(2)},\ldots,\mathcal{X}^{(N)}\}\), we will be able to segment sampled data set \(\mathcal{W}\) into a collection of sets \(\mathscr{W}=\{\mathcal{W}_{1},\mathcal{W}_{2},\ldots,\mathcal{W}_{N}\}\) where
\[\mathcal{W}_{q}\triangleq\{x(k)\mid x(k)\in\mathcal{X}^{(q)},\ x(k+1)\in\mathcal{X}^{(q)}\} \tag{4}\]
in which \(x(k)\), \(x(k+1)\) are any sampled states in traces \(w_{i}\), \(i=1,\ldots,L\) defined by Definition 1. Therefore, \(\mathcal{W}_{q}\) contains all the sampled state traces evolving within cell \(\mathcal{X}^{(q)}\).
To train neural network \(\Phi_{q}\) to model local system behaviors of dynamical system (1) evolving in \(\mathcal{X}^{(q)}\), the training input-output pairs need to be abstracted from trace set \(\mathcal{W}_{q}\).
**Definition 5**: _Given sampled trace \(w_{i}\) in set \(\mathcal{W}_{q}\), an input-output pair is defined as_
\[p_{\ell,i,q}=\{d_{\ell,i},\ x_{i}(k_{\ell+1,i})\} \tag{5}\]
_where \(d_{\ell,i}=[x_{i}^{\top}(k_{\ell,i}),\ u_{i}^{\top}(k_{\ell,i})]^{\top}\) is given in Definition 1, and \(d_{\ell,i}\in\mathcal{W}_{q}\times\mathcal{U}\), \(x_{i}(k_{\ell+1,i})\in\mathcal{X}^{(q)}\). The training data set out of \(\mathcal{W}_{q}\) includes all the input-output pairs and is given as below:_
\[\mathcal{P}_{q}=\{p_{0,1,q},\ldots,p_{\ell,i,q},\ldots,p_{M_{L},L,q}\} \tag{6}\]
_where \(\ell=0,1,\ldots,M_{i}\), \(i=1,\ldots,L\)._
Under Assumption 1, we assume that there exists a sufficient number of training input-output pairs in each \(\mathcal{P}_{q}\) out of the segmented data set \(\mathcal{W}_{q}\) to train neural network \(\Phi_{q}\) for modeling local system behaviors in cell \(\mathcal{X}^{(q)}\).
With training set \(\mathcal{P}_{q}\) for each cell \(\mathcal{X}^{(q)}\), the neural networks can be trained respectively for location \(q\). The set of dynamical description \(\mathcal{F}\) which is a collection of neural networks \(\Phi_{q}\), \(q=1,\ldots,N\) can be trained, which can be summarized as the following problem:
\[\min_{W_{q},b_{q}}\left\|\Phi_{q}(D_{q})-\hat{X}_{q}\right\|,\ q=1,2,\cdots,N \tag{7}\]
where \(W_{q}\) and \(b_{q}\) are weight matrices and bias vectors to determine neural network \(\Phi_{q}\), \(D_{q}\) are input data matrix and \(\hat{X}_{q}\) is output data matrix from input-output pair \(\mathcal{P}_{q}\), respectively.
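The segmentation (4)-(6) and the per-cell training (7) can be sketched as follows; the toy dynamics, the 2-by-2 partition, and the use of scikit-learn regressors as stand-ins for the shallow networks \(\Phi_{q}\) are illustrative assumptions and not part of the proposed method itself.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

def f(x, u):
    """Toy unknown 2-D dynamics standing in for system (1)."""
    return 0.9 * x + 0.1 * np.array([x[1], -x[0]]) + 0.05 * u

# Sampled data W: 50 traces of 50 steps each, as in Definition 1.
samples = []
for _ in range(50):
    x = rng.uniform(-1, 1, size=2)
    for _ in range(50):
        u = rng.uniform(-1, 1, size=2)
        x_next = f(x, u)
        samples.append((x, u, x_next))
        x = x_next

# A 2-by-2 hyper-rectangle partition of [-2, 2]^2 (Remark 2).
edges = np.linspace(-2.0, 2.0, 3)
def cell_index(x):
    i = int(np.clip(np.searchsorted(edges, x[0], side="right") - 1, 0, 1))
    j = int(np.clip(np.searchsorted(edges, x[1], side="right") - 1, 0, 1))
    return 2 * i + j

# Segment samples as in Eq. (4): x(k) and x(k+1) must lie in the same cell q.
groups = {q: [] for q in range(4)}
for x, u, x_next in samples:
    q = cell_index(x)
    if cell_index(x_next) == q:
        groups[q].append((np.concatenate([x, u]), x_next))

# Train one small network per topology, Eq. (7); each fit is independent of the others.
models = {}
for q, pairs in groups.items():
    if pairs:
        D = np.array([d for d, _ in pairs])
        Y = np.array([y for _, y in pairs])
        models[q] = MLPRegressor(hidden_layer_sizes=(20,), max_iter=3000).fit(D, Y)
print({q: len(v) for q, v in groups.items()})
```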
**Remark 3**: _In general, the learning processes can be viewed as an optimization procedure to find optimized weights and biases that minimize the error between the output of the trained neural network and output training data, as described in (7). Instead of using one single neural network for modeling, a collection of neural networks \(\Phi_{q}\), \(q=1,\ldots,N\) are used to model local system behaviors in each cell \(\mathcal{X}^{(q)}\), \(q=1,\ldots,N\). We have the following advantages if we use multiple neural networks:_
* _Compared with modeling the complex global system behavior using a single large-size neural network_ \(\Phi\)_, a number of neural networks_ \(\Phi_{q}\)_,_ \(q=1,\ldots,N\)_, of much smaller size are sufficient, since each_ \(\Phi_{q}\) _only needs to approximate the local system behavior within its cell_ \(\mathcal{X}^{(q)}\)_._
* Since \(\mathcal{W}_{q}\), \(q=1,\ldots,N\) are independent of each other, the training processes for small-size neural networks \(\Phi_{q}\), \(q=1,\ldots,N\) can be conducted in a parallel manner which would be more computationally efficient than training a large scale neural network.
* Even though there are multiple neural networks in the model, only one small-size neural network is activated at each time step \(k\). Therefore, the computation effort at each step is only determined by the active small-size neural network. This feature is extremely helpful for executing computationally expensive tasks based on the model such as reachability-based safety verification in which we only need to compute the reachable set of a small-size neural network at each step.
The above benefits of using multiple small-size neural networks enable efficiently verifiable neural-network-based models of dynamical system (1). A detailed evaluation will be presented in the evaluation section.
After obtaining the collection of neural networks as the set of dynamical descriptions in the hybrid automaton, the transition between two neural networks of dynamical descriptions is defined as follows.
**Definition 6**: _Transitions between two topologies are automatically generated by the dynamical description of hybrid automaton, such that \(\forall p,q=1,2,\ldots,N,\ p\neq q\), \((p,q):x\in\mathcal{X}^{(p)},\ \Phi_{p}(x,u)\in\mathcal{X}^{(q)}\)._
An illustration of neural-network-based hybrid automaton model \(\mathcal{H}\) is given in Fig. 1. After modeling the dynamical system (1) in the framework of hybrid automaton model \(\mathcal{H}\), we can evaluate \(\mathcal{H}\) using the following Mean Square Error (MSE) performance out of \(L\) test traces which is defined as
\[\mathrm{MSE}=\frac{1}{\sum_{i=1}^{L}M_{i}}\sum_{i=1}^{L}\sum_{k=1}^{M_{i}-1}\left\|\Phi_{q}(x_{i}(k),u_{i}(k))-x_{i}(k+1)\right\|^{2}\]
in which \(M_{i}\) denotes the length of the \(i\)th trace. In the evaluation example, this MSE performance will be used for evaluating model precision.
### _Reachable Set Analysis_
In this subsection, Problem 2, i.e., safety verification, will be addressed in the framework of reachability. The reachable set of hybrid automaton model \(\mathcal{H}\) is defined as follows.
**Definition 7**: _Given a neural-network-based hybrid automaton model \(\mathcal{H}\) with initial set \(\mathcal{X}_{0}\) and input set \(\mathcal{U}\), the reachable set at a time instant \(k\) is \(\mathcal{X}_{k}\triangleq\{x(k)\mid x(k)\text{ satisfies }\mathcal{H}\text{ and }x(0)\in \mathcal{X}_{0}\}\) and the reachable set over time interval \([0,k_{f}]\) is defined by \(\mathcal{X}_{[0,k_{f}]}=\bigcup_{s=0}^{k_{f}}\mathcal{X}_{s}\)._
Reachable set analysis for a neural-network-based hybrid automaton can be used for safety verification, as in [13]. Based on Definition 7, \(\mathcal{X}_{k}\) may intersect multiple elements of \(\mathscr{X}\), which means that the reachable set computation is split across these intersections. This process is called _Split_ and is defined by
**Definition 8**: _For a reachable set \(\mathcal{X}_{k}\) of \(\mathcal{H}\) that intersects with \(l\) elements of \(\mathscr{X}\), consider the subspaces \(\mathcal{V}_{i,k}\subseteq\mathcal{X}_{k}\) given by \(\mathcal{V}_{i,k}=\mathcal{X}_{k}\cap\mathcal{X}^{(m)}\) for some \(m\in\{1,\cdots,N\}\), with \(\cup_{i=1}^{l}\mathcal{V}_{i,k}=\mathcal{X}_{k}\), \(\forall i=1,\cdots,l\). The splitting analysis of the output space \(\mathcal{V}_{i,k+1}\) is given by_
\[\mathcal{V}_{i,k+1}\triangleq\{\eta_{i,k+1}\mid\eta_{i,k+1}=\Phi_{m}(\eta_{i, k}),\ \eta_{i,k}\in\mathcal{V}_{i,k}\} \tag{8}\]
_where the process of obtaining \(\mathcal{V}_{i,k+1},\ \forall i=1,2,\cdots,l\) is called Split._
After _Split_, the _Combine_ process is needed to obtain a complete reachable set for the next step.
**Definition 9**: _For \(\mathcal{V}_{i,k+1},\ i=1,\cdots,l\), the output reachable set \(\mathcal{X}_{k+1}\) of a neural-network-based hybrid automaton model \(\mathcal{H}\) at time step \(k+1\) is given by \(\mathcal{X}_{k+1}\triangleq\bigcup_{i=1}^{l}\mathcal{V}_{i,k+1}\), by which the Combine process derives the reachable set at time instance \(k+1\)._
With the _Split_ and _Combine_ defined above, the reachable set of \(\mathcal{H}\) can be analyzed in parallel at time instance \(k\) if \(\mathcal{X}_{k}\) intersects with multiple elements of \(\mathscr{X}\). The _Split_ and _Combine_ procedure is given in Algorithm 1.
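Since the pseudocode of Algorithm 1 is not reproduced here, the sketch below illustrates one Split-Combine step with axis-aligned boxes as the set representation; the interval propagation through a tanh network is a coarse over-approximation used purely for illustration (it is not the star-set analysis of [9]), and taking a bounding box in the Combine step adds further conservatism relative to Definition 9.

```python
import numpy as np

def interval_affine(W, b, lo, hi):
    """Interval propagation of an affine map W x + b over the box [lo, hi]."""
    Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b

def reach_box(net, lo, hi):
    """Over-approximate the image of a box through a tanh feedforward network."""
    for W, b in net:
        lo, hi = interval_affine(W, b, lo, hi)
        lo, hi = np.tanh(lo), np.tanh(hi)   # tanh is monotone, so endpoints suffice
    return lo, hi

def split_box(lo, hi, cells):
    """Split (Definition 8): intersect the reachable box with every cell of the partition."""
    pieces = []
    for q, (clo, chi) in enumerate(cells):
        ilo, ihi = np.maximum(lo, clo), np.minimum(hi, chi)
        if np.all(ilo <= ihi):
            pieces.append((q, ilo, ihi))
    return pieces

def combine(boxes):
    """Combine (Definition 9): here taken as the bounding box of the union, an extra over-approximation."""
    los = np.array([lo for lo, _ in boxes])
    his = np.array([hi for _, hi in boxes])
    return los.min(axis=0), his.max(axis=0)

# Example: two cells sharing the plane x1 = 0, one toy one-layer network per cell.
rng = np.random.default_rng(2)
cells = [(np.array([-1.0, -1.0]), np.array([0.0, 1.0])),
         (np.array([0.0, -1.0]), np.array([1.0, 1.0]))]
nets = [[(rng.standard_normal((2, 2)) * 0.5, np.zeros(2))] for _ in cells]

X_k = (np.array([-0.3, -0.2]), np.array([0.4, 0.3]))        # current reachable box
pieces = split_box(*X_k, cells)                              # Split
outputs = [reach_box(nets[q], lo, hi) for q, lo, hi in pieces]
print(combine(outputs))                                      # Combine
```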
**Remark 4**: _Note that if the output reachable set \(\mathcal{X}_{k}\) intersects with multiple valid partitions, the Split & Combine process may require more output reachable set computations than modeling with a single neural network, and the conservatism may increase because of Combine. However, according to [8, 9, 13], the computational cost of the set-valued analysis of a neural network is mainly determined by the scale of the neural network model, e.g., its number of layers and neurons. In our case, owing to the shallow neural network models trained in parallel, the neural-network-based hybrid automaton model may have lower computational cost compared with traditional methods._
## IV Evaluation
In this section, a numerical example of the limit cycle borrowed from [16] is used for evaluation in the form of
\[\begin{split} r(k+1)&=(1+\tau)r(k)-\tau r^{3}(k)+ \tau u(k)\\ \theta(k+1)&=\theta(k)+\tau\omega\\ u(k)&=\mu+\delta\zeta(k)\end{split} \tag{9}\]
where \(\omega=2\pi/3\) and \(\tau=0.1\) are the angular velocity and time step width, respectively, and \(\zeta(k)\sim U(-1,1)\) is a uniform random number. Namely, the input \(u(k)\sim U(\mu-\delta,\mu+\delta)\) (\(\mu=0.2\) and \(\delta=1.5\)), in which \(U\) denotes the uniform distribution.
Given dynamical system (9), 50 training traces \(w=\{w_{1},w_{2},\cdots,w_{50}\}\) with \(h_{i}=50,\ \forall i=1,2,\cdots,50\), are generated with random initial states, where \(\theta(0)\in[-\pi,\pi]\) and \(r(0)\in[-4,4]\). To derive the neural-network-based hybrid automaton \(\mathcal{H}\) that models the dynamics of system (9), the state region \(\mathcal{X}\) is first defined by \(x_{1}\in[-4,4]\) and \(x_{2}\in[-3,3]\). Based on \(\mathcal{X}\), valid partitions representing the _Topologies_ of \(\mathcal{H}\) are obtained with \(M_{1}=4\) and \(M_{2}=3\) segments in the two dimensions, totaling 12 topologies. The _Transitions_ between topologies are obtained from the relationship between samples and regions.
Then, by training a set of neural networks, each \(\Phi_{i}\) containing \(20\) hidden neurons for its topology \(q_{i}\), the hybrid automaton model \(\mathcal{H}\) is obtained. Moreover, for the sake of comparison, a single neural network model \(\Phi\) with \(200\) hidden neurons and a similar MSE is trained as well. Fig. 4 shows that both the single neural network and the hybrid automaton models can capture the system's behaviors well. However, in Table I, it can be observed that the hybrid automaton model \(\mathcal{H}\) has a lower MSE, which implies higher modeling precision. In addition, the training time can be significantly reduced by training the 12 neural networks in parallel.
Fig. 2: Sketch map for _Split_ while \(\mathcal{X}_{k}\) intersects with \(l=4\) valid partitions.
Fig. 3: 50 trajectories of the limit cycle with random initial condition \(r(0)\in[-4,4]\), \(\theta(0)\in[-\pi,\pi]\), each of which contains 150 samples, and the input \(u\sim U(-1.3,1.7)\).
Fig. 4: 50 test trajectories of the single neural network model (a) and the neural-network-based hybrid automaton model (b) with 12 neural networks.
Set-valued analysis using NNV [9] and _Split and Combine_, compared with a single neural-network-based dynamical model, is given in Fig. 5; the hybrid automaton model produces a tighter reachable set than the single neural network does. Moreover, the reachable set computation time is significantly reduced compared with the single neural network model, as shown in Fig. 6.
In summary, the evaluation results show that the developed neural-network-based hybrid automaton model can enable computationally efficient training and reachability analysis processes with better modeling accuracy and reachability analysis results for complex dynamical systems.
## V Conclusions
A data-driven neural-network-based hybrid automaton is developed to model complex dynamical systems. First, sampled data are generated by the dynamical system given random initial conditions and input series. Then, the output region is divided into valid partitions, based on which the topologies and guards of the proposed model are obtained. Neural networks are trained as the dynamical descriptions of their corresponding topologies. The set-valued reachability of the proposed model is analyzed by the reachable set estimation method together with _Split and Combine_. A numerical example of modeling a limit cycle with a neural-network-based hybrid automaton is given. Compared with the traditional modeling method using one single neural network, the computational cost of computationally expensive tasks such as reachable set computation can be significantly reduced.
|
2303.11079 | Differentially Private Algorithms for Synthetic Power System Datasets | While power systems research relies on the availability of real-world network
datasets, data owners (e.g., system operators) are hesitant to share data due
to security and privacy risks. To control these risks, we develop
privacy-preserving algorithms for the synthetic generation of optimization and
machine learning datasets. Taking a real-world dataset as input, the algorithms
output its noisy, synthetic version, which preserves the accuracy of the real
data on a specific downstream model or even a large population of those. We
control the privacy loss using Laplace and Exponential mechanisms of
differential privacy and preserve data accuracy using a post-processing convex
optimization. We apply the algorithms to generate synthetic network parameters
and wind power data. | Vladimir Dvorkin, Audun Botterud | 2023-03-20T13:38:58Z | http://arxiv.org/abs/2303.11079v1 | # Differentially Private Algorithms for Synthetic Power System Datasets
###### Abstract
While power systems research relies on the availability of real-world network datasets, data owners (e.g., system operators) are hesitant to share data due to security and privacy risks. To control these risks, we develop privacy-preserving algorithms for the synthetic generation of optimization and machine learning datasets. Taking a real-world dataset as input, the algorithms output its noisy, synthetic version, which preserves the accuracy of the real data on a specific downstream model or even a large population of those. We control the privacy loss using Laplace and Exponential mechanisms of differential privacy and preserve data accuracy using a post-processing convex optimization. We apply the algorithms to generate synthetic network parameters and wind power data.
## I Introduction
Power system datasets are instrumental for enhancing solutions to many problems, including optimal power flow (OPF) and wind power forecasting. Releasing real data, however, is challenging due to security and privacy concerns. For example, detailed network datasets inform false data injection attacks on SCADA systems [1], and strategic market players may leverage bidding records to maximize profits at the expense of deteriorating social welfare [2]. These concerns motivate producing synthetic datasets - a sanitized version of private datasets that approximately preserve accuracy of data for power system problems.
Differential privacy (DP) is an algorithmic notion of privacy preservation that enables quantifiable trade-offs between data privacy and accuracy [3]. It has found applications in the context of privacy-preserving OPF computations, e.g., in distributed control algorithms [4] and in centralized solvers for distribution and high-voltage grids [5, 6]. It has also been applied to enhance data privacy in machine learning problems in power systems [7]. Models in [4, 5, 6, 7], however, only control data leakages in computational results and do not provide synthetic data per se.
Producing synthetic datasets in a DP way is achieved by corrupting data with privacy-preserving noise [8, 9]. However, applications of the standard noise-additive DP mechanisms in power systems, such as the Laplace mechanism, may no longer admit a meaningful result. Indeed, adding noise to data may fundamentally alter important statistics and trends in machine learning datasets [10], e.g., monotonic dependency of wind power generation on wind speed. In the OPF context, the authors in [11] and [12] showed that the Laplacian perturbation of network parameters almost surely violates feasibility on a broad range of power system benchmarks. As a remedy, they proposed an optimization-based post-processing which restores the accuracy of synthetic OPF datasets without altering the privacy guarantee. The proposed restoration, however, renders the synthetic dataset feasible only for a particular OPF model. Repeated applications of the Laplace mechanism to restore accuracy on many OPF models (e.g., for different instances of variable renewable production) may not be possible, as noise must be scaled respecting the number of repetitions, as per composition of DP [3].
In this letter, we introduce two private synthetic dataset generation algorithms for power systems, which ensure the accuracy of synthetic datasets for downstream models. The algorithms enjoy a combination of known DP mechanisms and convex (and mixed-integer) optimization of synthetic data. Specifically, we develop:
1. Wind power obfuscation (WPO) algorithm which privately releases historical wind power measurements, while guaranteeing DP of the real data and ensuring accuracy in terms of the outcomes of a downstream regression analysis.
2. Transmission capacity obfuscation (TCO) algorithm, which releases synthetic line parameters, while ensuring that they remain feasible and cost-consistent with respect to real data on a population of OPF models. Here, we use both Laplace and Exponential mechanisms of DP to substantially reduce the noise compared to using the Laplace mechanism alone.
In the next section, we review the basic DP results. In Sections III and IV we present the two algorithms and their theoretical properties. Section V provides numerical experiments, and Section VI concludes. Proofs are relegated to the Appendix.
## II Preliminaries on Differential Privacy
This section reviews basic DP results serving as building blocks for our privacy-preserving dataset generation algorithms.
Consider a vector \(y\in\mathcal{Y}\subseteq\mathbb{R}^{n}\) collecting \(n\) private records from a dataset universe \(\mathcal{Y}\), and consider a query \(Q:\mathcal{Y}\mapsto\mathcal{R}\) as a mapping from universe \(\mathcal{Y}\) to some range \(\mathcal{R}\). Queries of interest include simple numerical queries, i.e., identity query \(Q(y)=y\), and optimization and ML queries, such as OPF or regression models. The goal is to make _adjacent_ vectors \(y,y^{\prime}\in\mathcal{Y}\) of private records, statistically indistinguishable in query answers.
**Definition 1** (Adjacency [13]): _Vectors \(y,y^{\prime}\in\mathcal{Y}\) are said to be \(\alpha-\)adjacent, denoted as \(y\sim_{\alpha}y^{\prime}\), if \(\exists i\in 1,\ldots,n\), s.t. \(y_{j}=y^{\prime}_{j},\forall j\in\{1,\ldots,n\}\setminus i\), and \(|y_{i}-y^{\prime}_{i}|\leqslant\alpha\) for \(\alpha>0\). That is, the adjacent datasets are different only in one item by at most \(\alpha\)._
A statistical similarity of query answers is captured by the notion of differential privacy, attained through randomization.
**Definition 2** (\(\varepsilon-\)differential privacy [3]): _A random query \(\tilde{Q}:\mathcal{Y}\mapsto\mathcal{R}\) is \(\varepsilon-\)differentially private if, for any output \(r\subseteq\mathcal{R}\)
and any \(\alpha-\)adjacent vectors \(y,y^{\prime}\in\mathcal{Y}\), the following ratio holds
\[\frac{\text{Pr}[\tilde{Q}(y\,)=r]}{\text{Pr}[\tilde{Q}(y^{\prime})=r]}\leqslant \text{exp}(\varepsilon). \tag{1}\]
where probability is with respect to the randomness of \(\tilde{Q}\).
Privacy parameter \(\varepsilon>0\) is termed _privacy loss_: with smaller \(\varepsilon\) we achieve stronger privacy protection. Indeed, for small \(\varepsilon\) we have \(\text{exp}(\varepsilon)\approx 1+\varepsilon\), thereby making any two adjacent datasets \(y\) and \(y^{\prime}\) statistically similar in the answer of the randomized query.
**Theorem 1** (Sequential composition [3]): _A series \(\tilde{Q}_{1}(y),\ldots,\)\(\tilde{Q}_{k}(y)\) of \(\varepsilon_{i}-\)DP queries on dataset \(y\) satisfies \(\sum_{i=1}^{k}\varepsilon_{i}-\)DP._
**Theorem 2** (Post-processing immunity [3]): _If query \(\tilde{Q}\) satisfies \(\varepsilon\)-DP, then \(g\circ\tilde{Q}(y)\), where \(g\) is an arbitrary, data-independent post-processing of the query answer, also satisfies \(\varepsilon\)-DP._
The first result bounds the privacy loss over multiple queries, and the second result states that any data-independent transformation of a DP query answer preserves the privacy guarantee.
A numerical query is made DP by adding random noise to its output. The noise magnitude depends on the worst-case sensitivity \(\delta_{Q}\) of query \(Q\) to adjacent datasets, defined as
\[\delta_{Q}=\text{max}_{y\sim_{\alpha}y^{\prime}}\left\|Q(y)-Q(y^{\prime})\right\|_{1}.\]
Let \(\text{Lap}(\lambda)^{k}\) denote a sample from the \(k-\)dimensional Laplace distribution with zero mean and scale parameter \(\lambda\). DP of a numerical query is then achieved with the following result.
**Theorem 3** (Laplace mechanism [14]): _Let \(Q\) be a query that maps datasets to \(\mathbb{R}^{k}\). Then, the Laplace mechanism which outputs \(Q(y)+\text{Lap}(\delta_{Q}/\varepsilon)^{k}\) achieves \(\varepsilon-\)DP._
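As a minimal sketch, the Laplace mechanism of Theorem 3 amounts to one line of NumPy; the sensitivity and privacy loss in the example are placeholders.

```python
import numpy as np

def laplace_mechanism(query_value, sensitivity, epsilon, rng=None):
    """Release Q(y) + Lap(delta_Q / epsilon)^k, as in Theorem 3."""
    rng = rng or np.random.default_rng()
    return query_value + rng.laplace(0.0, sensitivity / epsilon, size=np.shape(query_value))

# Example: release a vector of 5 private records with sensitivity alpha = 0.1.
y = np.array([0.2, 0.5, 0.9, 0.4, 0.7])
print(laplace_mechanism(y, sensitivity=0.1, epsilon=1.0))
```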
We also like to limit privacy losses when answering non-numerical queries. For example, given a population \(\mathcal{Q}\) of queries, we would like to answer the question: _which query \(Q\in\mathcal{Q}\) gives the maximum value on a private dataset \(y\)?_ The following Exponential mechanism answers this question privately.
**Theorem 4** (Exponential mechanism [15]): _Let \(\mathcal{Q}\) be a query population, and let \(u:\mathcal{Y}\times\mathcal{Q}\mapsto\mathbb{R}\) be the score function with sensitivity \(\delta_{u}\). Then, the Exponential mechanism which outputs query \(Q\in\mathcal{Q}\) with probability proportional to \(\text{exp}\left(\frac{\varepsilon u(y,Q)}{2\delta_{u}}\right)\) attains \(\varepsilon-\)DP._
For discrete populations of queries, i.e., \(\mathcal{Q}=\{Q_{1},\ldots,Q_{m}\}\), we can adopt the report-noisy-max algorithm [3, §3.3] - an efficient implementation of the exponential mechanism for finite \(\mathcal{Q}\).
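For a finite query population, the Exponential mechanism of Theorem 4 can also be implemented by direct sampling, as sketched below; the score values in the example are placeholders.

```python
import numpy as np

def exponential_mechanism(scores, sensitivity, epsilon, rng=None):
    """Pick an index with probability proportional to exp(eps * u / (2 * delta_u)), cf. Theorem 4."""
    rng = rng or np.random.default_rng()
    logits = epsilon * np.asarray(scores, dtype=float) / (2.0 * sensitivity)
    probs = np.exp(logits - logits.max())   # shift for numerical stability
    probs /= probs.sum()
    return rng.choice(len(scores), p=probs)

# Example: privately select the query with (approximately) the highest score.
print(exponential_mechanism([12.0, 3.5, 11.8, 0.2], sensitivity=1.0, epsilon=1.0))
```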
Next, we leverage these results to design DP algorithms for synthetic dataset generation as applicable to power systems.
## III Privacy-Preserving Wind Power Dataset Release
Consider the problem of a wind turbine operator (data owner) who wants to release synthetic wind power records in a differentially private way. The real dataset \(\mathcal{D}=\{(x_{1},y_{1}),\ldots,(x_{m},y_{m})\}\) consists of \(m\) records, where each record \(i\) includes some public weather data \(x_{i}\in\mathbb{R}^{n}\) and a private power measurement \(y_{i}\in\mathbb{R}\) subject to obfuscation. The release of the synthetic dataset takes the form \(\tilde{\mathcal{D}}=\{(x_{1},\tilde{y}_{1}),\ldots,(x_{m},\tilde{y}_{m})\}\), where \(\tilde{y}_{i}\) is a synthetic measurement. To provide formal privacy guarantees in this release, we could perturb each real record \(y_{i}\) with the Laplace mechanism of Theorem 3. However, the application of the Laplace mechanism alone is ignorant of the accuracy of the resulting dataset in the downstream analysis, and such a release may not be useful. We discuss the dataset accuracy in terms of the outcomes of a regression (downstream) problem
\[\underset{\beta}{\text{minimize}}\quad\left\|X\beta-y\right\|+\lambda\left\| \beta\right\|, \tag{2}\]
which minimizes the loss function by optimally choosing regression weights \(\beta\in\mathbb{R}^{p}\), given some small regularization parameter \(\lambda\) to prevent overfitting. Here, matrix \(X\in\mathbb{R}^{m\times p}\) collects weather features; we do not require \(p=n\), as model (2) may not include all meteorological data from \(\mathcal{D}\) and may also enjoy certain feature transformations (e.g., squared wind speeds). The goal is thus to release a synthetic dataset \(\tilde{\mathcal{D}}\) whose regression loss and weights are consistent with those on the real dataset. On a particular vector of measurements \(\overline{y}\), we denote the regression loss and weights by \(\ell(\overline{y})\) and \(\beta(\overline{y})\), respectively. To estimate them on the real dataset privately, we need to bound their sensitivities to adjacent datasets.
**Lemma 1** (Regression sensitivity bounds): _For any two \(\alpha\)-adjacent vectors of wind power measurements \(y,y^{\prime}\in\mathbb{R}^{m}\), the worst-case sensitivity of regression weights is bounded as_
\[\delta_{\beta}=\text{max}_{y\sim_{\alpha}y^{\prime}}\left\|\beta(y)-\beta(y^{\prime})\right\|_{1}\leqslant\left\|(X^{\top}X+\lambda I)^{-1}X^{\top}\right\|_{1}\alpha,\]
_and the worst-case sensitivity of the regression loss_
\[\delta_{\ell}=\text{max}_{y\sim_{\alpha}y^{\prime}}\left\|\ell(y)-\ell(y^{\prime})\right\|_{1}\]
_is bounded by the solution of the following problem:_
\[\delta_{\ell}\leqslant\underset{i=1,\ldots,m}{\text{maximize}}\left\|(X(X^{ \top}X+\lambda I)^{-1}X^{\top}-I)(e_{i}\circ\alpha)\right\|.\]
Proof: See Appendix A.
Importantly, the two bounds only depend on public information, i.e., weather features, regularization and adjacency parameters, and are completely independent of the private measurements \(y\).
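A direct transcription of the two bounds is given below, reading the matrix norm in Lemma 1 as the operator norm induced by the vector 1-norm (the maximum absolute column sum) and the loss norm as the Euclidean norm; the random feature matrix in the example is a placeholder.

```python
import numpy as np

def regression_sensitivities(X, lam, alpha):
    """Public bounds on the ridge-regression weight and loss sensitivities (Lemma 1)."""
    m, p = X.shape
    H = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T)     # (X'X + lam I)^{-1} X'
    delta_beta = np.linalg.norm(H, 1) * alpha               # induced 1-norm times alpha
    M = X @ H - np.eye(m)                                   # X(X'X + lam I)^{-1} X' - I
    delta_loss = np.linalg.norm(M, axis=0).max() * alpha    # worst column, scaled by alpha
    return delta_beta, delta_loss

# Example with a random (public) feature matrix.
rng = np.random.default_rng(3)
print(regression_sensitivities(rng.standard_normal((100, 5)), lam=1e-3, alpha=0.05))
```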
### _Differentially Private WPO Algorithm_
We now introduce the privacy-preserving wind power obfuscation (WPO) Algorithm 1. The algorithm takes the real dataset, privacy and regularization parameters as inputs, and produces a consistent synthetic dataset of wind power records. It relies on Lemma 1 to privately reveal regression results on a real dataset, and then leverages them to restore the consistency of the synthetic
dataset using a post-processing optimization. Specifically, at Step 1, the algorithm initializes the synthetic datasets using the Laplace mechanism. Then, at Step 2, it computes the approximate regression loss and weights using the Laplace mechanism twice. At the last Step 3, the synthetic dataset undergoes optimization-based post-processing to ensure that the regression results on the synthetic dataset are consistent with those on the real data.
The post-processing is based on the hierarchical optimization (3), where the upper-level problem (3a)-(3b) optimizes the synthetic dataset \(\tilde{y}\) in response to the outcomes of the embedded lower-level regression problem (3c). In the upper-level objective, the first term improves the consistency in terms of regression loss, while the second and third terms are used for regularizing the synthetic dataset. Indeed, the losses \(l\) and \(\overline{l}\) can be matched with infinitely many assignments of \(\beta\) and \(\tilde{y}\). Thus, by setting a small parameter \(\gamma_{\beta}>0\), the matching is achieved with a close approximation of the regression weights on the real data. Similarly, by setting a small parameter \(\gamma_{y}>0\), we regularize the new data points according to the perturbation of real data points at Step 1. Finally, the upper-level constraint (3b) guarantees that the synthetic dataset respects the nominal power limits.
While the hierarchical optimization (3) is originally intractable, we arrive at its tractable convex reformulation by substituting the lower-level problem (3c) with the following constraints:
\[\beta=(X^{\top}X+\lambda I)^{-1}X^{\top}\tilde{y}, \tag{4a}\] \[\|X\beta-\tilde{y}\|\leqslant\ell, \tag{4b}\]
where the linear constraint (4a) is the closed-form solution to regression weights on vector \(\tilde{y}\), and the conic constraint (4b) is used to compute the loss on the same vector and weights.
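Since the hierarchical problem (3) is not reproduced here, the following CVXPY sketch shows one plausible form of the post-processing step built on the substitution (4a)-(4b): an epigraph variable stands in for the regression loss, and the function name, objective weights, and toy inputs are placeholders rather than the exact formulation of the paper.

```python
import cvxpy as cp
import numpy as np

def wpo_postprocess(X, y_noisy, loss_noisy, beta_noisy, lam, gamma_beta=1e-5, gamma_y=1e-5):
    """Sketch of the WPO post-processing (Step 3) using the substitution (4a)-(4b)."""
    m, p = X.shape
    y_syn = cp.Variable(m)                      # synthetic measurements to optimize
    ell = cp.Variable(nonneg=True)              # epigraph variable for the regression loss
    H = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T)
    beta = H @ y_syn                            # regression weights on y_syn, cf. (4a)
    constraints = [y_syn >= 0, y_syn <= 1,      # nominal power limits
                   cp.norm(X @ beta - y_syn, 2) <= ell]     # loss on synthetic data, cf. (4b)
    objective = cp.Minimize(cp.abs(ell - loss_noisy)
                            + gamma_beta * cp.norm(beta - beta_noisy, 2)
                            + gamma_y * cp.norm(y_syn - y_noisy, 2))
    cp.Problem(objective, constraints).solve()
    return y_syn.value

# Toy usage with random stand-in data and placeholder noisy releases.
rng = np.random.default_rng(4)
X = rng.uniform(0.0, 1.0, size=(50, 3))
y_noisy = np.clip(rng.uniform(0, 1, 50) + rng.laplace(0, 0.1, 50), 0, 1)
print(wpo_postprocess(X, y_noisy, loss_noisy=1.0, beta_noisy=np.zeros(3), lam=1e-3)[:5])
```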
We now state the \(\varepsilon-\)DP guarantee of this algorithm.
**Theorem 5** (DP of the WPO Algorithm): _Setting \(\varepsilon_{1}=\varepsilon/2\) and \(\varepsilon_{2}=\varepsilon/4\) renders Algorithm 1 \(\varepsilon-\)DP for \(\alpha-\)adjacent wind power datasets._
Proof: See Appendix B.
## IV Privacy-Preserving DC-OPF Dataset Release
We now consider a problem of a power system operator who wants to release a synthetic network dataset in a differentially private way. The goal is to guarantee not only privacy but also accuracy with respect to possible downstream computations on the synthetic dataset. We consider the DC-OPF problem as the main computational task. We also specifically focus on the release of transmission capacity data, though other network parameters (loads, generation cost, etc.) can be released similarly.
The OPF problem models operations in a power network with \(n\) buses and \(e\) transmission lines. The goal is to compute the least-cost generation dispatch \(p\in\mathbb{R}^{n}\) while satisfying electric loads \(d\in\mathbb{R}^{n}_{+}\). Generators produce at linear costs \(c\in\mathbb{R}^{n}_{+}\) within the minimum and maximum technical limits, encoded in set \(\mathcal{P}=\{p\mid\underline{p}\leqslant p\leqslant\overline{p}\}\). The DC power flows are modeled using the power transfer distribution matrix \(F\in\mathbb{R}^{e\times n}\), and resulting power flows \(\varphi=F(p-d)\in\mathbb{R}^{e}\) are limited by line capacities \(\overline{f}\in\mathbb{R}^{e}_{+}\).
Suppose that there is a set \(1,\ldots,m\) of OPF models, where each model \(i\) includes a specific cost vector \(c_{i}\), generation limits in set \(\mathcal{P}_{i}\), and electric loads \(d_{i}\). The transmission data, i.e., topology encoded in \(F\) and capacity \(\overline{f}\), remain the same. Each OPF model \(i\) is then described by a tuple \(\langle c_{i},d_{i},\mathcal{P}_{i},F,\overline{f}\rangle\). Given the real OPF dataset \(\langle c_{i},d_{i},\mathcal{P}_{i},F,\overline{f}\rangle_{i=1}^{m}\), the goal is to produce its synthetic version \((c_{i},d_{i},\mathcal{P}_{i},F,\overline{\varphi})_{i=1}^{m}\) with an obfuscated transmission capacity vector \(\overline{\varphi}\), which permits feasible and cost-consistent - with respect to real data - OPF outcomes across \(m\) models.
Towards the goal, we formulate a DC-OPF problem parameterized by the synthetic transmission capacity \(\overline{\varphi}\):
\[\mathcal{C}_{i}(\overline{\varphi})=\underset{p\in\mathcal{P}_{i}}{\text{minimize}}\quad c_{i}^{\top}p \tag{5a}\]
subject to
\[1^{\top}(p-d_{i})=0, \tag{5b}\]
\[\left|F(p-d_{i})\right|\leqslant\overline{\varphi}, \tag{5c}\]
where the objective function (5a) minimizes OPF costs, denoted by \(\mathcal{C}_{i}(\overline{\varphi})\), subject to power balance (5b), flow and generation limits in (5c) and \(\mathcal{P}_{i}\), respectively; all specific to a particular model \(i\). We make two assumptions on problem (5).
**Assumption 1** (Feasibility): \(\mathcal{C}_{i}(\overline{f})\) _exists for all \(i=1,\ldots,m\)._
**Assumption 2** (Sensitivity): _Let \(\overline{\varphi}_{1}\sim_{\alpha}\overline{\varphi}_{2}\) be two \(\alpha-\)adjacent vectors of transmission capacities. Then, \(\left\|\mathcal{C}_{i}(\overline{\varphi}_{1})-\mathcal{C}_{i}(\overline{ \varphi}_{2})\right\|_{1}\leqslant\overline{c}\alpha,\forall i=1,\ldots,m\), where \(\overline{c}\) is the maximum cost coefficient._
The former requires OPF feasibility of the real transmission capacity data on all historical records, and the latter bounds the change in OPF costs to the cost of the most expensive unit.
As a perturbed vector of line capacities may not be OPF feasible, we additionally introduce the relaxed OPF problem to give a numerical value to infeasibility of a particular vector \(\overline{\varphi}\):
\[\mathcal{C}_{i}^{R}(\overline{\varphi})=\underset{p\in\mathcal{P}_{i},v\geqslant 0}{\text{minimize}}\quad c_{i}^{\top}p+\psi^{\top}v \tag{6a}\]
subject to
\[1^{\top}(p-d_{i})=0, \tag{6b}\]
\[\left|F(p-d_{i})\right|\leqslant\overline{\varphi}+v, \tag{6c}\]
where the slack variable \(v\in\mathbb{R}^{e}\) renders the OPF solution feasible for any assignment \(\overline{\varphi}\). That is, infeasible vectors \(\overline{\varphi}\) for problem (5) translate into costs \(\mathcal{C}_{i}^{R}(\overline{\varphi})\) using penalty scalar \(\psi\gg\overline{c}\).
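The relaxed problem (6) is a linear program and can be prototyped in a few lines of CVXPY; the three-bus data below are made-up illustrative numbers, and the matrix \(F\) is not a physically derived PTDF.

```python
import cvxpy as cp
import numpy as np

def relaxed_opf_cost(c, d, p_min, p_max, F, f_bar, psi=3e3):
    """Relaxed DC-OPF (6): slack v makes any capacity vector feasible and prices
    flow-limit violations with the penalty psi."""
    n, e = len(c), F.shape[0]
    p = cp.Variable(n)
    v = cp.Variable(e, nonneg=True)
    constraints = [p >= p_min, p <= p_max,                  # generation set P_i
                   cp.sum(p - d) == 0,                      # power balance (6b)
                   cp.abs(F @ (p - d)) <= f_bar + v]        # relaxed flow limits (6c)
    prob = cp.Problem(cp.Minimize(c @ p + psi * cp.sum(v)), constraints)
    prob.solve()
    return prob.value

# Toy 3-bus, 2-line example (illustrative numbers only).
c = np.array([85.0, 90.0, 95.0]); d = np.array([0.0, 60.0, 40.0])
p_min = np.zeros(3); p_max = np.array([80.0, 40.0, 40.0])
F = np.array([[0.6, -0.2, 0.0], [0.2, 0.4, 0.0]]); f_bar = np.array([30.0, 30.0])
print(relaxed_opf_cost(c, d, p_min, p_max, F, f_bar))
```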
### _Differentially Private TCO Algorithm_
We now introduce the privacy-preserving transmission capacity obfuscation (TCO) Algorithm 2 for DC-OPF datasets. Here, Step 1 initializes the synthetic dataset \(\overline{\varphi}^{0}\) by perturbing the real data using the Laplace mechanism, and the remaining steps post-process the synthetic dataset. Step 2 runs the report-noisy-max algorithm, a discrete version of the Exponential mechanism [3], to privately identify the worst-case OPF model. Here, the score function, \(\Delta\mathcal{C}\), is the \(L_{1}\) norm which measures the distance between the OPF costs on the real and synthetic datasets. Then, Step 3 uses the Laplace mechanism to estimate the cost of the worst-case OPF model on the real data. Step 4 post-processes the synthetic dataset using a bilevel optimization problem (7), where \(\mathcal{C}_{k^{\star}}(\overline{\varphi})\) is the OPF cost of the identified worst-case model \(k^{\star}\) obtained from the embedded DC-OPF problem (5) for a fixed vector \(\overline{\varphi}\). By embedding the OPF problem as a constraint, we require feasibility and cost-consistency of \(\overline{\varphi}\) with respect to the worst-case OPF models identified at previous steps. In addition, with the last term in (7a), we regularize the solution \(\overline{\varphi}\) to make
sure that the changes in synthetic capacities are only guided by feasibility and cost-consistency requirements.
The major difference between the WPO and TCO algorithms is that the latter terminates after repeating Steps 2 to 4\(T\) times. The OPF feasibility for one model does not guarantee feasibility across the whole population of models. By increasing \(T\), the TCO algorithm finds more worst-case OPF models with the largest cost \(\mathcal{C}_{1}^{R}\) of violations, thereby improving the accuracy (feasibility) of the synthetic dataset across the population.
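The two private queries made in each TCO iteration (Steps 2 and 3) can be sketched as follows, assuming the OPF costs on the real data and on the current synthetic capacities have already been computed from problems (5) and (6); the score sensitivity is taken as \(\overline{c}\alpha\) via Assumption 2, and this calibration reflects our reading of the algorithm rather than its verbatim pseudocode.

```python
import numpy as np

def tco_private_queries(opf_costs_real, opf_costs_syn, c_max, alpha, eps2, rng=None):
    """Step 2: pick the worst-case OPF model via the Exponential mechanism;
    Step 3: release its real cost via the Laplace mechanism. Each query is eps2-DP."""
    rng = rng or np.random.default_rng()
    scores = np.abs(np.asarray(opf_costs_syn) - np.asarray(opf_costs_real))  # score Delta C
    delta_u = c_max * alpha                       # score sensitivity, via Assumption 2
    logits = eps2 * scores / (2.0 * delta_u)
    probs = np.exp(logits - logits.max()); probs /= probs.sum()
    k_star = rng.choice(len(scores), p=probs)                                # Step 2
    noisy_cost = opf_costs_real[k_star] + rng.laplace(0.0, delta_u / eps2)   # Step 3
    return k_star, noisy_cost

# Toy usage with made-up cost vectors for m = 5 OPF models.
real = np.array([1000.0, 1200.0, 900.0, 1100.0, 1050.0])
syn = np.array([1005.0, 1600.0, 910.0, 1110.0, 1400.0])
print(tco_private_queries(real, syn, c_max=100.0, alpha=10.0, eps2=0.125))
```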
To arrive at a tractable mixed-integer reformulation of problem (7), we substitute constraint (7b) with the Karush-Kuhn-Tucker conditions of problem (5); we refer to [16, §6] for details. Notably, problem (7) only relies on obfuscated data. Hence, by Theorem 2, it does not induce any privacy loss. We now state the \(\varepsilon-\)DP guarantee of the entire algorithm.
**Theorem 6** (DP of the TCO Algorithm): _Setting \(\varepsilon_{1}=\varepsilon/2\) and \(\varepsilon_{2}=\varepsilon/(4T)\) renders Algorithm 2 \(\varepsilon-\)DP for \(\alpha-\)adjacent DC-OPF datasets._
Proof: See Appendix C.
**Remark 1** (Relation to prior work): _When \(m=T=1\), Step 2 in Algorithm 2 becomes redundant, and the algorithm replicates the Laplace-based PLO mechanism in [11], when applied to capacity obfuscation in the DC-OPF setting. The difference between the two algorithms becomes apparent when the synthetic dataset must be accurate, i.e., feasible and cost-consistent, on a population of OPF models, i.e., \(m\gg 1\). Indeed, the worst-case OPF model and cost could also be estimated using Laplace perturbations, but the induced privacy loss would reach \(mT\varepsilon_{2}\). The combination of the Exponential and Laplace mechanisms in Steps 2 and 3 of Algorithm 2, however, reduces the privacy loss to \(2T\varepsilon_{2}\)._
## V Numerical Experiments
In our experiments, we fix the privacy loss \(\varepsilon=1\) and vary adjacency parameter \(\alpha\), thereby increasing the range of adjacent datasets, which are required to be statistically indistinguishable. All data and codes to replicate the results are available online:
[https://github.com/wdvorkin/SyntheticData](https://github.com/wdvorkin/SyntheticData)
### _Synthetic Wind Power Records Generation_
We first demonstrate the WPO Algorithm 1 for a privacy-preserving release of wind power records. We use the theoretical wind power curve of the General Electric GE-2.75.103 turbine from [17], considering a medium range of wind speeds between \(2.5\) and \(12.5\)\(\nicefrac{{m}}{{s}}\), where the power output is most sensitive to speed. We then perturb each power output with a Gaussian noise \(\mathcal{N}(0,0.1)\) to introduce some variation among the records; the dataset is thus not completely real, but resembles real-life datasets which we hope to eventually release with our algorithm. In the dataset, we have \(m=1,000\) normalized power measurements \(y\in[0,1]^{m}\) and corresponding wind speeds \(x\).
We specify regression (2) as follows. First, we transform the wind speed records using \(p=5\) Gaussian radial basis functions:
\[\varphi_{j}(x)=e^{-\left(\frac{1}{2}\left\|x-\mu_{j}\right\|\right)^{2}}, \forall j=1,\ldots,p,\]
positioned at \(\mu=\{2.5,5,7.5,10,12.5\}\)\(\nicefrac{{m}}{{s}}\). Each feature in \(X\) is then obtained as \(X_{ij}=\varphi_{j}(x_{i})\), \(\forall i=1,\ldots,m\), \(\forall j=1,\ldots,p\). Finally, we set the regularization parameter as \(\lambda=10^{-3}\).
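For reference, the feature matrix \(X\) can be assembled as below; the scale factor of \(1/2\) inside the exponent matches the basis functions above, and the sample wind speeds are placeholders.

```python
import numpy as np

def rbf_features(wind_speed, centers, scale=0.5):
    """Gaussian radial basis features phi_j(x) = exp(-(scale * |x - mu_j|)^2)."""
    x = np.asarray(wind_speed, dtype=float).reshape(-1, 1)
    mu = np.asarray(centers, dtype=float).reshape(1, -1)
    return np.exp(-(scale * np.abs(x - mu)) ** 2)

# Features for a few wind speeds, with centers at 2.5, 5, 7.5, 10, 12.5 m/s.
X = rbf_features([3.0, 6.0, 11.0], centers=[2.5, 5.0, 7.5, 10.0, 12.5])
print(X.round(3))
```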
We use the standard Laplace mechanism as a reference method, which perturbs power records as \(\tilde{y}=y+\text{Lap}(\alpha/\varepsilon)^{m}\), and projects them onto feasible range \([0,1]^{m}\). The resulting synthetic records satisfy \(\varepsilon-\)DP for \(\alpha-\)adjacent datasets, as per Theorems 2 and 3. To guarantee \(\varepsilon-\)DP for the WPO algorithm, we set \(\varepsilon_{1}\) and \(\varepsilon_{2}\) according to Theorem 5. We also set regularization parameters \(\gamma_{y},\gamma_{\beta}=10^{-5}\) for post-processing in (3).
Figure 1 demonstrates some examples of synthetic wind power dataset releases. Here, we measure adjacency \(\alpha\) in \(\%\) of the nominal capacity of the wind turbine. Observe that, with increasing \(\alpha\), the regression-agnostic Laplace mechanism yields a larger deviation of the synthetic records from the real data. While the WPO algorithm introduces even more noise, i.e., \(\times 3\) more noise at Step 1 and additional noise at Step 2 due to sensitivities \(\delta_{\ell}\) and \(\delta_{\beta}\) growing in \(\alpha\), the post-processing of the noisy records at Step 3 results in a better accuracy of the synthetic dataset. In Fig. 2, we demonstrate the statistical significance of this observation by plotting the loss on synthetic datasets under the two methods. With increasing \(\alpha\), the Laplace mechanism demonstrates a notable deviation from the loss on real data. The WPO algorithm, on the other hand, converges to the real loss on average and does not significantly deviate from it.
### _Synthetic Transmission Data Generation_
We apply the TCO algorithm to a network data release from the IEEE 73-Bus Reliability Test System. To make the case more challenging, we reduce the transmission capacity to \(60\%\) of the nominal level to increase network congestion. We generate \(m=10^{3}\) feasible DC-OPF datasets by sampling demand and generation limits from uniform distributions with bounds \(\pm 12.5\)% of their nominal values. The cost data is sampled from a uniform
distribution \(\mathcal{U}(80,100)\) $/MWh, and we set penalty \(\psi=3\cdot 10^{3}\) in (6a) for flow limit violations. The privacy loss \(\varepsilon\) is split according to Theorem 6. Finally, we vary adjacency parameter \(\alpha\) from 5 to 30 MW and iteration limit \(T\) from 1 to 10.
By increasing \(\alpha\), we increase the noise magnitude at Step 1 of the TCO algorithm, resulting in a broader distribution of synthetic dataset outcomes, as depicted by the box plots in Fig. 3 for one selected transmission line. However, as the noise increases, the probability of obtaining an infeasible synthetic dataset also increases. We thus increase the iteration limit \(T\) to improve the accuracy of the synthetic dataset. By setting \(T\), we require feasibility and cost-consistency with respect to the set of \(T\) worst-case OPF models and outcomes, provided at Steps 2 and 3, respectively. Such deeper post-processing results in distributional shifts, as further shown in Fig. 3 for increasing \(T\). The virtue of these shifts is revealed in Fig. 4, where the top row demonstrates how the probability of infeasible OPF outcomes on synthetic datasets reduces as the iteration limit increases. For smaller adjacency, it takes fewer iterations to restore the feasibility of the synthetic dataset. For example, for \(\alpha=5\) MW, it is enough to leverage \(6\) worst-case OPF models in the post-processing optimization at Step 4 to restore feasibility across the entire population of \(1,000\) OPF models. For larger adjacency parameters, it takes as many as 10 iterations on average. The bottom row in Fig. 4 depicts the mean sub-optimality of OPF models on the synthetic dataset, computed as:
\[\Delta\mathcal{C}=\frac{1}{m}\sum_{i=1}^{m}\frac{\left\|\mathcal{C}_{i}( \overline{f})-\mathcal{C}_{i}^{R}(\overline{\varphi}^{T})\right\|}{\mathcal{C }_{i}(\overline{f})}\times 100\%. \tag{8}\]
The sub-optimality of synthetic datasets increases in adjacency parameter \(\alpha\), as more noise corrupts the real data. However, as we increase \(T\), the OPF cost on synthetic data gets closer to that on the real data. Eventually, the sub-optimality is kept very close to zero without violating the privacy of the real dataset.
## VI Conclusions
We developed two algorithms for privacy-preserving releases of synthetic wind power records and transmission capacity data. The former obfuscates power records by adding Laplacian noise to data and then post-processes the noisy data to privately restore accuracy using a reference machine learning model, thereby improving on the application of the Laplace mechanism alone. The latter enjoys both Laplace and Exponential mechanisms to release cost-consistent transmission data while ensuring the feasibility on a population of heterogeneous OPF models, without the need of drastically scaling the noise. Our results showed that identifying 10 worst-case OPF models suffices to restore data accuracy across the population of 1,000 models, on average.
Fig. 1: Wind power dataset obfuscation for the General Electric 2.75 MW turbine using the Laplace mechanism (top row) and the WPO algorithm (bottom row).
Fig. 2: The mean and 90% confidence band of the regression loss on synthetic datasets for 300 runs of the Laplace mechanism and WPO algorithm.
Fig. 3: Distributions of obfuscation outcomes for line #40 across 300 runs of the TCO algorithm for varying adjacency parameter \(\alpha\) and iteration limit \(T\).
Fig. 4: Infeasibility and sub-optimality of synthetic DC-OPF datasets. Top row: percentage of infeasible OPF solutions across a population of \(m=1,000\) OPF models. Bottom row: the mean sub-optimality \(\Delta\mathcal{C}\) of OPF costs on synthetic datasets. The mean values are provided with 90% confidence bands.
### _Proof of Lemma 1_
The worst-case sensitivity of regression weights is bounded as:
\[\delta_{\beta} =\underset{y\sim y^{\prime}}{\text{maximize}}\ \left\|\beta(y)-\beta(y^{\prime})\right\|_{1} \tag{9a}\] \[=\underset{y\sim y^{\prime}}{\text{maximize}}\ \left\|(X^{\top}X+\lambda I)^{-1}X^{\top}(y-y^{\prime})\right\|_{1}\] (9b) \[\leqslant\left\|(X^{\top}X+\lambda I)^{-1}X^{\top}\right\|_{1}\cdot\underset{y\sim y^{\prime}}{\text{maximize}}\ \left\|y-y^{\prime}\right\|_{1}\] (9c) \[\leqslant\left\|(X^{\top}X+\lambda I)^{-1}X^{\top}\right\|_{1}\cdot\alpha \tag{9d}\]
where equality (9b) is from the closed-form solution to the ridge regression, inequality (9c) is due to Hölder's inequality, and inequality (9d) is due to Definition 1 of adjacent datasets.
The sensitivity of regression loss \(\ell\) is bounded as:
\[\delta_{\ell} =\underset{y\sim y^{\prime}}{\text{maximize}}\ \left\|\ell(y)-\ell(y^{\prime})\right\|_{1} \tag{10a}\] \[=\underset{y\sim y^{\prime}}{\text{maximize}}\ \left\|\left\|X\beta(y)-y\right\|-\left\|X\beta(y^{\prime})-y^{\prime}\right\|\right\|_{1}\] (10b) \[\leqslant\underset{y\sim y^{\prime}}{\text{maximize}}\ \left\|X(\beta(y)-\beta(y^{\prime}))-(y-y^{\prime})\right\|\] (10c) \[=\underset{y\sim y^{\prime}}{\text{maximize}}\ \left\|(X(X^{\top}X+\lambda I)^{-1}X^{\top}-I)(y-y^{\prime})\right\|\] (10d) \[=\underset{i=1,\dots,m}{\text{maximize}}\ \left\|(X(X^{\top}X+\lambda I)^{-1}X^{\top}-I)(e_{i}\circ\alpha)\right\| \tag{10e}\]
where inequality (10c) is due to the reverse triangle inequality, and equality (10d) is from the closed-form solution to the ridge regression. Equality (10e) originates from Definition 1 of adjacent datasets, i.e., datasets that differ in one element by at most \(\alpha\). It is thus enough to find the index \(i\) of that element which maximizes the norm.
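Both bounds of Lemma 1 can be evaluated numerically before any data release. A minimal sketch is given below; it assumes the induced matrix 1-norm (maximum absolute column sum) in (9d) and the Euclidean norm of the regression residual in (10e), and the variable names are illustrative.

```python
import numpy as np

def regression_sensitivities(X, lam, alpha):
    """Worst-case sensitivities of ridge-regression weights and loss (Lemma 1)."""
    n, p = X.shape
    # A = (X^T X + lam I)^{-1} X^T  maps targets y to weights beta(y)
    A = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T)
    # (9d): induced 1-norm (maximum absolute column sum) times adjacency alpha
    delta_beta = np.linalg.norm(A, 1) * alpha
    # (10e): residual ("hat" minus identity) matrix applied to alpha * e_i,
    # maximized over the element i that may differ between adjacent datasets
    H = X @ A
    residual = H - np.eye(n)
    delta_ell = alpha * np.max(np.linalg.norm(residual, axis=0))
    return delta_beta, delta_ell
```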
### _Proof of Theorem 5_
The algorithm queries real data in the following computations:
1. Initialization at Step 1 using the Laplace mechanism with parameters \(\alpha/\varepsilon_{1}\). Since the worst-case sensitivity of identity queries is \(\alpha\)[13], this computation is \(\varepsilon_{1}\)-DP by Theorem 3.
2. Estimation of the regression loss on the real data at Step 2 using the Laplace mechanism with parameters \(\delta_{\ell}/\varepsilon_{2}\). By Lemma 1 and Theorem 3, this estimation is \(\varepsilon_{2}\)-DP.
3. Estimation of regression weights on the real data at Step 2 using the Laplace mechanism with parameters \(\delta_{\beta}/\varepsilon_{2}\). By Lemma 1 and Theorem 3, this estimation is \(\varepsilon_{2}\)-DP.
Note that the post-processing optimization at Step 3 only uses obfuscated data. Hence, it does not induce any privacy loss per Theorem 2. Per Theorem 1, the total privacy loss becomes \(\varepsilon_{1}+2\varepsilon_{2}\), yielding \(\varepsilon\) when setting parameters \(\varepsilon_{1}=\varepsilon/2\) and \(\varepsilon_{2}=\varepsilon/4\).
### _Proof of Theorem 6_
We follow similar arguments. Algorithm 2 queries private transmission capacity vector \(\overline{f}\) for the following computations:
1. Initial dataset \(\overline{\varphi}^{0}\): the algorithm uses a private identity query with privacy budget \(\alpha/\varepsilon_{1}\). Since the sensitivity of identity queries on \(\alpha-\)adjacent datasets is \(\alpha\)[13], by Theorem 3 this computation is \(\varepsilon_{1}-\)DP.
2. Worst-case OPF index: found by the discrete variant of the Exponential mechanism with privacy budget \(\overline{\alpha}\alpha/\varepsilon_{2}\). Since the sensitivity of the score function \(\Delta\mathcal{C}_{i}\) is the same as that of \(\mathcal{C}_{i}\), by Theorems 2 and 4 and Assumption 2, this is \(\varepsilon_{2}-\)DP.
3. Worst-case OPF cost: Step 3 uses a private identity query of the worst-case OPF cost using privacy budget \(\overline{\alpha}\alpha/\varepsilon_{2}\). Per Assumption 2 and Theorem 3, this computation is \(\varepsilon_{2}-\)DP.
Let \(\overline{\varepsilon}\) be the total privacy loss accumulated by the algorithm. Step 1 accumulates privacy loss of \(\varepsilon_{1}\). Since Steps 2 and 3 repeat \(T\) times, per Theorem 1, they accumulate the privacy loss of \(2T\varepsilon_{2}\). The total loss is then \(\overline{\varepsilon}=\varepsilon_{1}+2T\varepsilon_{2}\), which amounts to \(\varepsilon\) when setting DP parameters \(\varepsilon_{1}=\varepsilon/2\) and \(\varepsilon_{2}=\varepsilon/(4T)\).
|
2310.11141 | Long-form Simultaneous Speech Translation: Thesis Proposal | Simultaneous speech translation (SST) aims to provide real-time translation
of spoken language, even before the speaker finishes their sentence.
Traditionally, SST has been addressed primarily by cascaded systems that
decompose the task into subtasks, including speech recognition, segmentation,
and machine translation. However, the advent of deep learning has sparked
significant interest in end-to-end (E2E) systems. Nevertheless, a major
limitation of most approaches to E2E SST reported in the current literature is
that they assume that the source speech is pre-segmented into sentences, which
is a significant obstacle for practical, real-world applications. This thesis
proposal addresses end-to-end simultaneous speech translation, particularly in
the long-form setting, i.e., without pre-segmentation. We present a survey of
the latest advancements in E2E SST, assess the primary obstacles in SST and its
relevance to long-form scenarios, and suggest approaches to tackle these
challenges. | Peter Polák | 2023-10-17T10:44:05Z | http://arxiv.org/abs/2310.11141v1 | # Long-form Simultaneous Speech Translation+
###### Abstract
Simultaneous speech translation (SST) aims to provide real-time translation of spoken language, even before the speaker finishes their sentence. Traditionally, SST has been addressed primarily by cascaded systems that decompose the task into subtasks, including speech recognition, segmentation, and machine translation. However, the advent of deep learning has sparked significant interest in end-to-end (E2E) systems. Nevertheless, a major limitation of most approaches to E2E SST reported in the current literature is that they assume that the source speech is pre-segmented into sentences, which is a significant obstacle for practical, real-world applications. This thesis proposal addresses end-to-end simultaneous speech translation, particularly in the long-form setting, i.e., without pre-segmentation. We present a survey of the latest advancements in E2E SST, assess the primary obstacles in SST and its relevance to long-form scenarios, and suggest approaches to tackle these challenges.
## 1 Introduction
In today's highly globalized world, communication among individuals speaking different languages is gaining importance. International conferences and multinational organizations like the European Parliament often rely on human interpreters. However, in many scenarios, employing human interpreters can be impractical and costly. In such cases, simultaneous speech translation1 (SST) offers a viable solution by enabling real-time translation before the speaker completes their sentence.
Footnote 1: We consider only the speech-to-text variant in this work.
Traditionally, both offline speech translation (ST) and simultaneous speech translation (SST) have relied predominantly on cascaded systems that decompose the task into multiple subtasks, including speech recognition, speech segmentation, and machine translation (Osterholtz et al., 1992; Fugen et al., 2007; Bojar et al., 2021). However, recent advancements in deep learning and the availability of abundant data (Tan and Lim, 2018; Sperber and Paulik, 2020) have led to a significant paradigm shift towards end-to-end (E2E) models. While the cascaded approach continues to dominate offline ST, the opposite is true for SST (Anastasopoulos et al., 2022; Agarwal et al., 2023).
Despite the recent popularity of end-to-end SST, the vast majority of research focuses on the "short-form" setting, which assumes that the speech input is already pre-segmented into sentences. Critically, this assumption poses an obstacle to deployment in the wild. Therefore, we aim to achieve a "true" long-form simultaneous speech translation in our thesis. We break down our efforts into three steps:
**Quality-latency tradeoff in SST.** The first step of our research concentrates on enhancing the quality-latency tradeoff, mainly in the traditional "short-form" regime. We will evaluate different approaches and architectures.
**Towards the long-form SST.** In the next step, we will explore the feasibility of long-form simultaneous speech translation by adopting segmented inference.
**True long-form SST.** The final goal of our work is to explore the potential of end-to-end modeling for true long-form SST. We will focus on identifying an appropriate model architecture and effective training procedures to achieve seamless and reliable long-form simultaneous speech translation.
The next section introduces some important aspects of simultaneous speech translation.
## 2 Simultaneous Speech Translation
The ultimate goal of SST is to enable _real-time_ communication between people speaking different languages. To achieve this goal, SST systems must meet two important criteria. First, they must be _computationally efficient_ to ensure timely translation during ongoing speech. Second, SST systems must be capable of _handling unfinished sentences_. Working with unfinished sentences allows for more timely translations, particularly when waiting for sentences to be completed is impractical, such as matching slides or presenters' gestures. However, translating unfinished sentences increases the risk of translation errors since translation usually requires re-ordering that benefits from a more complete sentence context. Thus, there exists a _quality-latency tradeoff_. This means that given a certain latency constraint, we want the model to produce as good translations as possible. Ideally, we want the model to "predict" the future context without the risk of an incorrect translation. The quality-latency tradeoff is one of the main topics of our research.
### Re-Translation vs. Incremental SST
SST can be classified as either re-translation or incremental. Re-translation SST (Niehues et al., 2016, 2018) can revise the hypothesis or re-rank the set of hypotheses as more speech input is read. Revising the translation allows the re-translation SST to have comparable final translation quality with the offline speech translation (Arivazhagan et al., 2020). This design approach arguably introduces challenges for the user in processing the translation and makes it impossible to use in real-time speech-to-speech translation. Additionally, it also complicates the latency evaluation.
In fact, several SST latency metrics (Ma et al., 2020) were originally developed specifically for incremental translation scenarios.2 Incremental SST (Cho and Esipova, 2016; Dalvi et al., 2018) differs from the re-translation system in that it prunes all hypotheses to a common prefix, which is then shown to the user. For the user, the translation changes only by incrementally getting longer; none of the previously displayed outputs are ever modified. In our work, we focus on incremental SST.
Footnote 2: IWSLT shared tasks (Ansari et al., 2020; Anastasopoulos et al., 2021, 2022) also follow this evaluation standard.
### Cascaded vs. End-to-End
Traditionally, offline speech translation and SST were achieved as a _cascade_ of multiple systems: automatic speech recognition (ASR), inverse transcript normalization, which includes punctuation prediction and true casing, and machine translation (MT, Osterholtz et al., 1992; Fugen et al., 2007; Bojar et al., 2021). The advantage of the cascade approach is that we can optimize models for each subtask independently. Also, ASR and MT tasks typically have access to larger and more diverse corpora than direct speech translation.
However, using a cascade system introduces several challenges (Sperber and Paulik, 2020). The most important among them is _error propagation_(Ruiz and Federico, 2014). Further, MT models might suffer from _mismatched domains_ when trained on written language. Furthermore, as the source is transformed into a textual form, it _loses crucial information about prosody_, i.e., the rhythm, intonation, and emphasis in speech (Bentivogli et al., 2021). Finally, many languages, especially endangered ones, have no written form, which makes the cascade approach impractical or impossible for such languages (Harrison, 2007; Duong et al., 2016).
As of the latest findings, the current state-of-the-art for offline speech translation continues to be based on a cascaded approach (Anastasopoulos et al., 2022; Agarwal et al., 2023). In simultaneous speech translation, however, both approaches yield competitive performance. The advantage of the end-to-end models in SST may be that they avoid the extra delay caused by ASR-MT collaboration in the cascade (Wang et al., 2022).
In our work, we focus on end-to-end models.
## 3 Long-form Simultaneous Speech Translation
Most of the contemporary research on SST assumes speech pre-segmented into short utterances with segmentation following the sentence boundaries. However, in any real application, there is no such segmentation available. This section places long-form SST within the broader context of long-form ASR, MT, and offline ST. Subsequently, we explore the current literature on long-form SST.
### Long-Form ASR
In terms of input and output modalities, long-form ASR and ST face similar issues. There are two
types of strategies for long-form processing: (1) the _segmented approach_, which divides the input into smaller chunks, and (2) the _true long-form approach_, which handles the entire long-form input as a single unit.
Most of the literature focuses on the _segmented approach_. A typical solution involves pre-segmenting the audio using voice activity detection (VAD). However, VAD segmentation may not be optimal for real-world speech since it might fail to handle hesitations or pauses in sentences that must be treated as undivided units. More sophisticated approaches leverage latent alignments obtained from CTC (Graves et al., 2006) and RNN-T (Graves, 2012) for better segmentation (Yoshimura et al., 2020; Huang et al., 2022). Alternatively, segmentation into _fixed segments_ is also popular (Chiu et al., 2019, 2021). To reduce low-quality transcripts close to the segment boundaries, they typically perform overlapped inference and use latent alignments to merge the transcripts correctly. The chunking approach is also adopted by the attentional model Whisper in the offline (Radford et al., 2023) and simultaneous regime (Machacek et al., 2023).
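To make the fixed-segment strategy concrete, the sketch below splits a long waveform into fixed, overlapping windows; the 30-second window and 5-second overlap are hypothetical values, and the merging of the per-chunk transcripts (e.g., via latent alignments, as in the works above) is intentionally left out.

```python
import numpy as np

def chunk_waveform(waveform, sample_rate, window_s=30.0, overlap_s=5.0):
    """Split a 1-D waveform into fixed-length, overlapping chunks.

    Returns a list of (start_sample, chunk) pairs.  Consecutive chunks share
    `overlap_s` seconds of audio so that words cut at a chunk boundary can be
    recovered when the per-chunk transcripts are merged downstream.
    """
    window = int(window_s * sample_rate)
    hop = int((window_s - overlap_s) * sample_rate)
    chunks = []
    for start in range(0, max(len(waveform), 1), hop):
        chunks.append((start, waveform[start:start + window]))
        if start + window >= len(waveform):
            break
    return chunks

# Hypothetical usage: 10 minutes of 16 kHz audio -> 24 overlapping 30 s chunks
audio = np.zeros(10 * 60 * 16000, dtype=np.float32)
chunks = chunk_waveform(audio, sample_rate=16000)
```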
Another line of work focused on _long-form modeling_ directly. For example, Chiu et al. (2019) conducted a comprehensive study comparing different architectures, including RNN-T and attention-based models. The findings indicate that only RNN-T and CTC architectures can generalize to unseen lengths. To further improve the true long-form ASR, Narayanan et al. (2019) suggest simulation of long-form training by LSTM state passing.
While the previously mentioned research was predominantly based on RNNs, more recent work has transitioned to utilizing Transformer models. Zhang et al. (2023) compared a chunk-wise attention encoder, which involves an encoder with a limited attention span, in combination with the attention-based decoder (AD) and CTC. We note that while the encoder has a limited attention span, the attention-based decoder sees the entire encoder representation. The model employing AD could not function without chunking, whereas the CTC model processed the entire speech at once and still outperformed the AD model.
### Long-Form MT
The primary objective of long-form MT is to enhance textual coherence, as conventional MT systems assume sentence independence. Early work explored a concatenation of previous (Tiedemann and Scherrer, 2017; Donato et al., 2021) and future sentences (Agrawal et al., 2018). These works showed that MT models benefit from the extra context and better handle the inter-sentential discourse phenomena. However, the benefits diminish if the context grows beyond a few sentences (Agrawal et al., 2018; Kim et al., 2019; Fernandes et al., 2021). This can be attributed to the limitations of attention mechanisms, where an extensive volume of irrelevant information can lead to confusion.
Another body of work tries to model very long sequences directly. Dai et al. (2019) introduced a recurrence mechanism and an improved positional encoding scheme in the Transformer. Later work proposed an explicit compressed memory realized by a few dense vectors (Feng et al., 2022).
### Long-Form Offline ST
Unlike written input text in long-form MT, speech input in the ST task lacks explicit information about segmentation. Therefore, the research in the area of long-form offline speech translation concentrates on two separate issues: (1) improving _segmentation_ into sentences, and (2) enhancing robustness through the use of larger _context_.
In the traditional cascaded approach with separate speech recognition and machine translation models, the work focused on segmentation strategies for the ASR transcripts.3 The methods are usually based on re-introducing punctuation to the transcript (Lu and Ng, 2010; Rangarajan Sridhar et al., 2013; Cho et al., 2015, 2017). However, these approaches suffer from ASR error propagation and disregard the source audio's acoustic information. This was addressed by Iranzo-Sanchez et al. (2020), however, the approach still requires an intermediate ASR transcript that is unavailable in E2E models.
Footnote 3: ASR transcripts are traditionally normalized, i.e., they consist of lowercase words without punctuation.
An alternative approach involves source-speech-based segmentation. The early work focused on VAD segmentation. This is usually sub-optimal as speakers place pauses inside sentences, not necessarily between them (e.g., hesitations before words with high information content, Goldman-Eisler, 1958). To this end, researchers tried considering not only the presence of speech but also its length (Potapczyk and Przybysz, 2020; Gaido et al., 2021). Later studies tried to avoid VAD and focused on more linguistically-motivated approaches, e.g., ASR CTC to predict voiced regions Gallego et al. (2021) or directly modeling the sentence segmentation Tsiamas et al. (2022); Fukuda et al. (2022).
To address the problem of inadequate segmentation, Gaido et al. (2020) showed that context-aware ST is less prone to segmentation errors. In an extensive study of context-aware ST, Zhang et al. (2021) observed that context improves quality, but this holds only for a limited number of utterances.
### Long-Form Simultaneous ST
Research focusing on direct long-form simultaneous speech translation remains relatively scarce. The closest works are in long-form simultaneous MT. Schneider and Waibel (2020) proposed a streaming MT model capable of translating unsegmented text input. This model could be theoretically adapted for speech input. However, it was later shown that this model exhibits huge latency Iranzo Sanchez et al. (2022). Another work Iranzo Sanchez et al. (2022) explored the extended context and confirmed the findings from long-form MT and offline ST, demonstrating that using the previous context significantly enhances performance. They also confirmed that a too-long context leads to decreased translation quality.
Finally, the only direct SST model that claims to work on a possibly unbounded input is Ma et al. (2021). The model utilizes a Transformer encoder with a restriction on self-attention, allowing it to attend solely to a memory bank and a small segment. Unfortunately, based on the reported experiments, whether the model was specifically evaluated in the long-form setting remains unclear.
### Evaluation
Evaluation of SST is a complex problem as we have to consider not only the translation quality but also the latency. Additionally, in the long-form regime, segmentation becomes another obstacle.
The most commonly used metric for translation quality in speech translation is BLEU Papineni et al. (2002); Post (2018). Other metrics such as chrF++ Popovic (2017) and a neural-based metric COMET Rei et al. (2020) can be applied, too.
The other important property of an SST system is latency. There are two main types of latencies: computation-unaware (CU) and computation-aware (CA) latency. The computation-unaware latency measures the delay in emitting a translation token relative to the source, regardless of the actual computation time. Hence, CU latency allows for a fair comparison regardless of the hardware infrastructure. However, CU latency cannot penalize the evaluated system for extensive computation; hence, CA latency can offer a more realistic assessment.
Measuring latency relative to the source or reference in SST is quite difficult because of the reordering present in translation. Historically, latency metrics were first developed for simultaneous machine translation (i.e., the source is text rather than speech). The most common are average lagging AL; Ma et al. (2019) and differentiable average lagging DAL; Cherry and Foster (2019). Broadly speaking, they measure "how much of the source was read by the system to translate a word". The latency unit is typically a word. The speech community quickly adopted these metrics. Unfortunately, these metrics assume a uniform distribution of words and uniform length of these words in the speech source. Alternatively, Ansari et al. (2021) proposed to use a statistical word alignment of the candidate translation with the corresponding source transcript. This theoretically allows for more precise latency evaluation, but it is unclear how the alignment errors impact the reliability.
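For reference, the sketch below computes (computation-unaware) average lagging for a single sentence pair, following the simultaneous-MT definition of Ma et al. (2019); the delays are the number of source units (words, or speech frames in SST) that had been read when each target word was emitted.

```python
def average_lagging(delays, src_len, tgt_len):
    """Average lagging (AL) for one sentence pair.

    delays[i] -- source units read when target unit i+1 was emitted, i.e. g(i+1)
    src_len   -- total number of source units |x|
    tgt_len   -- number of target units produced |y|
    """
    gamma = tgt_len / src_len
    # tau: first (1-based) target index at which the full source has been read
    tau = next((i + 1 for i, g in enumerate(delays) if g >= src_len), len(delays))
    return sum(delays[i] - i / gamma for i in range(tau)) / tau

# Toy example: 6 source words, 5 target words, wait-2-style delays
print(average_lagging([2, 3, 4, 5, 6], src_len=6, tgt_len=5))
```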
In the unsegmented long-form setting, additional issues arise. In a typical "short-form" segmented setup, the SST model does inference on a pre-segmented input. However, the candidate and reference segmentation into sentences might differ in the long-form unsegmented regime. Traditionally, this issue was addressed by re-segmenting the hypothesis based on the reference Matusov et al. (2005). After the re-segmentation, a standard sentence-level evaluation of translation quality and latency is done. It should be noted that the commonly used latency metrics (AL, DAL) cannot be used in the long-form regime Iranzo-Sanchez et al. (2021) without the re-segmentation. Yet, recent work observed that the re-segmentation introduces errors Amrhein and Haddow (2022). This poses a risk of incorrect translation and quality assessment and remains an open research question.
## 4 Thesis Goals
The goal of our thesis is to achieve a "true" long-form simultaneous speech translation. This section outlines the steps we will take to accomplish this goal.
### Data and Evaluation
In our future research, we will mainly use the setup similar to the IWSLT shared tasks Ansari et al. (2020); Anastagopoulos et al. (2021, 2022), i.e., mostly single speaker data. Identical to the IWSLT, we will treat the TED data as an in-domain setting. We will consider domains such as parliamentary speeches (e.g., Europarl-ST Iranzo-Sanchez et al. (2020) for the out-of-domain setting. As for the languages, we will include a diverse set of language pairs. A good inspiration might be again the IWSLT, i.e., English-to-{German, Japanese, Chinese}. Challenging will be the long-form setting, as to the best of our knowledge, none of the available data is strictly long-form. Our preliminary review found that the original TED talks can be reconstructed from the MuST-C Cattoni et al. (2021) development and test set available for English-to-{German, Japanese, Chinese} language pairs.
As highlighted in the literature review in Section 3.5, evaluating the long-form SST remains an open problem. The quality and latency evaluation metrics currently used are designed for sentence-level evaluation. We must re-segment the long hypotheses into sentences based on their word alignment with provided references to use these metrics in the long-form regime. Unfortunately, the re-segmentation introduces errors, which poses a risk to the evaluation reliability. To tackle this, we will investigate alternative evaluation strategies. One potential approach for reducing the alignment error could be to move the alignment to the sentence level rather than the word level and allow an \(m\)-to-\(n\) mapping between the reference and proposed sentences, similar to the Gale-Church alignment algorithm Gale et al. (1994), with a reasonably small \(m\) and \(n\) (e.g., \(0\leq m,n\leq 2\)). To verify the effectiveness of this method, we need to compare its correlation with human evaluations.
### Quality-latency tradeoff in SST
The first step of our research concentrates on enhancing the quality-latency tradeoff, mainly in the traditional "short-form" simultaneous speech translation. We hope the insights and improvements from the short-form regime will translate into the long-form regime.
In the research done so far, we already successfully reviewed the possibility of "onlinizing" state-of-the-art offline speech translation models in Polak et al. (2022). Our observations indicated that the attention-based encoder-decoder (AED) models tend to over-generate. This not only affects the resulting quality but also negatively impacts the AL latency evaluation reliability. Therefore, we proposed an improved version of the AL metric, which was later independently proposed under name length-adaptive average lagging (LAAL; Papi et al. (2022)). To remedy the over-generation problem, we proposed an improved version of the beam search algorithm in Polak et al. (2023). While this led to significant improvements in the quality-latency tradeoff, the decoding still relied on label-synchronous decoding. In Polak et al. (2023), we proposed a novel SST policy dubbed "CTC policy" that uses the output of an auxiliary CTC layer to guide the decoding. The proposed CTC policy led to even greater improvements in quality and reduced the real-time factor to 50 %.
Thus far, our research has focused primarily on the AED architecture. Nonetheless, recent findings Anastasopoulos et al. (2022); Agarwal et al. (2023) suggest that other approaches, such as transducers Graves (2012), yield competitive results. Nevertheless, it remains unclear which approach is the most advantageous for SST. Our goal will be to compare these architectures for SST. We will put a particular emphasis on architectures with latent alignments (e.g., transducers). Generally, latent alignment models make a strong monotonicity assumption on the mapping between the source and the target, which might be problematic for translation, which typically involves word reordering. Therefore, we will assess the alignment quality and potential applications (such as segmentation).
### Towards the Long-Form SST via On-the-Fly Segmentation
In the second stage, we will concentrate on the long-form SST by utilizing on-the-fly segmentation and short-form models from the previous stage.
Drawing inspiration from offline long-form ST, which primarily emphasizes segmentation, we consider direct segmentation modeling the most promising approach Tsiamas et al. (2022); Fukuda et al. (2022). The limitation of these approaches is that they do not allow out-of-the-box simultaneous inference. However, we believe their adaptation to the simultaneous regime should be relatively straightforward (e.g., using a unidirectional encoder and a custom decoding strategy). The main challenge here will be integrating this segmentation with existing models, especially considering the quality-latency tradeoff.
Our hopes go even further: Can we train a model to translate and predict the segmentation at the same time? The translation already contains punctuation marks (full stop, exclamation, and question marks), so if we knew the alignment between the translation and the source speech, we could use this information to segment the utterances directly. Therefore, we will experiment with various alignment approaches and asses their applicability to the segmentation. The results of our initial investigation on on-the-fly separation with CTC outputs are available in Polak and Bojar (2023).
However, we see another valuable use of direct speech-to-translation alignments -- dataset creation. Today, ST datasets are created using the cascaded approach (Iranzo-Sanchez et al., 2020; Cattoni et al., 2021; Salesky et al., 2021). The source transcript is first forced-aligned to the speech, then the transcript is word-aligned to the translations, and finally, these two alignments are used to segment the source speech into sentences based on the punctuation in the translation. In fact, this approach has a critical drawback: it virtually eliminates all data without a source transcript, preventing the research community from utilizing potentially valuable data sources. It is also worth noting that some languages do not have a writing system, which makes the direct speech-to-translation alignment even more attractive. Therefore, if the alignments show promising results, we will explore the feasibility of E2E speech-to-translation dataset creation.
An additional question is how to accommodate long context in the simultaneous regime. As pointed out in Sections 3.2 to 3.4, the performance usually drops with a context longer than a few sentences. Some solutions have been suggested (Kim et al., 2019; Feng et al., 2022), but it remains unclear how to adapt these approaches for SST with the specifics of SST in mind (e.g., computational constraints, speech input).
### True Long-Form SST
The ultimate goal of our work is to achieve true long-form simultaneous speech translation. In other words, we aim to develop an architecture capable of processing a potentially infinite stream of speech input without any segmentation or special inference algorithm, translating the speech directly into the target language in real time. Admittedly, this is a very ambitious goal. However, there is plenty of evidence that it is feasible. For example, in long-form ASR, related work has already observed that the RNN-T and CTC architectures are capable of long-form regime (Chiu et al., 2019; Narayanan et al., 2019; Lu et al., 2021; Zhang et al., 2023; Rekesh et al., 2023). Arguably, speech recognition is simpler than speech translation because it monotonically transcribes speech without reordering. However, the literature also shows that an architecture like RNN-T can be used in the "short-form" offline and simultaneous ST (Yan et al., 2023).
Therefore, based on the previous work in speech recognition and translation, we will propose a novel architecture that will allow simultaneous speech translation of a possibly infinite stream of speech. We will take inspiration from the existing architectures but revise them for the specific needs of simultaneous ST. This will require a particular focus on speech-to-translation alignment so that the source speech and target translation do not get out of sync. This architecture will also contain a "forgetting" mechanism that will allow the storage of essential bits of context while preventing memory issues. Finally, we will address the train-test mismatch because current hardware and training methods do not permit models to fit long inputs.
## 5 Conclusion
In conclusion, this thesis proposal presents an overview of the challenges involved in simultaneous speech translation (SST). The literature review highlighted the limited research on long-form speech translation. Our research sets out three main goals with an emphasis on long-form speech translation. These include improving the general quality-latency tradeoff in SST, exploring long-form SST through segmented inference, and ultimately achieving true long-form SST modeling. We placed these goals in the context of related work and outlined a clear strategy for achieving them.
## Acknowledgments
Peter would like to thank his supervisor, Ondrej Bojar, for his insight and guidance, as well as the anonymous reviewers for their valuable suggestions. This work has received support from GAUK project 244523 of Charles University and partial support from grant 19-26934X (NEUREM3) of the Czech Science Foundation. |
2302.07526 | Macroscopic maximally entangled state preparation between two atomic
ensembles | We develop a scheme to prepare a macroscopic maximally entangled state (MMES)
between two atomic ensembles using adaptive quantum nondemolition (QND)
measurements. The quantum state of the system is evolved using a sequence of
QND measurements followed by adaptive unitaries, such that the desired
measurement outcome is obtained with asymptotically unit probability. This
procedure is repeated in z and x spin basis alternately such that the state
converges deterministically towards the maximally entangled state. Up to a
local spin-basis rotation, the maximally entangled state has zero total spin
angular momentum, i.e. it is a singlet state. Our protocol does not perform
postselection and works beyond the Holstein-Primakoff regime for the atomic
spin degrees of freedom, producing genuine macroscopic entanglement. | Manish Chaudhary, Ebubechukwu O. Ilo-Okeke, Valentin Ivannikov, Tim Byrnes | 2023-02-15T08:47:31Z | http://arxiv.org/abs/2302.07526v2 | # Macroscopic maximally entangled state preparation between two atomic ensembles
###### Abstract
We develop a scheme to prepare a macroscopic maximally entangled state (MMES) between two atomic ensembles using adaptive quantum nondemolition (QND) measurements. The quantum state of the system is evolved using a sequence of QND measurements followed by adaptive unitaries, such that the desired measurement outcome is obtained with asymptotically unit probability. This procedure is repeated in \(z\) and \(x\) spin basis alternately such that the state converges deterministically towards the maximally entangled state. Up to a local spin-basis rotation, the maximally entangled state has zero total spin angular momentum, i.e. it is a singlet state. Our protocol does not perform postselection and works beyond the Holstein-Primakoff regime for the atomic spin degrees of freedom, producing genuine macroscopic entanglement.
## I Introduction
Entanglement plays an important role in various quantum information tasks such as teleportation [1], cryptography [2] and its production is one of the essential capabilities when constructing a quantum computer [3; 4; 5]. Entanglement is considered a resource in the context of quantum information science [6; 7; 8; 9; 10]. In the standard model of quantum computing, composite systems of qubits can be used to form a quantum register [4; 11]. However, quantum protocols based on higher dimensional systems have recently attracted a great attention [12; 13; 14; 15; 16] and offer certain advantages such as a higher information capacity and increased resistance to noise [17; 18; 19; 20]. Higher-dimensional systems are advantageous as these allow for lower detection efficiency than qubits [21; 22]. Several physical systems allow for the encoding of higher-dimensional quantum information. These systems include Rydberg atoms [23], trapped ions [24], cold atomic ensembles [25], superconducting phase qudits [26], photonic systems [27; 28], and mechanical resonators [29]. Atomic gases are a particularly fascinating physical platform for observing many-body entanglement, due to the high level of controllability and low decoherence [30; 31]. One of the most elementary type of entangled states for an atomic gas are spin squeezed states, where particular observables are reduced below the standard quantum limit [32; 33; 34], and has numerous applications in quantum metrology [35; 36; 37; 38; 39; 40; 41; 42]. It has also been observed that Bell violations [43; 44; 45], which are a stronger form of quantum correlations in the quantum quantifier hierarchy [46; 47], can be generated in Bose-Einstein condensates (BECs) [48; 49].
Maximally entangled states such as Bell states in a two qubit system [11; 43; 50; 51] are of great importance for numerous quantum information tasks. Quantum communication schemes such as teleportation, dense coding, and entanglement swapping require control over a basis of maximally entangled quantum states [1; 52; 53]. In optical systems these states are routinely generated and detected [54]. In higher dimensions, maximally entangled states can potentially be used for the teleportation of more complex quantum states in the larger Hilbert space [55; 56; 57; 58]. While most of the work relating to entanglement in atomic ensembles has been focused on entanglement that exists between atoms in a single ensemble [35], works extending this to two or more spatially separate ensembles have also been investigated both theoretically and experimentally [30; 59; 60]. The first experimental demonstration of entanglement between atomic gases was observed in paraffin-coated hot gas cells [61] using quantum nondemolition (QND) measurements where the entanglement between two atomic ensembles had been produced in the form of two-mode squeezed states. For BECs, entanglement has been observed between two spatial regions of a single BEC [62; 63; 64], but never between two completely separate BECs, to date. Such entanglement is fundamental to performing various quantum information tasks based on atomic ensembles, such as quantum teleportation [65; 66; 67; 68], remote state preparation [69], clock synchronization [70], and quantum computing [71; 72; 73]. In the past, numerous theoretical and experimental works has been focused on generating macroscopic singlet states within single atomic ensembles using collective QND measurement [74; 75; 76]. This state is basis invariant that finds considerable importance in quantum information processing [1; 52; 53; 77]. Currently, the amount of entanglement that can be experimentally generated
is very small, working within the Holstein-Primakoff approximation of spins, such that Hilbert space of the spins is largely unused. As such, current experiments are far below levels where a MMES can be generated even in principle from the way the protocols are constructed.
In this paper we propose a scheme for the generation of a MMES between two atomic ensembles using collective QND measurement and local spin rotations. In the QND scheme, the atoms in ensemble interact with a photonic field, which is subsequently measured, projecting the atoms into an entangled state [78; 79]. Our approach extends works such as Ref. [74; 75; 76] which have proposed sequential QND measurements to generate a collective singlet state within single atomic ensembles with postselection. Our scheme, on the other hand, is deterministic in the sense that the system converges towards MMES with _asymptotically unit_ probability. Our scheme does not approximate spins as a bosonic mode under the Holstein-Primakoff approximation as is often done by restricting to the short time interaction regime and holds for longer evolution times. In addition, our scheme does not rely upon individual atom control, as we have employed collective spin operations, projective measurements and local unitary rotations that can be implemented in experimental settings.
The paper is structured as follows. In Sec. II we review the theory of QND measurement induced entanglement [78; 79] and introduce the basic physical system that we are dealing with. In Sec. III we describe the maximally entangled state for macroscopic atomic ensembles and show its connection to the macroscopic singlet state. The former can be transformed into the latter state through a local unitary transformation. In Sec. IV we explain the protocol for deterministic preparation of the MMES and show that multiple sequential QND measurement produces a convergence of the desired state with the adaptive unitary. In Sec. V we numerically simulate our proposed protocol and show that convergence is obtained towards the MMES. Finally, in Sec. VI we summarize our results.
## II QND measurements
Here we review the theory of QND measurements on the atomic ensembles as introduced in Ref. [78]. The effect of multiple such QND entanglement operations is studied in Ref. [79].
### Definitions and Physical system
The physical system we shall consider consists of two neutral atomic ensembles or BECs, where each atom has two populated internal states. A common choice for the internal states are hyperfine ground states, such as the \(F=1,m_{F}=-1\) and \(F=2,m_{F}=1\) states in the case of \({}^{87}\)Rb [80]. For BECs we denote the bosonic annihilation operator for the two states as \(g_{l},e_{l}\) respectively, where \(l\in\{1,2\}\) labels the two BECs. These operators can be used to define an effective spin using the Schwinger boson operators
\[S_{l}^{x} =e_{l}^{\dagger}g_{l}+g_{l}^{\dagger}e_{l}\] \[S_{l}^{y} =-ie_{l}^{\dagger}g_{l}+ig_{l}^{\dagger}e_{l}\] \[S_{l}^{z} =e_{l}^{\dagger}e_{l}-g_{l}^{\dagger}g_{l}. \tag{1}\]
The commutation relation for the spin operators are
\[[S^{j},S^{k}]=2i\epsilon_{jkl}S^{l}, \tag{2}\]
where \(\epsilon_{jkl}\) is the Levi-Civita symbol.
For atomic ensembles, the total spin operators are written in terms of collective spin operators
\[S_{l}^{x} =\sum_{n=1}^{N}\sigma_{l,n}^{x}\] \[S_{l}^{y} =\sum_{n=1}^{N}\sigma_{l,n}^{y}\] \[S_{l}^{z} =\sum_{n=1}^{N}\sigma_{l,n}^{z}, \tag{3}\]
where \(\sigma_{l,n}^{k}\) is a Pauli operator for the \(n\)th atom in the \(l\)th ensemble. For simplicity, we consider that the number of atoms \(N\) in each ensemble are equal. For the case that all the operations on the atomic ensembles are completely symmetric under particle interchange from the initialization of the states to the final measurement, the formalism (1) and (3) for the BECs and atomic ensembles respectively are completely equivalent [81]. We will use the bosonic formulation (1) henceforth, although it should be understood that our calculations apply to both the BEC and atomic ensemble case.
The spin coherent states for \(N\) uncorrelated atoms in an ensemble is defined as
\[|\theta,\phi\rangle\rangle_{l}=\frac{(\cos\frac{\theta}{2}e^{-i \phi/2}e_{l}^{\dagger}+\sin\frac{\theta}{2}e^{i\phi/2}g_{l}^{\dagger})^{N}}{ \sqrt{N!}}|\text{vac}\rangle \tag{4}\]
where \(\theta,\phi\) are the angles on the Bloch sphere, and \(|\text{vac}\rangle\) is the vacuum state containing no atoms. The Fock states are defined as
\[|k\rangle_{l}=\frac{(e_{l}^{\dagger})^{k}(g_{l}^{\dagger})^{N-k }}{\sqrt{k!(N-k)!}}|\text{vac}\rangle. \tag{5}\]
The Fock states are eigenstates of the \(S^{z}\) operator according to
\[S_{l}^{z}|k\rangle_{l}=(2k-N)|k\rangle_{l}. \tag{6}\]
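For later reference, the collective operators (1) take a simple matrix form in the Fock basis (5); the sketch below constructs them for a single ensemble of \(N\) atoms, with matrix elements following from the action of the Schwinger boson operators on (5).

```python
import numpy as np

def spin_operators(N):
    """Collective spin operators S^x, S^y, S^z of Eq. (1) in the Fock basis |k>.

    The matrix elements follow from e^dag g |k> = sqrt((k+1)(N-k)) |k+1> and
    g^dag e |k> = sqrt(k(N-k+1)) |k-1>; S^z is diagonal with eigenvalues 2k - N,
    reproducing Eq. (6).
    """
    k = np.arange(N + 1)
    Sz = np.diag(2 * k - N).astype(complex)
    raising = np.zeros((N + 1, N + 1))            # matrix of e^dag g
    raising[k[:-1] + 1, k[:-1]] = np.sqrt((k[:-1] + 1) * (N - k[:-1]))
    Sx = (raising + raising.T).astype(complex)
    Sy = -1j * raising + 1j * raising.T
    return Sx, Sy, Sz

# Quick check for N = 2: the commutation relation (2), [S^x, S^y] = 2i S^z
Sx, Sy, Sz = spin_operators(2)
assert np.allclose(Sx @ Sy - Sy @ Sx, 2j * Sz)
```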
### QND Entanglement
Here we summarize the elementary entangling operation that we will use in our protocol for deterministic preparation of maximally entangled states. Coherent
light is used to perform an indirect measurement of two atomic ensembles arranged in a Mach-Zehnder configuration (Fig. 1). The atoms in the ensemble are prepared in a product state of two spin coherent states and the interaction between photons and atoms is governed by the Hamiltonian [34],
\[H=\kappa(S_{1}^{z}-S_{2}^{z})J^{z}, \tag{7}\]
where \(\kappa\) is the coupling constant and \(J^{z}=a_{1}^{\dagger}a_{1}-a_{2}^{\dagger}a_{2}\) is the Stokes operator for the two optical modes \(a_{1},a_{2}\) that enter into each arm of the interferometer.
After interacting with the atoms, the photonic modes are interfered with a beam splitter, giving rise to new modes \(c,d\) and the photons are detected by the detectors with counts \(n_{c},n_{d}\) respectively. After the measurement, the atomic ensembles collapse in the \(S_{1}^{z}-S_{2}^{z}\) spin observable basis [78; 79].
As shown in Ref. [79], the QND entanglement scheme between two atomic ensembles can be described in terms of a Positive Operator Valued Measure (POVM) as
\[M_{n_{c}n_{d}}(\tau)=\sum_{k_{1},k_{2}=0}^{N}C_{n_{c},n_{d}}[(k_{1}-k_{2})\tau]|k_{1},k_{2}\rangle\langle k_{1},k_{2}|, \tag{8}\]
where the modulating function is defined as
\[C_{n_{c},n_{d}}(\chi)=\frac{\alpha^{n_{c}+n_{d}}e^{-|\alpha|^{2 }/2}}{\sqrt{n_{c}!n_{d}!}}\cos^{n_{c}}(\chi)\sin^{n_{d}}(\chi), \tag{9}\]
and \(\tau=\kappa t\) is the interaction time. The resulting state after the measurement is
\[|\widetilde{\psi}_{n_{c}n_{d}}(\tau)\rangle =M_{n_{c}n_{d}}(\tau)|\psi_{0}\rangle\] \[=\sum_{k_{1},k_{2}}\langle k_{1},k_{2}|\psi_{0}\rangle C_{n_{c},n _{d}}[(k_{1}-k_{2})\tau]|k_{1},k_{2}\rangle. \tag{10}\]
According to the Eq. (10), the initial wave function is modulated by an extra factor of \(C_{n_{c},n_{d}}[(k_{1}-k_{2})\tau]\) which can result in a measurement-induced generation of entanglement.
For large photon counts, the modulating function \(C_{n_{c},n_{d}}[(k_{1}-k_{2})\tau]\) takes a Gaussian form [78] and is sharply peaked at
\[\sin^{2}[(k_{1}-k_{2})\tau]=\frac{n_{d}}{n_{c}+n_{d}}. \tag{11}\]
Taking the interaction time \(\tau=\pi/2N\) and assuming \(|\alpha\tau|^{2}\gg 1\), as defined in [79], we may then approximate the POVM (8) as a measurement operator according to
\[M_{n_{c}n_{d}}(\tau=\frac{\pi}{2N})\approx\Pi_{\Delta}, \tag{12}\]
where the projections \(\Delta=k_{1}-k_{2}\) and photonic measurements \(n_{c},n_{d}\) are related according to (11), and we defined
\[\Pi_{\Delta}= \frac{1}{2^{\delta_{\Delta}}}\Big{(}\sum_{k=0}^{N-\Delta}|k,k+ \Delta\rangle\langle k,k+\Delta|\] \[+(-1)^{(1-\delta_{\Delta})n_{d}}\sum_{k^{\prime}=\Delta}^{N}|k^{ \prime},k^{\prime}-\Delta\rangle\langle k^{\prime},k^{\prime}-\Delta|\Big{)}. \tag{13}\]
Here \(\delta_{\Delta}\) is the Kronecker delta which is 1 if \(\Delta=0\) and 0 otherwise.
As is clear from the definition of the modulating function (9), and as noted in Ref. [79], there is a sign difference between the two terms for odd \(n_{d}\) photonic measurements. Since the shot-to-shot photonic outcome \(n_{d}\) is random, the two measurements (13) occur randomly, which leads to a stochastic evolution of the system. An exception is the outcome \(\Delta=0\), which is independent of the photonic count \(n_{d}\). We will show that in our protocol it is possible to construct an adaptive unitary that is independent of \(n_{d}\) (and thus avoids explicit photon counting) and still converges towards the MMES.
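For small \(N\), the measurement operator (13) can be written down explicitly in the two-ensemble Fock basis \(|k_{1},k_{2}\rangle\). A minimal sketch, with the basis ordered as index \(=k_{1}(N+1)+k_{2}\) and \(\Delta\geq 0\):

```python
import numpy as np

def measurement_operator(N, delta, nd_odd=False):
    """Measurement operator Pi_Delta of Eq. (13) in the |k1, k2> Fock basis.

    For delta = 0 the two sums in (13) coincide and the prefactor 1/2 restores a
    projector onto the k1 = k2 subspace; for delta > 0 and odd photon count n_d
    the second sum enters with a minus sign.
    """
    dim = N + 1
    Pi = np.zeros((dim**2, dim**2))
    prefactor = 0.5 if delta == 0 else 1.0
    sign = -1.0 if (nd_odd and delta != 0) else 1.0
    for k in range(0, N - delta + 1):             # first sum: |k, k + delta>
        idx = k * dim + (k + delta)
        Pi[idx, idx] += prefactor
    for k in range(delta, N + 1):                 # second sum: |k, k - delta>
        idx = k * dim + (k - delta)
        Pi[idx, idx] += prefactor * sign
    return Pi

# Pi_0 is a projector, and distinct outcomes are orthogonal (even-n_d branch)
Pi0, Pi1 = measurement_operator(4, 0), measurement_operator(4, 1)
assert np.allclose(Pi0 @ Pi0, Pi0) and np.allclose(Pi0 @ Pi1, 0)
```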
The measurement operators are defined in different spin bases by applying suitable unitary rotation as [81]
\[\Pi_{\Delta}^{(\theta,\phi)}=\mathcal{U}(\theta,\phi)\Pi_{\Delta}^{(z)} \mathcal{U}^{\dagger}(\theta,\phi) \tag{14}\]
where
\[\mathcal{U}(\theta,\phi)=e^{-i(S_{1}^{z}+S_{2}^{z})\phi/2}e^{-i(S_{1}^{y}+S_{2}^{y})\theta/2}, \tag{15}\]
and \(\Pi_{\Delta}^{(z)}\) is the same measurement operator as in (13), but we explicitly specified the basis with the \({}^{(z)}\) label.
## III The maximally entangled state
In this section we discuss the nature of the maximally entangled state between two BECs. Namely, we would like to create the state,
\[|\text{MMES}\rangle=\frac{1}{\sqrt{N+1}}\sum_{k=0}^{N}|k\rangle_{1}|k\rangle_{2}. \tag{16}\]
Figure 1: Entanglement generation between spins in atomic ensembles using the QND scheme. Coherent light \(|\alpha\rangle\) is used to couple the two-mode BECs via the QND Hamiltonian interaction (7), arranged in a Mach-Zehnder configuration. The photon mode detections \(n_{c},n_{d}\) after the second beam splitter \(B_{2}\) entangle the two spins \(\mathbf{S}_{1}\) and \(\mathbf{S}_{2}\).
This state has an entanglement of \(E=\log_{2}(N+1)\) using the von Neumann entropy, which is the maximum value for two \(N+1\) level systems. This state is also known as the spin-EPR state for atomic ensembles [82].
We now show that the MMES (16) has a very close connection with the spin-zero singlet state. This fact shall be used to construct our protocol. Each BEC can be considered to be a macroscopic qubit state with spin value \(s_{1}=s_{2}=N/2\). Due to each boson being symmetric under interchange, the total spin is always in the maximum spin sector. For two spins, one can define the collective states that can be formed, with quantum numbers of the total spin \(\mathbf{s}_{\rm tot}=\mathbf{s}_{1}+\mathbf{s}_{2}\). Here we have used the notation \(\mathbf{s}_{l}=\mathbf{S}_{l}/2\) to connect our notation to the standard conventions of quantum angular momentum, and \(l\in\{1,2\}\) labels the two atomic ensembles. We can explicitly write these states in terms of the total angular momentum eigenstates \(|s,m\rangle\), where the two spins are coupled with \(m=m_{1}+m_{2}\) and \(m\) is the projection of the total spin quantum number \(s\) along the \(z\)-direction, such that
\[(s_{1}^{z}+s_{2}^{z})|s,m\rangle=m|s,m\rangle\] \[\mathbf{s}_{\rm tot}^{2}|s,m\rangle=s(s+1)|s,m\rangle. \tag{17}\]
There is a unique singlet state \(|s_{0},m_{0}\rangle\) which satisfies
\[(s_{1}^{z}+s_{2}^{z})|s_{0},m_{0}\rangle=0\] \[\mathbf{s}_{\rm tot}^{2}|s_{0},m_{0}\rangle=0, \tag{18}\]
with \(s_{0}=m_{0}=0\). Using the coupling rule for two spins [83], the singlet state then reads
\[|s_{0},m_{0}\rangle=\sum_{m}\frac{(-1)^{s-m}}{\sqrt{2s+1}}|s,m\rangle_{1}|s,-m \rangle_{2}. \tag{19}\]
The state (19) has perfect correlations and anti-correlations in the linear combination of spin observables. The state could be realized as the ground spin state of the Hamiltonian \(\mathbf{S}^{2}\).
For an atomic ensemble consisting of \(N\) atoms, the Fock states (5) can equivalently be written in the angular momentum basis as
\[|k\rangle=\Big{|}s=\frac{N}{2},m=k-\frac{N}{2}\Big{\rangle}, \tag{20}\]
The singlet state (19) is then written for the atomic ensembles, using the relation (20), as
\[|s_{0},m_{0}\rangle=\frac{1}{\sqrt{N+1}}\sum_{k=0}^{N}(-1)^{k}|k\rangle_{1}|N- k\rangle_{2}. \tag{21}\]
We see that there is a close connection between the maximally entangled state (16) and the singlet state (21). In fact, the singlet state is a MMES up to local basis transformations. The local spin basis rotation,
\[e^{-iS_{2}^{y}\frac{\pi}{2}}|k\rangle=(-1)^{k}|N-k\rangle, \tag{22}\]
transforms the singlet state to the maximally entangled state as
\[|{\rm MMES}\rangle=e^{-iS_{2}^{y}\frac{\pi}{2}}|s_{0},m_{0}\rangle. \tag{23}\]
From (23) we may deduce the operator that has the analogous relation as (18) for the MMES. Applying the operator \(e^{-iS_{2}^{y}\pi/2}\) to (18) and using (23) we have
\[e^{-iS_{2}^{y}\pi/2}\mathbf{s}_{\rm tot}^{2}e^{iS_{2}^{y}\pi/2}|{\rm MMES}\rangle= \mathbf{\bar{s}}_{\rm tot}^{2}|{\rm MMES}\rangle=0 \tag{24}\]
where
\[\mathbf{\bar{s}}_{\rm tot}^{2}=\frac{(S_{1}^{x}-S_{2}^{x})^{2}+(S_{1}^{y}+S_{2}^{y })^{2}+(S_{1}^{z}-S_{2}^{z})^{2}}{4} \tag{25}\]
has the same correlations in the spin observables as seen in the QND interactions [78; 79].
For a two qubit system, the maximally entangled state (16) is the Bell state
\[\frac{|0\rangle_{1}|0\rangle_{2}+|1\rangle_{1}|1\rangle_{2}}{\sqrt{2}}. \tag{26}\]
This state is an eigenstate of the operators \(\sigma_{1}^{z}-\sigma_{2}^{z}\) and \(\sigma_{1}^{x}-\sigma_{2}^{x}\) with zero eigenvalue.
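The transformation (23) is straightforward to verify numerically for a small ensemble by exponentiating \(S_{2}^{y}\) in the Fock basis; the sketch below uses an illustrative \(N=4\).

```python
import numpy as np
from scipy.linalg import expm

N = 4                                             # illustrative ensemble size
dim = N + 1
k = np.arange(dim)
raising = np.zeros((dim, dim))                    # matrix of e^dag g, Fock basis
raising[k[:-1] + 1, k[:-1]] = np.sqrt((k[:-1] + 1) * (N - k[:-1]))
Sy = -1j * raising + 1j * raising.T               # S^y of Eq. (1)

# Singlet state (21) and MMES (16) in the |k1, k2> product basis
singlet = np.zeros(dim**2, dtype=complex)
mmes = np.zeros(dim**2, dtype=complex)
for kk in range(dim):
    singlet[kk * dim + (N - kk)] = (-1) ** kk / np.sqrt(dim)
    mmes[kk * dim + kk] = 1 / np.sqrt(dim)

# Local rotation exp(-i S_2^y pi / 2) acting on the second ensemble, Eq. (23)
U2 = np.kron(np.eye(dim), expm(-1j * Sy * np.pi / 2))
print(np.abs(np.vdot(mmes, U2 @ singlet)))        # -> 1.0, up to rounding
```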
## IV Deterministic preparation of maximally entangled state
As discussed in Sec. II, QND measurements can be used to entangle two different atomic ensembles or BECs. Depending on the photonic measurement outcomes, the state of the BECs collapses onto different entangled states in general (10). For instance, an initial state \(|\psi_{0}\rangle\) is collapsed by the measurement (13) as
\[\Pi_{\Delta}^{(z)}|\psi_{0}\rangle=\sum_{k=0}^{N-\Delta}\psi_{k}^{+}|k+\Delta \rangle|k\rangle+\sum_{k^{\prime}=\Delta}^{N}\psi_{k^{\prime}}^{-}|k^{\prime} -\Delta\rangle|k^{\prime}\rangle \tag{27}\]
where the coefficients in (27),
\[\psi_{k}^{+} =\frac{1}{2^{\delta_{\Delta}}}\langle k+\Delta|\langle k|\psi_{ 0}\rangle\] \[\psi_{k}^{-} =\frac{(-1)^{(1-\delta_{\Delta})n_{d}}}{2^{\delta_{\Delta}}} \langle k-\Delta|\langle k|\psi_{0}\rangle, \tag{28}\]
which is entangled for a particular measurement outcome \(\Delta\). It is however not a MMES due to the amplitudes \(\psi_{k}^{\pm}\) not necessarily being of equal magnitude, and the difference \(\Delta\) between the Fock states of the two BECs. Our aim now will be to devise a protocol such that the MMES (16) can be prepared deterministically, using quantum measurements which are inherently random.
### Basic idea
To gain some intuition about the protocol that we will introduce later, let us introduce some basic properties of the QND measurements and the MMES.
The MMES is a unique state that is an eigenstate of both the measurement operators \(\Pi_{0}^{(z)}\) and \(\Pi_{0}^{(x)}\),
\[\Pi_{0}^{(z)}|\text{MMES}\rangle =|\text{MMES}\rangle\] \[\Pi_{0}^{(x)}|\text{MMES}\rangle =|\text{MMES}\rangle. \tag{29}\]
It then follows that an alternating sequence of such measurements has the \(|\text{MMES}\rangle\) as an eigenstate
\[(\Pi_{0}^{(x)}\Pi_{0}^{(z)})^{M}|\text{MMES}\rangle=|\text{MMES}\rangle. \tag{30}\]
Due to the unique nature of the MMES satisfying (29), the QND measurements (13) applied alternately on an arbitrary state \(|\psi_{0}\rangle\) converges to the MMES (16),
\[(\Pi_{0}^{(x)}\Pi_{0}^{(z)})^{M}|\psi_{0}\rangle\xrightarrow{M\to \infty}|\text{MMES}\rangle. \tag{31}\]
According to (29), since the MMES is an eigenstate of both the \(\Pi_{0}^{(z)}\) and \(\Pi_{0}^{(x)}\) measurement operators, once the state \(|\text{MMES}\rangle\) is obtained, further application of the measurement operators does not change the state. This is in fact a unique state, for the same reasons that the singlet state is the unique total-spin-zero state for two \(s_{l}=N/2\) spins. Therefore it is a fixed point of the evolution. The MMES is obtained for the QND measurement (13) corresponding to the outcome \(\Delta=0\). However, Eq. (31) does not constitute a physically realizable protocol because the \(\Delta=0\) measurement outcome occurs according to Born's probability rule and, due to the randomness of quantum measurements, we cannot guarantee that only the \(\Delta=0\) outcome will be obtained.
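The idealized convergence (31) is easy to illustrate numerically for small \(N\): alternating the \(\Delta=0\) projections in the \(z\) and \(x\) bases (the latter obtained through the basis rotation (14) with \(\theta=\pi/2\)) drives an initial product of spin coherent states towards the MMES. The sketch below only demonstrates this fixed-point behaviour and ignores the measurement randomness addressed by the adaptive protocol.

```python
import numpy as np
from scipy.linalg import expm
from math import comb

N = 4                                             # illustrative ensemble size
dim = N + 1
k = np.arange(dim)
raising = np.zeros((dim, dim))
raising[k[:-1] + 1, k[:-1]] = np.sqrt((k[:-1] + 1) * (N - k[:-1]))
Sy = -1j * raising + 1j * raising.T

# Pi_0^(z): projector onto k1 = k2, and Pi_0^(x) via Eqs. (14)-(15)
Pz = np.zeros((dim**2, dim**2))
for kk in range(dim):
    Pz[kk * dim + kk, kk * dim + kk] = 1.0
U = np.kron(expm(-1j * Sy * np.pi / 4), expm(-1j * Sy * np.pi / 4))
Px = U @ Pz @ U.conj().T

# MMES (16) and an initial product of x-polarized spin coherent states
mmes = np.zeros(dim**2)
mmes[:: dim + 1] = 1 / np.sqrt(dim)
cs = np.sqrt([comb(N, i) for i in range(dim)]) / 2 ** (N / 2)
psi = np.kron(cs, cs).astype(complex)

for step in range(12):                            # alternating Delta = 0 projections
    psi = Px @ (Pz @ psi)
    psi /= np.linalg.norm(psi)
    print(step, np.abs(np.vdot(mmes, psi)) ** 2)  # fidelity with the MMES approaches 1
```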
In order to overcome the randomness of quantum measurements and make a deterministic scheme, we use an adaptive strategy. Our scheme involves applying a unitary transformation to the state in the event that a \(\Delta\neq 0\) is obtained, and repeating the measurements many times until the desired \(\Delta=0\) outcome is obtained. The protocol is deterministic in the sense that eventually a measurement sequence will always end up with the \(\Delta=0\) outcome. The adaptive unitary is chosen such as to maximize the probability of obtaining the \(\Delta=0\) outcome in the next step. Our approach can be considered a special case of the measurement-based imaginary time evolution protocol proposed in Ref. [84; 85].
### Protocol
Here we more concretely describe the full procedure for deterministic preparation of the MMES using sequential QND measurements performed in \(z\) and \(x\) basis.
We define the "repeat-until-success" adaptive QND scheme which applies a sequence of QND measurements (13) and unitary operators until the measurement outcome \(\Delta=0\) is obtained as
\[T_{\vec{\Delta}}^{(z)} =\prod_{j=1}^{L}U_{\Delta_{j}}^{(z)}\Pi_{\Delta_{j}}^{(z)}\] \[=\Pi_{0}^{(z)}U_{\Delta_{L-1}}^{(z)}\Pi_{\Delta_{L-1}}^{(z)} \ldots U_{\Delta_{1}}^{(z)}\Pi_{\Delta_{1}}^{(z)}, \tag{32}\]
where \(\Delta_{L}=0\) and \(U_{0}^{(z)}=I\). A particular repeat-until-success measurement sequence is labeled according to the notation,
\[\vec{\Delta}=(\Delta_{1},\Delta_{2}\ldots\Delta_{L}). \tag{33}\]
In order to make the state converge towards the MMES, we aim to correct those projections with \(\Delta\neq 0\) through a unitary \(U_{\Delta}^{(z)}\) that ensures the convergence
\[\Pi_{0}^{(z)}|\tilde{\psi}_{\vec{\Delta}}\rangle=|\tilde{\psi}_{\vec{\Delta}}\rangle \tag{34}\]
where the unnormalized state after the repeat-until-success sequence is
\[|\tilde{\psi}_{\vec{\Delta}}\rangle=T_{\vec{\Delta}}^{(z)}|\psi_{0}\rangle. \tag{35}\]
Then, analogously to (31), we replace each of the projectors in the \(z\) and \(x\) bases with the measurement sequences (32), such that
\[|\tilde{\psi}_{\vec{\Delta}}^{f}\rangle=\prod_{r=1}^{M}(T_{\vec{\Delta}_{r}^{x}}^{(x)}T_{\vec{\Delta}_{r}^{z}}^{(z)})|\psi_{0}\rangle\xrightarrow{M\to\infty}|\text{MMES}\rangle, \tag{36}\]
where the product is evaluated in reverse order such that \(r=1\) is applied first. The full sequence for the adaptive sequential QND measurements is written
\[\vec{\vec{\Delta}}=(\vec{\Delta}_{1}^{z},\vec{\Delta}_{1}^{x},\vec{\Delta}_{2}^{z},\vec{\Delta}_{2}^{x},\ldots,\vec{\Delta}_{M}^{z},\vec{\Delta}_{M}^{x}). \tag{37}\]
Figure 2: Protocol for obtaining the MMES. A “repeat-until-success” measurement sequence \(T_{\vec{\Delta}}^{(z)}\) is applied to an initial state, where a sequence of projective measurements \(\Pi_{\Delta}^{(z)}\) and adaptive unitary rotations are made until the \(\Delta=0\) result is obtained. The same repeat-until-success sequence is repeated in the \(x\) basis. The two sequences are repeated until convergence is attained, where both \(z\) and \(x\) measurements yield \(\Delta=0\) on the first measurement. This procedure converges to the MMES (16).
The two repeat-until-success sequences in the \(z\) and \(x\) basis are repeated until convergence is attained, defined as obtaining the outcome \(\Delta=0\) for the first measurement in each repeat-until-success sequence.
Here we summarize, for the sake of clarity, the entire protocol for preparing the MMES using the adaptive QND scheme (Fig. 2). The protocol follows the sequence:
1. Perform the repeat-until-success \(\Pi_{\Delta}^{(z)}\) QND measurement sequence in the \(z\) basis. If \(\Delta\neq 0\), then apply unitary \(U_{\Delta}^{(z)}\) as a correction and reapply \(\Pi_{\Delta}^{(z)}\) until the measurement outcome \(\Delta=0\) is obtained (32).
2. Do the same as step 1 in the \(x\) basis in order to converge towards the \(\Delta=0\) measurement outcome.
3. Repeat steps 1 and 2 until the outcome \(\Delta=0\) is obtained on the first measurement of both sequences for a satisfactory number of cycles (36).
The above sequence of adaptive QND measurements deterministically converges an arbitrary initial state to the MMES (23).
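The following self-contained sketch (again an illustration under simplifying assumptions, not the paper's own code) simulates the adaptive repeat-until-success protocol for a small atom number: the QND measurement is idealized as a projector onto the subspace with Fock-number difference \(|k_{1}-k_{2}|=\Delta\), outcomes are sampled according to Born's rule, and the correction is taken to be a collective \(y\) rotation of ensemble 1 by the angle \(\pi\Delta/N\) motivated in the next subsection; its \(x\)-basis analogue is obtained here by conjugation with the basis change, and the sign of the rotation is a choice made only for illustration.

```python
# Illustrative simulation of the adaptive repeat-until-success QND protocol (assumed
# idealized projectors and correction; not the authors' code).
import numpy as np
from scipy.linalg import expm
from scipy.special import comb

rng = np.random.default_rng(7)
N = 8
dim = N + 1
ks = np.arange(dim)

Jp = np.diag(np.sqrt((N - ks[:-1]) * (ks[:-1] + 1)), -1)   # collective spin raising operator
Jy = (Jp - Jp.T) / 2j
R = expm(-1j * (np.pi / 2) * Jy)                           # z -> x Fock-basis change
R2 = np.kron(R, R)

# idealized projectors onto |k1 - k2| = Delta in the z basis, and their x-basis versions
k1, k2 = np.repeat(ks, dim), np.tile(ks, dim)
Pz = [np.diag((np.abs(k1 - k2) == D).astype(float)) for D in range(dim)]
Px = [R2 @ P @ R2.conj().T for P in Pz]

def measure(psi, projectors):
    """Sample an outcome Delta with Born probabilities and collapse the state."""
    probs = np.clip([np.real(psi.conj() @ (P @ psi)) for P in projectors], 0.0, None)
    D = int(rng.choice(len(projectors), p=probs / probs.sum()))
    psi = projectors[D] @ psi
    return D, psi / np.linalg.norm(psi)

def correction(D, basis):
    """Adaptive unitary for outcome Delta: rotate ensemble 1 by theta = pi*Delta/N about y."""
    U = np.kron(expm(1j * (np.pi * D / N) * Jy), np.eye(dim))   # sign chosen for illustration
    return U if basis == "z" else R2 @ U @ R2.conj().T

def repeat_until_success(psi, basis):
    projectors = Pz if basis == "z" else Px
    D, psi = measure(psi, projectors)
    first = D
    while D != 0:
        psi = correction(D, basis) @ psi
        D, psi = measure(psi, projectors)
    return psi, first

def entropy(psi):
    p = np.linalg.svd(psi.reshape(dim, dim), compute_uv=False) ** 2
    p = p[p > 1e-12]
    return float(-(p * np.log2(p)).sum())

# S^x-polarized initial state, cf. Eq. (46)
amp = np.sqrt(comb(N, ks)) / 2 ** (N / 2)
psi = np.kron(amp, amp).astype(complex)

print("target entanglement log2(N+1) =", round(np.log2(N + 1), 3))
for r in range(1, 7):
    psi, dz = repeat_until_success(psi, "z")
    psi, dx = repeat_until_success(psi, "x")
    print(f"round {r}: first z/x outcomes {dz}/{dx}, entanglement {entropy(psi):.3f}")
```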
### The adaptive unitary
In this section we discuss the choice of unitary rotation that is employed in the repeat-until-success sequence. There is in fact no unique choice for the adaptive unitary, and we take advantage of this freedom to choose a convenient form with a simple experimental implementation. In order to motivate our choice of the unitary rotation, we first analyze the state
\[|\tilde{\psi}_{\Delta}^{c}\rangle=U_{\Delta}^{(z)}\Pi_{\Delta}^{(z)}|\psi_{0}\rangle. \tag{38}\]
The main criterion for the unitary correction is that it maximizes the probability that \(\Delta=0\) is obtained in the next measurement. As may be seen from the measurement operator (13), there are two possible outcomes which occur randomly depending on the detected photon count \(n_{d}\). We assume that \(n_{d}\) is not measurable, since this would require single-photon resolution of a bright laser beam, which is experimentally challenging. In order to overcome this, we choose a unitary correction that rotates the state such that it has a significant overlap with the \(\Delta=0\) sector regardless of the random outcome of \(n_{d}\) in the QND measurements (13).
We choose a unitary transformation that is based on a spin rotation
\[U_{\Delta}^{(z)}=e^{iS_{1}^{\Psi}\frac{\theta_{\Delta}}{2}}\otimes I_{2}. \tag{39}\]
We now require a relationship between the measurement outcome \(\Delta\) and the corresponding rotation angle \(\theta\). The adaptive unitary (39) changes the QND-measured initial state (27) as
\[U_{\Delta}^{(z)}\Pi_{\Delta}^{(z)}|\psi_{0}\rangle=\sum_{k=0}^{N-\Delta}\sum_{k^{\prime}=0}^{N}\psi_{k}^{+}\langle k^{\prime}|e^{iS_{1}^{\Psi}\frac{\theta_{\Delta}}{2}}|k+\Delta\rangle|k^{\prime}\rangle_{1}|k\rangle_{2}+\sum_{k=\Delta}^{N}\sum_{k^{\prime}=0}^{N}\psi_{k}^{-}\langle k^{\prime}|e^{iS_{1}^{\Psi}\frac{\theta_{\Delta}}{2}}|k-\Delta\rangle|k^{\prime}\rangle_{1}|k\rangle_{2}. \tag{40}\]
We see that the modified state (40) involves matrix elements of the unitary rotation \(e^{iS^{\Psi}\theta/2}\).
In order to maximize the probability of obtaining the outcome \(\Delta=0\) in the next measurement, we must perform a rotation that equalizes the Fock state numbers of the two BECs. Concretely, we must maximize the amplitudes of the terms with \(k^{\prime}=k\) in (40), i.e., choose \(\theta\) such that the matrix elements \(\langle k|e^{iS^{\Psi}\theta/2}|k\pm\Delta\rangle\) are large (see Appendix A for an explicit expression of the matrix elements).
Fig. 3(a),(b) shows the amplitude of the matrix element \(\langle k+\Delta|e^{iS^{\Psi}\theta/2}|k\rangle\) for \(k=0\) and \(k=1\), respectively. We can see that, for a particular outcome \(\Delta\), the largest amplitudes occur for a unitary rotation angle near the curve
\[\theta_{\Delta}\propto\frac{\Delta}{N}. \tag{41}\]
We see that as \(k\) increases in Fig. 3(a),(b), the region where the matrix elements have a significant magnitude broadens.
Figure 3: Choice of optimum angle of unitary transformation: Plot of the matrix element \(\langle k+\Delta|e^{iS^{\Psi}\frac{\theta}{2}}|k\rangle\) given in (A1) as a function of the angle of unitary rotation \(\theta\) in (39) with the measurement outcome \(\Delta\) in (33) for (a) \(k=0\), (b) \(k=1\). Total number of the atoms is \(N=150\), (c) Variation of the fidelity (42) with the angle of unitary rotation (39) in the adaptive QND measurement outcomes, \(N=1\). \(\theta_{\Delta}^{\rm max}\) represents the angle of unitary rotation that maximizes the fidelity (42) for a particular measurement outcome \(\Delta\), (d) Plot of maximized angles of unitary rotation for different measurement outcomes (38), \(N=10\). The dashed line in (a),(b),(d) depicts the optimized choice of unitary rotation that maximizes the fidelity, which is fitted with the line \(\theta_{\Delta}^{\rm opt}=\pi\frac{\Delta}{N}\).
To find the proportionality constant in (41), we analyze the overlap of the transformed state with the MMES. The fidelity of the normalized state (38) with the MMES (16) is calculated after the first QND measurement as
\[f=\frac{|\langle\text{MMES}|\tilde{\psi}^{c}_{\Delta}\rangle|^{2}}{\langle\tilde{\psi}^{c}_{\Delta}|\tilde{\psi}^{c}_{\Delta}\rangle}. \tag{42}\]
Fig. 3(c) shows the variation of the fidelity of the state when the angle of unitary rotation is varied. We can see that the fidelity is maximal for a particular angle of unitary rotation \(\theta^{\text{max}}_{\Delta}\). It is clear that there is a unique choice of angle that maximizes the fidelity.
Fig. 3(d) shows the fidelity-maximizing angle of unitary rotation for each measurement outcome; we see that the maxima occur near the line
\[\theta^{\text{opt}}_{\Delta}=\pi\frac{\Delta}{N}. \tag{43}\]
This corrects the state (38) in such a way that it has a large overlap with the MMES state (16) in the next round of measurement. We note that it is possible to further improve upon the choice (43), but we find it to be a simple yet effective choice that works for all \(N\).
## V Performance of the adaptive QND scheme
To demonstrate that our protocol indeed prepares the MMES, we have performed a numerical analysis of its performance.
### Convergence to desired measurements
We first examine the probability distribution of the state after one QND measurement and correction step (35) in the \(z\) basis according to the protocol. The probability of a particular sequence is defined by,
\[p_{\vec{\Delta}} = \langle\tilde{\psi}_{\vec{\Delta}}|\tilde{\psi}_{\vec{\Delta}}\rangle \tag{44}\] \[= \langle\psi_{0}|T^{(z)\dagger}_{\vec{\Delta}}T^{(z)}_{\vec{ \Delta}}|\psi_{0}\rangle\]
where the normalized state (35) of the protocol is given by,
\[|\psi_{\vec{\Delta}}\rangle=\frac{|\tilde{\psi}_{\vec{\Delta}}\rangle}{\sqrt{ \langle\tilde{\psi}_{\vec{\Delta}}|\tilde{\psi}_{\vec{\Delta}}\rangle}}. \tag{45}\]
We consider the initial state of the two atomic ensembles to be the \(S^{x}\)-polarized state,
\[|\psi_{0}\rangle = \big{|}\frac{\pi}{2},0\big{\rangle}\big{\rangle}_{1}\big{|}\frac {\pi}{2},0\big{\rangle}\big{\rangle}_{2} \tag{46}\] \[= \frac{1}{2^{N}}\sum_{k_{1},k_{2}=0}^{N}\sqrt{\binom{N}{k_{1}} \binom{N}{k_{2}}}|k_{1},k_{2}\rangle.\]
The operator \(T^{(z)}_{\vec{\Delta}}\) applied to the initial state (46) produces correlations between the BECs in the \(z\) basis. In the case of obtaining the \(\Delta=0\) outcome on the first measurement, the resulting state is
\[\Pi^{z}_{0}|\psi_{0}\rangle = \frac{1}{2^{N}}\sum_{k=0}^{N}\binom{N}{k}|k,k\rangle \tag{47}\] \[= \sum_{k=0}^{N}\sqrt{p_{0}(k,k)}|k,k\rangle,\]
where \(p_{\Delta}(k_{1},k_{2})=|\langle k_{1},k_{2}|\Pi^{z}_{\Delta}|\psi_{0}\rangle |^{2}\) is the probability of the measured state for a particular outcome \(\Delta\) in the Fock basis. The outcome \(\Delta=0\) signifies the MMES-like correlations (16).
In general, for a random measurement sequence (36), the probability distribution in the Fock states is described as,
\[p_{\vec{\Delta}}(k_{1},k_{2})=|\langle k_{1},k_{2}|T^{(x)}_{\vec{\Delta}^{x}}T^{(z)}_{\vec{\Delta}^{z}}|\psi_{0}\rangle|^{2}. \tag{48}\]
In Fig. 4 we plot the probability distribution of the state (48) after performing QND measurement and correction operations in the \(z\) and \(x\) bases, respectively. In Fig. 4(a)-(d) we show the probability distributions for one measurement and unitary correction sequence in the \(z\) basis. In Fig. 4(a) we see that the probability distribution for \(\Delta^{z}=0\) is correlated along \(k_{1}=k_{2}\) in the Fock state space of the two ensembles and resembles the MMES distribution (16). It is however not the MMES, because of the binomial factors in (47). For the projection outcome \(\Delta^{z}=1\) in Fig. 4(b), we see an offset in the Fock state probability distribution, with \(k_{2}=k_{1}\pm\Delta^{z}\), according to the definition of the operator (13). By applying a unitary correction (39), the probability distribution is mostly restored along the diagonal, as shown in Fig. 4(c), such that in the subsequent measurement there is a high probability of obtaining \(\Delta^{z}=0\), as seen in Fig. 4(d).
Fig. 4(e)-(h) show the effect of another application of the sequence of QND measurements (32), where the basis is changed from \(z\) to \(x\). Correlations are further improved in Fig. 4(e) because of the suppression of the binomial factors (47). Unlike Fig. 4(b), we observe weaker offsets in the Fock state space, as is clear from Fig. 4(g)-(h). This is because, in subsequent QND measurements and corrections, stronger spin correlations are developed only for the MMES. Hence, the probability of obtaining the prepared state in other measurement outcomes, such as \(\Delta^{x}\neq 0\), becomes smaller, and the probability distribution converges solely towards that of the MMES in Fig. 4(f), which indicates that the scheme deterministically prepares the MMES from the initial state.
### Probability distribution
We now turn to the probability (44) of the various measurement outcomes in the protocol, shown in Fig. 5. We define the marginal probability distribution of obtaining the measurement outcomes in a particular sequence (33) as,
\[p_{\Delta_{L}}=\sum_{\Delta_{1},\Delta_{2}\ldots\Delta_{L-1}}p_{\vec{\Delta}}. \tag{49}\]
The marginal probability gives the total probability of obtaining an outcome \(\Delta_{L}\) in a sequence of \(L\) measurements (33), regardless of the previous measurement outcomes.
In Fig. 5 we have plotted the marginal probabilities at various levels of iteration for the different measurement sequences (32). Fig. 5(a) shows a single \(z\) basis measurement sequence. As we can see, the marginal probability for the initial state is generally largest for the outcome \(\Delta=0\), and the probability decreases for the other outcomes \(\Delta\neq 0\). The probability of obtaining the MMES increases with larger numbers of measurements (\(L=5\)) in a sequence. Fig. 5(b) shows an \(x\) basis sequence after an initial measurement sequence in the \(z\) basis whose final outcome was \(\Delta^{z}=0\). The probability of obtaining the MMES increases further compared to Fig. 5(a), and hence the probabilities corresponding to the measurement outcomes \(\Delta\neq 0\) are suppressed further. Similarly, Fig. 5(c)-(d) show another \(z\) and \(x\) basis measurement sequence, respectively (\(M=2\)), after the first \(z\) and \(x\) basis sequences (\(M=1\)); in this case the state converges to the MMES at a faster rate. The state is prepared in the measurement outcome \(\Delta=0\) with almost unit probability, and the other measurement outcomes \(\Delta\neq 0\) occur with low probability. Finally, Fig. 5(e)-(f) best describe the overall performance of the protocol: the probability of obtaining the outcome \(\Delta=0\) is dominant in the subsequent QND measurements in the \(z\) and \(x\) bases, respectively (\(M=3\)), and the MMES is prepared with nearly 100% success, with very little contribution from the other measurements because of the increasing spin correlations. This shows that the MMES can be prepared in a deterministic way.
### Success probability
In the previous section, we have seen that in the sequential adaptive QND measurements, the probability of obtaining \(\Delta\neq 0\) measurement outcomes is low and the system is prepared deterministically in the MMES with outcome \(\Delta=0\). We define the success probability for obtaining the MMES as the sum of the probabilities of all the measured states in a QND measurement sequence that ends with \(\Delta=0\):
\[p_{\rm suc}=p_{\Delta_{L}=0}=\sum_{\Delta_{L}\in\{0\}}\sum_{\Delta_{1},\Delta_{2}\ldots\Delta_{L-1}}p_{\vec{\Delta}}. \tag{50}\]
Fig. 6 shows the success probability of obtaining the MMES in our protocol for various levels of iteration. We
Figure 5: Marginal probability (49) for different measurement outcomes in sequential adaptive QND measurement (36) is shown for two atomic ensembles prepared in \(S^{x}\)-polarized state (46). Convergence is attained for measurement outcome \(\Delta=0\) after three rounds of iterations. A zoomed in plot is shown in the inset for better visibility of the probability values and its convergence in a sequence. The number of atoms in each ensemble is \(N=10\).
see that after a single \(z\) basis measurement sequence (e.g. the \(M=1\) case), the success probability increases monotonically, as expected, although it is not sufficient to drive the state towards a perfect MMES, as other measurement outcomes are still possible (see also Fig. 5(a)). Another measurement sequence in the \(x\) basis leads to enhanced spin correlations and an increased probability of obtaining the \(\Delta=0\) outcome, and hence a better success probability. Similarly, in the next round of measurements in the \(z\) and \(x\) bases, i.e. \(M=2\), near-unit success probability is achieved. After three rounds of measurements (\(M=3\)), the success probability of obtaining the MMES is close to unity. The convergence to unit probability is shown in the inset for better clarity.
### Fidelity calculation
Finally, we calculate the fidelity of the final state obtained from the protocol. The fidelity of the normalized state (36) with respect to the MMES in an adaptive QND measurement sequence (37) is calculated as
\[F_{\vec{\Delta}}=\frac{|\langle\text{MMES}|\tilde{\psi}_{\vec{\Delta}}^{f} \rangle|^{2}}{\langle\tilde{\psi}_{\vec{\Delta}}^{f}|\tilde{\psi}_{\vec{\Delta }}^{f}\rangle}. \tag{51}\]
We also define the fidelity averaged over all possible outcomes; this average fidelity is calculated as
\[F_{\text{avg}}=\sum_{\vec{\vec{\Delta}}}p_{\vec{\Delta}}F_{\vec{\Delta}}=\sum_{\vec{\vec{\Delta}}}|\langle\text{MMES}|\prod_{r=1}^{M}(T_{\vec{\Delta}_{r}^{x}}^{(x)}T_{\vec{\Delta}_{r}^{z}}^{(z)})|\psi_{0}\rangle|^{2}, \tag{52}\]
where the probability of a state in a particular sequence is
\[p_{\vec{\Delta}}=\langle\tilde{\psi}_{\vec{\Delta}}^{f}|\tilde{\psi}_{\vec{ \Delta}}^{f}\rangle. \tag{53}\]
Fig. 7 shows the average fidelity for obtaining the MMES with our protocol (36). In the first \(z\) basis measurement, the average fidelity is low, and it increases with the number of measurements made in a sequence. An \(x\) basis measurement sequence following an initial \(z\) basis sequence that ends in the outcome \(\Delta^{z}=0\) improves the average fidelity, as the probability of obtaining the MMES increases. In the next rounds of measurements in the \(z\) and \(x\) bases, i.e. \(M=2,3\), the average fidelity increases to unity, implying that only the MMES is prepared.
## VI Summary and conclusions
In this paper we have introduced an adaptive QND scheme to generate the MMES between two atomic ensembles. The state is equivalent, up to a local basis transform, to a singlet state formed from two macroscopic spins with total angular momentum zero. Using the basic properties of the singlet state, we have proposed a protocol that can be implemented using QND measurements with adaptive unitary corrections and that converges towards the MMES in a deterministic way. Our scheme is experimentally viable in the sense that it does not use complex operations such as transformations on individual atoms, and only involves collective spin operations, projective measurements, and local unitary rotations. In order to check how efficiently the scheme converges the system towards the MMES, we have calculated the fidelity and the success probability of achieving the target state after
Figure 6: Success probability (50) for obtaining the MMES after sequential adaptive QND measurement (36) for \(M=1,2,3\) is plotted. It shows the convergence to the desired state after each measurement in the \(z\) and \(x\) basis. A zoomed in plot is shown for \(M=3\) in the inset. The number of atoms in each ensemble is \(N=10\).
Figure 7: Average fidelity (52) of the initial state (46) for different measurement outcomes is calculated in adaptive QND measurement (36) for \(M=1,2,3\). Convergence is attained after three rounds of measurements. A zoomed in plot is shown for \(M=3\) in the inset. The number of atoms in each ensemble is \(N=10\).
multiple rounds of measurements and corrections in a sequence. We observe that the probability and fidelity of obtaining the desired state increase with subsequent measurements. We have also checked the probability distribution of the measured state in Fock space and confirmed that it matches the spin correlations of the MMES.
Maximally entangled states find a number of important applications in quantum information tasks, as they serve as resource states for various quantum protocols. In Ref. [61], generation of two-mode squeezed states (TMSS) was demonstrated in two separate gas cells, in the Holstein-Primakoff, short-interaction-time regime. Here it is important to understand the difference between the TMSS and the MMES. The amount of entanglement in a TMSS, as calculated by the von Neumann entropy, is \(\cosh^{2}r\log_{2}(\cosh^{2}r)-\sinh^{2}r\log_{2}(\sinh^{2}r)\)[68; 82], where \(r\) is the squeezing parameter. Typically the squeezing parameter is in the region of \(r\approx 1\), hence the amount of entanglement is of order unity [29; 61]. Meanwhile, the value for a MMES between two ensembles of \(N\) atoms each is \(\log_{2}(N+1)\)[59]. This illustrates that the MMES possesses much more entanglement than the TMSS. Moreover, in the MMES, arbitrary linear combinations of spin observables show correlations (or anti-correlations) [82], while in the TMSS only a few spin observables are correlated (or anti-correlated). Our work provides a simple yet powerful method for producing a MMES, and improves upon previous methods [74; 86], which rely upon postselection. In addition, we have not performed any approximation of the spin variables in our calculations and have treated the spins in an exact way. The protocol works regardless of the initial state, but we have considered the state that has the largest fidelity with the MMES, namely two spin coherent states polarized in the \(x\)-direction.
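As a quick numerical illustration of the comparison above (using only the two expressions quoted in the text), the following snippet evaluates both entanglement measures:

```python
# Compare the TMSS and MMES entanglement values quoted above (illustrative only).
import numpy as np

def tmss_entanglement(r):
    c2, s2 = np.cosh(r) ** 2, np.sinh(r) ** 2
    return c2 * np.log2(c2) - s2 * np.log2(s2)

r, N = 1.0, 150
print(f"TMSS (r = {r}): {tmss_entanglement(r):.2f} ebits")
print(f"MMES (N = {N}): {np.log2(N + 1):.2f} ebits")
```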
In this paper we have focused upon introducing the protocol in an idealized setting, and experimental imperfections such as decoherence were not considered. While we have not performed an explicit calculation including decoherence effects, we have made several studies of the effects of QND measurements under decoherence [87; 88; 89], which give us some expectations of the performance of the current scheme. We briefly comment on prospects in this regard. We first point out that QND measurements have been shown to be remarkably robust against photon loss. In a previous work, decoherence effects on QND measurements were studied, and it was shown that as long as the QND interaction times are in the short-time regime (as is the case for the measurements considered in this paper), decoherence on the atoms can be well controlled [87]. One technical challenge, however, is the photonic resolution at the detectors. The primary effect of an imperfect detector efficiency \(\eta\) is to reduce the average photon number \(\alpha\) by a factor \(\sqrt{\eta}\), i.e. \(\alpha\rightarrow\alpha\sqrt{\eta}\), and to modify the photon counts \(n_{c},n_{d}\) in Eq. (9) [88]. As a result, the number of photons at the detector outputs is masked, so that the actual value is never known, thereby introducing randomness in the readout. Eq. (9) nevertheless retains the same general form, meaning that we do not expect entanglement generation itself to be affected. It will, however, produce an effective noise in the estimate of the measurement outcome \(\Delta\), which can affect the convergence towards the MMES, such that the perfect stability seen in Fig. 5 will be disrupted. Another potential source of decoherence is the spontaneous emission of photons by the atoms. Since the QND interaction is a second-order effect, spontaneous emission from the excited state can be an eventual source of dephasing of the atomic states [87]. We note, however, that the MMES is by its nature not the state that is most sensitive to dephasing [59]. Other types of entangled states, such as Bell states composed of Schrödinger cat states, are much more sensitive to dephasing, and we expect such states to be poor candidates for experimental realization. On the other hand, the MMES that we consider here scales much better with the system size, and is a much more realistic prospect for experimental realization. In a controlled experiment, where the detuning is large, effects arising from spontaneous emission can be kept small. Another potential challenge is to control atom number fluctuations. From shot to shot, experiments will not have precisely the same atom numbers prepared in each trap, which may lead to additional errors in the readouts of particular quantities. In terms of entanglement generation, extending the theory to unequal atom numbers will simply result in entanglement corresponding to the smaller of the two atom numbers. The remaining atoms are then not involved in the entanglement, and we expect that a MMES can still be formed. In summary, we consider the most critical threat to the experimental realization of the MMES to be the atomic dephasing that the QND measurements themselves induce. However, this can be controlled, and with a careful choice of parameters we believe dephasing effects can be minimized.
###### Acknowledgements.
This work is supported by the National Natural Science Foundation of China (62071301); NYU-ECNU Institute of Physics at NYU Shanghai; the Joint Physics Research Institute Challenge Grant; the Science and Technology Commission of Shanghai Municipality (19XD1423000,22ZR1444600); the NYU Shanghai Boost Fund; the China Foreign Experts Program (G2021013002L); the NYU Shanghai Major-Grants Seed Fund; TAMKeen under the NYU Abu Dhabi Research Institute grant CG008.
## Appendix A Expression for transformation of Fock states through spin rotation
The Fock states \(|k\rangle\) are eigenstates of the \(S^{z}\) spin operator; one can transform them to an arbitrary direction \(|k\rangle^{(\theta,\phi)}\) as defined in Ref. [81], where the matrix elements of the \(S^{y}\) rotation are given by
\[\langle k^{\prime}|e^{-iS^{y}\theta/2}|k\rangle=\sqrt{k!(N-k)!k^{\prime}!(N-k^{\prime})!}\times\sum_{n=\max(k^{\prime}-k,0)}^{\min(k^{\prime},N-k)}\frac{(-1)^{n}\cos^{k^{\prime}-k+N-2n}(\theta/2)\sin^{2n+k-k^{\prime}}(\theta/2)}{(k^{\prime}-n)!\,(N-k-n)!\,n!\,(k-k^{\prime}+n)!} \tag{A1}\]
where \(|k\rangle=|k\rangle^{(z)}\).
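A direct transcription of this matrix element into code can be useful for reproducing Fig. 3; the sketch below (not part of the original paper) implements the sum above, with the last factorial in the denominator taken as \((k-k^{\prime}+n)!\) so that all factorial arguments are non-negative over the stated summation range, and checks that the resulting matrix is orthogonal, as expected for a real spin rotation.

```python
# Matrix element <k'| exp(-i S^y theta/2) |k> for two-mode Fock states of N atoms.
import numpy as np
from math import factorial, cos, sin

def rotation_matrix_element(kp, k, N, theta):
    pref = np.sqrt(float(factorial(k) * factorial(N - k) * factorial(kp) * factorial(N - kp)))
    total = 0.0
    for n in range(max(kp - k, 0), min(kp, N - k) + 1):
        num = (-1) ** n * cos(theta / 2) ** (kp - k + N - 2 * n) * sin(theta / 2) ** (2 * n + k - kp)
        den = factorial(kp - n) * factorial(N - k - n) * factorial(n) * factorial(k - kp + n)
        total += num / den
    return pref * total

# sanity check: for a real spin rotation, the (N+1)x(N+1) matrix should be orthogonal
N, theta = 6, 1.2
D = np.array([[rotation_matrix_element(kp, k, N, theta) for k in range(N + 1)]
              for kp in range(N + 1)])
print(np.allclose(D @ D.T, np.eye(N + 1)))   # expect True
```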
|
2308.15179 | Best performance and reliability for your time: budget-aware
search-based optimization of software model refactoring | Context: Software model optimization is a process that automatically
generates design alternatives, typically to enhance quantifiable non-functional
properties of software systems, such as performance and reliability.
Multi-objective evolutionary algorithms have shown to be effective in this
context for assisting the designer in identifying trade-offs between the
desired non-functional properties. Objective: In this work, we investigate the
effects of imposing a time budget to limit the search for design alternatives,
which inevitably affects the quality of the resulting alternatives. Method: The
effects of time budgets are analyzed by investigating both the quality of the
generated design alternatives and their structural features when varying the
budget and the genetic algorithm (NSGA-II, PESA2, SPEA2). This is achieved by
employing multi-objective quality indicators and a tree-based representation of
the search space. Results: The study reveals that the time budget significantly
affects the quality of Pareto fronts, especially for performance and
reliability. NSGA-II is the fastest algorithm, while PESA2 generates the
highest-quality solutions. The imposition of a time budget results in
structurally distinct models compared to those obtained without a budget,
indicating that the search process is influenced by both the budget and
algorithm selection. Conclusions: In software model optimization, imposing a
time budget can be effective in saving optimization time, but designers should
carefully consider the trade-off between time and solution quality in the
Pareto front, along with the structural characteristics of the generated
models. By making informed choices about the specific genetic algorithm,
designers can achieve different trade-offs. | J. Andres Diaz-Pace, Daniele Di Pompeo, Michele Tucci | 2023-08-29T10:01:29Z | http://arxiv.org/abs/2308.15179v1 | # Best performance and reliability for your time:
###### Abstract
**Context:** Software model optimization is a process that automatically generates design alternatives, typically to enhance quantifiable non-functional properties of software systems, such as performance and reliability. Multi-objective evolutionary algorithms have shown to be effective in this context for assisting the designer in identifying trade-offs between the desired non-functional properties.
**Objective:** In this work, we investigate the effects of imposing a time budget to limit the search for design alternatives, which inevitably affects the quality of the resulting alternatives.
**Method:** The effects of time budgets are analyzed by investigating both the quality of the generated design alternatives and their structural features when varying the budget and the genetic algorithm (NSGA-II, PESA2, SPEA2). This is achieved by employing multi-objective quality indicators and a tree-based representation of the search space.
**Results:** The study reveals that the time budget significantly affects the quality of Pareto fronts, especially for performance and reliability. NSGA-II is the fastest algorithm, while PESA2 generates the highest-quality solutions. The imposition of a time budget results in structurally distinct models compared to those obtained without a budget, indicating that the search process is influenced by both the budget and algorithm selection.
**Conclusions:** In software model optimization, imposing a time budget can be effective in saving optimization time, but designers should carefully consider the trade-off between time and solution quality in the Pareto front, along with the structural characteristics of the generated models. By making informed choices about the specific genetic algorithm, designers can achieve different trade-offs.
keywords: Multi-objective, Search-based Software Engineering, Performance, Reliability, Refactoring, Model-driven engineering +
Footnote †: journal: Journal of Information and Software Technology
## 1 Introduction
Over the last decade, multi-objective optimization techniques have been successfully applied to many software engineering problems [1; 2; 3]. These techniques have proved effective on problems whose objectives can be expressed through quantifiable metrics. Problems related to non-functional properties (_e.g.,_ performance and reliability) undoubtedly fit into this category, as witnessed by the literature in this domain [4; 5; 6]. Most approaches have been based on evolutionary algorithms [7; 8] that allow exploring the search space by combining different solutions or transformations.
One of the main drawbacks of applying optimization techniques to improve non-functional properties is that the search for alternative solutions requires a considerable amount of computational resources, notably time. Whenever a new solution is generated, search algorithms have to evaluate it. This means computing quantifiable indices by solving non-functional models, either analytically or by simulating them. Due to the complexity of such models, it is difficult to further improve the efficiency of their evaluation. Therefore, the time required to search for better solutions is negatively impacted by the evaluation phase. When performed on realistic models, this type of optimization can even take days [9; 10; 11], which poses an obstacle to its adoption in practical design and development scenarios.
To address the aforementioned challenge, the search for better solutions can be constrained by optimization budgets of varying complexity. A simple strategy is to set a time budget that interrupts the search when the imposed time has expired [9]. However, choosing the right time budget is not straightforward. Time budgets that are too small heavily limit the exploration of the solution space, consequently hampering the quality of the computed Pareto fronts (_i.e.,_ the set of non-dominated solutions obtained at the end of the optimization). Conversely, larger time budgets may not be effective in saving enough optimization time, therefore defeating their purpose.
This paper extends our prior work [12] in which we investigated the impact of time budgets on the quality of generated design alternatives (intended as software models obtained as the outcome of the search process). This extension focuses on showing how a designer can find and evaluate a trade-off between the time spent on the search and the characteristics of the
obtained models. Consequently, the analysis of the effects of time budgets is elaborated by investigating both the quality of the generated design alternatives and their structural features. A novel aspect of this paper is that it analyzes and links the effects of the time budgets on the search process to the structural features of the resulting software models, which is a rather unexplored topic in the literature. Specifically, here we consider a multi-objective optimization process that aims at improving models through sequences of refactoring actions. These actions are intended to alter an initial model to maximize performance and reliability while minimizing the number of detected performance antipatterns1 and the cost of the refactoring itself.
Footnote 1: Performance antipatterns describe bad design practices that usually lead to performance degradation in a system.
Regarding the impact of the time budget on the quality of solutions, we answer the research questions:
* **RQ1**: Which algorithm performs better when limited by a time budget?
* **RQ2**: To what extent does the time budget affect the quality of Pareto fronts?
In particular, RQ1 and RQ2 assess the differences in the quality of the Pareto fronts when varying the budget and the algorithm. To do so, we employ hypothesis testing and quality indicators, such as the Hypervolume (HV) [13; 14] and Inverse Generational Distance (IGD+) [15]2. The HV measures the amount of volume in the solution space that is covered by a computed Pareto front (\(PF^{c}\)) with respect to a reference Pareto front (\(PF^{ref}\)), while the IGD+ averages, over the solutions of \(PF^{ref}\), the distance to the nearest solution of \(PF^{c}\). In our case, \(PF^{ref}\) is a Pareto front obtained without a time budget but terminated after 100 genetic evolutions, which represents a baseline against which we compare the results obtained when imposing a budget.
Footnote 2: IGD+ extends the analyses performed in our previous study ([12]).
To understand the impact of time budgets on the structural features of the models, we answer two additional research questions3:
Footnote 3: RQ3 and RQ4 are new contributions of this study.
* **RQ3**: Do different budgets generate different software models?
* **RQ4**: How do the sequences of refactorings look like when using different budgets?
In RQ3 and RQ4, we analyze the impact of time budgets on the models produced by the optimization, in terms of the sequences of refactorings resulting from the imposition of different budgets4. This is a rather unexplored aspect in model-based refactoring optimization. For this task, we rely on a tree-based representation of the search space that exposes similarities and differences with respect to \(PF^{ref}\).
Footnote 3: RQ3 and RQ4 are new contributions of this study.
In order to answer all our research questions, we designed an experimental study with two model-based benchmarks, namely: Train Ticket Booking Service [16], and CoCoME [17]. Also, we compare three genetic algorithms, _i.e.,_ NSGA-II[18], SPEA2[19], and PESA2[20], to identify whether any of them performs better when the search is limited by time budgets.
Our results show that the time budget heavily impacts the quality of Pareto fronts, particularly for performance and reliability. Furthermore, we show that slightly increasing the budget results in only small improvements in the quality of solutions in the Pareto fronts. On the contrary, the choice of algorithm appears to be critical. In most cases, NSGA-II is the fastest among the analyzed algorithms, while PESA2 is the algorithm that generates the solutions with the highest quality. Also, SPEA2 exhibits worse time performance than NSGA-II and PESA2. Our findings suggest that, when the designer is in need of a faster algorithm, NSGA-II should be preferred, while PESA2 can deliver better solutions in longer, but still reasonable, time. We observe that imposing a time budget forces the algorithms to generate models that are structurally different from those obtained without a budget. In addition, only a small fraction of the models induced by the budget can also be found in \(PF^{ref}\), indicating that the search process is affected by the selected budget and algorithm. Overall, the experiments show that the designer should weigh time versus quality of solutions in the Pareto front, but also consider the structural characteristics of the models when inspecting results.
The remaining of the paper is structured as follows: Section 2 reports related work, Section 3 introduces background concepts, Section 4 presents the design of this study. Section 5 describes the two case studies employed in our analysis. Research questions and results are presented and discussed in Section 6. Threats to validity are covered in Section 7. Finally, Section 8 gives the conclusions and outlines future work.
## 2 Related Work
The idea of limiting the search using additional criteria has gotten attention within the search-based community [21]. Often, it is unfeasible to use a "formal" stopping criterion in real-world multi-objective problems for which a mathematical formulation might be hard to define [22]. To deal with this limitation, some proposals for stopping criteria are based on quality indicators [23; 24], while others are based on statistical testing of different metrics [25; 26]. To the best of our knowledge, there are no studies that investigate the usage of the aforementioned search budgets in refactoring optimization of model-based software. In the following, we report on studies about multi-objective optimization of various non-functional properties of software models (_e.g.,_ reliability, and energy [27; 28]), which have different degrees of freedom with respect to modifying the models (_e.g.,_ service selection [29]).
A popular Architecture Description Language (or ADL) for performance optimization is the Palladio Component Model
(PCM) [30], which supports the analysis of different quality attributes on PCM architectures. Ni et al. [31] compared the ability of two multi-objective optimization approaches to improve quality attributes where randomized search rules were applied to improve the PCM architectures. The study indirectly shows that the multi-objective optimization problem at the model level is still an open challenge.
Koziolek et al. [32] presented PerOpteryx, a performance-oriented multi-objective optimization approach that supports PCM architectures. In PerOpteryx, the optimization process is guided by architectural tactics referring to component reallocation and hardware. Besides, PerOpteryx and our study use Layered Queuing Networks (LQNs) as the performance modeling technique, and both rely on model transformations to map the architectural models to performance ones.
Rago et al. [33] proposed an extensible platform, called SQuAT, aimed at including flexibility in the definition of an architecture optimization problem. SQuAT exploits LQNs for performance evaluation and PerOpteryx tactics for architectural changes to optimize PCM architectures.
The works above rely on tactics to optimize PCM architectures, which do not strictly represent refactoring actions. Conversely, in our approach, we apply refactoring actions that change the structure of an architectural model while preserving its original behavior. Another difference is that we use UML as a standard modeling notation, instead of an ADL such as PCM.
Cortellessa and Di Pompeo [3] previously studied the sensitivity of multi-objective software model refactoring to configuration characteristics, where models are defined in a performance-oriented ADL called AEmilia. They also implemented a refactoring engine being able to change the structure of AEmilia architectural models. Moreover, they compared NSGA-II and SPEA2 in terms of the quality of the solutions in the Pareto front.
Etemaadi and Chaudron [34] presented an approach aimed at improving quality attributes of software architectures through genetic algorithms. The multi-objective optimization considers component-based architectures described with an ADL called AQOSA-IR [35]. The architectures can be evaluated by means of several techniques, such as LQNs and Fault Trees. The genetic algorithm considers the variation of designs (_e.g.,_ number of hardware nodes) as objectives of the fitness function.
Aleti et al. [4] proposed an approach for modeling and analyzing architectures expressed in the Architecture Analysis and Description Language (AADL). The authors also introduced a tool based on genetic algorithms for optimizing different quality attributes while varying the architecture deployment and the component redundancy. More recently, the GATSE project [36] supports quality-attribute exploration of AADL configurations, enabling the designer to focus on certain regions of the space and narrow down the search.
Unlike the aforementioned studies, we consider more complex model transformations in the form of refactoring actions and different target objectives for the fitness function. Besides, we investigate the impact of search budgets and the role of genetic algorithms using different searching policies in the context of software model refactoring optimization.
## 3 The multi-objective optimization approach
In this study we analyze the impact of search budgets on the refactoring of software models using three _Genetic Algorithms_: NSGA-II [18], SPEA2 [19], and PESA2 [20]. We chose these algorithms due to their different policies when exploring the solution space. For example, NSGA-II uses the knowledge of non-dominated sorting to generate Pareto frontiers, SPEA2 uses two archives to store computed Pareto fronts, and PESA2 uses the hyper-grid concept to compute Pareto fronts.
### The Refactoring Engine
The automated refactoring of UML models is a key point when evolutionary algorithms are employed in order to optimize non-functional properties of models. For the sake of full automation of our approach, we have implemented a refactoring engine that applies predefined refactoring actions on UML models.
Each solution produced by our evolutionary algorithm is a sequence of refactoring actions that, once applied to an initial model, leads to a model alternative that shows different non-functional properties. Since our refactoring actions are combined during the evolutionary approach, we exploit our engine to verify in advance whether a sequence of refactoring actions is feasible or not [37; 38].
Our refactoring actions are equipped with pre- and post-conditions. The pre-condition represents the model state required for enabling the action, whereas the post-condition represents the model state once the action has been applied. The approach extracts a refactoring action and adds it to the sequence. As soon as the action is selected, it randomly extracts a model element (_i.e.,_ the target element). Then, the refactoring engine checks the feasibility of the (partial) sequence of refactoring actions. When the latest added action makes the sequence unfeasible, the engine discards the action and replaces it with a new one. Our engine thus also reduces the number of invalid refactoring sequences, thereby reducing the computational time.
The refactoring actions employed in our study are briefly described below.
_Clone a Node (Clon)._ This action is aimed at introducing a replica of a Node. Adding a replica means that every deployed artifact and every connection of the original Node has to be in turn cloned. Stereotypes and their tagged values are cloned as well. The rationale of this action is to introduce a replica of a platform device with the aim of reducing its utilization.
_Move an Operation to a new Component deployed on a new Node (MO2N)._ This action is in charge of randomly selecting an operation and moving it to a new Component. All the elements related to the moving operation (_e.g.,_ links) will move as well. Since we adopt a multi-view model, and coherence among views has to be preserved, this action has to synchronize dynamic and deployment views. A lifeline for the newly created Component is added in the dynamic view, and messages
related to the moved operation are forwarded to it. In the deployment view, instead, a new Node, a new artifact, and related links are created. The rationale of this action is to lighten the load of the original Component and Node.
_Move an Operation to a Component (MO2C)._ This action is in charge of randomly selecting and transferring an Operation to an arbitrary existing target Component. The action consequently modifies each UML Use Case in which the Operation is involved. Sequence Diagrams are also updated to include a new lifeline representing the Component owning the Operation, but also to re-assign the messages invoking the operation to the newly created lifeline. The rationale of this action is quite similar to the previous refactoring action, but without adding a new UML Node to the model.
_Deploy a Component on a new Node (ReDe)._ This action simply modifies the deployment view by redeploying a Component to a newly created Node. In order to be consistent with the initial model, the new Node is connected with all other ones directly connected to the Node on which the target Component was originally deployed. The rationale of this action is to lighten the load of the original UML Node by transferring the load of the moving Component to a new UML Node.
### Objective
Our process, as depicted in Figure 1, optimizes software models through refactoring, with respect to four conflicting objectives: the average system performance (perfQ) [39], the reliability (reliability) of the software model [40], the number of performance antipatterns (#pas) detected in the model, and the cost of the refactoring actions (#changes) to generate the design alternative from the initial model [10].
_Average System Performance (perfQ)._ With this objective, we quantify the performance improvement (or detriment) between two models.
\[\mathtt{perfQ}(M)=\frac{1}{c}\sum_{j=1}^{c}p_{j}\cdot\frac{F_{j}-I_{j}}{F_{j}+ I_{j}}\]
where \(M\) is a model obtained by applying a refactoring solution to the initial model, \(F_{j}\) is the value of a performance index in \(M\), and \(I_{j}\) is the value of the same index on the initial model. \(p_{j}\in\{-1,1\}\) is a multiplying factor that takes the value: i) 1 if the \(j\)-th index has to be maximized (i.e., the higher the value, the better the performance), like the throughput; ii) \(-1\) if the \(j\)-th index has to be minimized (_i.e.,_ the smaller the value, the better the performance), like the response time. Furthermore, a single perfQ term for each performance index is computed as the normalized difference between the index value of a model alternative and that of the initial model. Finally, the global perfQ is computed as the average across the \(c\) performance indices considered in the performance analysis.
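A minimal sketch of how perfQ can be evaluated is shown below; the function name, data layout, and toy index values are assumptions made for illustration and are not taken from the authors' tool.

```python
# Illustrative perfQ computation: mean of p_j * (F_j - I_j) / (F_j + I_j).
import numpy as np

def perf_q(initial, refactored, to_maximize):
    """initial, refactored: same performance indices measured on the initial model and
    on a refactored alternative; to_maximize: True where larger is better (throughput),
    False where smaller is better (response time)."""
    I, F = np.asarray(initial, float), np.asarray(refactored, float)
    p = np.where(np.asarray(to_maximize), 1.0, -1.0)
    return float(np.mean(p * (F - I) / (F + I)))

# toy example: throughput improves, response time slightly worsens
print(perf_q(initial=[100.0, 0.20], refactored=[140.0, 0.22], to_maximize=[True, False]))
```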
_System Reliability (reliability)._ The reliability analysis model that we adopt here to quantify the reliability objective is based on [40]. The mean failure probability \(\theta_{S}\) of a software system \(S\) is defined by the following equation:
\[\theta_{S}=1-\sum_{j=1}^{K}p_{j}\left(\prod_{i=1}^{N}(1-\theta_{i})^{InvNr_{ij}}\cdot\prod_{l=1}^{L}(1-\psi_{l})^{MsgSize(l,j)}\right)\]
This model takes into account failure probabilities of components (\(\theta_{i}\)) and communication links (\(\psi_{l}\)), as well as the probability of a scenario to be executed (\(p_{j}\)). Such probabilities are combined to obtain the overall reliability on demand of the system (\(\theta_{S}\)), which represents how often the system is not expected to fail when its scenarios are invoked.
The model considers a system composed of \(N\) components and \(L\) communication links, whose behavior comprises \(K\) scenarios. The probability \(p_{j}\) of a scenario \(j\) being executed is multiplied by an expression that describes the probability that no component or link fails during the execution of the scenario. This expression is composed of two terms: \(\prod_{i=1}^{N}(1-\theta_{i})^{InvNr_{ij}}\), which is the probability that the involved components do not fail, each raised to the power of its number of invocations in the scenario (denoted by \(InvNr_{ij}\)), and \(\prod_{l=1}^{L}(1-\psi_{l})^{MsgSize(l,j)}\), which is the probability that the involved links do not fail, each raised to the power of the size of the messages traversing it in the scenario (denoted by \(MsgSize(l,j)\)).
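The reliability objective can be computed directly from the formula above; the following sketch is illustrative only, and the data layout (matrices for \(InvNr\) and \(MsgSize\)) and toy values are assumptions.

```python
# Illustrative reliability-on-demand computation (theta_S = 1 - returned value).
import numpy as np

def system_reliability(p, theta, psi, inv_nr, msg_size):
    """p: scenario execution probabilities, shape (K,)
    theta: component failure probabilities, shape (N,)
    psi: link failure probabilities, shape (L,)
    inv_nr[i, j]: invocations of component i in scenario j
    msg_size[l, j]: size of the messages crossing link l in scenario j"""
    p, theta, psi = map(np.asarray, (p, theta, psi))
    rel = 0.0
    for j in range(len(p)):
        comp_ok = np.prod((1.0 - theta) ** inv_nr[:, j])
        link_ok = np.prod((1.0 - psi) ** msg_size[:, j])
        rel += p[j] * comp_ok * link_ok
    return float(rel)

# toy example with 2 scenarios, 2 components, 1 link
inv_nr = np.array([[3, 1], [2, 4]])
msg_size = np.array([[10, 2]])
print(system_reliability([0.7, 0.3], [1e-3, 5e-4], [1e-4], inv_nr, msg_size))
```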
_Performance Antipatterns (#pas)._ A performance antipattern describes a bad design practice that might lead to performance degradation in a system. These textual descriptions were later translated into first-order logic (FOL) equations [41].
FOLs enable an automated comparison with thresholds in order to reveal the occurrences of a performance antipattern. The identification of such thresholds is a non-trivial task, and using
Figure 1: A graphical representation of the approach. It takes as input: the set of all the available refactoring actions (_Repository of refactoring actions_), and a _Software model_ (_i.e.,_ the subject model). The _Optimization Engine_ randomly selects and combines refactoring actions in order to generate a set of _Model Alternatives_, which are _Evaluated_ with respect to the objectives. Finally, the Optimization Engine produces a _Pareto front of Software Models_.
deterministic values may result in an excessively strict detection, where the smallest change in the value of a literal determines the occurrence of the antipattern. For these reasons, we use the fuzzy threshold concept [41], instead of detecting a performance antipattern in a deterministic way. By using fuzzy thresholds, we assign probabilities to the occurrences of antipatterns.
_Refactoring cost (#changes)._ This objective quantifies the distance of the design alternative obtained by applying refactoring actions to the initial one. The effort needed to perform a refactoring is quantified as the product between the _baseline refactoring factor_, which is associated to each refactoring action, and the _architectural weight_, which is associated to each model element on the basis of the number of connections to other elements in the model [10]. The overall #changes is obtained by summing the efforts of all refactoring actions contained in a solution.
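As a rough illustration of this objective (with hypothetical factor and weight values, since the actual ones depend on the specific refactoring actions and model elements), the cost of a solution can be computed as follows:

```python
# Illustrative #changes computation: sum over actions of
# baseline refactoring factor * architectural weight of the target element.
def changes_cost(sequence, baseline_factor, architectural_weight):
    """sequence: list of (action_type, target_element) pairs."""
    return sum(baseline_factor[action] * architectural_weight[target]
               for action, target in sequence)

# toy example with hypothetical factors and weights
baseline_factor = {"Clon": 1.80, "MO2N": 1.64, "MO2C": 1.45, "ReDe": 1.23}
architectural_weight = {"NodeA": 1.5, "CompB": 2.0}
print(changes_cost([("Clon", "NodeA"), ("MO2C", "CompB")],
                   baseline_factor, architectural_weight))
```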
### Quality indicators
Establishing the quality of a computed Pareto front is arduous, and it is actually an NP-hard problem [42]. Different quality estimators have been introduced, such as the Hypervolume (HV) [14; 13] and Inverse Generational Distance (IGD+) [15]. Each estimator measures a different quality aspect of a Pareto front.
Following the classification of Li and Yao [43], we employed two quality indicators falling in two categories: HV as the volume-based QI, and IGD+ as the distance-based QI.
The HV measures the amount of the volume of the solution space that a computed Pareto front (\(PF^{c}\)) covers with respect to a reference Pareto front (\(PF^{ref}\)), and it can assume values between 0 and 1. When \(HV=0\), it means that the \(PF^{c}\) is fully dominated by the \(PF^{ref}\), while \(HV=1\) means that each point within the \(PF^{c}\) is not dominated by any point within the \(PF^{ref}\). Therefore, the closer the HV is to 1, the higher the quality of the \(PF^{c}\).
The Inverse Generational Distance plus (IGD+) is a quality indicator to be minimized. It measures the distance from each solution in \(PF^{ref}\) to the nearest solution in \(PF^{c}\)[15].
In our evaluation, we use the indicators above to estimate the quality of the \(PF^{c}\) obtained with a search budget when compared to a \(PF^{ref}\) computed without budgets but terminated after 100 genetic evolutions.
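For reference, the two indicators can be sketched as follows for minimization objectives (maximized objectives such as perfQ and reliability can be negated first); the hypervolume here is estimated by Monte-Carlo sampling against a reference point, which differs from the normalized 0–1 variant used in the study and is meant only to illustrate the idea.

```python
# Illustrative IGD+ and Monte-Carlo hypervolume for minimization objectives.
import numpy as np

def igd_plus(pf_ref, pf_c):
    """Average, over reference points, of the IGD+ distance to the nearest computed point."""
    pf_ref, pf_c = np.asarray(pf_ref, float), np.asarray(pf_c, float)
    dists = []
    for z in pf_ref:
        diffs = np.maximum(pf_c - z, 0.0)          # only components where pf_c is worse count
        dists.append(np.min(np.sqrt((diffs ** 2).sum(axis=1))))
    return float(np.mean(dists))

def hypervolume_mc(pf_c, ref_point, n_samples=200_000, seed=0):
    """Monte-Carlo estimate of the volume dominated by pf_c and bounded by ref_point."""
    rng = np.random.default_rng(seed)
    pf_c = np.asarray(pf_c, float)
    ideal, ref_point = pf_c.min(axis=0), np.asarray(ref_point, float)
    samples = rng.uniform(ideal, ref_point, size=(n_samples, pf_c.shape[1]))
    dominated = np.zeros(n_samples, dtype=bool)
    for a in pf_c:
        dominated |= np.all(samples >= a, axis=1)
    return float(dominated.mean() * np.prod(ref_point - ideal))

# toy bi-objective example
pf_ref = [[0.0, 1.0], [0.5, 0.5], [1.0, 0.0]]
pf_c = [[0.2, 1.0], [0.6, 0.6], [1.1, 0.1]]
print(igd_plus(pf_ref, pf_c), hypervolume_mc(pf_c, ref_point=[1.2, 1.2]))
```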
## 4 Study Design
The goal of the study is to determine whether the imposition of a time-based search budget can hamper the quality of the resulting Pareto fronts in the context of a model-based multi-objective optimization. Additionally, we are interested in how different algorithms cope with the search budgets. To this end, we selected two case studies and ran a number of optimization experiments with time-based and QI-based budgets. We varied the budget limit between 15, 30, and 60 minutes, while considering the HV and IGD+ quality indicators. Moreover, for each search budget, we also ran three genetic algorithms: NSGA-II, SPEA2, and PESA2. These algorithms were chosen due to their different searching policies, as described in Section 3.
To account for the random nature of genetic algorithms [44], we ran the same experiment 30 times and computed the QIs for each resulting Pareto front (\(PF^{c}\)). Since \(PF^{ref}\) is unknown in our case studies, we computed the HV with respect to the best Pareto front obtained for each case study after running the algorithms for 100 genetic evolutions (_i.e._, without search budgets). The entire study consisted of 558 experiments that we performed on three AMD EPYC 7282 machines, each with 64 cores and 512GB of RAM.5
Footnote 5: Replication package: [https://github.com/SEALABQualityGroup/replication-package_2023_search_budgets](https://github.com/SEALABQualityGroup/replication-package_2023_search_budgets)
We followed the guidelines by Arcuri and Briand [45] to compare the experiments against each other. Therefore, we applied the Mann-Whitney U non-parametric statistical test (also referred to as the Wilcoxon rank-sum test) [46], with the null hypothesis that two experiments do not differ significantly. Two experiments are considered to be significantly different on the basis of their quality indicator values if the test computes a p-value smaller than \(\alpha=0.05\). To assess the magnitude of the difference, we used the Vargha-Delaney \(\hat{A}_{12}\)[47], a standardized non-parametric effect size measure. \(\hat{A}_{12}\) can take values between 0 and 1, and a value of 0.5 indicates that the two experiments are equivalent. The closer the \(\hat{A}_{12}\) value gets to 0 or 1, the larger the effect size. The interpretation of the magnitude as being negligible, small, medium, or large is performed according to the thresholds 0.147, 0.33, and 0.474, respectively [48].
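The statistical procedure can be sketched as follows; the helper names and toy data are assumptions, and the magnitude thresholds are applied here to the scaled statistic \(|2\hat{A}_{12}-1|\), which is the common convention for the threshold values quoted above.

```python
# Illustrative Mann-Whitney U test plus Vargha-Delaney A12 effect size.
import numpy as np
from scipy.stats import mannwhitneyu

def vargha_delaney_a12(x, y):
    """A12 = P(X > Y) + 0.5 * P(X = Y), estimated by counting over all pairs."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    greater = (x[:, None] > y[None, :]).sum()
    ties = (x[:, None] == y[None, :]).sum()
    return (greater + 0.5 * ties) / (x.size * y.size)

def magnitude(a12):
    # thresholds applied to the scaled statistic |2*A12 - 1| (common convention)
    d = abs(2 * a12 - 1)
    return ("negligible" if d < 0.147 else "small" if d < 0.33
            else "medium" if d < 0.474 else "large")

# toy data: HV values of 30 independent runs for two algorithms under the same budget
rng = np.random.default_rng(0)
hv_pesa2 = rng.normal(0.45, 0.05, 30)
hv_nsga2 = rng.normal(0.40, 0.05, 30)

_, p_value = mannwhitneyu(hv_pesa2, hv_nsga2, alternative="two-sided")
a12 = vargha_delaney_a12(hv_pesa2, hv_nsga2)
print(f"significant: {p_value < 0.05}, A12 = {a12:.2f} ({magnitude(a12)})")
```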
In addition to the quantitative analysis above, we performed a qualitative analysis to assess differences in the software models when using different budgets. First, we looked at #changes and #pas as distinctive characteristics of the models, which were treated as optimization objectives in the experiments. Second, we relied on the types of refactoring actions and their arrangement in sequences (generated by the optimization) as proxies for the software models derived from those sequences. The sequences resulting from a given experiment (or search space) were represented as a tree to facilitate comparisons between experiments.
## 5 Case Studies
We applied our approach to two case studies from the literature: i) the Train Ticket Booking Service (TTBS) [16], and ii) the well-established modeling case study CoCoME, whose UML model has been derived by the specification in [17].6 Table 2 reports the size of each case study in terms of number of Components, Nodes, and UML Use Cases.
Footnote 6: [https://github.com/SEALABQualityGroup/uml2lqp-casestudies](https://github.com/SEALABQualityGroup/uml2lqp-casestudies)
_Train Ticket Booking Service._ TTBS is a Web-based booking application whose architecture is based on the microservices paradigm. The system is made up of 40 microservices, and it
provides different scenarios through which users can perform realistic operations, _e.g.,_ book a ticket or watch trip information.
For our analysis, we extracted **11** UML Components, **11** UML Nodes, and **3** UML Use Cases from the UML model [16]. We selected _Login_, _Update user details_ and _Rebook_ as use cases because they commonly represent performance-critical scenarios in a ticket booking service. Also, the model defines two user categories: simple and admin users.
_CoCoME._ CoCoME describes a trading system containing several stores. A store can have one or more cash desks for processing goods. A cash desk is equipped with all the tools needed to serve a customer (_e.g.,_ a Cash Box, Printer, Bar Code Scanner). CoCoME covers possible use cases performed at a cash desk (_e.g.,_ scanning products, paying by credit card, or ordering new goods). CoCoME describes 8 scenarios involving more than 20 components.
For our analysis, we extracted **3** UML Use Cases, **13** UML Components, and **8** UML Nodes. Furthermore, we focused on three scenarios: _Process Sale_, _Receive Ordered Products_, and _Show stock reports_ because they represent common activities in a trading system.
## 6 Research Questions
The four research questions we intend to address in this study are presented below. Afterwards, we describe the results for each question, and discuss the key findings and implications for the designer.
### RQ1: Which algorithm performs better when limited by a time budget?
When a time constraint is imposed on the optimization, a designer might be interested in selecting the algorithm that provides the best quality solutions belonging to a computed Pareto front (\(PF^{c}\)) for the specific budget. It is worth mentioning that the quality of \(PF^{c}\) can be estimated through several quality indicators (QIs) [49]. Each quality indicator measures a specific characteristic of that \(PF^{c}\), and none of them is a clear winner to estimate Pareto fronts. For this reason, we chose HV and IGD+ to assess two angles of \(PF^{c}\) in our experiments. The HV measures how much volume of the solution space is covered by a \(PF^{c}\), whereas the IGD+ measures the Euclidean distance between solutions belonging to \(PF^{c}\) and solutions belonging to \(PF^{ref}\).
Figure 2 depicts the timelines of how the HV (Figures 2a and 2b) and IGD+ (Figures 2c and 2d) QIs vary with different search budgets, and how many genetic evolutions were performed during the search. At a glance, PESA2 generated the lowest IGD+ and the highest HV in all three time budgets and for both case studies, while NSGA-II was better than SPEA2 for both indicators and case studies. From the timelines, we can see that SPEA2 was the slowest algorithm in our experiments, whereas NSGA-II was the fastest one. Furthermore, for each search budget, NSGA-II performed the highest number of genetic evolutions, _e.g.,_ it performed on average 20 genetic evolutions for TTBS with 60 minutes of search budget, and almost 18 genetic evolutions for CoCoME. Conversely, SPEA2 performed, on average, only 8 evolutions for TTBS with a 60 minutes search budget. Regarding PESA2, we can observe that it consistently
\begin{table}
\begin{tabular}{l l l} \hline \hline & Configuration & Eligible values \\ \hline \multirow{6}{*}{Common configuration} & Number of genetic evolutions & 100 \\ & Population Size & 16 \\ & Number of independent runs & 30 \\ & \(P_{\textit{crossover}}\) & 0.80 \\ & Crossover Operator & Single Point \\ & \(P_{\textit{mutation}}\) & 0.20 \\ & Mutation Operator & Simple Mutation \\ \hline NSGA-II & Selection operator & Binary Tournament Selection with crowding distance \\ \hline \multirow{3}{*}{SPEA2} & Selection operator & Binary Tournament Selection \\ & Archive population size & 16 \\ & Distance to the k-th individual & 1 \\ \hline \multirow{2}{*}{PESA2} & Archive population size & 16 \\ & Number of hyper-grids & 5 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Configuration values for the evolutionary algorithms.
\begin{table}
\begin{tabular}{l l} \hline \hline Case Study & Elements \\ \hline \multirow{3}{*}{TTBS} & **11** UML Components \\ & **11** UML Nodes \\ & **3** UML Use Cases \\ \hline \multirow{3}{*}{CoCoME} & **13** UML Components \\ & **8** UML Nodes \\ & **3** UML Use Cases \\ \hline \hline \end{tabular}
\end{table}
Table 2: Case studies at a glance.
Figure 2: Timelines of the number of evolutions performed by the algorithms for the different budget configurations, along with the achieved HV and IGD+. Vertical bars show the average HV and IGD+ over 30 runs, while ticks represent the standard deviation from the mean.
generated the best QI in each case study and for every search budget. However, it turned out to be slower than NSGA-II, but faster than SPEA2.
It is worth mentioning that the two QIs showed the same behavior in the two case studies; therefore, for the sake of brevity, we only elaborate on the observed behavior for HV. Analyzing the TTBS results, we observe that the HV values of SPEA2 lie close to 0.3 for every search budget. For PESA2, in turn, the longer the search budget, the higher the HV values. The HV values for NSGA-II increase between the 15 and 30 minutes budgets and then become almost flat between 30 and 60 minutes. In addition, the timelines of the two case studies seemed to resemble each other. Also for CoCoME, NSGA-II was the fastest algorithm, SPEA2 the slowest one, and PESA2 generated the highest HV values. Furthermore, the number of genetic evolutions of CoCoME is consistent with that of TTBS.
Table 3 reports the results of the Mann-Whitney U test, and the corresponding \(\hat{A}_{12}\) effect sizes. The name of the algorithm is underlined when i) the test resulted in a significant difference, and ii) that algorithm yielded high HV values. In this case, most tests revealed a significant difference between the algorithms in any given time budget (highlighted in bold). PESA2 performed best in many cases and in both case studies, NSGA-II scored best only in two cases in TTBS and not by a large margin, and SPEA2 was superior in the 15 minutes budget test in CoCoME.
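The statistical comparison itself can be sketched as follows: SciPy's Mann-Whitney U test provides the p-value, and the Vargha-Delaney \(\hat{A}_{12}\) effect size can be computed directly from its definition. The two arrays below are randomly generated placeholders standing in for the 30 HV values per algorithm, not our measured data.

```python
import numpy as np
from scipy.stats import mannwhitneyu

def vargha_delaney_a12(x, y):
    """A12 = P(X > Y) + 0.5 * P(X == Y); 0.5 means no difference between samples."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    greater = np.sum(x[:, None] > y[None, :])
    ties = np.sum(x[:, None] == y[None, :])
    return (greater + 0.5 * ties) / (x.size * y.size)

rng = np.random.default_rng(42)
hv_pesa2 = rng.normal(0.41, 0.08, size=30)   # placeholder HV values (30 runs)
hv_nsga2 = rng.normal(0.34, 0.10, size=30)   # placeholder HV values (30 runs)

stat, p_value = mannwhitneyu(hv_pesa2, hv_nsga2, alternative="two-sided")
a12 = vargha_delaney_a12(hv_pesa2, hv_nsga2)
print(f"MWU p = {p_value:.4f}, A12 = {a12:.4f}")
```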
To investigate possible reasons behind such HV differences, we take a look at how the HV is achieved and when, by comparing it to the time budget and the number of performed evolutions.
Another viewpoint on the difference among the algorithms could be the actual quality of the computed solutions in terms of the non-functional properties of interest. To visually inspect this aspect, we relied on scatter plots comparing perfQ and reliability, because these objectives are the non-functional properties to be improved through the refactoring and optimization process. Along this line, Figures 3a to 3c depict the three \(PF^{c}\) when varying the time budget of all three genetic algorithms for both case studies. At a glance, we can observe a more densely populated \(PF^{c}\) for CoCoME than for TTBS, while TTBS showed a more evident trend towards the top-right corner (the optimization direction for improving both objectives). Regarding the \(PF^{c}\) for CoCoME, a horizontal clustering was observed for the three search budgets. The cluster that lies around 0.8 for reliability is always more populated than the other two clusters: one between 0.4 and 0.6, and the other between 0.0 and 0.2, approximately. There is not an evident motivation for the horizontal clustering of CoCoME. We conjecture that the characteristics of the CoCoME model, which has a more complex behavior than TTBS, prevent the algorithms from reaching higher reliability values for the search budgets we considered. Also, the CoCoME solution space seems to be less homogeneous, with feasible solutions that are inherently clustered.
In summary, we can answer **RQ1** by saying that there is a clear difference among the algorithms when comparing them on the basis of a QI for multi-objective optimization, like the HV, and when pursuing speed in completing evolutions as the main goal of the designer. Nonetheless, if we only look at the non-functional properties of the optimization, the shape of the \(PF^{c}\) and the explored design space do not differ much from those covered by the \(PF^{ref}\).
### RQ2: To what extent does the time budget affect the quality of Pareto fronts?
The first main concern about imposing a search budget is its effect on the optimization process. We remark that we analyze the HV only (as already discussed in Section 6.1) since the two considered QIs showed quite the same behavior. Table 4 reports, for each algorithm and search budget, the average HV achieved in 30 runs along with its standard deviation. Intuitively, this indicator gives an idea of how much of the solution space was covered with the budget restriction, compared to a run without restrictions. We can observe that, in fact, the time budget affects the quality of the computed Pareto fronts.
The search budget had a different impact on the two case studies. In TTBS, the search was able to achieve a better HV in all cases, when compared to that of CoCoME. This is probably due to the difference in size and complexity between the two case studies. CoCoME permits a larger number of possible refactoring candidates, and its model defines a more complex behavior. These factors inherently lead to a bigger search space (\(\Omega\) in the table), but also to spending more time in computing the objective functions. Therefore, on average, the longer it takes to complete a single evolution, the fewer evolutions will be performed for a given budget.
\begin{table}
\begin{tabular}{l l l l l l} \hline \hline Budget & Algor. 1 & Algor. 2 & MWU p & \(\hat{A}_{12}\) & \\ \hline \multicolumn{6}{c}{TTBS} \\ \hline
15 min & PESA2 & NSGA-II & **0.0487** & (S) 0.6462 & \\
15 min & SPEA2 & NSGA-II & 0.8548 & (N) 0.486 & \\
15 min & SPEA2 & PESA2 & **0.0234** & (M) 0.3319 & \\
30 min & NSGA-II & PESA2 & **0.0167** & (M) 0.3226 & \\
30 min & SPEA2 & NSGA-II & **0.0385** & (S) 0.3465 & \\
30 min & SPEA2 & PESA2 & \(<\)**0.0001** & (L) 0.1582 & \\
60 min & NSGA-II & PESA2 & **0.0037** & (M) 0.2851 & \\
60 min & SPEA2 & NSGA-II & **0.0202** & (M) 0.3278 & \\
60 min & SPEA2 & PESA2 & \(<\)**0.0001** & (L) 0.1301 & \\ \hline \multicolumn{6}{c}{CoCoME} \\ \hline
15 min & NSGA-II & SPEA2 & **0.0085** & (M) 0.3049 & \\
15 min & PESA2 & NSGA-II & **0.0072** & (M) 0.6993 & \\
15 min & PESA2 & SPEA2 & 0.7999 & (N) 0.4807 & \\
30 min & PESA2 & NSGA-II & **0.0066** & (M) 0.7014 & \\
30 min & SPEA2 & NSGA-II & 0.5543 & (N) 0.4558 & \\
30 min & SPEA2 & PESA2 & \(<\)**0.0001** & (L) 0.1738 & \\
60 min & PESA2 & NSGA-II & **0.0127** & (M) 0.6847 & \\
60 min & SPEA2 & NSGA-II & 0.3789 & (N) 0.4344 & \\
60 min & SPEA2 & PESA2 & \(<\)**0.0001** & (L) 0.1686 & \\ \hline \hline \end{tabular}
\end{table}
Table 3: Mann–Whitney U test and \(\hat{A}_{12}\) effect sizes comparing the HV achieved by different algorithms in 30 runs. Magnitude interpretation: negligible (N), small (S), medium (M), large (L). The magnitude of the effect size is also represented by bars.
To assess whether doubling or quadrupling the time budget makes a significant difference in the HV of the \(PF^{c}\), we compare the results obtained with different budgets but with the same algorithm. Table 5 reports the results of the Mann-Whitney U test and the corresponding \(\hat{A}_{12}\) effect size. The p-value is highlighted in bold when the detected difference is statistically significant. The time budget is underlined when i) the test showed a significant difference, and ii) the experiment running on that time budget led to higher HV values. In very few cases (two per case study), we obtained a significant difference, and in all the cases this trend was detected for PESA2: with a medium magnitude in TTBS and a large one in CoCoME. This situation suggests that, except for PESA2, the main difference in the HV values might be attributed to a difference in the algorithm being used, rather than to a difference in the budget. We further investigate this aspect in the next section.
In summary, **RQ2** indicates that the time budget is closely linked to the complexity and topology of the system under analysis. We discovered that the number of genetic evolutions is related to the time needed to compute the non-functional indices (performance and reliability), which is longer when the system is more complex.
### RQ3: Do different time budgets generate different software models?
For an initial assessment of the kinds of software models resulting from the time budgets, we generated scatter plots for #changes and #pas objectives as depicted in Figure 4. We observed that the solutions were confined to compact, well-defined regions of the space, in contrast to the variety of solutions offered by the reference Pareto front. In both case studies, two main clusters of solutions were identified. The clusters
\begin{table}
\begin{tabular}{l c c c} \hline \hline Algor. & Budget & HV avg & HV stdev \\ \hline \multicolumn{4}{c}{TTBS (\(\Omega=1.2\times 10^{13}\))} \\ \hline NSGA-II & 15 min & 0.3060 & 0.0915 \\ NSGA-II & 30 min & 0.3469 & 0.1071 \\ NSGA-II & 60 min & 0.3437 & 0.0980 \\ PESA2 & 15 min & 0.3532 & 0.0794 \\ PESA2 & 30 min & 0.4084 & 0.0757 \\ PESA2 & 60 min & 0.4182 & 0.0819 \\ SPEA2 & 15 min & 0.3041 & 0.0794 \\ SPEA2 & 30 min & 0.2917 & 0.0920 \\ SPEA2 & 60 min & 0.2868 & 0.0769 \\ \hline \multicolumn{4}{c}{CoCoME (\(\Omega=3.26\times 10^{16}\))} \\ \hline NSGA-II & 15 min & 0.0931 & 0.0335 \\ NSGA-II & 30 min & 0.1199 & 0.0523 \\ NSGA-II & 60 min & 0.1125 & 0.0604 \\ PESA2 & 15 min & 0.1363 & 0.0277 \\ PESA2 & 30 min & 0.1460 & 0.0300 \\ PESA2 & 60 min & 0.1514 & 0.0366 \\ SPEA2 & 15 min & 0.1189 & 0.0336 \\ SPEA2 & 30 min & 0.1098 & 0.0309 \\ SPEA2 & 60 min & 0.1023 & 0.0384 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Average HV quality indicator and its standard deviation over 30 runs, listed by algorithm and search budget. Higher values are associated with a better quality of the Pareto fronts. \(\Omega\) is the size of the solution space computed as the Cartesian product of the types of refactoring actions and all the eligible refactoring targets in any possible refactoring sequence.
Figure 3: TTBS and CoCoME Pareto frontiers for perfQ and reliability obtained by the three algorithms when varying the time budget between 15, 30, and 60 minutes. The top-right corner is the optimal point, whereas the bottom-left corner is the worst one. Filled symbols correspond to the results of each algorithm
were very clear (and segregated) in TTBS, with the majority of the models having at most one antipattern, and their refactoring costs were in a mid-range (\([3-25]\)). For CoCoME, the clusters shared some boundaries. The refactoring cost was around the same range as for TTBS, while the number of antipatterns covered an extended range and had more variability (\([2-13]\)).
The patterns for the clusters were similar, regardless of the algorithm being used. Some exceptions were noticed for NSGA-II, particularly for CoCoME, with more dispersed solutions than PESA2 and SPEA2. Restricting the time budget led to models with relatively few variations in terms of refactoring cost and antipatterns. Although there were slight differences in the CoCoME results, increasing the time budget did not seem to affect the general cluster patterns. This means that even when imposing a time budget, the designer has chances of finding a number of (Pareto) optimal solutions for the refactoring problem. Certainly, the corresponding (alternative) models will be fewer (in terms of #changes and #pas) than when running the algorithms with no budgets.
It should be noted that #changes and #pas provide a limited characterization of the underlying software models, as other structural properties of the models are not captured. For example, two models having one antipattern and a refactoring cost of 10 might still differ in their design structure. Thus, a finer-grained characterization of the models can help to expose additional differences. We elaborate on this issue to answer RQ4 in Section 6.4.
In summary, we can answer **RQ3** by saying that the usage of time budgets leads to a restricted set of design alternatives, but some of them are Pareto optimal. The different budgets seem to produce similar models, and only NSGA-II was able to generate a slightly wider range of alternatives than the remaining algorithms.
### RQ4: What do the sequences of refactoring actions look like when using different budgets?
From a constructive (or structural) point of view, the software models result from applying (sequences of) refactoring actions on the initial software model. Altogether, these refactoring actions constitute the search space explored by a given algorithm. In this context, one could take all the refactoring sequences used in a given experiment and arrange them as a prefix tree, in which the leaves correspond to models and the inner nodes capture actions shared by the different sequences. This tree representation is useful for identifying unique sequences in a given search space, but also for computing sequence intersections between the trees coming from different algorithms or budgets.
For instance, Figure 5 and Figure 6 show a pair of trees for certain TTBS and CoCoME experiments, respectively. Each path from the root to a leaf represents a unique sequence of refactoring actions, which can produce one or more models. All the sequences involve exactly four refactoring actions. The colored paths correspond to common sequences (_i.e.,_ an intersection) between both trees, while the remaining paths are particular to each tree. In this way, we can (approximately) determine that using a 30 min time budget (either for TTBS or CoCoME) generates a subset of models that are structurally different from those generated by running the optimization without any time budget (\(PF^{ref}\)). Note also that the number of unique sequences in \(PF^{ref}\) is smaller (_i.e.,_ less diverse) than that of the space explored with a time budget. Our idea with these trees is to establish a "profile" of refactoring actions for a given experiment, and then make comparisons with other profiles. In general, the representation and analysis of search spaces have received less attention in the architecture optimization literature, since most works have focused on the objective space.
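A minimal sketch of this tree construction, and of the intersection between two sets of sequences, is given below; the toy sequences reuse the action labels used elsewhere in the paper (MO2N, MO2C, Clon) purely as placeholders and do not correspond to actual experiment outputs.

```python
def build_prefix_tree(sequences):
    """Nested-dict prefix tree: each key is a refactoring action, each path
    from the root to a leaf is one unique refactoring sequence."""
    root = {}
    for seq in sequences:
        node = root
        for action in seq:
            node = node.setdefault(action, {})
    return root

def shared_fraction(seqs_a, seqs_b):
    """Fraction of sequences in seqs_a that also appear in seqs_b."""
    set_b = {tuple(s) for s in seqs_b}
    shared = sum(1 for s in seqs_a if tuple(s) in set_b)
    return shared / len(seqs_a) if seqs_a else 0.0

budget_30min = [("MO2N", "Clon", "MO2C", "MO2N"),
                ("Clon", "MO2N", "MO2N", "MO2C")]      # toy sequences
no_budget    = [("MO2N", "Clon", "MO2C", "MO2N"),
                ("MO2N", "MO2N", "Clon", "MO2C")]      # toy sequences
tree = build_prefix_tree(budget_30min)
print(shared_fraction(budget_30min, no_budget))        # 0.5 for this toy data
```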
According to the procedure exemplified above, we compared the sequence trees obtained with different budgets among themselves and also compared each tree against the tree for \(PF^{ref}\) (baseline). For TTBS, we found that between \(18-37\%\) of the sequences obtained with time budgets were shared by the \(PF^{ref}\). These percentages were in the range \(8-24\%\) for CoCoME. These numbers would indicate that more than half of the models generated when using budgets differ from those found in the \(PF^{ref}\). As for the intersection of the trees resulting from imposing each budget, we observed an average of \(25\%\) of shared sequences for CoCoME and variations between \(33\%\) and \(51\%\) for TTBS, but without a clear trend with respect to the choice of the algorithms. These results are aligned with the observations made for the scatter plots in Figure 4, indicating that using limited time budgets does not produce many different models.
Compared to the \(PF^{ref}\), nonetheless, the trees resulting from the time budgets included many more sequences (in terms of types of refactoring actions) than the baseline trees. This was
\begin{table}
\begin{tabular}{l l l l l l} \hline \hline Algor. & Budget 1 & Budget 2 & MWU p & \(\hat{A}_{12}\) & \\ \hline \multicolumn{6}{c}{TTBS} \\ \hline NSGA-II & 15 min & 30 min & 0.1677 & (S) 0.3975 & \\ NSGA-II & 15 min & 60 min & 0.1677 & (S) 0.3975 & \\ NSGA-II & 30 min & 60 min & 0.9327 & (N) 0.5068 & \\ PESA2 & 15 min & 30 min & **0.018** & (M) 0.3247 & \\ PESA2 & 15 min & 60 min & **0.0031** & (M) 0.281 & \\ PESA2 & 30 min & 60 min & 0.4223 & (N) 0.4402 & \\ SPEA2 & 15 min & 60 min & 0.4992 & (N) 0.5505 & \\ SPEA2 & 30 min & 15 min & 0.4556 & (N) 0.4443 & \\ SPEA2 & 30 min & 60 min & 0.7999 & (N) 0.4807 & \\ \hline \multicolumn{6}{c}{CoCoME} \\ \hline NSGA-II & 15 min & 30 min & 0.0574 & (S) 0.359 & \\ NSGA-II & 60 min & 15 min & 0.1054 & (S) 0.6202 & \\ NSGA-II & 60 min & 30 min & 0.8769 & (N) 0.488 & \\ PESA2 & 15 min & 30 min & \(<\)**0.0001** & (L) 0.2092 & \\ PESA2 & 60 min & 15 min & \(<\)**0.0001** & (L) 0.7992 & \\ PESA2 & 60 min & 30 min & 0.6024 & (N) 0.539 & \\ SPEA2 & 30 min & 15 min & 0.2483 & (S) 0.4142 & \\ SPEA2 & 60 min & 15 min & 0.1249 & (S) 0.3861 & \\ SPEA2 & 60 min & 30 min & 0.6123 & (N) 0.462 & 1 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Mann–Whitney U test and \(\hat{A}_{12}\) effect sizes comparing the HV achieved with different time budgets in 30 runs. Magnitude interpretation: negligible (N), small (S), medium (M), large (L). The magnitude of the effect size is also represented by bars.
a common trend for both case studies, as hinted by Figure 5 and Figure 6, in which the trees on the right are more dense than the trees on the left. We believe this situation is due to the convergence of the solutions (and the corresponding sequences thereof) near the Pareto front, after a considerable number of evolutions.
To further analyze differences in the software models, we computed the frequency of the refactoring actions being used by the sequences in the experiments. The intuition here is that the repeated usage of certain actions might be driven by the optimization objectives, which might give rise to the model differences within a given search space. The (normalized) frequency for the four available actions for CoCoME and TTBS is summarized in Figure 7. Note that every sequence consists of exactly four actions.
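A simple way to derive such a per-position frequency profile is sketched below; the input sequences are illustrative placeholders, and the function only counts how often each action label appears at each of the four positions.

```python
from collections import Counter

def action_frequency_by_position(sequences, length=4):
    """Normalized frequency of each refactoring action at each sequence position."""
    profile = []
    for pos in range(length):
        counts = Counter(seq[pos] for seq in sequences if len(seq) > pos)
        total = sum(counts.values())
        profile.append({action: n / total for action, n in counts.items()})
    return profile

toy_sequences = [("MO2N", "Clon", "MO2N", "MO2C"),
                 ("MO2N", "MO2N", "Clon", "MO2C"),
                 ("Clon", "MO2N", "MO2N", "MO2N")]
for pos, freqs in enumerate(action_frequency_by_position(toy_sequences)):
    print(pos, freqs)
```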
For CoCoME, we can see that the frequency of actions resulting from imposing the budgets are more or less similar in their composition, with _MO2N_ being on par with _Clon_ as the most prevalent actions. This pattern contrasts with the very high frequency of _MO2N_ observed in the baselines and the very low contributions of the remaining actions. We conjecture that this action could play a key role in the solutions in the \(PF^{ref}\), and this might explain why some solutions in the experiments using time budgets did not reach the Pareto front. The frequencies for the baselines achieved by the three algorithms also showed some variations, such as the prevalence of _MO2C_ for NSGA-II in the fourth sequence position. When it comes to TTBS, the patterns were similar to those for CoCoME, but the prevalence of _MO2N_ was even higher in the baselines, except for PESA2 where that action was less dominant and other actions were used. In addition, _MO2C_ became more relevant in the fourth sequence position for PESA2 and NSGA-II, as in the case of CoCoME. This could hint at a particular behavior in the last sequence position mainly for NSGA-II.
In general, we observe that the frequency profiles provide additional evidence about the similarity among the models generated with the time budgets, as well as their differences with respect to the models in \(PF^{ref}\). The high-prevalence pattern for a specific action (such as _MO2N_) and the role of actions in the last sequence position could be related to the satisfaction of the optimization objectives (within the algorithms), although the phenomenon should still be further studied.
Overall, we can answer **RQ4** by saying that using time budgets generates different models, in terms of sequences of refactoring actions, while sharing a small fraction of those models with \(PF^{ref}\). This fraction did seem to be affected by increases in the budget. The search spaces derived from the budgets tend to have many models in common, with some variations that could be attributed to the policies of each algorithm. The profiles of refactoring actions were also similar, regardless of the time budgets or algorithms being used.
## 7 Threats to validity
In this section, we discuss threats that might affect our results.
_Construct validity._ An aspect that might affect our results is the estimation of the reference Pareto front (\(PF^{ref}\)), which is used to extract the quality indicators, as described in Section 4. We mitigate this threat by building the \(PF^{ref}\) from a run without a search budget for each case study. Therefore, \(PF^{ref}\) should contain all the non-dominated solutions across all configurations,
Figure 4: TTBS and CoCoME Pareto frontiers for #changes and #pas obtained by the three algorithms when varying the time budget between 15, 30, and 60 minutes. The bottom-left corner is the optimal point, whereas the top-right corner is the worst one. Filled symbols correspond to the results of each algorithm.
and it should also represent a good Pareto front for computing the HV and IGD+ indicators.
Another important aspect that might threaten our experimentation concerns the parameters of the initial UML model. For example, CoCoME showed higher initial reliability that might affect the search. However, in our experiments, it seems that TTBS and CoCoME initial configurations did not threaten the optimization process. We will further investigate how different initial UML model parameters could change the optimization results. We remark that changing a single model parameter means starting the optimization process on a different point of the solution space that might produce completely different results.
_External validity._ Our results might be affected by _external validity_ threats, as their generalization might be limited to some of the assumptions behind our approach.
In the first place, a threat might be represented by the use of a single modeling notation. We cannot generalize our results to other modeling notations, which could imply using a different portfolio of refactoring actions. The syntax and semantics of the modeling notation determine the amount and nature of refactoring actions that can be performed. However, we have adopted UML, which is the de facto standard in the software modeling domain. In general terms, this threat can be mitigated by porting the whole approach to a different modeling notation, but this is out of the scope of this paper.
Another threat might be found in the fact that we have validated our approach on two case studies. While the two case studies were selected from the available literature, they might not represent all the possible challenges that our approach could face in practice.
_Internal validity._ Our optimization approach might be affected by _internal validity_ threats. There are high degrees of freedom in our settings. For example, variations of the genetic configuration, such as the \(P_{\mathit{crossover}}\) probability, may produce \(PF^{c}\) with different quality solutions. Also, variations in the problem configuration may change our results. The degrees of freedom in our experimentation make a brute-force investigation of every suitable combination unfeasible. For this reason, we limit the variability to subsets of problem configurations, as shown in Table 1. We also mitigate this threat by involving two different case studies derived from the literature, thus reducing biases in their construction.
Another aspect that might affect our findings is a misleading interpretation of the outcome due to the random nature of genetic algorithms. In order to mitigate this threat, we performed 30 executions for each configuration [44].
_Conclusion validity._ The observations made by this study might change with different, better-tuned parameters for each algorithm. For scoping reasons, we did not perform an extensive tuning phase for each algorithm. Instead, we rely on common parameters to set up the algorithms, which should mitigate the threat [50]. Wherever possible, we used appropriate statistical procedures with p-value and effect size measures to test the significance of the differences and their magnitude.
Another aspect that might affect our results is the estimation of the reference Pareto frontier (\(PF^{ref}\)). \(PF^{ref}\) is used for extracting the quality indicators as described in Section 6. We soften this threat by building the \(PF^{ref}\) over all our \(PF^{c}\) for each case study. Therefore, the reference Pareto should optimistically contain all non-dominated solutions across all configurations.
## 8 Conclusion and Future Work
In this study, we presented an investigation of the impact of the time budget for multi-objective refactoring optimization of software models. The study aims at helping designers to select
Figure 5: Examples of search spaces for TTBS represented as trees, as generated by NSGA-II. The orange nodes and edges are sequences of refactoring actions shared by both trees (_i.e.,_ intersections). Each node maps to an individual refactoring action as indicated in the legend.
the best algorithm with respect to the time budget. We performed the study on two model benchmarks, Train Ticket Booking Service, and CoCoME, and on three genetic algorithms, NSGA-II, SPEA2, and PESA2.
We assessed the quality of the results obtained by each algorithm through the HV and IGD+ indicators. HV measures the amount of the search space volume that a computed Pareto front (\(PF^{c}\)) covers with respect to a reference Pareto front (\(PF^{ref}\)), while IGD+ is the inverse of the Euclidean distance between points belonging to \(PF^{c}\) and \(PF^{ref}\). From our results (see Section 6.1, and Section 6.2), NSGA-II emerged as the fastest algorithm because it performed the highest number of genetic evolutions within the search budget. PESA2, in turn, was the algorithm that generated the best quality results in terms of HV. SPEA2 was the slowest algorithm and generated the results with the worst quality. This means that it achieved the lowest number of genetic evolutions and the lowest HV values.
Regarding the different budgets, they seem to produce similar models both in terms of structure and objective values, and only NSGA-II was able to generate a slightly wider range of alternatives than the remaining algorithms (see Section 6.3 and Section 6.4). In terms of their sequences of refactoring actions, the sets of models derived from the time budgets tend to have many models in common, despite some variations attributed to the policies of each algorithm. Moreover, a small fraction of these models was shared with those models in \(PF^{ref}\), which indicates that the budgets can still generate optimal models.
As future work, also with the goal of saving optimization time, we intend to analyze the Pareto front at each evolution in order to detect situations in which the quality is not having enough improvement, and one could decide to stop the algorithm. Furthermore, we would like to get more insights from the tree representation of the search spaces, which can enable the discovery of particular refactoring actions being correlated with the satisfaction of certain objectives by the optimization algorithms. Finally, we plan to experiment with additional case studies and further investigate the impact of the case study structure (_i.e.,_ size and complexity) on the quality of the optimization results.
## Acknowledgments
Daniele Di Pompeo and Michele Tucci are supported by European Union - NextGenerationEU - National Recovery and Resilience Plan (Piano Nazionale di Ripresa e Resilienza, PNRR) -
Figure 6: Examples of search spaces for CoCoME represented as trees, as generated by NSGA-II. The orange nodes and edges are sequences of refactoring actions shared by both trees (_i.e.,_ intersections). Each node maps to an individual refactoring action, as indicated in the legend.
Figure 7: Frequency of refactoring actions used at each position of the sequences, for different time budgets and algorithms. The sequence position is indicated by the _pos_ column.
Project: "SoBigData.it - Strengthening the Italian RI for Social Mining and Big Data Analytics" - Prot. IR0000013 - Avviso n. 3264 del 28/12/2021. J. Andres Diaz-Pace was supported by project PICT-2021-00757, Argentina.
|
2307.12093 | Spinon continuum in the Heisenberg quantum chain compound
Sr$_2$V$_3$O$_9$ | Magnetic excitations in the spin chain candidate Sr$_2$V$_3$O$_9$ have been
investigated by inelastic neutron scattering on a single crystal sample. A
spinon continuum with a bandwidth of $\sim22$ meV is observed along the chain
formed by alternating magnetic V$^{4+}$ and nonmagnetic V$^{5+}$ ions.
Incipient magnetic Bragg peaks due to weak ferromagnetic interchain couplings
emerge when approaching the magnetic transition at $T_N\sim 5.3$ K while the
excitations remain gapless within the instrumental resolution. Comparisons to
the Bethe ansatz, density matrix renormalization group (DMRG) calculations, and
effective field theories confirm Sr$_2$V$_3$O$_9$ as a host of weakly coupled
$S = 1/2$ chains dominated by antiferromagnetic intrachain interactions of
$\sim7.1$(1) meV. | Shang Gao, Ling-Fang Lin, Pontus Laurell, Qiang Chen, Qing Huang, Clarina dela Cruz, Krishnamurthy V. Vemuru, Mark D. Lumsden, Stephen E. Nagler, Gonzalo Alvarez, Elbio Dagotto, Haidong Zhou, Andrew D. Christianson, Matthew B. Stone | 2023-07-22T14:52:14Z | http://arxiv.org/abs/2307.12093v2 | # Spinon continuum in the Heisenberg quantum chain compound Sr\({}_{2}\)V\({}_{3}\)O\({}_{9}\)1
###### Abstract
Magnetic excitations in the spin chain candidate Sr\({}_{2}\)V\({}_{3}\)O\({}_{9}\) have been investigated by inelastic neutron scattering on a single crystal sample. A spinon continuum with a bandwidth of \(\sim 22\) meV is observed along the chain formed by alternating magnetic V\({}^{4+}\) and nonmagnetic V\({}^{5+}\) ions. Incipient magnetic Bragg peaks due to weak ferromagnetic interchain couplings emerge when approaching the magnetic transition at \(T_{N}\sim 5.3\) K while the excitations remain gapless within the instrumental resolution. Comparisons to the Bethe ansatz, density matrix renormalization group (DMRG) calculations, and effective field theories confirm Sr\({}_{2}\)V\({}_{3}\)O\({}_{9}\) as a host of weakly coupled \(S=1/2\) chains dominated by antiferromagnetic intrachain interactions of \(\sim 7.1(1)\) meV.
## I I. Introduction
Spin chains are one of the simplest models that illustrate many fundamental concepts in quantum magnets [1]. The reduced number of neighboring sites greatly enhances quantum fluctuations and promotes exotic phenomena like fractional spinons [2; 3] and valence bonds [4]. Compared to higher dimensional systems, an advantage of the chain models is that they can be solved with high accuracy [5]. Starting from the Bethe ansatz for the \(S=1/2\) Heisenberg chains [2], analytical or numerical solutions for spin chains have been obtained for various types of chains that incorporate perturbations like Ising anisotropy, interchain couplings, and magnetic fields, thus allowing a thorough understanding of a plethora of novel phenomena including Zeeman ladders [6; 7; 8; 9; 10], psinon excitations [11; 12], and Bethe strings [13; 14].
The strontium vanadate Sr\({}_{2}\)V\({}_{3}\)O\({}_{9}\) has been proposed as a host of the \(S=1/2\) Heisenberg antiferromagnetic chain (HAFMC) [15; 16]. Sr\({}_{2}\)V\({}_{3}\)O\({}_{9}\) belongs to the monoclinic \(C2/c\) space group, with lattice constants determined as \(a=7.55\), \(b=16.28\), \(c=6.95\) A, and \(\beta=119.78^{\circ}\)[17]. In this compound, the V-O layers in the \(ac\) planes are separated at a large distance of \(\sim 8.14\) A by the Sr layers along the \(b\) axis. As shown in the inset of Fig. 1, within the V-O layers, the V\({}^{4+}\)O\({}_{6}\) octahedra containing the magnetic V\({}^{4+}\) ions (\(S=1/2\)) share corners along the \(\mathbf{a}+\mathbf{c}\) direction. Along the \(\mathbf{a}-\mathbf{c}\) direction, the V\({}^{4+}\)O\({}_{6}\) octahedra are linked across the nonmagnetic V\({}^{5+}\)O\({}_{4}\) tetrahedra. Surprisingly, thermal transport measurements on a crystal sample indicate the spin chains are along the \(\mathbf{a}-\mathbf{c}\) direction [18], suggesting stronger spin couplings across the nonmagnetic V\({}^{5+}\)O\({}_{4}\) tetrahedra. Although such a scenario was supported by the density functional theory (DFT) calculations [19], direct spectroscopic evidence for chain physics in Sr\({}_{2}\)V\({}_{3}\)O\({}_{9}\) is still missing.
Here we utilize neutron scattering to study the spin dynamics in Sr\({}_{2}\)V\({}_{3}\)O\({}_{9}\). A gapless spinon continuum, which is a characteristic feature of the \(S=1/2\) Heisenberg chain, is observed at temperatures down to \(\sim 5\) K. The chain direction is determined to be along the \(\mathbf{a}-\mathbf{c}\) direction, thus verifying the scenario deduced from the thermal transport experiments [18]. By comparing the inelastic neutron scattering (INS) spectra with the Bethe ansatz, density matrix renormalization group (DMRG) calculations, and field theories, we conclude Sr\({}_{2}\)V\({}_{3}\)O\({}_{9}\) is a host of weakly coupled \(S=1/2\) HAFMs.
## II II. Methods
Sr\({}_{2}\)V\({}_{3}\)O\({}_{9}\) crystals were prepared using a floating zone image furnace following reported procedures [20]. In order to synthesize phase pure Sr\({}_{2}\)V\({}_{3}\)O\({}_{9}\), polycrystalline Sr\({}_{2}\)V\({}_{2}\)O\({}_{7}\) was first prepared using a stoichiometric SrCO\({}_{3}\) and V\({}_{2}\)O\({}_{5}\) powder mixture fired at 700\({}^{\circ}\)C for 72 hours in air. The obtained Sr\({}_{2}\)V\({}_{2}\)O\({}_{7}\) powder was then mixed with VO\({}_{2}\) powder in a molar ratio of 1:1. The mixture was pressed into a rod of \(\sim 7\) mm in diameter, \(\sim 10\) cm in length, and then annealed at 540\({}^{\circ}\)C in argon for 24 hours. The following floating zone growth was performed using a NEC two-mirror image furnace. As is reported in Ref. [20], the twice-scanning technique is utilized for this growth. The first scan was a fast scan with a speed of 35 mm/h under flowing Ar of 2.5 atm. The 2nd growth scan, was done using a speed of 1 mm/h in the same gas flow. Several large segments of single crystal were obtained. These crystals were then oriented by backscattering X-ray Laue diffraction in preparation for the neutron scattering measurements. DC magnetic susceptibility measurements were performed at temperatures of 2-300 K using a Quantum Design superconducting quantum interference device - Vibrating Sample Magnetometer (SQUID-VSM). The sample is cooled in zero-field (ZFC) and measured in an external field of 0.5 T for increasing temperatures.
Inelastic neutron scattering (INS) experiments on Sr\({}_{2}\)V\({}_{3}\)O\({}_{9}\) were performed on the fine-resolution Fermi chopper spectrometer SEQUOIA at the Spallation Neutron Source (SNS) of the Oak Ridge National Laboratory (ORNL). A single crystal with mass of \(\sim\)200 mg was aligned with the \(b\) axis vertical. A closed cycle refrigerator (CCR) was employed to reach temperatures, \(T\), down to 5 K. Incident neutron energies were \(E_{i}=35\), 10, and 4 meV. For the \(E_{i}=35\) meV measurements, a Fermi chopper frequency of 240 Hz was used with the high flux chopper. Data were acquired by rotating the sample in 1\({}^{\circ}\) steps about its vertical axis, covering a total range of 165\({}^{\circ}\) at \(T=5\), 20, and 50 K. For the \(E_{i}=10\) and 4 meV measurements, a Fermi chopper frequency of 120 Hz was used with the high resolution chopper. Data for the \(E_{i}=10\) meV (4 meV) measurements at 4 K were acquired by rotating the sample in 1\({}^{\circ}\) (0.4\({}^{\circ}\)) steps, covering a total range of 200\({}^{\circ}\) (39.2\({}^{\circ}\)). Measurements of an empty sample holder were subtracted as the background. Data reductions and projections were performed using the MANTID software [21].
For the theoretical calculations, the canonical one-dimensional isotropic \(S=1/2\) HAFMC model described by the Hamiltonian \({\cal H}=J\sum_{\langle{\rm NN}\rangle}{\bf S}_{\bf i}\cdot{\bf S}_{\bf j}\) is adopted, where the summation is over the nearest neighbors (NN). The \(T=0\) dynamical spin structure factor was calculated in the algebraic Bethe ansatz approach using the ABACUS algorithm [22]. The calculation was performed on a system of \(L=500\) sites with periodic boundary conditions, using an energy step of \(\Delta\omega=0.002J\). A Gaussian energy broadening of \(0.02J\approx 0.142\) meV was applied. A sum rule saturation of 99% was reached, which can be compared with the approximately 98% saturation expected from the two- and four-spinon contributions to the total intensity in the thermodynamic limit [23].
Theoretical spectra were also calculated using the density matrix renormalization group (DMRG) technique [24; 25] as implemented in the DMRG++ code [26]. The calculations were carried out using the Krylov-space correction vector approach [27; 28] with open boundary conditions (OBC). Targeting a truncation error below \(10^{-10}\), a minimum of 100 and up to 1000 states were kept during our DMRG calculations. The half width at half maximum of the Lorentzian energy broadening was set as \(0.1J\). For the \(T=0\) DMRG calculations, we used a chain with \(N=100\) sites, while for the \(T>0\) calculations we adopted a system of 50 physical and 50 ancilla sites by using the ancilla (or purification) method [29; 30; 31]. Examples of input files and more details can be found in the Supplemental Materials [32].
## III III. Results and Discussions
As a reference for the temperature evolution of the spin correlations, Fig. 1 presents the magnetic susceptibility \(\chi(T)\) measured on pulverized single crystals of Sr\({}_{2}\)V\({}_{3}\)O\({}_{9}\). A broad hump around \(T=50\) K signals strong short-range spin correlations. Following Ref. [15], we fit \(\chi(T)\)
Figure 1: Magnetic susceptibility of Sr\({}_{2}\)V\({}_{3}\)O\({}_{9}\) measured in a 5 kOe field. An antiferromagnetic transition is observed at \(T_{N}=5.3\) K as a sharp peak. Red solid line is the fit to the \(\chi(T)\) in the temperature range of [10, 300] K, as described in the text. Inset is the Sr\({}_{2}\)V\({}_{3}\)O\({}_{9}\) crystallographic structure viewed along the \(b\) axis. The V\({}^{4+}\)O\({}_{6}\) octahedra and V\({}^{5+}\)O\({}_{4}\) tetrahedra are shown in red and blue, respectively. Positions of the Sr\({}^{2+}\) ions are not shown for clarity.
to \(\chi_{\rm 1D}+\chi_{\rm LT}+\chi_{\rm vv}\), where \(\chi_{\rm 1D}\) is the polynomial approximation of the contribution from an \(S=1/2\) Heisenberg chain [33], \(\chi_{LT}\) is a Curie-Weiss term to account for the upturn at low temperatures, and \(\chi_{\rm vv}\) is a temperature-independent Van Vleck contribution. The fitted intrachain coupling strength is \(J=6.95(5)\) meV, which is close to the previously reported value [15]. An antiferromagnetic transition is observed at \(T_{N}\sim 5.3\) K, indicating the existence of weak interchain couplings \(J_{\perp}\). A tiny jump in \(\chi(T)\) above \(T_{N}\), which is described by the \(\chi_{\rm LT}\) term, has been ascribed to the antisymmetric Dzyaloshinskii-Moriya (DM) interactions [15; 16], although such a scenario cannot be directly verified in our zero-field experiments [34; 35].
The existence of a magnetic ordered state below \(T_{N}\) is directly confirmed by the neutron scattering data measured at \(T=5\) K. For the elastic map shown in the left panel of Fig. 2(a), the data collected at \(T=50\) K is subtracted to expose the weak magnetic reflections. Unless otherwise stated, all presented neutron scattering data are integrated along the (0,\(k\),0) direction in the range of \(k=[-4,4]\) reciprocal lattice units (r.l.u.) to improve counting statistics. Therefore, the strongest magnetic reflection in Fig. 2(a), \((1/2,k,-1/2)\), can be indexed as \((1/2,1,-1/2)\), revealing the magnetic propagation vector to be \(\mathbf{q}=(1/2,0,1/2)\). As this \(\mathbf{q}\) vector indicates parallel spin alignment along the \(\mathbf{a}+\mathbf{c}\) direction, we can conclude that the weak interchain coupling should be ferromagnetic if assuming the strongest interchain couplings arise from the corner-sharing V\({}^{4+}\)O\({}_{6}\) octahedra along the \(\mathbf{a}+\mathbf{c}\) direction.
At an energy transfer of \(E=3\) meV, the constant energy map shown in the right panel of Fig. 2(a) exhibits narrow streaks along the \((h,0,h)\) direction. Such a highly anisotropic scattering pattern is direct evidence for the emergence of spin chains in Sr\({}_{2}\)V\({}_{3}\)O\({}_{9}\), with chains running along the \(\mathbf{a}-\mathbf{c}\) direction. Weak modulation along the streaks can be ascribed to the perturbations from the interchain couplings. In the Supplemental Materials [32], we present slices along the \((0,k,0)\) and \((h,0,h)\) directions, which reveal very weakly dispersive excitations due to marginal interchain couplings.
After integrating the INS data along the \((h,0,h)\) direction within a range of \(\delta h=[-1.2,1.2]\) r.l.u., the excitation spectra along the \((h,0,\bar{h})\) direction are obtained. As shown in the left panel of Fig. 2(b), the spectra exhibit a continuum of excitations up to \(\sim 22\) meV, which is a typical feature of fractional spinon excitations of HAFMCs [36; 37; 38; 39; 40]. The lower and upper boundaries of the 2-spinon continuum can be described by \(\omega_{L}(q)=(\pi/2)J|\sin q|\) and \(\omega_{U}(q)=\pi J|\sin(q/2)|\), respectively, where \(J\) is the strength of the intrachain couplings [36]. To describe the shape of the spinon continuum in Sr\({}_{2}\)V\({}_{3}\)O\({}_{9}\), \(J=7.1(1)\) meV is determined by a \(\chi^{2}\) fit of the spinon continuum, and the corresponding boundaries are overplotted in Fig. 2(b) as dashed lines. Weak scattering intensities are observed outside the continuum boundary, including a step like excitation below \(\sim 12\) meV and a broad flat band around \(\sim 23\) meV. Since these features exhibit no wavevector dependence [32], they may be ascribed to the background scattering due to possible oxygen deficiency and the consequent valence variance of the vanadium ions.
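For reference, the sketch below evaluates these boundaries for \(J=7.1\) meV. It assumes that the reduced wavevector along the chain relates to the Miller index \(h\) of \((h,0,\bar{h})\) as \(q=2\pi h\); this mapping is an assumption made for illustration rather than a statement taken from the analysis above.

```python
import numpy as np

J = 7.1  # intrachain coupling in meV (fitted value quoted in the text)

def two_spinon_boundaries(h, J=J):
    """Lower/upper 2-spinon continuum boundaries (meV) versus Miller index h,
    assuming q = 2*pi*h along the chain direction."""
    q = 2.0 * np.pi * np.asarray(h, dtype=float)
    omega_lower = (np.pi / 2.0) * J * np.abs(np.sin(q))
    omega_upper = np.pi * J * np.abs(np.sin(q / 2.0))
    return omega_lower, omega_upper

h = np.linspace(0.0, 1.0, 201)
wL, wU = two_spinon_boundaries(h)
print(round(float(wU.max()), 1))  # ~22.3 meV, consistent with the observed bandwidth
```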
Various analytical and numerical methods have been developed to describe the dynamical structure factor of the HAFMCs. Here we first compare the INS spectra of Sr\({}_{2}\)V\({}_{3}\)O\({}_{9}\) to the cross section calculated by the Bethe
Figure 2: (a) Constant energy slices of the Sr\({}_{2}\)V\({}_{3}\)O\({}_{9}\) INS spectra \(S(Q,\omega)\) in the \((h,0,l)\) plane at \(E=0\) and \(3\) meV integrated in the energy range of \([-2,2]\) and \([2,4]\) meV, respectively. Data is plotted on an orthogonal coordinate system for clarity. The incident neutron energy is \(E_{i}=35\) meV with the measuring temperature \(T=5\) K. Intensity in the slice at \(E=3\) meV is multiplied by a factor of \(2\) for better visibility. In the \(E=0\) meV slice, circles and squares indicate the magnetic and nuclear Bragg peaks, respectively. (b) Comparison between the experimental and theoretical \(S(Q,\omega)\) along the \((h,0,\bar{h})\) direction. The experimental data is integrated in the range of \(\delta h=[-1.2,1.2]\) r.l.u. along \((h,0,h)\). For the calculated cross section, a constant intensity is added to account for any measurement background. Dashed lines are the lower and upper boundaries of the 2-spinon continuum for \(J=7.1\) meV. (c) Scattering intensity as a function of \(E\) at \(h=0\), \(0.25\), and \(0.5\) along \((h,0,\bar{h})\). Black dashed line is a fit to the background scattering at \(h=0\) using a Gaussian function plus a step function. Red solid line is a fit to the spinon continuum plus the background. (d) Scattering intensity as a function of \((h,0,\bar{h})\) at \(E=5\), \(10\), and \(15\) meV. Black dashed line indicates the constant background extracted from the scan at \(h=0\) in panel (c). Red solid line is a fit to the spinon continuum plus the background.
ansatz. The 2-spinon continuum is known to account for \(\sim 71\) % of the total spectral weight [23; 39; 41; 42], while the remaining spectral weight is mostly accounted for by the 4-spinon continuum [23; 39; 43]. Although the Bethe ansatz cross section is calculated at zero temperature, the comparison with the experimental data acquired at 5 K is justified since the overall bandwidth of the system, which sets the relevant energy scale of the 1D fluctuations, is at a much higher energy scale than the measuring temperature.
For \(J=7.1\) meV, the calculated spectral function is shown on the right panel in Fig. 2(b). The calculated data are convolved by a Gaussian function with a full-width-half-maximum of \(\Delta Q=0.1\) r.l.u. along the \(Q\) axis and by the instrumental energy resolution along the \(E\) axis [32]. More detailed comparisons for scans at constant \(Q\) and \(E\) are presented in Fig. 2(c) and (d), respectively. For the background scan at \(h=0\), the intensity is fitted by a Gaussian function plus a step function to account for the additional scattering at \(\sim 12\) meV described earlier. This is then added to the other calculated spectra shown in Fig. 2(c) and (d). The calculation reproduces the INS spectra, thus confirming the existence of HAFMCs in Sr\({}_{2}\)V\({}_{3}\)O\({}_{9}\).
The obtained strength of the intrachain coupling of \(J=7.1(1)\) meV, together with the magnetic long-range order transition temperature \(T_{N}\)= 5.3 K, allows an estimate of the strength of the ferromagnetic interchain coupling \(J_{\perp}\). Following the mean field analysis [15; 44], \(|J_{\perp}|\) is estimated to be \(\sim 0.16\) meV, which is \(\sim 2.3\) % of the intrachain coupling \(J\). This agrees well with the extent of the dispersion measured orthogonal to the chain direction [32].
In order to resolve a possible gap in the spinon excitations, further INS experiments were performed with lower incident energies of \(E_{i}=10\) and 4 meV. Figure 3 summarizes the spectra after full integrations along directions perpendicular to the chain. As compared in Fig. 3(b) and (d), the spectra at \((1/2,0,-1/2)\) follows the theoretical
Figure 3: (a) Low-energy section of the INS spectra \(S(Q,\omega)\) measured with an incident neutron energy of \(E_{i}=10\) meV. Data along directions perpendicular to the chain have been integrated for better statistics. (b) Scattering intensity as a function of \(E\) integrated in the range of \(\delta h=[0.49,0.51]\) along \((h,0,-h)\). Red line is the theoretical scattering cross section for a HAFMC plus a constant background. (c),(d) Similar as panels (a),(b) with a lower \(E_{i}=4\) meV.
Figure 4: (a) Constant energy slices of the Sr\({}_{2}\)V\({}_{3}\)O\({}_{9}\) INS spectra \(S(Q,\omega)\) in the \((h,0,l)\) plane measured at \(T=20\) (left) and 50 K (right). Data is integrated in the energy range of \([2,4]\) meV and is plotted on an orthogonal coordinate system for clarity. (b) Comparison of the scattering intensity along \((h,0,h)\) measured at \(T=5\) (gray circles), 20 (red triangles), and 50 K (purple squares). Data is integrated in the range of \(\delta h=[0.45,0.55]\) r.l.u. along \((h,0,-h)\) and \(E=[2,4]\) meV. Data at 20 (50) K is shifted along the \(x\) axis by 1.5 (3) for clarity. (c) \(S(Q,\omega)\) along \((h,0,-h)\) measured at \(T=20\) (left) and 50 K (right). Data is integrated in the range of \(\delta h=[-1.2,1.2]\) r.l.u. along \((h,0,h)\) and \(\delta k=[-4,4]\) r.l.u. along \((0,k,0)\). Dashed lines are the lower and upper boundaries of the 2-spinon continuum for J = 7.1 meV. (d) Comparison of the scattering intensity as a function of \(E\) measured at \(T=5\) (gray circles), 20 (red triangles), and 50 K (purple squares). Data is integrated in the range of \(\delta h=[0.475,0.525]\) along \((h,0,-h)\). At each temperature, the spectra at \(Q=0\) measured at \(T=5\) K is subtracted as the background. Red solid (black dash-dotted) lines are theoretical scattering cross section calculated by DMRG (field theory) at \(T=50\), 20, and 0 K (50, 20, and 5 K) assuming \(J=7.1\) meV. Data at 20 (50) K is shifted along the \(x\) axis by 1.5 (3) units for clarity.
dynamical structure factor down to \(\sim 0.1\) meV. Therefore, it can be concluded that the spinon excitations in Sr\({}_{2}\)V\({}_{3}\)O\({}_{9}\) are gapless within the instrumental resolution of \(\sim 0.1\) meV.
The temperature evolution of the INS spectra is summarized in Fig. 4. For \(T=20\) and 50 K, the constant-\(E\) map at an energy transfer of \(E=3\) meV is compared in Fig. 4(a). The main features are similar to the map at \(T=5\) K shown in Fig. 2(a), but the scattering intensity is weaker at elevated temperatures. Figure 4(b) compares the intensity along the \((h,0,h)\) direction at \(T=5\), 20 and 50 K. The intensity contrast along the streaks is reduced at elevated temperatures as thermal fluctuations overcome the interchain couplings.
After integration in the range of \(\delta h=[-1.2,1.2]\) r.l.u. along the \((h,0,h)\) direction, the spectral functions along \((h,0,-h)\) are compared in Fig. 4(c) for \(T=20\) and 50 K. The dashed lines outline the 2-spinon continuum for \(J=7.1\) meV as in Fig. 2(b). Besides the reduced scattering intensities, the excitations become softened at elevated temperatures, with a significant fraction of the scattering intensity lying below \(\omega_{L}(q)\) at \(T=50\) K.
According to theoretical calculations [45], an intensity transfer from \((h=1/2,\omega\to 0)\) to \((h=0,\omega\to 0)\) is expected in the spectral function \(S(q,\omega)\) at elevated temperatures due to thermal fluctuations. Although such an intensity transfer is not directly probed in our experiment, it may induce a peak at nonzero energies in the constant-\(Q\) scan at \(h=0.5\) as the zero energy intensity is greatly reduced. Figure 4(d) compares the constant-\(Q\) scans at \(h=0.5\) for \(T=5\), 20, and 50 K. Theoretical spectral functions calculated by the DMRG method at the corresponding temperatures are plotted as red solid lines, which reproduce the spectral function over a large range of energy transfers. At \(T=50\) K, the reduced intensities around \(E=0\) are consistent with the theoretical prediction of the HAFMCs, thus confirming the chain physics in Sr\({}_{2}\)V\({}_{3}\)O\({}_{9}\).
The temperature evolution of the scattering intensity for \(S=1/2\) HAFMCs has also been investigated through effective field theories in the continuum limit [46; 47]. At relatively low energy transfers, the energy dependence of the cross section at \(\mathbf{q}=(1/2,0,-1/2)\) is expressed as
\[S(\omega)\propto(n_{\omega}+1)\mathrm{Im}\left\{\frac{1}{T}\left[\rho\left( \frac{\omega}{4\pi k_{B}T}\right)\right]^{2}\right\}, \tag{1}\]
with the \(\rho(x)\) function defined as
\[\rho(x)=\frac{\Gamma(\frac{1}{4}-ix)}{\Gamma(\frac{3}{4}-ix)}. \tag{2}\]
In this expression, \(n_{\omega}\) is the Bose factor and \(\Gamma\) is the complex gamma function. Using this expression, we calculate the cross section for \(S=1/2\) HAFMCs for energies up to 16 meV. The calculated results, with a fitted scale factor, are shown in Fig. 4(d) as dash-dotted lines. In the calculated energy range, the field theoretical results capture the temperature evolution of both the experimental data and the DMRG results, which further justifies the existence of HAFMCs in Sr\({}_{2}\)V\({}_{3}\)O\({}_{9}\).
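A minimal numerical sketch of Eq. (1) is given below (in arbitrary units, omitting the overall scale factor that is fitted to the data). It assumes \(k_{B}\) in meV/K so that \(\omega\) in meV and \(T\) in K are consistent, uses the Bose factor \(n_{\omega}=1/(e^{\omega/k_{B}T}-1)\), and evaluates \(\rho(x)\) through the complex log-gamma function.

```python
import numpy as np
from scipy.special import loggamma

K_B = 0.08617  # Boltzmann constant in meV/K

def rho(x):
    """rho(x) = Gamma(1/4 - i x) / Gamma(3/4 - i x), evaluated via complex log-gamma."""
    return np.exp(loggamma(0.25 - 1j * np.asarray(x)) - loggamma(0.75 - 1j * np.asarray(x)))

def field_theory_lineshape(omega_meV, T_K):
    """Eq. (1) at q = (1/2, 0, -1/2), up to an overall scale factor."""
    x = omega_meV / (4.0 * np.pi * K_B * T_K)
    n_bose = 1.0 / (np.exp(omega_meV / (K_B * T_K)) - 1.0)
    return (n_bose + 1.0) * np.imag(rho(x) ** 2 / T_K)

omega = np.linspace(0.2, 16.0, 200)           # energy transfer in meV
for T in (5.0, 20.0, 50.0):                   # temperatures compared in Fig. 4(d)
    intensity = field_theory_lineshape(omega, T)
```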
## IV IV. Conclusions
The existence of \(S=1/2\) HAFMCs in Sr\({}_{2}\)V\({}_{3}\)O\({}_{9}\) is spectroscopically confirmed through inelastic neutron scattering experiments and comparison with numerical simulations and mean field approximations. A spinon continuum is observed along the \((h,0,\bar{h})\) direction, verifying that the intrachain couplings are mediated by the nonmagnetic V\({}^{5+}\) ions. The spinon continuum, with a bandwidth of \(\sim 22\) meV, indicates the strength of the intrachain couplings to be \(\sim 7.1(1)\) meV. Despite the magnetic transition at \(T_{N}\sim 5.3\) K, the excitations in Sr\({}_{2}\)V\({}_{3}\)O\({}_{9}\) remain gapless down to 5 K. Through comparisons to the Bethe ansatz, the density matrix renormalization group (DMRG) calculations, and the field theories, we conclude that Sr\({}_{2}\)V\({}_{3}\)O\({}_{9}\) is a host of weakly coupled \(S=1/2\) HAFMCs.
This work was supported by the U.S. Department of Energy, Office of Science, Basic Energy Sciences, Materials Sciences and Engineering Division. This research used resources at the Spallation Neutron Source (SNS) and the High Flux Isotope Reactor (HFIR), both are DOE Office of Science User Facilities operated by the Oak Ridge National Laboratory (ORNL). The work of S. N. and G. A. was supported by the U.S. Department of Energy, Office of Science, National Quantum Information Science Research Centers, Quantum Science Center. Q.C., Q.H, and H.Z. thank the support from National Science Foundation with Grant No. NSF-DMR-2003117.
|
2308.02405 | Development Of Automated Cardiac Arrhythmia Detection Methods Using
Single Channel ECG Signal | Arrhythmia, an abnormal cardiac rhythm, is one of the most common types of
cardiac disease. Automatic detection and classification of arrhythmia can be
significant in reducing deaths due to cardiac diseases. This work proposes a
multi-class arrhythmia detection algorithm using single channel
electrocardiogram (ECG) signal. In this work, heart rate variability (HRV)
along with morphological features and wavelet coefficient features are utilized
for detection of 9 classes of arrhythmia. Statistical, entropy and energy-based
features are extracted and applied to machine learning based random forest
classifiers. Data used in both works is taken from 4 broad databases (CPSC and
CPSC extra, PTB-XL, G12EC and Chapman-Shaoxing and Ningbo Database) made
available by Physionet. With HRV and time domain morphological features, an
average accuracy of 85.11%, sensitivity of 85.11%, precision of 85.07% and F1
score of 85.00% is obtained whereas with HRV and wavelet coefficient features,
the performance obtained is 90.91% accuracy, 90.91% sensitivity, 90.96%
precision and 90.87% F1 score. The detailed analysis of simulation results
affirms that the presented scheme effectively detects broad categories of
arrhythmia from single-channel ECG records. In the last part of the work, the
proposed classification schemes are implemented on hardware using Raspberry Pi
for real time ECG signal classification. | Arpita Paul, Avik Kumar Das, Manas Rakshit, Ankita Ray Chowdhury, Susmita Saha, Hrishin Roy, Sajal Sarkar, Dongiri Prasanth, Eravelli Saicharan | 2023-07-23T17:31:59Z | http://arxiv.org/abs/2308.02405v1 | # Development Of Automated Cardiac Arrhythmia Detection Methods Using Single Channel ECG Signal
###### Abstract
Arrhythmia, an abnormal cardiac rhythm, is one of the most common types of cardiac disease. Automatic detection and classification of arrhythmia can be significant in reducing deaths due to cardiac diseases. This work proposes a multi-class arrhythmia detection algorithm using single channel electrocardiogram (ECG) signal. In this work, heart rate variability (HRV) along with morphological features and wavelet coefficient features are utilized for detection of 9 classes of arrhythmia. Statistical, entropy and energy-based features are extracted and applied to machine learning based random forest classifiers. Data used in both works is taken from 4 broad databases (CPSC and CPSC extra, PTB-XL, G12EC and Chapman-Shaoxing and Ningbo Database) made available by Physionet. With HRV and time domain morphological features, an average accuracy of 85.11%, sensitivity of 85.11%, precision of 85.07% and F1 score of 85.00% is obtained whereas with HRV and wavelet coefficient features, the performance obtained is 90.91% accuracy, 90.91% sensitivity, 90.96% precision and 90.87% F1 score. The detailed analysis of simulation results affirms that the presented scheme effectively detects broad categories of arrhythmia from single-channel ECG records. In the last part of the work, the proposed classification schemes are implemented on hardware using Raspberry Pi for real time ECG signal classification.
## 1 Introduction
The group of heart disorders, commonly called cardiovascular diseases (CVDs), have become the cause of an increasing number of premature deaths worldwide Tsao et al. (2022). Since 1990, prevalent cases of CVDs all over the world have doubled and the number of deaths due to the same has increased from 12.1 million in 1990 to 18.6 million in 2019 Roth et al. (2020). According to the world health organization (WHO), 32 % of all global deaths in 2019 were from CVDs b2 (2021). Over three-quarters of these deaths took place in poor countries with low doctor-to-patient ratios and inadequate medical infrastructure. Efficient and automated computer-aided diagnosis of CVDs can significantly reduce the burden on already strained healthcare systems, leading to timely detection and treatment of patients that results in reducing the number of deaths due to chronic heart diseases Faust and Ng (2016).
ECG is the most extensively used non-invasive method for the clinical detection of CVDs Hong et al. (2022). ECG signal is difficult to analyze visually for long-term application due to its non-stationary nature. That may lead to missed or late diagnosis of life-threatening heart ailments. Automated signal processing with a machine learning algorithm can be developed to process ECG data in real-time for accurate and prompt detection of cardiac diseases with less human effort and error Bertsimas et al. (2021), Ran et al. (2022).
Arrhythmia is a heart condition characterized by an irregular heart rate where the heart beats either too slow or too fast. It occurs due to improper electrical impulses that coordinate the heartbeats. Rhythms are classified into different categories as per their origin like sinus rhythm, atrial rhythm, atrioventricular (AV) node rhythm, ventricle rhythms etc. Each category of rhythms is further sub-divided into several classes as per characteristics of rhythms Walraven (2010).
Methods for diagnosing arrhythmia using classification approaches based on a variety of ECG parameters have been proposed in a number of promising studies Garcia et al. (2016), Asgari et al. (2015), Rahul et al. (2021), Mijahad et al. (2017), Acharya et al. (2016), Malik et al. (2021), Elhaj et al. (2016). Most of the works have classified individual beats into arrhythmia classes. Some works have focused on single arrhythmia type detection Garcia et al. (2016), Asgari et al. (2015) while other works have classified more than one class of arrhythmia Rahul et al. (2021), Mijahad et al. (2017), Acharya et al. (2016), Malik et al. (2021), Elhaj et al. (2016). In Garcia et al. (2016), relative wavelet energy from wavelet decomposition of T-Q segments in the ECG cycle is used for the detection of atrial fibrillation (AF). In Asgari et al. (2015), an AF detection method is proposed using features from Stationary Wavelet Transform (SWT) decomposition coefficients and support vector machine as the classifier. The technique described in Rahul et al. (2021) trains several classifiers to distinguish between premature atrial contraction (PAC), premature ventricular contraction (PVC), and normal beats based on the R-R interval and other statistical features. In Mijahad et al. (2017), distinguishing between ventricular fibrillation and ventricular tachycardia is done using time-frequency representations of the ECG signal. The method in Acharya et al. (2016) recognizes and categorizes four groups of ECG beats using a set of nonlinear features, including Shannon entropy, fuzzy entropy, approximate entropy, permutation entropy, sample entropy, etc. For patient-specific arrhythmia classification, the method described in Malik et al. (2021) employs self-organized operational neural networks to detect and categorize five groups of arrhythmia beats. In Elhaj et al. (2016), five categories of heartbeat classification have been done using wavelet transform coefficients and independent component analysis (ICA).
Most of the previous works in the literature have used either only one or very limited databases, and the performance of such methods is tested only in a small and homogeneous population. Moreover, the classification ability of the existing approaches is limited to only very few rhythm types. Broad-range classification of multiple classes has not been attempted much in the available literature. The majority of the works in arrhythmia classification have focused on single heartbeat classification only, not arrhythmic episode detection. In doing so, a significant amount of inter-beat information is lost, which restricts the classification to only a few classes of arrhythmia. Considering the above-mentioned aspects, the primary aim of this work is to propose a multi-class rhythm classification scheme to detect arrhythmia from ECG records drawn from widely collected, unbalanced databases. The contributions of the paper are as follows:
* Development of robust multi-class ECG arrhythmia classification schemes which can effectively detect broad categories of arrhythmia like normal sinus rhythm (NSR), sinus arrhythmia (SA), sinus bradycardia (SB), sinus tachycardia (STACH), atrial fibrillation (AF), atrial flutter (AFL), premature atrial contraction (PAC), 1st Degree AV block (1AVB) and premature ventricular contraction (PVC).
* Utilization of broadly acquired ECG records combined from four standard databases of Physionet challenge 2021 which makes the model more robust to different methods of data acquisition and population demography.
* Extraction of robust heart rate variability and time domain features based on scientific knowledge of characteristics of ECG signal variations caused by each of the arrhythmia classes. Also, deriving the feature set from established facts used for actual pathological diagnosis by physicians which provides the classifier model interpretability.
* Extraction of effective features using SWT-based sub-band signal decomposition. Deployment of heart rate variability features along with wavelet coefficient features for efficient classification of multi-class cardiac arrhythmia.
* Hardware implementation of the ECG classification algorithms using Raspberry Pi for real time ECG signal analysis.
The rest of the paper is organized as follows: Section 2 describes the detailed information on the ECG records and databases which are utilized in this work. Sections 3, 4, 5 and 6 describe the proposed multi-class arrhythmia classification methodology, followed by the hardware implementation of the proposed methodology in Section 7. Section 8 discusses all the results obtained from the methodologies presented in the preceding sections. Finally, the conclusion is summarised in Section 9.
## 2 Test Database
The present work uses four open source databases, such as; CPSC database and CPSC-Extra database Liu et al. (2018), PTB-XL database Wagner et al. (2020), the Georgia 12-lead ECG challenge (G12EC) database and Chapman-Shaoxing and Ningbo database Zheng et al. (2020), Zheng et al. (2020), made available by Physionet for the computing in cardiology challenge 2021 Reyna et al. (2021).
Each database contains a variety of 12-channel ECG records having different arrhythmia conditions. In this work, the lead II ECG signal is utilized for simulation purposes. Each of the signals is sampled at a rate of 500 Hz. The ECG records with PAC and supraventricular premature beats (SVPB) are grouped into a single class of PAC. Similarly, records of PVC and ventricular premature beats (VPB) are merged as the same class PVC. A view of each arrhythmia category of ECG signals is presented in Figure 1.
The databases are used to accumulate single channel ECG records which form the test dataset. The test dataset is highly imbalanced, ranging from 33 records of PVC to 14,993 records of NSR, which can degrade classifier performance. Hence, the synthetic minority oversampling technique (SMOTE) and random under-sampling are used to balance the dataset. Initially, SMOTE is applied to increase the data of minority classes Chawla et al. (2002), Sivapalan et al. (2022). Further, random under-sampling is used to reduce the data in the majority classes by randomly eliminating records to obtain a balanced dataset Batista et al. (2004).
Figure 1: ECG records having different arrhythmia conditions.
A total of 31,059 ECG records (3,451 from each class) are considered in this work. The detailed information on each class of ECG records is described in Table 1.
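For illustration, the two-step balancing could be realized with the imbalanced-learn package as sketched below; the paper does not name its implementation, so the library choice, the per-class target of 3,451 records and the random seed are assumptions.

```python
import numpy as np
from imblearn.over_sampling import SMOTE
from imblearn.under_sampling import RandomUnderSampler

TARGET = 3451  # assumed per-class record count after balancing

def balance_dataset(X, y):
    """Oversample minority classes with SMOTE, then randomly under-sample the rest."""
    classes, counts = np.unique(y, return_counts=True)
    # SMOTE can only increase counts, so request TARGET only for classes below it
    up = {c: TARGET for c, n in zip(classes, counts) if n < TARGET}
    X_up, y_up = SMOTE(sampling_strategy=up, random_state=0).fit_resample(X, y)
    # Random under-sampling then trims every class down to exactly TARGET records
    down = {c: TARGET for c in classes}
    return RandomUnderSampler(sampling_strategy=down, random_state=0).fit_resample(X_up, y_up)
```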
## 3 Proposed ECG Arrhythmia Classification Methodology
The proposed arrhythmia detection and classification methodology is implemented in 3 parts. In the first part, heart rate variability and time domain morphological features from ECG signals are extracted to train a classifier. To improve the results obtained from this classification approach, in the second part, heart rate variability features are paired with wavelet coefficient features obtained from stationary wavelet decomposition of the ECG signal. This gives significantly improved classification performance for each of the 9 classes. In the last part, both classification methods are implemented on an embedded hardware device to make them viable for real time ECG signal analysis and arrhythmia detection.
Flowchart of the proposed arrhythmia classification work is presented in Figure 2. The detailed description of each step is discussed in the following subsections.
### ECG Signal Pre-processing
The ECG records are often corrupted with baseline wander, muscle artifacts, etc. These noises mask the clinical components in the ECG signal, which results in poor classification performance Satija et al. (2018); Chatterjee et al. (2020). Most of the clinical information of ECG signals is in the frequency range of 1-150 Hz Walraven (2010). For the first part of the work, the ECG signals are filtered through a band-pass filter with a cut-off frequency of 1-150 Hz.
Table 1: Number of ECG records of each arrhythmia class

| Arrhythmia class | Original ECG records | After SMOTE | Final ECG records |
|---|---|---|---|
| NSR | 14993 | 14993 | 3451 |
| SA | 1361 | 3451 | 3451 |
| SB | 9353 | 9353 | 3451 |
| STACH | 3451 | 3451 | 3451 |
| AF | 1500 | 3451 | 3451 |
| AFL | 1526 | 3451 | 3451 |
| PAC | 538 | 3451 | 3451 |
| 1AVB | 754 | 3451 | 3451 |
| PVC | 33 | 3451 | 3451 |

NSR - Normal sinus rhythm, SA - Sinus arrhythmia, SB - Sinus bradycardia, STACH - Sinus tachycardia, AF - Atrial fibrillation, AFL - Atrial flutter, PAC - Premature atrial contraction, 1AVB - 1st degree AV block, PVC - Premature ventricular contraction
Figure 2: Flowchart of the proposed arrhythmia detection scheme.
This filter removes low-frequency out-of-band noise (baseline wander) as well as high frequency noise (muscle artifacts). In the second part, for the wavelet coefficient features, the coefficient sets whose frequency ranges correspond to baseline wander and muscle artifacts are discarded and not used for feature extraction. Powerline interference is removed in both parts of the work using a 50 Hz notch filter. For time domain features, it is crucial to properly delineate the local components (P wave, QRS complex and T wave) in an ECG signal. In this work, ECGDeli, an open-source ECG delineation toolbox, is used for accurate delineation of the local components in ECG signals Pilia et al. (2021). The onset, offset and peak of P waves, QRS complexes and T waves are reliably detected by this toolbox, as shown in Figure 3.
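As an illustration of this pre-processing stage, one possible realization with SciPy is sketched below; the paper performs its signal processing in Matlab, and the filter order, the zero-phase filtering and the notch quality factor here are assumptions.

```python
from scipy.signal import butter, filtfilt, iirnotch

FS = 500  # sampling rate of the ECG records, Hz

def preprocess_ecg(x, fs=FS):
    """Band-pass 1-150 Hz (baseline wander and muscle artifacts), then 50 Hz notch."""
    b_bp, a_bp = butter(4, [1, 150], btype="bandpass", fs=fs)
    x = filtfilt(b_bp, a_bp, x)                 # zero-phase band-pass filtering
    b_n, a_n = iirnotch(w0=50, Q=30, fs=fs)     # powerline interference notch
    return filtfilt(b_n, a_n, x)
```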
## 4 Heart rate variability and time domain feature extraction
As per the description of arrhythmia in Walraven (2010), it can be concluded that heart rate (HR), regularity of HR, P wave morphology, PR interval (PRI) and QRS complex morphology are the important factors to be considered while looking for the signs of arrhythmia in ECG records. Distinct variations are caused by different types of arrhythmia in HR, P wave morphology, PRI timing and QRS complex morphology. These characteristic variations are analyzed by physicians to detect arrhythmia from ECG recordings and accurate assessment of its type. Hence, in the first part of this work, keeping in consideration the characteristic variations attributed to different types of arrhythmia, detected fiducial points are used to extract the time domain and morphological features which can be grouped under four categories - HR variability features, P wave features, PRI features and QRS complex features. The features are a combination of non-linear, higher-order statistical, entropy and energy-based features so as to best capture the morphological and timing variations in the ECG for different arrhythmia conditions. All categories of features are listed in Table 2. The details regarding each category of features are described in the following subsections.
### Heart Rate Variability (HRV) Features
The analysis of HR is an imperative step in detecting arrhythmia from ECG signals. Heart rate variation may contain signs of cardiac disease that are already present or warnings of impending cardiac diseases. HR regularity and variability analysis is a crucial step for determining heart rate rhythm as well as arrhythmia Walraven (2010). In this work, a total of 11 features are extracted from heart rate for analyzing the HR regularity and variability. HR is obtained from the interval between two consecutive R peaks. R peaks in this work are readily obtained from ECG deli. If \(RR_{i}\) is the \(i^{th}\) RR interval, then for sampling frequency 500 Hz, the HR is calculated as:
\[HR_{i}=\frac{60*500}{RR_{i}}\ bpm \tag{1}\]
Mean, standard deviation, maximum and minimum HR are some of the linear features calculated. Maximum deviation in HR is a feature calculated as the difference in maximum and minimum HR. From the difference between consecutive HR, the mean absolute difference of HR is also calculated.
Figure 3: Local signal component delineation in ECG using ECGDeli toolbox.
\[HR_{MeanAbsDiff}=\frac{1}{n}\sum_{i=1}^{n}|HR_{i+1}-HR_{i}| \tag{2}\]
Higher order statistics feature such as kurtosis and skewness of HRV signal are also considered and calculated as follows:
\[HR_{Kurt}=\frac{\frac{1}{n}\sum_{i=1}^{n}(HR_{i}-HR_{mean})^{4}}{[\frac{1}{n} \sum_{i=1}^{n}(HR_{i}-HR_{mean})^{2}]^{2}} \tag{3}\]
\[HR_{Skew}=\frac{\frac{1}{n}\sum_{i=1}^{n}(HR_{i}-HR_{mean})^{3}}{[\frac{1}{n} \sum_{i=1}^{n}(HR_{i}-HR_{mean})^{2}]^{\frac{3}{2}}} \tag{4}\]
Non-linear entropy based features such as approximate entropy (\(HR_{ApEn}\)), permutation entropy (\(HR_{PeEn}\)) and Shannon entropy (\(HR_{ShEn}\)) are calculated from HRV signal. These features are determined as follows:
For a time series of length \(L\), window size of \(m\) and threshold of \(r\)
\[C_{i}^{m}(r)=\frac{n_{im}(r)}{L+m-1} \tag{5}\]
\[\emptyset^{m}(r)=\frac{\sum_{i=1}^{L-m+1}ln[C_{i}^{m}(r)]}{L-m+1} \tag{6}\]
where, \(C_{i}^{m}(r)\) is probability of similarity at threshold \(r\), \(n_{im}(r)\) is no. of segments similar to \(i^{th}\) segment with threshold \(r\) and \(\emptyset^{m}(r)\) is segment value.
Approximate entropy is determined as
\[HR_{ApEn}=\emptyset^{m}(r)-\emptyset^{m+1}(r) \tag{7}\]
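A compact NumPy sketch of Eqs. (5)-(7) is given below; the window size \(m\) and tolerance \(r\) (here the common choice of 0.2 times the standard deviation) are assumptions, since the paper does not report the values used.

```python
import numpy as np

def approximate_entropy(x, m=2, r=None):
    """Approximate entropy of a 1-D series following Eqs. (5)-(7)."""
    x = np.asarray(x, dtype=float)
    L = len(x)
    if r is None:
        r = 0.2 * np.std(x)  # assumed tolerance

    def phi(m):
        # all overlapping segments of length m
        w = np.array([x[i:i + m] for i in range(L - m + 1)])
        # Chebyshev distance between every pair of segments
        dist = np.max(np.abs(w[:, None, :] - w[None, :, :]), axis=2)
        # C_i^m(r): fraction of segments within tolerance r of segment i
        C = np.mean(dist <= r, axis=1)
        return np.mean(np.log(C))

    return phi(m) - phi(m + 1)
```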
Permutation entropy (\(PeEn\)) is a measure of complexity taking into account the temporal order of successive points in a time series Riedl et al. (2013). In the \(PeEn\) calculation, the total time series is represented as a group of patterns.
Table 2: Category wise feature details

| Feature Category | Features |
|---|---|
| HRV Feature | Maximum HR (\(HR_{max}\)), Minimum HR (\(HR_{min}\)), Mean HR (\(HR_{mean}\)), Std. of HR (\(HR_{std}\)), Max deviation of HR (\(HR_{MaxDev}\)), Mean of absolute difference of HR (\(HR_{MAD}\)), Kurtosis of HR (\(HR_{kurt}\)), Skewness of HR (\(HR_{skew}\)), Approximate entropy of HR (\(HR_{ApEn}\)), Shannon entropy of HR (\(HR_{ShEn}\)), Permutation entropy of HR (\(HR_{PeEn}\)) |
| P wave Feature | P wave peak (\(P_{peak}\)), P wave width (\(P_{width}\)), Max deviation of P wave (\(P_{MaxDev}\)), P wave energy (\(P_{energy}\)), Correlation of P waves (\(P_{Corr}\)), Spectral entropy of P wave (\(P_{SpEn}\)), Kurtosis of P wave (\(P_{kurt}\)), Skewness of P wave (\(P_{skew}\)), Atrial HR (\(P_{atrial}\)) |
| PRI Feature | Mean PR interval (\(PRI_{mean}\)), Std. of PR interval (\(PRI_{std}\)), Maximum PR interval (\(PRI_{max}\)), Minimum PR interval (\(PRI_{min}\)), Max deviation of PR interval (\(PRI_{MaxDev}\)) |
| QRS Complex Feature | QRS width (\(QRS_{width}\)), Correlation of QRS complexes (\(QRS_{Corr}\)), QRS complex energy (\(QRS_{energy}\)), Spectral entropy of QRS complex (\(QRS_{SpEn}\)), Sample entropy of QRS (\(QRS_{SaEn}\)), Kurtosis of QRS complex (\(QRS_{kurt}\)), Skewness of QRS complex (\(QRS_{skew}\)) |
If \(p_{k}\) is the probability of occurrence of the \(k^{th}\) pattern and \(K\) is the length of each pattern, then \(PeEn\) can be expressed as:
\[HR_{PeEn}=-\sum_{k=1}^{K!}p_{k}log_{2}(p_{k}) \tag{8}\]
\[HR_{ShEn}=-\sum_{i=1}^{n}HR_{norm}log_{2}(HR_{norm}) \tag{9}\]
where
\[HR_{norm}=\frac{HR}{max(HR)} \tag{10}\]
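A small sketch of how these HRV features could be assembled from the detected R-R intervals is shown below (Eqs. 1-4 and 9-10); the paper's feature extraction is done in Matlab, so this Python version is only illustrative.

```python
import numpy as np
from scipy.stats import kurtosis, skew

def hrv_features(rr_samples, fs=500):
    """Basic HRV features from R-R intervals expressed in samples."""
    rr = np.asarray(rr_samples, dtype=float)
    hr = 60.0 * fs / rr                                   # Eq. (1), beats per minute
    feats = {
        "HR_max": hr.max(), "HR_min": hr.min(),
        "HR_mean": hr.mean(), "HR_std": hr.std(),
        "HR_MaxDev": hr.max() - hr.min(),
        "HR_MAD": np.mean(np.abs(np.diff(hr))),           # Eq. (2)
        "HR_kurt": kurtosis(hr, fisher=False),            # Eq. (3)
        "HR_skew": skew(hr),                              # Eq. (4)
    }
    hr_norm = hr / hr.max()                               # Eq. (10)
    feats["HR_ShEn"] = -np.sum(hr_norm * np.log2(hr_norm))  # Eq. (9)
    return feats
```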
### P wave features
P wave in ECG signal is the representation of the atrial activity of the heart and signifies depolarisation of atria. Many of the arrhythmia types (AF, AFL, PAC) affect the morphology of P waves in different ways Walraven (2010). Hence, the morphological information from P wave features can be used effectively to distinguish these arrhythmia classes. Amplitude, width, energy and maximum deviation of P waves are extracted as features which capture essential information like the presence, shape and morphology of P waves. The correlation coefficient of the P wave analyzes the similarity of P waves in consecutive beats. This feature can effectively detect the changes in P wave morphology from beat to beat within an ECG segment in case of an ectopic beat. Other non-linear, statistical features like spectral entropy, skewness, and kurtosis are also calculated for P wave signal components which capture the difference of P wave information for different arrhythmia conditions. Atrial HR is also calculated from the P-P interval which is the time between two consecutive P peaks. As multiple P waves exist throughout the entire ECG segment hence mean and standard deviation are calculated for each of the above-mentioned features. A total group of 18 features are extracted from the P wave signal component.
### PR interval features
PRI is the time duration between the onset of a P wave to the onset of the QRS complex. It signifies the start of atrial depolarisation to the start of ventricular depolarization. It is an important marker for the atrial activity and conduction of electrical impulses through the AV node. The PRI value may be prolonged for the rhythm generated other than sinus or heart block condition. Hence, information on PRI can be used to distinguish between arrhythmia classes. A total 5 PRI features are extracted such as mean PRI (\(PRI_{mean}\)), standard deviation of PRI (\(PRI_{std}\)), maximum PRI (\(PRI_{max}\)), minimum PRI (\(PRI_{min}\)) and maximum deviation of PRI (\(PRI_{MaxDev}\)).
### QRS features
QRS complex represents the ventricular activity of the heart. Ventricular rhythm arrhythmia like PVC affects the morphology of the QRS complex as compared to normal sinus rhythm. Hence, effective features can be extracted from QRS morphology for better classification. Different morphology, entropy and higher-order statistics features such as QRS width, correlation coefficient of consecutive QRS complexes, spectral entropy, sample entropy, kurtosis and skewness are extracted from QRS complexes. Similar to P waves, multiple QRS complexes exist throughout the entire ECG signal, hence the mean and standard deviation are calculated for each of the above-mentioned features. A total group of 14 features are extracted from QRS complexes.
## 5 Wavelet coefficient feature extraction
To improve the methodology proposed in the first part, in the second part, an arrhythmia detection scheme is devised where in addition to heart rate features, wavelet domain features are obtained from coefficients of wavelet decomposition of the ECG signals using SWT.
### SWT based signal decomposition
Wavelet transform is a signal processing tool that allows the decomposition of the signal into different time and frequency scales where each scale allows analysis of various signal properties and characteristics. This tool for analyzing non-stationary signals is useful and simple for identifying subtle variations in the signal morphology over the scales of interest Asgari et al. (2015). A series of high and low-pass filters are used to analyze high and low-frequency
components of the signal. At each level, convolution of the input signal with high pass filters gives the detail coefficients \(D_{n}\) and convolution with low pass filters gives approximation coefficients \(A_{n}\).
SWT is a time-invariant discrete wavelet transform method. At each decomposition level, SWT coefficients have the same number of samples as the original signal, thus preserving the temporal information of the signal and overcoming the problem of repeatability and robustness which exists with the discrete wavelet transform Asgari et al. (2015). SWT of a signal \(x[n]\) gives the coefficients \(c_{i,j}\) obtained using the following equation Merah et al. (2015):
\[c_{i,j}=\sum_{n\in z}x[n]\psi_{i,j}^{*}[n] \tag{11}\]
where \(\psi_{i,j}\) is a translated and scaled version of the mother wavelet \(\psi_{0,0}\)
\[\psi_{i,j}[k]=2^{-(i/2)}\psi_{0,0}(2^{-i}(k-j)) \tag{12}\]
To implement \(L\)-level SWT on a signal, the length of the signal should be a multiple of \(2^{L}\). Signals with lesser samples are zero-padded to make the signal length a multiple of \(2^{L}\). At each successive decomposition level, the impulse response of the high and low pass filters are upsampled by a factor of 2 giving a coefficient series with the same temporal resolution as the original signal.
In this work, the ECG records are sampled at 500 Hz with a maximum frequency component of 250 Hz. Considering the frequency ranges of the QRS complex, P wave and T wave, a 7-level SWT decomposition is chosen to be applied to the ECG records using Symlet-7 wavelet. Symlet-7 wavelet is chosen as the mother wavelet because of its close similarity to ECG signal morphology and is extensively used in different ECG signal processing-based works Ansari et al. (2017). The frequency range of each decomposition level is shown in Figure 4. Detail coefficients D3, D4, D5 and D6 correspond to the frequency range of the QRS complex while D6 and D7 correspond to the P and T waves of the ECG signal. Thus, D3, D4, D5, D6 and D7 are considered for wavelet coefficient feature calculation in the subsequent steps of the algorithm.
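An equivalent decomposition in Python with the PyWavelets package might look like the sketch below; the paper's implementation is in Matlab, so the zero-padding strategy and the API usage here are illustrative assumptions.

```python
import numpy as np
import pywt

def swt_detail_bands(ecg, level=7, wavelet="sym7"):
    """7-level SWT with Symlet-7; returns the detail coefficients D1..D7."""
    x = np.asarray(ecg, dtype=float)
    pad = (-len(x)) % (2 ** level)          # SWT needs length to be a multiple of 2**level
    x = np.pad(x, (0, pad))
    coeffs = pywt.swt(x, wavelet, level=level)   # [(A7, D7), ..., (A1, D1)]
    return {f"D{level - i}": cD for i, (_cA, cD) in enumerate(coeffs)}

# Only D3-D7 carry the QRS, P and T wave frequency content used for features
# bands = swt_detail_bands(ecg_record)
# selected = {k: bands[k] for k in ("D3", "D4", "D5", "D6", "D7")}
```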
### HRV features
In addition to the 11 HR features from Subsection 4.1, 5 more features are added in this scheme to increase the arrhythmia classification accuracy, as HR features contribute the most in differentiating multiple classes of arrhythmia. These new features are: standard deviation of the absolute difference of HR, coefficient of variance (CoV) of HR, Higuchi fractal dimension, Hjorth mobility and Hjorth complexity. CoV of HR is the ratio of the standard deviation of HR to the mean of HR. Higuchi fractal dimension is a computational method to determine changes in a signal from a measure of its complexity Chinara et al. (2021).
Figure 4: Frequency range information of SWT decomposition sub-band levels.
For a finite set of time series observations taken at regular intervals \(X(1),X(2),X(3),\ldots,X(N)\), a new time sub-series can be constructed by taking
\[X_{k}^{m}:X(m),X(m+k),X(m+2k),...,X(m+[\frac{N-m}{k}].k) \tag{13}\]
where \(m=1,2,3,..,k\) is the start of every sub series and \(k\) is the interval. This gives a total of \(k\) new sub series. The length of each of the sub series is given by
\[L_{m}(k)=\frac{1}{k}\left(\sum_{i=1}^{\left[\frac{N-m}{k}\right]}\left|X(m+ik)-X(m+(i-1)k)\right|\right)\frac{N-1}{\left[\frac{N-m}{k}\right]k} \tag{14}\]
On plotting the average value of the length over the \(k\) sub-series against \(k\) on a double logarithmic scale, the data falls on a straight line with slope \(-D\). The value \(D\) is the Higuchi fractal dimension of the given time series Higuchi (1988).
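A direct NumPy translation of Eqs. (13)-(14) is sketched below; the maximum interval \(k_{max}\) is an assumed parameter that the paper does not specify.

```python
import numpy as np

def higuchi_fd(x, k_max=10):
    """Higuchi fractal dimension following Eqs. (13)-(14)."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    log_inv_k, log_len = [], []
    for k in range(1, k_max + 1):
        Lk = []
        for m in range(1, k + 1):                 # start of each sub-series
            idx = np.arange(m - 1, N, k)          # X(m), X(m+k), X(m+2k), ...
            n_int = len(idx) - 1                  # number of increments
            if n_int < 1:
                continue
            dist = np.sum(np.abs(np.diff(x[idx])))
            norm = (N - 1) / (n_int * k)          # normalisation factor of Eq. (14)
            Lk.append(dist * norm / k)
        log_inv_k.append(np.log(1.0 / k))
        log_len.append(np.log(np.mean(Lk)))
    D, _ = np.polyfit(log_inv_k, log_len, 1)      # slope of the double-log plot
    return D
```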
Hjorth mobility (\(HR_{HM}\)) is a Hjorth descriptor that describes the mean frequency of a signal. Hjorth complexity (\(HR_{HC}\)) denotes an estimation of the signal bandwidth from the ratio of the peak value to the harmonic content of the signal Hjorth (1970). These two features are calculated as:

\[HR_{HM}=\frac{\sigma^{\prime}_{x}}{\sigma_{x}} \tag{15}\]

\[HR_{HC}=\frac{\sigma^{\prime\prime}_{x}/\sigma^{\prime}_{x}}{\sigma^{\prime}_{x}/\sigma_{x}} \tag{16}\]

Figure 5: Original ECG with decomposed signals (a) Original ECG; (b)-(h) Detail coefficients D1 - D7; (i) Approximate coefficient A7.
where \(\sigma_{x}\) is the standard deviation of the time series, \(\sigma^{\prime}_{x}\) is the standard deviation of its first derivative and \(\sigma^{\prime\prime}_{x}\) is the standard deviation of its second derivative.
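A minimal sketch of these two descriptors, with the derivatives approximated by finite differences, is given below.

```python
import numpy as np

def hjorth_parameters(x):
    """Hjorth mobility and complexity of a series (Eqs. 15-16)."""
    x = np.asarray(x, dtype=float)
    dx = np.diff(x)                  # first derivative (finite differences)
    ddx = np.diff(dx)                # second derivative
    mobility = np.std(dx) / np.std(x)
    complexity = (np.std(ddx) / np.std(dx)) / mobility
    return mobility, complexity
```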
### Wavelet coefficient features
Detail coefficients D3, D4, D5, D6 and D7 correspond to the frequency range of clinically significant information in the ECG signal. 10 features are extracted from each of these five sets of coefficients to give a total of 50 features. The feature set includes statistical, entropy and energy based features. Statistical features such as mean, standard deviation, skewness and kurtosis are obtained from each set of detail coefficients. Entropy features of approximate entropy, Shannon entropy, permutation entropy and log energy entropy (\(LEEn\)), when applied to wavelet coefficients, capture the complexity, regularity and uncertainty in the wavelet decomposed subbands of the ECG signal. Two energy based features, namely relative wavelet energy (\(RWE\)) and mean wavelet energy (\(MWE\)), are also considered. The calculations of \(LEEn\), \(RWE\) and \(MWE\) are as follows Kumar et al. (2018):
\[LE_{En}=\sum_{i=1}^{N}log(x_{i}^{2}) \tag{17}\]
where \(x_{i}\) is the \(i^{th}\) sample and \(N\) is the length of the sub-band signal.
\[RWE=\frac{\sum_{i=1}^{N}C_{j}(i)^{2}}{\sum_{j=1}^{L}\sum_{i=1}^{N}C_{j}(i)^{2}} \tag{18}\]
\[MWE=\frac{\sum_{i=1}^{N}C_{j}(i)^{2}}{N} \tag{19}\]
where, \(N\) is the total number of coefficients in \(j^{th}\) level and \(L\) is the total number of decomposition levels.
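A sketch of the per-band feature computation of Eqs. (17)-(19) is shown below; the small constant guarding the logarithm against zero-valued coefficients is an implementation assumption not present in the paper.

```python
import numpy as np
from scipy.stats import kurtosis, skew

def band_features(cD, total_energy):
    """Statistical, entropy and energy features for one detail-coefficient band."""
    cD = np.asarray(cD, dtype=float)
    energy = np.sum(cD ** 2)
    return {
        "mean": cD.mean(), "std": cD.std(),
        "skew": skew(cD), "kurt": kurtosis(cD, fisher=False),
        "LEEn": np.sum(np.log(cD ** 2 + 1e-12)),   # Eq. (17), eps avoids log(0)
        "RWE": energy / total_energy,              # Eq. (18)
        "MWE": energy / len(cD),                   # Eq. (19)
    }

# total = sum(np.sum(np.square(b)) for b in selected.values())  # denominator of Eq. (18)
# wavelet_feats = {name: band_features(b, total) for name, b in selected.items()}
```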
## 6 Machine Learning based Arrhythmia Classifier
In this section, a machine learning based classifier is utilized for the automated detection of arrhythmia. The supervised machine learning based approaches require a feature matrix along with labels. In part one of the work, a total of 48 features (11 HRV, 18 P wave, 5 PRI and 14 QRS features) made up the feature set, and in the next part, there were 66 total features (16 HRV and 50 wavelet features) in the feature set. Feature sets obtained in the previous sections are supplied to the classifier. Several machine learning-based techniques exist, and each of them is extensively used in the literature. Random forest (RF) is one of the popular ensemble classifiers.
Table 3: Set of wavelet coefficient features

| Wavelet features |
|---|
| Mean (\(CD_{mean}\)) |
| Standard deviation (\(CD_{std}\)) |
| Skewness (\(CD_{skew}\)) |
| Kurtosis (\(CD_{kurt}\)) |
| Approximate entropy (\(CD_{ApEn}\)) |
| Shannon entropy (\(CD_{ShEn}\)) |
| Permutation entropy (\(CD_{PeEn}\)) |
| Log energy entropy (\(CD_{LEEn}\)) |
| Relative wavelet energy (\(CD_{RWE}\)) |
| Mean wavelet energy (\(CD_{MWE}\)) |
It is extensively employed in many classification problems, including medical signal and image processing Kung et al. (2020); Panayides et al. (2020). RF is an ensemble classifier built from bootstrap aggregation of multiple decision trees. Each decision tree independently generates an output as per the input feature matrix supplied to it. Finally, the net resultant output is found by applying a voting strategy on the outputs from the multiple trees. The aggregation of votes makes RF a more effective classifier and less susceptible to outliers and noise Shaikhina et al. (2019). A comparative analysis of different machine learning classifiers is presented in Section 8, and it shows that RF performs better compared to the others. Hence, RF is used here as the arrhythmia classifier. Hyper-parameter tuning is done separately for both feature sets to obtain a set of optimal parameters for the RF classifier giving the best performance.
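A sketch of fitting the RF classifier with hyper-parameter tuning in scikit-learn is given below; the search grid, scoring metric and random seeds are hypothetical, as the paper does not report the tuned values.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, StratifiedKFold

# Hypothetical search space; the paper does not list the tuned hyper-parameters.
param_grid = {
    "n_estimators": [100, 200, 500],
    "max_depth": [None, 20, 40],
    "max_features": ["sqrt", "log2"],
}

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid,
    scoring="f1_macro",
    cv=cv,
    n_jobs=-1,
)
# search.fit(X_features, y_labels)   # X: 48- or 66-dimensional feature matrix
# best_rf = search.best_estimator_
```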
## 7 Hardware implementation of the proposed classification schemes
To validate the proposed arrhythmia detection and classification schemes, this section presents the hardware implementation using a Raspberry Pi 4 model B. A programmable system on a chip like the Raspberry Pi can handle the complexity of computation while keeping the power consumption low. The Raspberry Pi is a single board computer working on a Linux based operating system and takes real time input data which can then be used for a multitude of applications. Apart from this low-cost Pi, a computer for feature extraction from the dataset and a display monitor to view the output from the Raspberry Pi are used as hardware components. The ECG dataset consisting of 31,059 10-sec ECG records is given as input to the feature extraction algorithm. Feature extraction in both the proposed schemes is done using Matlab 2022b. The obtained feature set is given as input to the Raspberry Pi where the classifier is deployed. The classifier model is written in the Python programming language using the scikit-learn library. The classifier is trained on the feature set, and the trained model can then be used to detect and classify arrhythmic rhythms in real time ECG signals.
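A minimal sketch of the deployment step is shown below; the model file name and the use of joblib for persistence are assumptions made for illustration.

```python
import joblib

# On the workstation: persist the tuned random forest after training
# joblib.dump(best_rf, "rf_arrhythmia.joblib")

# On the Raspberry Pi: load the model once, then classify incoming records
model = joblib.load("rf_arrhythmia.joblib")

def classify_record(feature_vector):
    """Return the predicted rhythm label for one 10-second ECG record."""
    return model.predict([feature_vector])[0]
```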
## 8 Results and Discussion
Performance of both the proposed cardiac arrhythmia detection schemes is presented in this section. Training and testing of the classifier is done through 10-fold cross-validation. In 10-fold cross-validation, the entire dataset is divided into 10 subsets. Each time, one subset is used for testing purposes and the remaining nine subsets are used for training the algorithm. This process is repeated for all 10 subsets and finally, the average values of the performance metrics are reported. The performance of the proposed arrhythmia detection method is evaluated using standard metrics: accuracy (\(Acc\)), sensitivity (\(Se\)), precision (\(+P\)) and \(F1\) score. These are calculated from the parameters: true positives (\(TP\)), true negatives (\(TN\)), false positives (\(FP\)) and false negatives (\(FN\)).
Figure 6: Hardware implementation with Raspberry Pi 4B
\[Accuracy(Acc)=\frac{TP+TN}{TP+TN+FP+FN} \tag{20}\]
\[Sensitivity(Se)=\frac{TP}{TP+FN} \tag{21}\]
\[Precision(+P)=\frac{TP}{TP+FP} \tag{22}\]
\[F1Score=\frac{2*TP}{2*TP+FP+FN} \tag{23}\]
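For reference, a sketch of how these one-vs-rest counts and metrics can be obtained from a confusion matrix is given below; averaging over classes and folds, as reported in the tables, is assumed to happen outside this helper.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

def per_class_metrics(y_true, y_pred, labels):
    """Per-class Acc, Se, +P and F1 from one-vs-rest counts (Eqs. 20-23)."""
    cm = confusion_matrix(y_true, y_pred, labels=labels)
    out = {}
    for i, lab in enumerate(labels):
        TP = cm[i, i]
        FN = cm[i, :].sum() - TP
        FP = cm[:, i].sum() - TP
        TN = cm.sum() - TP - FN - FP
        out[lab] = {
            "Acc": (TP + TN) / (TP + TN + FP + FN),
            "Se": TP / (TP + FN),
            "+P": TP / (TP + FP),
            "F1": 2 * TP / (2 * TP + FP + FN),
        }
    return out
```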
The detailed 10-fold cross-validation performance results of classification with HRV, P wave, PRI and QRS complex morphological features are presented in Table 4. The average performance obtained is an \(Acc\) of 85.11%, \(Se\) of 85.11%, \(+P\) of 85.07% and \(F1\) score of 85.00%.
Class-wise classification performance of the described method is presented in Table 5. The maximum classification \(F1\) score of 96.36% is obtained for STACH. The lowest \(F1\) scores of 75.31% and 71.60% are obtained for the classes of AF and AFL, respectively.
To have a more in-depth analysis of the feature set with HRV features and three categories of time domain and morphological features, the performance of different subsets of the feature set is compared in Table 6. It can be observed that the HRV features have superior effectiveness in detecting arrhythmia, with a maximum classification \(Acc\) of 82.81%. This solidifies the importance of HR features for arrhythmia classification.
Figure 7: Extracted features are fed to the Raspberry Pi 4B (left); a display monitor (right) shows the output obtained from the classifier deployed on the Raspberry Pi 4B
PRI features alone are the least accurate in multi-class arrhythmia classification, as the PRI segment mostly signifies the conduction time of the impulse through the AV node. Similarly, P wave features and QRS complex features also provide lower classification accuracy compared to the HRV feature set when used individually. Adding P wave, PRI and QRS complex features along with HRV features makes the classification of nine classes more robust and accurate. It is also established that each of the four feature categories adds more robustness to the algorithm and makes it efficient in multi-class arrhythmia classification. This gives important insights into the clinical relevance of different ECG signal components.
Next, results of classification with HRV and wavelet features are presented in Table 7. 10-fold cross-validation with the proposed scheme gives an average \(Acc\) of 90.91%, \(Se\) of 90.91%, \(+P\) of 90.96% and \(F1\) score of 90.87%. Both the methods show similar results for each validation subset which suggests the robust classification performance of the proposed schemes.
The performance of the proposed scheme for each individual arrhythmia class is described in Table 8. The method shows a maximum classification \(F1\) score of 98.00% for PVC. The \(F1\) score of the AFL arrhythmia class is marginally lower at 80.81% compared to the other classes. The ECG morphology characteristics of the AF and AFL arrhythmia classes are quite similar; hence, for these two classes, both methods give more false detections compared to the other classes. This results in a degradation of performance for AFL arrhythmia classification.
Proper selection of the mother wavelet in SWT-based decomposition is a crucial task. Generally, a wavelet is chosen whose shape is nearly similar to the morphology of the ECG cycle. Different mother wavelets such as Daubechies, Coiflet, Symlet, Bi-orthogonal, etc. are extensively used in several ECG signal processing-related works.
Table 4: 10-fold cross-validation performance results with heart rate and time domain features

| Fold | Acc (%) | Se (%) | +P (%) | F1 (%) |
|---|---|---|---|---|
| 1 | 85.64 | 85.64 | 85.65 | 85.57 |
| 2 | 85.12 | 85.12 | 85.02 | 85.00 |
| 3 | 84.86 | 84.87 | 84.81 | 84.77 |
| 4 | 84.83 | 84.84 | 84.79 | 84.68 |
| 5 | 85.57 | 85.57 | 85.55 | 85.46 |
| 6 | 85.47 | 85.47 | 85.31 | 85.26 |
| 7 | 85.44 | 85.44 | 85.46 | 85.34 |
| 8 | 85.02 | 85.02 | 84.98 | 84.93 |
| 9 | 84.93 | 84.92 | 84.95 | 84.86 |
| 10 | 84.21 | 84.21 | 84.15 | 84.09 |
| **Average** | **85.11** | **85.11** | **85.07** | **85.00** |

Table 5: Class wise performance of arrhythmia classification with heart rate and time domain features

| Class | Se (%) | +P (%) | F1 (%) |
|---|---|---|---|
| NSR | 79.98 | 92.00 | 85.57 |
| SA | 86.58 | 86.56 | 86.57 |
| SB | 97.19 | 89.04 | 92.93 |
| STACH | 97.74 | 95.01 | 96.36 |
| AF | 75.02 | 75.59 | 75.31 |
| AFL | 70.65 | 72.58 | 71.60 |
| PAC | 79.83 | 79.76 | 79.80 |
| 1AVB | 83.14 | 83.74 | 83.44 |
| PVC | 96.35 | 91.70 | 93.97 |

Table 6: Performance comparison with different subsets of HRV and time domain features

| Feature Category | Acc (%) | Se (%) | +P (%) | F1 (%) |
|---|---|---|---|---|
| HRV | 82.81 | 82.81 | 82.64 | 82.61 |
| P Wave | 73.10 | 73.10 | 72.87 | 72.69 |
| PRI | 43.65 | 43.65 | 42.83 | 43.06 |
| QRS Complex | 59.19 | 59.19 | 59.58 | 58.81 |
| HRV + P Wave | 80.48 | 80.48 | 80.18 | 80.15 |
| HRV + P Wave + PRI | 82.46 | 82.46 | 82.43 | 82.33 |
| **HRV + P Wave + PRI + QRS** | **85.11** | **85.11** | **85.07** | **85.00** |
In this work, a comparative performance study has been carried out using different mother wavelets. As presented in Table 9, the Symlet wavelet of order 7 (Symlet 7) shows the best performance, with an accuracy of 90.91%.
The arrhythmia classification performance of HRV features along with different combinations of wavelet coefficient features is presented in Table 10. It can be observed that the HRV features alone have an effective classification accuracy of 82.81%. Wavelet features of D3, D4, D5, D6 and D7 coefficient set give an accuracy of 83.04% without including HRV features. Performance gradually increases on combining the HRV features and wavelet coefficient feature sets. This justifies the effectiveness of HRV features along with wavelet coefficient features for the detection of multi-class cardiac arrhythmia.
The machine learning-based classifier for arrhythmia detection is an imperative component in this work. The output of the classification is highly dependent on the proper selection of a machine learning classifier.
Table 7: Detailed cross-validation results with heart rate and wavelet features

| Fold | Acc (%) | Se (%) | +P (%) | F1 (%) |
|---|---|---|---|---|
| 1 | 90.98 | 90.98 | 91.09 | 90.98 |
| 2 | 91.21 | 91.21 | 91.25 | 91.18 |
| 3 | 90.18 | 90.18 | 90.20 | 90.13 |
| 4 | 91.07 | 91.07 | 91.05 | 91.02 |
| 5 | 90.62 | 90.62 | 90.66 | 90.59 |
| 6 | 91.36 | 91.37 | 91.34 | 91.32 |
| 7 | 91.69 | 91.69 | 91.80 | 91.66 |
| 8 | 90.62 | 90.62 | 90.68 | 90.59 |
| 9 | 90.59 | 90.59 | 90.67 | 90.53 |
| 10 | 90.78 | 90.78 | 90.80 | 90.73 |
| **Average** | **90.91** | **90.91** | **90.96** | **90.87** |

Table 8: Class-wise performance results with heart rate and wavelet features

| Class | Se (%) | +P (%) | F1 (%) |
|---|---|---|---|
| NSR | 87.54 | 92.08 | 89.75 |
| SA | 91.97 | 88.09 | 89.99 |
| SB | 97.74 | 93.07 | 95.35 |
| STACH | 98.12 | 95.49 | 96.78 |
| AF | 79.27 | 87.11 | 83.01 |
| AFL | 82.61 | 79.08 | 80.81 |
| PAC | 88.03 | 90.23 | 89.12 |
| 1AVB | 85.71 | 88.80 | 87.23 |
| PVC | 99.57 | 96.49 | 98.00 |

Table 9: Performance comparison of the proposed scheme for different mother wavelets

| Wavelet | Acc (%) | Se (%) | +P (%) | F1 (%) |
|---|---|---|---|---|
| Haar | 89.97 | 89.97 | 89.98 | 89.90 |
| Daubechies 6 | 90.41 | 90.41 | 90.40 | 90.35 |
| Coiflet 2 | 90.86 | 90.86 | 90.89 | 90.82 |
| Biorthogonal 4.4 | 90.90 | 90.90 | 90.93 | 90.86 |
| **Symlet 7** | **90.91** | **90.91** | **90.96** | **90.87** |

Table 10: Performance comparison of the proposed scheme for different feature combinations

| Feature Category | Acc (%) | Se (%) | +P (%) | F1 (%) |
|---|---|---|---|---|
| HRV | 82.81 | 82.81 | 82.64 | 82.61 |
| D3 + D4 + D5 + D6 + D7 | 83.04 | 83.03 | 82.88 | 82.73 |
| HRV + D3 | 89.45 | 89.45 | 89.43 | 89.40 |
| HRV + D3 + D4 | 89.57 | 89.57 | 89.54 | 89.52 |
| HRV + D3 + D4 + D5 | 89.62 | 89.62 | 89.63 | 89.58 |
| HRV + D3 + D4 + D5 + D6 | 90.45 | 90.45 | 90.47 | 90.41 |
| **HRV + D3 + D4 + D5 + D6 + D7** | **90.91** | **90.91** | **90.96** | **90.87** |
In this section, a comparative analysis is carried out on the performance of both schemes for different machine learning classifiers. The techniques are tested with three other extensively used classifiers, namely K-nearest neighbours (KNN), decision tree (DT) and support vector machine (SVM), in addition to RF. As presented in Table 11, the maximum accuracy is obtained with the RF classifier. The vote aggregation concept in RF improves the classification performance compared to the other classifiers.
## 9 Conclusion
In this work, multi-class cardiac arrhythmia detection schemes are proposed. In the first part, HRV features are incorporated together with time domain statistical, entropy and higher-order statistical features of the P wave, PR interval and QRS complex for the effective classification of cardiac arrhythmia using single-channel ECG records. A set of 48 features in total is applied to a machine learning-based random forest classifier. The detailed 10-fold cross-validation results show that the proposed multi-class cardiac arrhythmia detection algorithm effectively classifies nine rhythms with an average Acc of 85.11%, Se of 85.11%, +P of 85.07% and F1 score of 85.00%. In the second part, wavelet coefficient features are used along with HRV features to successfully classify different arrhythmia types. Initially, the stationary wavelet transform is applied to decompose the ECG signal into different sub-band levels. Considering the frequency localization property of the wavelet transform, matching the frequency of the ECG local components, the detail coefficients D3, D4, D5, D6, and D7 are further processed for feature extraction. A set of 66 features based on timing information, entropy, higher-order statistics, and energy are extracted and applied to the RF classifier. Detailed cross-validation results show that the proposed multi-class cardiac arrhythmia detection algorithm can effectively classify nine rhythm types with an average Acc of 90.91%, Se of 90.91%, +P of 90.96% and F1 score of 90.87%. For both parts of the work, ECG records from four broadly distributed standard databases of the Physionet Challenge 2021 are combined to prepare a test database having nine classes of arrhythmia. Lastly, both classification schemes are implemented on a Raspberry Pi. Its low power consumption, light weight and compact design make it suitable for application in real time monitoring and processing of ECG signals. A close observation of the simulation results affirms that the proposed schemes can effectively be utilized in an advanced automated cardiac disease monitoring system.
## 10 Acknowledgment
The authors would like to thank Dr. Ankita Pramanik, Assistant Professor, Electronics and Telecommunication Engineering Department, IIEST, Shibpur, for her guidance and support.
|
2304.02908 | A Context-Switching/Dual-Context ROM Augmented RAM using Standard 8T
SRAM | The landscape of emerging applications has been continually widening,
encompassing various data-intensive applications like artificial intelligence,
machine learning, secure encryption, Internet-of-Things, etc. A sustainable
approach toward creating dedicated hardware platforms that can cater to
multiple applications often requires the underlying hardware to context-switch
or support more than one context simultaneously. This paper presents a
context-switching and dual-context memory based on the standard 8T SRAM
bit-cell. Specifically, we exploit the availability of multi-VT transistors by
selectively choosing the read-port transistors of the 8T SRAM cell to be either
high-VT or low-VT. The 8T SRAM cell is thus augmented to store ROM data
(represented as the VT of the transistors constituting the read-port) while
simultaneously storing RAM data. Further, we propose specific sensing
methodologies such that the memory array can support RAM-only or ROM-only mode
(context-switching (CS) mode) or RAM and ROM mode simultaneously (dual-context
(DC) mode). Extensive Monte-Carlo simulations have verified the robustness of
our proposed ROM-augmented CS/DC memory on the Globalfoundries 22nm-FDX
technology node. | Md Abdullah-Al Kaiser, Edwin Tieu, Ajey P. Jacob, Akhilesh R. Jaiswal | 2023-04-06T07:41:41Z | http://arxiv.org/abs/2304.02908v1 | # A Context-Switching/Dual-Context ROM Augmented RAM using Standard 8T SRAM
###### Abstract
The landscape of emerging applications has been continually widening, encompassing various data-intensive applications like artificial intelligence, machine learning, secure encryption, Internet-of-Things, etc. A _sustainable_ approach toward creating dedicated hardware platforms that can cater to multiple applications often requires the underlying hardware to context-switch or support more than one context simultaneously. This paper presents a _context-switching_ and _dual-context_ memory based on the standard 8T SRAM bit-cell. Specifically, we exploit the availability of multi-\(\mathrm{V_{T}}\) transistors by selectively choosing the read-port transistors of the 8T SRAM cell to be either high-\(\mathrm{V_{T}}\) or low-\(\mathrm{V_{T}}\). The 8T SRAM cell is thus augmented to store ROM data (represented as the \(\mathrm{V_{T}}\) of the transistors constituting the read-port) while simultaneously storing RAM data. Further, we propose specific sensing methodologies such that the memory array can support RAM-only or ROM-only mode (context-switching (CS) mode) or RAM and ROM mode simultaneously (dual-context (DC) mode). Extensive Monte-Carlo simulations have verified the robustness of our proposed ROM-augmented CS/DC memory on the Globalfoundries 22nm-FDX technology node.
context switching, dual-context, SRAM, memory, ROM-augmented RAM
## 1 Introduction
According to Moore's law, the remarkable scaling of the Silicon transistor technology has driven steady improvements in power, performance, and area (PPA) metrics of state-of-the-art computing platforms Shalf (2020). The ever-improving hardware PPA metrics have been sustained through a series of innovations at device Ye et al. (2019), circuit Song et al. (2021), and architectural level Shin et al. (2018). Historically, as the improvement in processor clock speed slowed down due to power concerns, parallel multi-core architectures emerged to cater to the ever-increasing compute demand of consumer applications Leiserson et al. (2020). State-of-the-art hardware platforms feature multi-core architectures, forming the backbone of existing computing solutions.
Recently, however, the scope of consumer applications has vastly increased and is driven by data-intensive applications like big data, IoT, machine learning, artificial intelligence, secure encryption, etc. Coupled with the decreased pace of Moore's Law and the sky-rocketing compute demands of emerging applications, domain-specific architectures that can
cater to the needs of a specific application of interest are being extensively explored by the research community Jouppi et al. (2018). Nevertheless, given the vast scope of emerging applications, hardware platforms often have to context switch between multiple applications. For example, dedicated domain-specific IoT devices could require support for both data analysis (machine learning) and data encryption (for secure wireless transfer) Liu et al. (2021). Thus, a sustainable pathway toward satisfying the computing need for a wide range of emerging applications requires custom hardware solutions that can seamlessly cater to multiple contexts while meeting the required power and performance metrics. Such dedicated multi-context hardware systems inevitably require an on-chip memory solution to rapidly context switch or cater to multi-context data.
We present a Context-Switching (CS) and a Dual-Context (DC) on-chip memory solution based on standard 8T SRAM cells. Our proposal is based on augmenting the standard 8T SRAM cells Verma and Chandrakasan (2008) with ROM-based memory. Each 8T bit-cell can simultaneously store independent RAM and ROM data, catering to multiple-context data stored within the same memory array. The ROM augmented part of the proposed scheme can store look-up tables for transcendental and polynomials functions for a wide range of applications or weights of a neural network Frigo and Johnson (2005); Ramanathan et al. (2020); Agrawal et al. (2018), while the RAM part can serve as a scratch pad memory. Furthermore, the presented ROM-augmented RAM bit-cell can operate in context-switching (CS) mode, wherein the memory array can act as ROM or RAM array, or a dual-context (DC) mode, wherein the memory array can function as ROM and RAM array, simultaneously. Our proposed bit-cell can perform better for ROM-intensive workloads that require frequent ROM access and computation that depends heavily on ROMs. Prior works on ROM-embedded SRAM can cater to ROM or RAM data, one at a time and/or use increased wordline/bit-line capacitance or multiple supply rails that can degrade speed and energy-efficiency Lee and Roy (2012); Agrawal and Roy (2018); Matsumura et al. (1994); Brandon et al. (2006). In contrast, our proposal can cater to both CS and DC modes, wherein RAM and ROM data can be accessed simultaneously by exploiting advanced foundry nodes' multi-\(\mathrm{V_{T}}\) nature.
The key highlights of the paper are as follows:
1. We present a novel approach to augment standard 8T SRAM cells with ROM, such that a single bit-cell can simultaneously store RAM and ROM data.
2. We propose two different operating modes for the presented ROM augmented RAM - a) Context-Switching (CS) Mode, wherein the 8T SRAM array can operate either as RAM _or_ ROM, b) Dual-Context (DC) Mode, wherein the memory array can simultaneously operate both as a RAM _and_ ROM.
3. For the DC mode of operation, we propose using a single sense amplifier with a dual-thresholding sensing scheme for reading both RAM and ROM data. In addition, in DC mode, our proposed method can read both RAM and ROM data within two cycles without destructing or storing data into a temporary buffer, thus saving latency and power overhead.
## 2 Proposed Context-Switching / Dual-Context 8T SRAM Bit-Cell
### Proposed Bit-cell
Figure 1 shows the proposed CS/DC memory bit-cell. It consists of the standard 8T SRAM bit-cell with a decoupled read-port. The write operation is achieved by utilizing the write-port (WWL, WBL, and WBLB) similar to the standard 6T SRAM. However, the read operation is performed through the decoupled read-port (RWL, RBL) and shared source line (SL). To embed ROM data inside the standard 8T SRAM cell, we propose to exploit the availability of multi-\(\mathrm{V_{T}}\) transistors in commercial foundry process design kits (PDKs). Specifically, the read-port of the standard 8T SRAM cell can be constructed using either high-\(\mathrm{V_{T}}\) or low-\(\mathrm{V_{T}}\) transistors. The high-\(\mathrm{V_{T}}\) and low-\(\mathrm{V_{T}}\) transistors represent the ROM data bit of '0' and '1', respectively. Figures 1(a) and 1(b) illustrate the CS/DC bit-cell storing a ROM data bit of '0' and '1', respectively. As shown in the figure, when the read-port transistors are implemented using high-\(\mathrm{V_{T}}\) (low-\(\mathrm{V_{T}}\)) transistors, the 8T bit-cell can be considered to be storing a ROM data '0' (ROM data '1'), in addition to the usual SRAM data stored on nodes Q and QB. Thus, the presented 8T bit-cell, wherein the read-port transistors are selectively chosen to be either high-\(\mathrm{V_{T}}\) or low-\(\mathrm{V_{T}}\), simultaneously stores one-bit RAM data and one-bit ROM data _within the same memory footprint_.
### RAM and ROM Read Analysis
To read the RAM and the ROM data stored in the 8T bit-cell, the sensing circuit in the periphery should be able to differentiate between various BL discharge rates controlled by the node Q (stored SRAM data) and the \(\mathrm{V_{T}}\) flavor of the transistor constituting the read-port (stored ROM data). Figure 2(a) illustrates a typical bit-line discharge voltage versus time for different SRAM and ROM data bit combinations. The proposed CS/DC bit-cell can store 2 bits: 1 re-writable bit in the SRAM and 1 fixed bit in the ROM. For the rest of the paper, we follow the convention that the MSB and LSB represent the SRAM and ROM data bits stored in the same 8T bit-cell, respectively.
Consider that the source line (SL) is connected to a small negative voltage within the reliability limit of the transistors. For case-00 (i.e., Q = 0 and the read-port transistors are high-\(\mathrm{V_{T}}\)), the pre-charged read bit-line will remain close to \(\mathrm{V_{DD}}\). The negative \(\mathrm{V_{SL}}\) and the SRAM stored data bit (Q = 0) cannot turn ON the high-\(\mathrm{V_{T}}\) transistor along the read path. However, for case-01 (i.e., Q = 0 and the read-port transistors are low-\(\mathrm{V_{T}}\)), the bit-line would experience a slow discharge.
Figure 1: Proposed Context-Switching/Dual-Context memory using standard 8T SRAM bit-cell. High-\(\mathrm{V_{T}}\) and low-\(\mathrm{V_{T}}\) read transistors represent the ROM data (a) ’0’ and (b) ’1’, respectively.
Figure 2: (a) Conceptual bit-line discharge voltage versus time for different data bit combinations in the proposed CS/DC bit-cell. MSB and LSB represent the SRAM and ROM stored data bit, respectively, (b), (c), and (d) Monte-Carlo simulations of the bit-line discharge voltage versus time for different source line (SL) voltages at TT corner for 5000 samples per each case.
Note that, though the SRAM data bit is '0', the negative \(\mathrm{V_{SL}}\) marginally activates the low-\(\mathrm{V_{T}}\) read transistor. On the other hand, for case-10 (i.e., Q = 1 and the read-port transistors are high-\(\mathrm{V_{T}}\)) and case-11 (i.e., Q = 1 and the read-port transistors are low-\(\mathrm{V_{T}}\)), the bit-line voltage discharges faster compared to case-01 due to the higher gate-voltage-overdrive of the lower read-port transistor. Further, the high-\(\mathrm{V_{T}}\) read transistors (case-10) will have a slower discharge compared to the low-\(\mathrm{V_{T}}\) (case-11) transistors. Thus, various RAM and ROM data stored in the same 8T bit-cell lead to different rates of BL discharge. The sensing circuit can thus read the ROM and RAM data bits from the same bit-cell.
Figure 2(b), (c), and (d) show the bit-line discharge voltage versus time considering the local variation at TT (Typical) corner for 5000 simulations for different source line voltages per each data set. Figure 2(b) exhibits that there is no robust sense margin between case-00 and case-01, similarly, between case-10 and case-11 due to the local mismatch variation when the SL is connected to a small negative voltage (i.e., -0.1 V). Hence, applying the small negative \(\mathrm{V_{SL}}\) can only differentiate between the SRAM data; however, it cannot sense the ROM data robustly. Figure 2(c) shows a sufficient sense margin between case-10 and case-11 when the \(\mathrm{V_{SL}}\) is positive (i.e., 0.2 V). Due to the application of the positive \(\mathrm{V_{SL}}\) with the stored SRAM data '1', the high-\(\mathrm{V_{T}}\) read-port transistor can marginally start conducting. In contrast, the low-\(\mathrm{V_{T}}\) transistor can entirely turn on due to its lower threshold voltage requirement. As a result, the sensing circuit can easily differentiate between case-10 and case-11. Moreover, Figure 2(d) exhibits that the necessary sense margin can be achieved between case-00 and case-01 by applying a larger negative voltage (\(\mathrm{V_{GS}}\) < \(\mathrm{V_{DD}}\) as \(\mathrm{V_{G}}\) of the read transistor is 0 V for case-00 and case-01) at the source line (SL). The applied \(\mathrm{V_{GS}}\) for the lower read-port transistors exceeds the threshold voltage of the low-\(\mathrm{V_{T}}\) transistor; however, it is not enough to completely turn on the high-\(\mathrm{V_{T}}\) transistor. As a result, ROM data can be easily differentiated when SRAM stored data is '0' by applying the appropriate negative \(\mathrm{V_{SL}}\).
## 3 Mode of Operation
The proposed scheme can operate in both context-switching and dual-context modes robustly. The memory array supports RAM or ROM read operation in the context-switching mode. Conversely, SRAM and ROM data can be read simultaneously in the dual-context mode. Simultaneous SRAM and ROM data access is achieved in two phases by using proper \(\mathrm{V_{SL}}\) to detect 2-bit data. Detailed discussions about the sensing operation with timing waveforms for both modes are described below.
### Context-Switching Mode
#### 3.1.1 ROM-only Mode
During the ROM-only mode, the array acts as a ROM array and stores only the ROM data. All the SRAM bit-cells are initialized such that Q = '0', and a negative voltage is applied at the source line (SL). When the RWL is activated, the bit-line voltage discharges faster for the low-\(\mathrm{V_{T}}\) read transistors than for the high-\(\mathrm{V_{T}}\) read transistors. The negative \(\mathrm{V_{SL}}\) is chosen in such a way that the effective \(\mathrm{V_{GS}}\) of the lower read-port transistor is sufficient to turn on the low-\(\mathrm{V_{T}}\) read-port transistor, whereas the high-\(\mathrm{V_{T}}\) transistor is only marginally on.
Figure 3: Timing waveform of the (a) ROM-only mode sensing when SRAM data is ’0’, (b) and (c) RAM-only mode sensing when SRAM data is ‘1’ and ‘0’, respectively for both low-\(\mathrm{V_{T}}\) and high-\(\mathrm{V_{T}}\) read-port transistors.
amplifier with the appropriate reference voltage can sense the stored ROM data. When the ROM data is '0' (high-\(\rm V_{T}\)), the sense amplifier outputs 0, and when the ROM data is '1' (low-\(\rm V_{T}\)), the output becomes 1. Figure 3(a) shows the timing waveform for the ROM-only mode operation.
#### 3.1.2 RAM-only Mode
In the RAM-only read operation, the ROM data can be either '1' or '0'. Hence, the SRAM data needs to be detected irrespective of whether the read-port transistors are low-\(\rm V_{T}\) or high-\(\rm V_{T}\). In this mode the source line (SL) can be grounded, as in standard SRAM sensing. Note that the source line (SL) can also be pulled down to a negative voltage (within the voltage-reliability limit of the transistors) for faster sensing. The bit-line discharges below a specified reference voltage when the SRAM data is '1', for either low-\(\rm V_{T}\) or high-\(\rm V_{T}\) read transistors. In contrast, the bit-line remains close to \(\rm V_{DD}\) when the SRAM data is '0'. Hence, a sense amplifier with an appropriate reference voltage can differentiate between SRAM data '1' and '0'. Figures 3(b) and 3(c) show the timing waveforms for RAM-only mode sensing when the SRAM stores data '1' and '0', respectively.
### Dual-Context Mode
The dual-context mode performs the read operation in two phases and detects the 2-bit data simultaneously with a single sense amplifier, without destroying the SRAM data. In the first phase, the source-line (SL) node is connected to the ground and the standard SRAM sensing operation is performed. In the next phase, the control circuit connects the shared source line (SL) to a positive or negative voltage based on the SRAM data. Figure 4 illustrates the timing waveforms for all 2-bit data combinations. For case-00 and case-01, the sense amplifier detects the SRAM data in the first phase; since the SRAM data is '0', the control circuit connects the source line (SL) to a negative voltage for the next phase. Figure 2(d) shows that the ROM data can then be differentiated by applying a proper reference voltage on the RBL when the SL is pulled to a negative voltage. The effective \(\rm V_{GS}\) applied at the lower read-port transistor is enough to turn on the low-\(\rm V_{T}\) transistor, but not the high-\(\rm V_{T}\) transistor. As a result, the low-\(\rm V_{T}\) read transistor allows the bit-line to discharge below the reference voltage, generating a high logic output (case-01), whereas the bit-line voltage remains close to \(\rm V_{DD}\) when the read port is built from a high-\(\rm V_{T}\) transistor (case-00). Similarly, for case-10 and case-11, the control circuit connects the source line (SL) to a positive voltage, since the SRAM now stores data '1'. Note that, as shown in Figure 2(c), for case-10 the bit-line voltage cannot discharge below the reference voltage because of the high-\(\rm V_{T}\) read-port transistors, whereas for case-11 it drops below the reference thanks to the low-\(\rm V_{T}\) read-port transistors. Thus, in a two-phase operation, the sensing circuit first determines the data stored in the SRAM cell (phase I) and then applies the appropriate source-line (SL) voltage to determine the ROM data (phase II), achieving the dual-context operation.
Figure 4: Timing waveform of the Dual-Context mode sensing.
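The two-phase procedure can be summarized as a small controller routine. The sketch below is only a behavioural outline of the phase sequencing described above; the interfaces `drive_sl` and `sense_rbl`, as well as the bias values, are assumptions standing in for the actual sense-amplifier and source-line driver circuitry.

```python
# Behavioural sketch of the two-phase dual-context read described above.
# The interfaces (drive_sl, sense_rbl) and bias values are assumptions, not the
# actual peripheral circuitry of the proposed array.
V_SL_NEG, V_SL_POS = -0.3, 0.2    # assumed source-line biases for ROM sensing

def dual_context_read(drive_sl, sense_rbl):
    """Return (sram_bit, rom_bit) using a single sense amplifier in two phases."""
    # Phase I: standard SRAM sensing with the source line grounded.
    drive_sl(0.0)
    sram_bit = sense_rbl()        # 1 if the read bit-line discharges below V_ref
    # Phase II: pick the SL bias from the SRAM data, then sense the ROM bit.
    drive_sl(V_SL_NEG if sram_bit == 0 else V_SL_POS)
    rom_bit = sense_rbl()         # low-V_T read port discharges -> ROM '1'
    return sram_bit, rom_bit

# Minimal usage with idealized stubs standing in for the bit-cell under test.
state = {"Q": 1, "ROM": 0, "V_SL": 0.0}            # case-10: SRAM '1', high-V_T port
drive_sl = lambda v: state.update(V_SL=v)          # record the applied SL voltage
sense_rbl = lambda: state["Q"] if state["V_SL"] == 0.0 else state["ROM"]
print(dual_context_read(drive_sl, sense_rbl))      # -> (1, 0)
```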
## 4 Evaluation and Process Variation Analysis
The proposed CS/DC memory bit-cell has been implemented in the 22nm GlobalFoundries FDSOI technology. To verify its functionality in the CS and DC modes, Monte-Carlo simulations were run in HSPICE with 5000 samples per case, considering only the local variation at the TT corner. The effects of global PVT variation can be calibrated out by adjusting the read word-line (RWL) delay, the reference voltages of the sense amplifier, and the source-line (SL) voltage, and are therefore ignored in this verification.
### Context-Switching Mode
#### 4.1.1 ROM-only Mode
Figure 5 shows the bit-line voltage and the output of the sense amplifier for 5000 samples for both ROM data of '0' and '1' (high-\(\mathrm{V_{T}}\) and low-\(\mathrm{V_{T}}\) read-port transistors). A noticeable sense margin can be observed from the figure between the ROM data '0' and '1', and the sense amplifier can robustly read the ROM data.
Figure 5: Monte-Carlo simulation results at TT corner for ROM-only mode.
Figure 6: Monte-Carlo simulation results at TT corner for the reliability-friendly RAM-only mode when the source line (SL) is connected to the ground.
#### 4.1.2 Reliability-friendly RAM-only Mode
Figure 6 shows the Monte-Carlo simulation results in the reliability-friendly RAM-only mode. In this mode the source line (SL) is connected to the ground; as a result, the \(\mathrm{V_{GS}}\) of the lower read-port transistor never exceeds \(\mathrm{V_{DD}}\). The figure shows that, when the SRAM stores '1', the bit-line discharges below the reference voltage, at a rate that depends on whether the read-port transistors are low-\(\mathrm{V_{T}}\) or high-\(\mathrm{V_{T}}\). In contrast, the bit-line voltage remains close to \(\mathrm{V_{DD}}\) when the SRAM stores '0'.
#### 4.1.3 Delay-friendly RAM-only Mode
Figure 7 shows the Monte-Carlo simulation results in the delay-friendly RAM-only mode, in which the source line (SL) is connected to a negative voltage. When the SRAM stores '1', the \(\mathrm{V_{GS}}\) across the lower read-port transistor then exceeds \(\mathrm{V_{DD}}\), and the bit-line discharges faster than in the reliability-friendly mode because of the gate overdrive. This delay-friendly mode can compensate for the extra delay associated with the high-\(\mathrm{V_{T}}\) read-port transistors of our CS/DC bit-cell. Figure 7 shows that the output becomes high (low) when the SRAM stores '1' ('0'), regardless of the stored ROM data.
### Dual-Context Mode
Figure 8 shows the Monte-Carlo simulation results for the dual-context mode, in which both the SRAM and ROM data are read simultaneously while preserving the SRAM data. The figure shows that the 2-bit data can be sensed robustly for all combinations using only one sense amplifier. In case-00, the sense-amplifier output remains at ground throughout. For case-01, the output remains at ground in the first phase because of the stored SRAM data '0', and switches to \(\mathrm{V_{DD}}\) in the second phase because of the stored ROM data '1'. Correspondingly, in case-10 the output first goes to \(\mathrm{V_{DD}}\) and then returns to ground in the following phase. Finally, for case-11, the output remains at \(\mathrm{V_{DD}}\) in both phases, in accordance with the stored SRAM and ROM data.
### Performance Comparison
Figure 7: Monte-Carlo simulation results at TT corner for the delay-friendly RAM-only mode when the source line (SL) is connected to a negative voltage.

Table 1 shows the performance metrics of the proposed bit-cell, normalized to the standard 8T SRAM bit-cell, in both the CS and DC modes. The ROM-only mode exhibits a higher delay owing to the gate underdrive (less than \(\mathrm{V_{DD}}\)) and the high-\(\mathrm{V_{T}}\) read-port transistors. The reliability-friendly RAM-only mode also exhibits a higher delay because of the high-\(\mathrm{V_{T}}\) read-port transistors. On the other hand, the delay-friendly RAM-only mode can sense faster than the reliability-friendly RAM-only mode thanks to the high gate overdrive of the read-port transistors. The average read energy in both cases is, however, slightly higher than the standard 8T SRAM read energy. The DC mode shows a higher delay because of the cumulative effect of the SRAM and ROM sensing delays. Since in the DC mode the timing has to be optimized for the worst-case scenario (case-10 for SRAM sensing, due to the high-\(\mathrm{V_{T}}\) read-port transistors, and case-11 for ROM sensing, due to the positive source-line (SL) voltage that underdrives the gate), the delay per bit becomes larger. The energy consumption per bit, however, remains close to that of the standard 8T SRAM bit-cell. Moreover, the overall leakage of our proposed CS/DC bit-cell is nearly the same as that of the standard 8T SRAM cell, owing to the averaging effect of the low-\(\mathrm{V_{T}}\) and high-\(\mathrm{V_{T}}\) read-port transistors. Considering the 6T SRAM bit-cell area and metal pitch for the GF22FDX node, our proposed CS/DC memory exhibits 1.1\(\times\) and 1.3\(\times\) area improvements compared to 6T SRAM and to 8T SRAM with a separate ROM bank, respectively, for an iso-size memory array.
## 5 Conclusion
In this brief, we have presented a context-switching and dual-context ROM-augmented 8T SRAM bit-cell. The proposed bit-cell can simultaneously store independent ROM and SRAM data, ensuring a 1.3\(\times\) storage-density improvement in the same memory footprint compared to a separate RAM and ROM architecture. The memory array can operate as a RAM or ROM array (context-switching mode) or as RAM and ROM arrays simultaneously (dual-context mode). The functionality and robustness of the proposed CS/DC bit-cell have been verified through extensive Monte-Carlo simulations on the GF22FDX technology node. Finally, we believe the proposed bit-cell can be a good candidate for supporting a wide class of emerging applications that need multi-context operations.
\begin{table}
\begin{tabular}{l c c c c} \hline Metrics & ROM-only & RAM-only & RAM-only & Dual-Context \\ & & Reliability-friendly & Delay-friendly & \\ \hline Read Delay/bit & 1.35\(\times\) & 1.79\(\times\) & 1.25\(\times\) & 1.95\(\times\) \\ Read Energy/bit & 1.06\(\times\) & 1.04\(\times\) & 1.03\(\times\) & 1.08\(\times\) \\ \hline Leakage Power & \multicolumn{4}{c}{0.997\(\times\)} \\ \hline \end{tabular}
\end{table}
Table 1: Performance metrics of the proposed bit-cell normalized to the standard 8T SRAM bit-cell
Figure 8: Monte-Carlo simulation results at TT corner for dual-context mode.
## 6 Acknowledgments
The Center for Undergraduate Research at Viterbi (CURVE) partly supported this work. We acknowledge GlobalFoundries' support for the 22nm technology.
|
2307.15924 | Correlator webs of massive multiparton amplitudes at four loops: A study
of boomerang webs | Logarithm of the soft function can be organized into sets of Feynman diagrams
known as Cwebs. We introduced a new formalism in~\cite{Agarwal:2022wyk} that
allows one to determine several of the building blocks of Cweb mixing matrices
without explicit computations. In~\cite{Agarwal:2022xec} we used this formalism
to obtain the diagonal blocks of four general classes of Cwebs to all orders in
perturbation theory which also covered all the four loop Boomerang Cwebs
connecting four Wilson lines. In this work we present complete mixing matrices
and exponentiated colour factors for Boomerang Cwebs at four loops that connect
three and four Wilson lines. We also present a more efficient version of the
algorithm for generating Cwebs that was presented in~\cite{Agarwal:2020nyc}.
This new algorithm has been used to generate the Cwebs in the present work. | Neelima Agarwal, Sourav Pal, Aditya Srivastav, Anurag Tripathi | 2023-07-29T07:41:32Z | http://arxiv.org/abs/2307.15924v2 | # Cwebs of massive multiparton amplitudes at 4 loops:
###### Abstract
The logarithm of the soft function can be studied in terms of sets of Feynman diagrams known as Cwebs. We introduced a new formalism in [1] that allows one to determine several of the building blocks of Cweb mixing matrices without explicit computations, in the context of massless multiparton scattering amplitudes; in [2] we applied it to massive amplitudes as well, and obtained the diagonal blocks for Boomerang (_self-energy_) Cwebs. Here we present an efficient algorithm to find the Cwebs that are present at any loop order, and determine the complete mixing matrices and exponentiated colour factors associated with each Boomerang Cweb at four loops connecting three and four Wilson lines.
###### Contents
* 1 Introduction
* 2 Cwebs and web mixing matrices
* 3 An improved algorithm to generate Cwebs
* 4 A brief description of the code CwebGen 2.0
* 5 Boomerang Cwebs at Four loops
* 6 Conclusions
* 7 Appendix
* A Replica trick and mixing matrices
* B Boomerang Cwebs at Four loops
* B.1 Boomerang Cwebs connecting four Wilson lines
* B.2 Boomerang Cwebs connecting three Wilson lines
## 1 Introduction
The infrared (IR) singularity structure of scattering amplitudes involving non-abelian gauge bosons has been a subject of interest for several decades, with a rich and long history [3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19]. A recent review of IR singularities in gauge theories [20] provides a comprehensive account of the subject. The universal structure of the IR singularities of gauge theories, that is, their independence of the particular hard scattering process, makes it possible to study their all-order structure in perturbation theory. The structure of infrared singularities not only helps us understand gauge theories to all orders in perturbation theory, but is also important for phenomenological applications relevant to high-energy scattering experiments at colliders. Calculations of physical observables such as scattering cross sections and decay rates suffer from these IR divergences, which eventually cancel upon adding the real-emission contribution to the virtual correction at a given perturbative order. This cancellation leaves behind large logarithms of kinematic invariants that spoil the predictive power of fixed-order calculations in certain kinematical regions. Again, the universality of the IR singularities provides a way of summing these logarithms to all orders in perturbation theory and helps in recovering predictivity in those regions. The intricate cancellation of IR poles between real and virtual contributions for collider observables is a non-trivial task, for which several subtraction procedures [21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34] have been developed.
The QCD factorization theorem enables us to study the singular parts of scattering amplitudes involving massless gauge bosons without calculating the complicated hard part. The soft
function, which captures the IR-singular parts of a scattering amplitude, is expressed as a correlator of Wilson-line operators. The usual Wilson-line operators \(\Phi(\zeta)\), evaluated on smooth space-time contours \(\zeta\), are defined as,
\[\Phi\left(\zeta\right)\,\equiv\,\mathcal{P}\exp\left[\mathrm{i}g\! \int_{\zeta}dx\cdot\mathbf{A}(x)\right]\,. \tag{1}\]
where \(\mathbf{A}^{\mu}(x)=A_{a}^{\mu}(x)\,\mathbf{T}^{a}\) is a non-abelian gauge field, and \(\mathbf{T}^{a}\) is a generator of the gauge algebra, which can be taken to belong to any desired representation, and \(\mathcal{P}\) denotes path ordering of the gauge fields. The soft function in general can be expressed as,
\[\mathcal{S}_{n}\left(\zeta_{i}\right)\,\equiv\,\left\langle 0\right|\prod_{k=1}^{n }\Phi\left(\zeta_{k}\right)\left|0\right\rangle\,. \tag{2}\]
These Wilson lines are semi-infinite Wilson lines along the direction of the hard particle. Thus, one can write the soft function as,
\[\mathcal{S}_{n}\Big{(}\beta_{i}\cdot\beta_{j},\alpha_{s}(\mu^{2}),\epsilon \Big{)}\,\equiv\,\left\langle 0\right|\prod_{k=1}^{n}\Phi_{\beta_{k}}\left( \infty,0\right)\left|0\right\rangle,\quad\Phi_{\beta}\left(\infty,0\right) \,\equiv\,\mathcal{P}\exp\left[\mathrm{i}g\!\int_{0}^{\infty}d\lambda\,\beta \cdot\mathbf{A}(\lambda\beta)\right]\,. \tag{3}\]
The object \(\mathcal{S}_{n}\) suffers from both UV and IR (soft) singularities, and thus needs renormalization. In dimensional regularization with \(d=4-2\epsilon\), \(\mathcal{S}_{n}\) is equal to zero, as it is made out of Wilson-line correlators that involve only scaleless integrals. Thus, in a renormalized theory, \(\mathcal{S}_{n}\) contains pure UV counterterms.
The renormalized soft function obeys a renormalization group equation, and solving this equation results in an exponentiation of the form,
\[\mathcal{S}_{n}\Big{(}\beta_{i}\cdot\beta_{j},\alpha_{s}(\mu^{2}),\epsilon \Big{)}\,=\,\mathcal{P}\exp\left[-\frac{1}{2}\int_{0}^{\mu^{2}}\frac{d\lambda ^{2}}{\lambda^{2}}\,\mathbf{\Gamma}_{n}\Big{(}\beta_{i}\cdot\beta_{j},\alpha_{s }(\lambda^{2}),\epsilon\Big{)}\right]\,, \tag{4}\]
where \(\mathbf{\Gamma}_{n}\) is known as the soft anomalous dimension. In the case of multiparton scattering processes, the soft anomalous dimension is a matrix, which is an interesting theoretical object to study and is our main focus in this article. The renormalization group approach has been used for more than two decades to calculate the soft anomalous dimension. One-loop calculations for \(\mathbf{\Gamma}_{n}\) were performed in [35; 36; 37; 38], while two-loop calculations were done for both the massless case in [39; 40] and the massive case in [41; 42; 43; 44; 45; 46; 47; 48]. The three-loop calculations were finally carried out for the massless case in [49; 50]. The calculation of the soft anomalous dimension at four loops is an ongoing effort; several studies in this direction are available in the literature [51; 52; 53; 54; 55; 56; 57; 58; 59; 60; 61; 62; 63; 64; 65]. The kinematic dependence of the soft anomalous dimension for scatterings involving only massless lines is restricted by the constraints discussed in [17; 18; 66; 67]. However, these constraints do not hold for scatterings involving massive particles. The state-of-the-art results for the soft anomalous dimension with massive particles extend up to two loops [41; 42; 43; 44; 45; 46; 47; 48], and to three loops for one massive Wilson line [68].
An alternative approach to determine the exponent of the soft function is through diagrammatic
exponentiation. In terms of Feynman diagrams, the soft function has the form,
\[\mathcal{S}_{n}\left(\gamma_{i}\right)\,=\,\exp\left[\mathcal{W}_{n}\left(\gamma_{ i}\right)\right], \tag{5}\]
where \(\mathcal{W}_{n}\left(\gamma_{i}\right)\), known as the sum of _webs_, can be computed directly using Feynman diagrams. Webs were first defined as two-line irreducible diagrams for scattering involving two Wilson lines [69; 70; 71]. For multiparton scattering processes, webs in a non-abelian gauge theory [72; 73] are defined as sets of diagrams that differ from one another by the order of gluon attachments on each Wilson line. A generalization of the web, called a Cweb [74; 75], is a set of skeleton diagrams built out of connected gluon correlators attached to Wilson lines. The diagrams of a Cweb are closed under shuffles of the gluon attachments to each Wilson line.
The state-of-the-art studies cover massive multiparton webs at three loops [76], and massless webs up to three loops [73; 77; 78; 79], at four loops [74; 75], and partially at five loops [80]. The kinematics and the colour factors of the diagrams in a Cweb mix among themselves through a web mixing matrix.
These web mixing matrices are crucial objects in the study of non-abelian exponentiation, and the general method to calculate them is the well-known replica trick algorithm [73; 81]; an alternative approach based on generating functionals was developed in [82; 83; 84]. Web mixing matrices are combinatorial objects and have also been studied from the viewpoint of combinatorial mathematics using posets [85; 86; 87]. Further, a novel method of calculating the diagonal blocks of the mixing matrices -- using several new concepts such as Normal Ordering, the Uniqueness theorem and Fused Webs -- was developed in [1] and has been applied to certain classes of webs in [2].
Recently, Boomerang webs were introduced in [76] to calculate the soft anomalous dimension at three loops for scattering processes involving massive particles, such as the top quark, whose mass cannot be ignored in several QCD processes. Boomerang webs are defined as webs that contain at least one gluon self-energy correction to one of the Wilson lines. Following the definition of Cwebs, Boomerang Cwebs were defined in [2] as Cwebs that contain at least one two-point gluon correlator with both ends attached to the same massive Wilson line. In [76], the authors computed the web mixing matrices, the exponentiated colour factors, and the kinematics for all three-loop Boomerang webs. Recently, the diagonal blocks of the mixing matrices for four-loop Boomerang Cwebs connecting four Wilson lines were presented in [2], using the concept of Fused Webs [1] and the combinatorial properties of the Cwebs. However, to determine the exponentiated colour factors and the kinematic contributions of these Cwebs, one needs the explicit form of the mixing matrices.
In this article we present the explicit results of the mixing matrices for all the Boomerang Cwebs that connect three and four massive Wilson lines. To enumerate these Cwebs uniquely, we have modified the algorithm of enumerating Cwebs [74; 75] and implemented it in a Mathematica code. Further, we have modified the older version of an in-house Mathematica code used in [74; 75] to calculate the mixing matrices for Cwebs using the replica-trick algorithm. These modifications are incorporated in our in-house Mathematica Code CwebGen 2.0.
The rest of the paper is structured as follows. In section 2, we review Cwebs and the properties
of web mixing matrices. In section 3 we give the details of the modified algorithm for generating Cwebs at a given perturbative order and compare it with the older version. Section 4 discusses the working of CwebGen 2.0. In section 5, we describe the calculation of the mixing matrices for two Boomerang Cwebs at four loops. Finally, in section 6 we summarize our findings and give an outlook. In the ancillary file _Boomerang.nb_, we provide the explicit form of the mixing matrices and the corresponding column weight vectors for all Boomerang Cwebs considered in this article.
## 2 Cwebs and web mixing matrices
For the convenience of the reader, we collect here some of the definitions that are used in this article.
**Definitions**
**Web**: A set of diagrams closed under shuffles of the gluon attachments on each Wilson line.
**Cweb**: A set of skeleton diagrams built out of connected gluon correlators attached to Wilson lines. The diagrams of a Cweb are closed under shuffles of the gluon attachments to each Wilson line.
**Boomerang Cweb**: A Cweb that contains at least one two-point gluon correlator whose both ends are attached to the same massive Wilson line.
**Weight factor (\(s\)-factor)**: The weight factor \(s(d)\) for a diagram \(d\) is defined as the number of different ways in which the gluon correlators can be _sequentially_ shrunk to their common origin.
**Column weight vector**: We can construct a column weight vector out of the \(s\)-factors for a Cweb with \(n\) diagrams as
\[S=\{s(d_{1}),s(d_{2}),\ldots,s(d_{n})\}. \tag{1}\]
\(\mathbf{W}_{n}^{(c_{2},\ldots,c_{p})}(\mathbf{k_{1}},\ldots,\mathbf{k_{n}})\): This is how we denote a Cweb constructed out of \(c_{m}\)\(m\)-point connected gluon correlators (\(m=2,\ldots,p\)); we choose the ordering \(k_{1}\leq k_{2}\leq\ldots\leq k_{n}\), where \(k_{i}\) denotes the number of attachments on the different Wilson lines.
Since the perturbative expansion of an \(m\)-point connected gluon correlator starts at \(\mathcal{O}(g^{m-2})\), while each attachment to a Wilson line carries a further power of \(g\), the perturbative expansion for a Cweb can be written as

\[W_{n}^{(c_{2},\ldots,c_{p})}(k_{1},\ldots,k_{n})\,=\,g^{\sum_{i=1}^{n}k_{i}\,+\,\sum_{r=2}^{p}c_{r}(r-2)}\,\sum_{j=0}^{\infty}W_{n,j}^{(c_{2},\ldots,c_{p})}(k_{1},\ldots,k_{n})\,g^{2j}\,, \tag{2}\]

which defines the perturbative coefficients \(W_{n,j}^{(c_{2},\ldots,c_{p})}(k_{1},\ldots,k_{n})\). The lowest perturbative order at which this Cweb appears is thus \(g^{\sum_{i=1}^{n}k_{i}\,+\,\sum_{r=2}^{p}c_{r}(r-2)}\).
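As a quick illustration of this counting (a trivial helper, not part of CwebGen 2.0), the snippet below evaluates the starting order for two of the correlator and attachment contents that appear later in this paper.

```python
# Starting perturbative order of a Cweb W_n^{(c_2,...,c_p)}(k_1,...,k_n):
# each attachment contributes one power of g, an r-point correlator contributes g^(r-2).
def starting_order(correlators, attachments):
    """correlators: {r: c_r}; attachments: list of k_i. Returns the power of g."""
    return sum(attachments) + sum(c_r * (r - 2) for r, c_r in correlators.items())

# W_4^{(4)}(1,1,3,3): four two-point correlators -> g^8, i.e. a four-loop Cweb.
print(starting_order({2: 4}, [1, 1, 3, 3]))        # 8
# W_4^{(2,1)}(1,1,1,4): two two-point and one three-point correlator -> g^8 as well.
print(starting_order({2: 2, 3: 1}, [1, 1, 1, 4]))  # 8
```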
Cwebs are the proper building blocks of the logarithm of the Soft function -- they are useful in the organization and counting of diagrammatic contributions at higher perturbative orders [74; 75]. The logarithm of the Soft function is a sum over all the Cwebs at each perturbative order:
\[\mathcal{S}\,=\,\exp\left[\sum_{W}\sum_{d,d^{\prime}\in W}K(d)\,R_{W}(d,d^{ \prime})\,C(d^{\prime})\right]\,. \tag{3}\]
Here \(d\) denotes a diagram in a Cweb \(W\), and its kinematic and colour factors are denoted by \(K(d)\) and \(C(d)\), respectively. The action of the web mixing matrix \(R_{W}\) on the colour factor \(C(d)\) of a diagram generates its exponentiated colour factor \(\widetilde{C}(d)\),
\[\widetilde{C}(d)\,=\,\sum_{d^{\prime}\in W}R_{W}(d,d^{\prime})\,C(d^{\prime})\,. \tag{4}\]
The contribution of colour and kinematic factors to a web \(W\) can be arranged in a more transparent manner if we diagonalize the mixing matrix \(R\):
\[W=\left(K^{T}Y^{-1}\right)YRY^{-1}\left(YC\right)\,=\,\sum_{j=1}^{r}\left(K^{ T}Y^{-1}\right)_{j}\left(YC\right)_{j}\,, \tag{5}\]
where \(Y\) is the diagonalizing matrix and \(YRY^{-1}\equiv\mathcal{D}\) is the resulting diagonal matrix; furthermore, we arrange \(\mathcal{D}_{jj}=1\) for \(1\leq j\leq r\), where \(r\) is the rank of \(R\). The \(\left(YC\right)_{j}\) are also referred to as exponentiated colour factors, and the corresponding kinematic factors are \(\left(K^{T}Y^{-1}\right)_{j}\). In this article we present the exponentiated colour factors in terms of \(\left(YC\right)_{j}\).
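The diagonalization in the last step can be carried out mechanically. The short SymPy sketch below, an illustrative reconstruction rather than an excerpt of CwebGen 2.0, does this for the 3×3 mixing matrix that will appear in the first worked example of section 5; the symbols C1, C2, C3 are placeholders for the colour factors of its three diagrams, and the diagonalizing matrix produced this way need not coincide entry-by-entry with the one quoted there, since \(Y\) is not unique.

```python
# Illustrative SymPy sketch (not CwebGen 2.0): diagonalize a mixing matrix R and
# form the combinations (YC)_j of the diagram colour factors.
import sympy as sp

# The 3x3 mixing matrix quoted for the first worked example in section 5.
R = sp.Rational(1, 2) * sp.Matrix([[2, -1, -1],
                                   [0,  1, -1],
                                   [0, -1,  1]])

# Collect right eigenvectors, putting the unit-eigenvalue ones first, and set
# Y = P^{-1}, so that Y R Y^{-1} is diagonal with the 1s in the leading block.
unit, null = [], []
for eigval, _, basis in R.eigenvects():
    (unit if eigval == 1 else null).extend(basis)
P = sp.Matrix.hstack(*(unit + null))
Y = P.inv()
assert (Y * R * P).is_diagonal()

C1, C2, C3 = sp.symbols("C1 C2 C3")      # placeholders for the diagram colour factors
YC = (Y * sp.Matrix([C1, C2, C3])).applyfunc(sp.expand)
print(YC.T)                               # combinations associated with eigenvalues 1, 1, 0

# If two reducible diagrams carry the same colour factor (C3 = C2, as happens for
# the Boomerang diagrams discussed in section 5), one combination collapses to zero.
print(YC.subs(C3, C2).applyfunc(sp.expand).T)
```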
To understand the structure of the soft anomalous dimension, the study of the web mixing matrices is crucial. We begin this study by listing the properties that they are known to obey [77; 78; 85; 86; 87; 1; 88; 72; 73], which we summarize below.
**Properties of mixing matrices**
1. _Idempotence_: A Cweb mixing matrix is idempotent, that is, \(R^{2}=R\). This implies that: (a) the eigenvalues of \(R\) can either be 0 or 1, (b) the trace \(\text{tr}(R)\) is equal to its rank, and (c) it acts as a projection operator: acting to its right, it gives completely connected exponentiated colour factors.
2. _Zero row sum rule_: The entries of \(R\) obey the zero row sum rule \(\sum_{d^{\prime}}R(d,d^{\prime})=0\). This ensures that while acting on the vector of kinematic factors, the web mixing matrix implements cancellation of the leading divergences among the diagrams of the Cweb.
3. _Column sum rule_: Along with these general properties, the mixing matrices also obey a conjectured column-sum rule of the form \[\sum_{d}s(d)R(d,d^{\prime})=0\,.\] (6)
4. _Uniqueness_: For a given column weight vector \(S=\{s(d_{1}),s(d_{2}),\ldots,s(d_{k})\}\) with all \(s(d_{i})\neq 0\), the mixing matrix is unique.
In section 5, we will verify these properties for four loop Boomerang Cwebs.
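For any explicitly computed matrix these properties can be tested directly. The helper below is an illustrative check, independent of CwebGen 2.0; it verifies idempotence, the zero row-sum rule, and the conjectured column-sum rule using exact fractions, here for the 3×3 matrix and column weight vector of the first worked example in section 5.

```python
# Exact checks of the mixing-matrix properties listed above (illustrative only).
from fractions import Fraction

def check_mixing_matrix(R, S):
    """R: list of rows of Fractions; S: column weight vector. Returns dict of booleans."""
    n = len(R)
    R2 = [[sum(R[i][k] * R[k][j] for k in range(n)) for j in range(n)] for i in range(n)]
    return {
        "idempotent (R^2 = R)":     R2 == R,
        "zero row sums":            all(sum(row) == 0 for row in R),
        "column sum rule (S.R = 0)": all(sum(S[i] * R[i][j] for i in range(n)) == 0
                                         for j in range(n)),
    }

# Example: the 3x3 matrix of the first worked example in section 5, with S = {0, 1, 1}.
h = Fraction(1, 2)
R = [[2 * h, -h, -h],
     [0 * h,  h, -h],
     [0 * h, -h,  h]]
print(check_mixing_matrix(R, [0, 1, 1]))   # all three checks return True
```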
In the next section, we describe a recursive algorithm that generates Cwebs present at \(\mathcal{O}(g^{2l+2})\) using Cwebs at \(\mathcal{O}(g^{2l})\).
## 3 An improved algorithm to generate Cwebs
An algorithm to generate Cwebs at \(l+1\) loops from Cwebs at \(l\) loops was presented in [74], in which some of the present authors were involved; it was later used in [75] to obtain the Cwebs that are present at four loops. For ease of reading, we first reproduce the original algorithm below and then spell out a modified, more efficient version, which is implemented in a subroutine of the in-house Mathematica code CwebGen 2.0.
### Original algorithm
The original algorithm presented in [74; 75] to generate Cwebs at \(l+1\) loops using Cwebs at \(l\) loops is reproduced below:
1. Add a two-gluon connected correlator connecting any two Wilson lines (including Wilson lines that had no attachments at lower orders).
2. Connect an existing \(m\)-point correlator to any Wilson line (again, including Wilson lines with no attachments at lower orders), turning it into an \((m+1)\)-point correlator.
3. Connect an existing \(m\)-point correlator to an existing \(n\)-point correlator, resulting in an \((n+m)\)-point correlator.
4. Discard the duplicate Cwebs.
As discussed in [74; 20], one of the major drawbacks of the above algorithm is that it generates multiple copies of a Cweb, which need to be discarded before starting the calculation of the mixing matrices at a given perturbative order.
Figure 1: Representative diagrams of Cwebs appearing at two loops connecting light-like Wilson lines. Cwebs (a) and (b) belong to \(\{W_{2}^{2}\}\), while \(\{W_{3}^{2}\}\) contains Cwebs (c) and (d).
#### New algorithm
We find that it is sufficient to use Cwebs at \(l\) loops connecting \(m\) Wilson lines to generate all Cwebs at \(l+1\) loops connecting the same number of Wilson lines; moreover, the third step of the original algorithm only generates Cwebs that are already produced by the first and second steps. Based on these observations, an improved and more efficient algorithm to generate Cwebs recursively is given below:
1. To generate Cwebs at \(l+1\) loops connecting at most \(l+1\) Wilson lines, starting from a Cweb at \(l\) loops connecting \(m\) (\(1\leq m\leq l+1\)) lines: (a) connect any two existing Wilson lines by introducing a two-point gluon correlator (a two-point correlator with both ends attached to the same line, i.e. a Boomerang, is allowed only on a massive Wilson line); (b) connect any existing \(k\)-point gluon correlator to an existing Wilson line.
2. To generate Cwebs at \(l+1\) loops connecting the highest number \((l+2)\) of Wilson lines, apply the following steps to Cwebs at \(l\) loops connecting the highest number \((l+1)\) of lines allowed at that order: (a) connect a new Wilson line to any of the existing lines by introducing a two-point gluon correlator; (b) connect any existing \(k\)-point gluon correlator to a new Wilson line.
3. Discard the duplicate Cwebs.
Let us now explain how the above algorithm works and contrast it with the older version. We denote the set of all Cwebs \(\{W_{n}^{(c_{2},\ldots,c_{p})}(k_{1},\ldots,k_{n})\}\) that are present at \(l\) loops and connect \(n\) lines by \(\{W_{n}^{l}\}\). To generate \(\{W_{n}^{l+1}\}\), the earlier version of the algorithm required operations on both \(\{W_{n}^{l}\}\) and \(\{W_{n-1}^{l}\}\). In contrast, the new algorithm requires operations only on \(\{W_{n}^{l}\}\): the Cwebs in \(\{W_{n}^{l+1}\}\) that were generated from \(\{W_{n-1}^{l}\}\) require an extra gluon connecting the existing lines or _blobs_ to a new Wilson line; however, the same Cwebs are obtained by starting with \(\{W_{n}^{l}\}\) and attaching a gluon to the existing lines or to the blobs of the correlators in all possible ways, which generates \(\{W_{n}^{l+1}\}\).
We apply the above reasoning to generate the three-loop Cwebs \(\{W_{3}^{3}\}\) from the two-loop Cwebs \(\{W_{3}^{2}\}\). The Cwebs at two loops, shown in fig. (1), fall into two sets: \(\{W_{2}^{2}\}\), containing Cwebs (a) and (b), and \(\{W_{3}^{2}\}\), containing Cwebs (c) and (d). All the Cwebs at three loops connecting three lines [74; 78] are shown in fig. (2). The same Cwebs are generated if we apply the algorithm to \(\{W_{3}^{2}\}\). In fig. (2) we have indicated by dashed _boxes_ the Cwebs that are contained in \(\{W_{3}^{2}\}\) of fig. (1); note that _each_ of them has a box that contains one of the Cwebs in \(\{W_{3}^{2}\}\).
In a similar fashion, Cwebs of the set \(\{W_{2}^{2}\}\) are present in \(\{W_{3}^{3}\}\), except for those in figs. (2b) and (2e). This example thus provides a justification for the algorithm above.
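To make the recursion concrete, the sketch below implements a stripped-down version of steps 1(a), 1(b) and 3: a Cweb is represented only by which Wilson lines each correlator attaches to (a repeated label denotes a Boomerang), and duplicates are removed by comparing canonical forms. This is an illustration under simplifying assumptions, not the CwebGen 2.0 subroutine: colour factors, kinematics and the relabelling symmetries of the Wilson lines are all ignored, so the raw counts it produces differ from the numbers of physically distinct Cwebs quoted below.

```python
# Simplified sketch of one step of the recursive Cweb generation (steps 1(a), 1(b), 3).
# A Cweb is a tuple of correlators; each correlator is a sorted tuple of the
# Wilson-line labels it attaches to (a repeated label marks a Boomerang).
from itertools import combinations_with_replacement

def canonical(cweb):
    """Canonical form used only to discard duplicate representations (step 3)."""
    return tuple(sorted(tuple(sorted(corr)) for corr in cweb))

def next_order(cwebs, lines, massive=True):
    """Generate the (l+1)-loop Cwebs on the same Wilson lines from the l-loop ones."""
    out = set()
    for cweb in cwebs:
        # Step 1(a): add a two-point correlator between two lines; both ends on the
        # same line (a Boomerang) is allowed only if the lines are massive.
        for i, j in combinations_with_replacement(lines, 2):
            if i == j and not massive:
                continue
            out.add(canonical(cweb + ((i, j),)))
        # Step 1(b): attach one extra leg of an existing correlator to any line.
        for idx, corr in enumerate(cweb):
            for i in lines:
                out.add(canonical(cweb[:idx] + (corr + (i,),) + cweb[idx + 1:]))
    return out

# Usage: start from the single one-gluon exchange between lines 1 and 2 and climb
# one order; line relabellings are intentionally not modded out in this sketch.
one_loop = {canonical(((1, 2),))}
print(sorted(next_order(one_loop, lines=(1, 2))))
```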
#### Comparison of the older and newer Mathematica implementations
The in-house Mathematica code CwebGen 2.0 that we have used to generate the results reported in this paper incorporates a new subroutine that is based on the more efficient algorithm of generating Cwebs presented above.
With this new algorithm we have reproduced the known Cwebs appearing at three [78] and four loops [74; 75] connecting massless Wilson lines, which provides a check on our implementation. The subroutine is completely general and applies to all perturbative orders. As mentioned in the algorithm, certain Cwebs (colour factors) are generated multiple times, and we discard the duplicates to obtain the unique ones. At four loops (for massless Wilson lines) the new algorithm generates 150 Cwebs, compared to the 226 Cwebs generated by the original algorithm; after discarding the duplicates, only 60 remain [74; 75]. The new algorithm thus generates 2.5 times the number of unique Cwebs, whereas the previous algorithm generates 3.8 times, so the new algorithm increases the efficiency by 34%.
In this work, we have used this subroutine with the colour factors of one diagram from each three-loop Boomerang Cweb as input, to generate the colour factors of the Boomerang Cwebs at four loops. The older algorithm, when applied to each of the 9 Boomerang Cwebs present at three loops, generates 95 Boomerang Cwebs, whereas the new algorithm generates 71; after discarding the duplicates, only 45 remain at four loops. Thus the new algorithm increases the efficiency in finding unique Cwebs by 25%. As we go beyond four loops, the number of Cwebs at lower orders increases, so we expect the new algorithm to become even more useful in finding Cwebs at higher orders.
## 4 A brief description of the code CwebGen 2.0
Figure 2: Cwebs (webs) of \(\{W_{3}^{3}\}\) generated using the non-recursive algorithm.

One of the most powerful techniques for combinatorial problems in physics that involve exponentiation is the replica trick [89]. For Wilson-line correlators, the replica trick algorithm was developed in [73; 81]. The same replica trick was adopted in [74; 75] for the calculation of the mixing matrices of four-loop Cwebs. The details of the replica trick, discussed in appendix A, will be used in the calculation of the mixing matrices for Boomerang Cwebs at four loops.
We describe below the current version of the in-house Mathematica code, CwebGen 2.0, which incorporates the replica trick algorithm and is a significantly improved version of the codes used in [74; 75]. This code has been used in this work to obtain the Cweb mixing matrices.
* The code begins by generating the colour factor of one of the Cweb diagrams; this differs from the usual colour assignment in that we also assign replica variables \(i,j,k,\ldots\) to each of the gluon correlators present in the diagram.
* A subroutine then shuffles the attachments on each of the Wilson lines belonging to different correlators, and correspondingly different replica variables, to generate all the diagrams of the Cweb.
* Next, the hierarchies between the replica variables are generated. For example, for a Cweb with two correlators, carrying replica variables \(i\) and \(j\), the code generates the hierarchies \(\{h\}=\{i=j,i>j,i<j\}\). The number of distinct replica variables, \(n_{r}(h)\), is then determined for each hierarchy: for the hierarchy \(i=j\), \(n_{r}=1\), whereas for \(i>j\) and \(i<j\), \(n_{r}=2\). Using this, the multiplicity \(M_{N_{r}}(h)\) is obtained from the formula given in appendix A. Note that previous versions of the code could generate all possible hierarchies only if the number of replica variables was less than or equal to 4; CwebGen 2.0 can generate the hierarchies for any number of replica variables.
* Another subroutine then generates the replica-ordered colour factors \(\mathbf{R}\big{[}C(d)\big{|}h\big{]}\) for each of the hierarchies (see appendix A) and for every diagram of the Cweb. The code then generates the data for each diagram in the Cweb (as in Table 1 of [73]) and gives the mixing matrix \(R_{W}\). A diagonalizing matrix \(Y_{W}\) is constructed to diagonalize \(R_{W}\) using its right eigenvectors. \(Y_{W}\) then acts on the column vector containing the colour factors of the diagrams of the Cweb and gives the corresponding independent exponentiated colour factors.
The runtime of the code depends on the number of correlators or, equivalently, on the number of replica variables. The computation time at four loops is larger than at three loops, since the maximum number of hierarchies at four loops is 75, as opposed to 13 at three loops. In addition to the larger number of hierarchies, the total number of diagrams in a Boomerang Cweb is larger than that of Cwebs involving only massless lines at a given perturbative order.
The algorithms and implementations of the two older versions of the code can be found in [74; 75]. To compare the present version with the previous ones, we have calculated the mixing matrix of \(\mathrm{W}_{5}^{(4)}(1,1,1,1,4)\), which appears at four loops [74]. Its mixing matrix, of dimension \(24\times 24\), is the largest one encountered; computing it took 7 days with the first version [74] and, after subsequent improvements, 6.4 hours [75]. The same calculation with CwebGen 2.0 takes only 1.54 seconds. Thus, the latest version of the code is almost 15000 times faster than the previous versions.
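The growth of the hierarchy count quoted above is simply the growth of the Fubini numbers, and the hierarchies can be enumerated as ordered set partitions of the replica variables. The snippet below is a small stand-alone illustration of that counting, not the CwebGen 2.0 routine; it reproduces 1, 3, 13 and 75 hierarchies for m = 1, 2, 3, 4 correlators.

```python
# Enumerate the hierarchies among m replica variables as ordered set partitions:
# first partition the variables into groups of "equal" replicas, then order the groups.
from itertools import permutations

def set_partitions(items):
    """Yield all partitions of `items` (a list) into non-empty unordered blocks."""
    if len(items) == 1:
        yield [items]
        return
    first, rest = items[0], items[1:]
    for smaller in set_partitions(rest):
        for i, block in enumerate(smaller):          # put `first` into an existing block
            yield smaller[:i] + [[first] + block] + smaller[i + 1:]
        yield [[first]] + smaller                     # or into a block of its own

def hierarchies(replicas):
    """All hierarchies: each is an ordered tuple of blocks, read as b1 < b2 < ... ."""
    out = []
    for part in set_partitions(list(replicas)):
        for order in permutations(part):
            out.append(tuple(tuple(block) for block in order))
    return out

for m in range(1, 5):
    print(m, len(hierarchies(range(m))))   # 1 1, 2 3, 3 13, 4 75 (Fubini numbers)
```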
In the next section, we present the results for two Boomerang Cwebs that appear at four loops; the results for the remaining Cwebs are given in appendix B.
## 5 Boomerang Cwebs at Four loops
Using CwebGen 2.0 and discarding the duplicates, we obtain 8 and 20 Boomerang Cwebs connecting four and three Wilson lines, respectively. In [2] we obtained the diagonal blocks of the mixing matrices for all 8 Boomerang Cwebs connecting four Wilson lines at four loops, using the formalism introduced in [1]; we were also able to predict the diagonal blocks of 4 classes of Cwebs to all orders in \(\alpha_{s}\). The complete mixing matrices and exponentiated colour factors, however, could not be obtained in that work; they are the subject of the present paper.
We now present the results for two Cwebs: one connecting four Wilson lines and one connecting three Wilson lines. In each case we present (i) the column weight vector \(S\), (ii) the mixing matrix \(R\), and (iii) the exponentiated colour factors \((YC)\). The mixing matrices in this article are _Normal ordered_ [1] and take the general form
\[R=\left(\begin{array}{cc}A&B\\ O&D\end{array}\right), \tag{10}\]
where \(A\) is associated with the mixing of the irreducible diagrams, \(D\) with the mixing of the reducible diagrams, and \(O\) is a null matrix [1].
\(\mathbf{W}_{4}^{\,(1,0,1)}\,(1,1,3)\)**: a four-line Boomerang Cweb**
The Boomerang Cweb \(\mathrm{W}_{4}^{\,(1,0,1)}\,(1,1,3)\), shown in fig. (3), connects four massive Wilson lines with one Boomerang and one four-point gluon correlator. This Cweb has three possible shuffles on Wilson line 1: the shuffle of the gluon attachments of the two correlators generates the three diagrams shown in fig. (3), so the order of the mixing matrix for this Cweb is three. To proceed, we choose the order of the diagrams as in fig. (3); this order is labelled by the order of attachments on line 1: \(C_{1}=\{ABK\}\), \(C_{2}=\{AKB\}\) and \(C_{3}=\{BAK\}\). The mixing matrix \(R\) and the diagonalizing matrix \(Y\) have been calculated using CwebGen 2.0, and we get,
\[R\,=\,\frac{1}{2}\left(\begin{array}{ccc}2&-1&-1\\ 0&1&-1\\ 0&-1&1\end{array}\right)\,,\qquad\qquad Y\,=\,\left(\begin{array}{ccc}1&-1&0 \\ 0&-1&1\\ 0&1&1\end{array}\right)\,. \tag{11}\]
This matches with the result obtained in [2].
The column weight vector \(S\) for this Cweb can be calculated by determining the \(s\)-factor [77] of each diagram. The \(s\)-factor of \(C_{1}\) is zero, as neither of the two correlators can be shrunk to the origin of the Wilson lines. In diagram \(C_{2}\) one can shrink the Boomerang first and then the four-point correlator; this is the only way to shrink all the correlators to the origin, so \(s=1\). The \(s\)-factor of \(C_{3}\) is also one, except that here the Boomerang has to be shrunk after the four-point correlator. Thus the column weight vector is \(S=\{0,1,1\}\,\).
The matrix \(R\) satisfies all the known properties of mixing matrices: it is idempotent (\(R^{2}=R\)), it satisfies the zero row-sum rule (the entries of each row add up to zero), and it satisfies the conjectured column sum rule. The rank of this mixing matrix is two, as there are only two independent rows in \(R\). We obtain the independent exponentiated colour factors,
\[(YC)_{1}\,=\,0\,,\qquad(YC)_{2}\,=\,-i\,f^{abn}f^{bch}f^{deh}\,{\bf T}_{1}^{a} {\bf T}_{1}^{n}{\bf T}_{2}^{c}{\bf T}_{3}^{d}{\bf T}_{4}^{e}\,. \tag{5.3}\]
Let us define
\[N_{ecf}=\text{number of non-vanishing independent ECFs}. \tag{5.4}\]
For the above Cweb \(N_{ecf}=1\), whereas the rank of the mixing matrix is \(r(R)=2\). The vanishing of one of the independent ECFs can be understood in terms of \(\widetilde{C}\). Applying \(R\) to \(C\) we get,
\[\widetilde{C}_{1}=\frac{1}{2}\left(2C_{1}-C_{2}-C_{3}\right),\qquad\widetilde {C}_{2}=\frac{1}{2}\left(C_{2}-C_{3}\right),\qquad\widetilde{C}_{3}=\frac{1}{ 2}\left(C_{3}-C_{2}\right)\,. \tag{5.5}\]
The colour factors of diagrams \(C_{2}\) and \(C_{3}\) are identical: the two diagrams differ only in the placement of the Boomerang, which contributes the same factor \(C_{A}\) in both. Hence the exponentiated colour factors \(\widetilde{C}_{2}\) and \(\widetilde{C}_{3}\) in the above equations vanish. This mechanism makes the ECFs of certain diagrams of Boomerang Cwebs vanish, and it holds to all orders in perturbation theory [76]. Recall that when a Cweb is Normal ordered, the block \(D\) describes the mixing among the reducible diagrams; all these diagrams contain a Boomerang that gives a \(C_{A}\) factor, and thus their ECFs vanish. That is,
\[N_{ecf}=\text{rank}(R)\quad\text{for Cwebs with massless Wilson lines}\,,\qquad N_{ecf}<\text{rank}(R)\quad\text{for Cwebs with massive Wilson lines containing a Boomerang}\,. \tag{5.6}\]
The results presented in the appendix also exhibit this property. The mixing matrices and exponentiated colour factors for the remaining seven Boomerang Cwebs that connect four Wilson lines are given in appendix B.1.
\({\bf W}^{(2,1)}_{3,{\bf I}}(1,3,3)\)**: a three-line Boomerang Cweb**
In \({\rm W}^{(2,1)}_{3,{\bf I}}(1,3,3)\), shown in fig. (4), a Boomerang gluon, a two-point correlator and a three-point gluon correlator connect the three massive lines. The subscript I indicates that there is more than one Cweb with the same attachment and correlator content. The shuffle of the attachments generates nine diagrams; the order of the diagrams is given in table 1, along with their \(s\)-factors.
Using these \(s\)-factors we obtain the column weight vector \(S\):
\[S\;=\;\{0,0,0,0,0,1,1,2,2\}\,. \tag{5.7}\]
Using _CwebGen 2.0_ we obtain
\[R=\frac{1}{6}\left(\begin{array}{cccccccc}6&-3&-3&-3&-3&2&2&1&1\\ 0&3&0&0&-3&-1&2&-2&1\\ 0&0&3&-3&0&-1&2&1&-2\\ 0&0&-3&3&0&2&-1&-2&1\\ 0&-3&0&0&3&2&-1&1&-2\\ 0&0&0&0&0&2&2&-2&-2\\ 0&0&0&0&0&2&2&-2&-2\\ 0&0&0&0&0&-1&-1&1&1\\ 0&0&0&0&0&-1&-1&1&1\end{array}\right)\,. \tag{5.8}\]
This satisfies all the known properties listed in section 2. This Cweb has four independent exponentiated colour factors, two of which vanish due to the presence of a Boomerang. The remaining two ECFs are,
\[\left(YC\right)_{3}=i\,f^{edk}f^{abn}f^{bcd}\,\mathbf{T}_{1}^{a}\mathbf{T}_{1}^ {n}\mathbf{T}_{2}^{c}\mathbf{T}_{2}^{k}\mathbf{T}_{3}^{e}+i\,f^{eck}\,f^{abn}\, f^{bcd}\,\mathbf{T}_{1}^{a}\,\mathbf{T}_{1}^{n}\,\mathbf{T}_{2}^{k}\,\mathbf{T}_{2}^{d} \,\mathbf{T}_{3}^{e}\]
\[\left(YC\right)_{4}=i\,f^{edk}\,f^{abn}\,f^{bcd}\,\mathbf{T}_{1}^{a}\,\mathbf{ T}_{1}^{n}\,\mathbf{T}_{2}^{c}\,\mathbf{T}_{2}^{k}\,\mathbf{T}_{3}^{e}\,.\]
\begin{table}
\begin{tabular}{|c|c|c|} \hline
**Diagrams** & **Sequences** & **s-factors** \\ \hline \(C_{1}\) & \(\{\{ABF\},\{GHL\}\}\) & 0 \\ \hline \(C_{2}\) & \(\{\{AFB\},\{GHL\}\}\) & 0 \\ \hline \(C_{3}\) & \(\{\{ABF\},\{GLH\}\}\) & 0 \\ \hline \(C_{4}\) & \(\{\{ABF\},\{HGL\}\}\) & 0 \\ \hline \(C_{5}\) & \(\{\{BAF\},\{GHL\}\}\) & 0 \\ \hline \end{tabular}
\begin{tabular}{|c|c|c|} \hline
**Diagrams** & **Sequences** & **s-factors** \\ \hline \(C_{6}\) & \(\{\{AFB\},\{GLH\}\}\) & 1 \\ \hline \(C_{7}\) & \(\{\{BAF\},\{HGL\}\}\) & 1 \\ \hline \(C_{8}\) & \(\{\{AFB\},\{HGL\}\}\) & 2 \\ \hline \(C_{9}\) & \(\{\{BAF\},\{GLH\}\}\) & 2 \\ \hline \end{tabular}
\end{table}
Table 1: Order of diagrams of Cweb \(\mathrm{W}_{3,1}^{(2,1)}(1,3,3)\) and their \(s\)-factors
The results for all twenty Boomerang Cwebs that connect three Wilson lines at four loops are given in appendix B.2.
## 6 Conclusions
In this article we have studied Boomerang Cwebs that connect three and four massive Wilson lines at four loops; these are important ingredients in studies of scattering processes involving massive particles.
To enumerate the Cwebs at a given perturbative order, we have introduced an improved version of the algorithm developed in [74]. We have presented some details of the current version of the in-house Mathematica code CwebGen 2.0, which incorporates this new algorithm and the replica trick, and is a significantly faster version of the codes used in [74; 75].
We have computed the mixing matrices, the diagonalizing matrices and the exponentiated colour factors for all these Cwebs, and we have verified that our results match the predictions for the diagonal blocks presented in [2]. We find that the general structure of the exponentiated colour factors arising from all twenty-eight Boomerang Cwebs -- shown in fig. (5) -- is the same as the general structure of the ECFs for the massless case at four loops [74; 75]. This is a consequence of the self-energy Cwebs, for which the quadratic Casimir \(C_{A}\) (the colour of a Boomerang) is absent from all the exponentiated colour factors; this is in agreement with the calculations of Boomerang Cwebs at three loops [76]. We also find that the ECFs corresponding to the \(D\) block of the mixing matrices vanish, which is consistent with [76]. The exponentiated colour factors for Cwebs connecting three Wilson lines have long expressions, so we refrain from presenting them in this article; they can be obtained from the authors upon request as FORM files.
The interplay of colour and kinematics in the soft anomalous dimension has been an interesting object of study, and it often constrains its general structure. It would be interesting to determine the constraints on the massive soft anomalous dimension implied by the exponentiated colour factors presented in this article.
## Acknowledgments
SP would like to thank Physical Research Laboratory, Department of Space, Govt. of India, for a Post Doctoral Fellowship. NA, SP, AT would like to thank Prof. Lorenzo Magnea for collaboration on the related earlier projects. AS would like to thank CSIR, Govt. of India, for a SRF Fellowship (09/1001(0075)/2020-EMR-I).
Figure 5: Exponentiated colour factor obtained from all the Boomerang Cwebs
Appendix
### Replica trick and mixing matrices
In this appendix we briefly discuss the replica trick algorithm, which will be used in the calculation of the mixing matrices for Boomerang Cwebs at four loops. To start with, we consider the path integral of the Wilson line correlators as,
\[\mathcal{S}_{n}(\gamma_{i}) = \int\mathcal{D}A_{\mu}^{a}\,\exp(iS(A_{\mu}^{a}))\prod_{k=1}^{n} \phi_{k}(\gamma_{k})=\exp[\mathcal{W}_{n}(\gamma_{i})] \tag{10}\]
where \(S(A_{\mu}^{a})\) is the classical action of the gauge fields. To proceed with the replica trick algorithm, one introduces \(N_{r}\) non-interacting identical copies of each gluon field \(A_{\mu}\), that is, each \(A_{\mu}\) is replaced by \(A_{\mu}^{i}\) with \(i=1,\ldots,N_{r}\). For each replica we associate a copy of each Wilson line, thereby replacing each Wilson line by a product of \(N_{r}\) Wilson lines. In the replicated theory, the path integral of the Wilson-line correlator can then be written as,
\[\mathcal{S}_{n}^{\text{repl.}}\left(\gamma_{i}\right) = \left[\mathcal{S}_{n}\left(\gamma_{i}\right)\right]^{N_{r}}\,= \,\exp\left[N_{r}\,\mathcal{W}_{n}(\gamma_{i})\right]\,=\,\mathbf{1}+N_{r}\, \mathcal{W}_{n}(\gamma_{i})+\mathcal{O}(N_{r}^{2})\,. \tag{11}\]
Using this equation, one can calculate \(\mathcal{W}_{n}\) by extracting the \(\mathcal{O}(N_{r})\) terms of the Wilson-line correlator in the replicated theory. The method of replicas involves five steps, which are summarized below.
* Associate a replica number to each connected gluon correlator in a Cweb.
* Define a replica-ordering operator \(\mathbf{R}\), which acts on the colour generators on each Wilson line and orders them according to their replica numbers. Thus, if \(\mathbf{T}_{i}\) denotes a colour generator of a correlator carrying replica number \(i\), then the action of \(\mathbf{R}\) on \(\mathbf{T}_{i}\mathbf{T}_{j}\) preserves the order for \(i\leq j\) and reverses it for \(i>j\). As a consequence, the replica-ordered colour factor of a diagram in a Cweb is always the colour factor of a diagram of the same Cweb.
* In the next step, in order to calculate the exponentiated colour factors, one needs to find the hierarchies among the replica numbers present in a Cweb. If a Cweb has \(m\) connected pieces, we denote the number of hierarchies by \(h(m)\); the \(h(m)\) are the Fubini numbers (ordered Bell numbers) [90] of combinatorics. The first few Fubini numbers are \(h(m)=\{1,1,3,13,75,541\}\) for \(m=0,1,2,3,4,5\). At four loops, the highest number of correlators in a Cweb is \(m_{\text{max}}=4\), which corresponds to \(h_{\text{max}}=75\).
* The next object is to calculate \(M_{N_{r}}(h)\), which counts the number of appearances of a particular hierarchy in the presence of \(N_{r}\) replicas. For a given hierarchy \(h\), which contains \(n_{r}(h)\) distinct replicas, the multiplicity \(M_{N_{r}}(h)\) is given by, \[M_{N_{r}}(h) = \frac{N_{r}!}{\left(N_{r}-n_{r}(h)\right)!\,\,n_{r}(h)!}\] (12)
* The exponentiated colour factor of a diagram \(D\) is then given by, \[C_{N_{r}}^{\,\,\mathrm{repl.}}(D)\,=\,\sum_{h}M_{N_{r}}(h)\,\mathbf{R}\big{[}C(D)\big{|}h\big{]}\,,\] (10) where \(\mathbf{R}\big{[}C(D)\big{|}h\big{]}\) is the replica-ordered colour factor of diagram \(D\) for the hierarchy \(h\). Finally, the exponentiated colour factor of diagram \(D\) is obtained by extracting the coefficient of the \(\mathcal{O}(N_{r})\) term of the above equation, as illustrated in the sketch below.
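As a small self-contained illustration of the last two steps (an assumed toy setup, not the CwebGen 2.0 implementation), the snippet below computes the multiplicities \(M_{N_{r}}(h)\) from the number of distinct replicas in each hierarchy and extracts the coefficient of the term linear in \(N_{r}\) symbolically. The symbols c_eq, c_lt and c_gt stand for the replica-ordered colour factors of a diagram with two correlators under the three hierarchies \(i=j\), \(i<j\) and \(i>j\).

```python
# Illustrative check of the multiplicity formula and the O(N_r) extraction.
import sympy as sp

Nr = sp.symbols("N_r", positive=True)

def multiplicity(n_distinct):
    """M_{N_r}(h) = binomial(N_r, n_r(h)) for a hierarchy with n_r(h) distinct replicas."""
    return sp.binomial(Nr, n_distinct)

# Toy example with two correlators carrying replica numbers i and j:
# the three hierarchies are {i = j}, {i < j}, {i > j}, with n_r = 1, 2, 2.
c_eq, c_lt, c_gt = sp.symbols("c_eq c_lt c_gt")
replicated = (multiplicity(1) * c_eq
              + multiplicity(2) * c_lt
              + multiplicity(2) * c_gt)

# The exponentiated colour factor is the coefficient of the O(N_r) term.
ecf = sp.expand(replicated).coeff(Nr, 1)
print(ecf)      # c_eq - (c_lt + c_gt)/2
```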
## Appendix B Boomerang Cwebs at Four loops
### Boomerang Cwebs connecting four Wilson lines
1. \(\mathbf{W}_{4}^{\,\,(4)}(1,1,3,3)\) This Cweb is made up of four two-gluon correlators. It contains eighteen diagrams, one of which is displayed below. The table below shows the chosen order of the eighteen shuffles of the gluon attachments and their corresponding \(s\)-factors.
\begin{tabular}{|c|c|c|} \hline
**Diagrams** & **Sequences** & **s-factors** \\ \hline \(C_{1}\) & \(\{\{ACK\},\{FHD\}\}\) & 0 \\ \hline \(C_{2}\) & \(\{\{ACK\},\{FDH\}\}\) & 0 \\ \hline \(C_{3}\) & \(\{\{ACK\},\{HFD\}\}\) & 0 \\ \hline \(C_{4}\) & \(\{\{ACK\},\{HDF\}\}\) & 0 \\ \hline \(C_{5}\) & \(\{\{ACK\},\{DFH\}\}\) & 0 \\ \hline \(C_{6}\) & \(\{\{ACK\},\{DHF\}\}\) & 0 \\ \hline \(C_{7}\) & \(\{\{AKC\},\{HFD\}\}\) & 1 \\ \hline \(C_{8}\) & \(\{\{AKC\},\{HDF\}\}\) & 1 \\ \hline \(C_{9}\) & \(\{\{CAK\},\{FDH\}\}\) & 1 \\ \hline \(C_{10}\) & \(\{\{CAK\},\{DFH\}\}\) & 1 \\ \hline \(C_{11}\) & \(\{\{AKC\},\{FHD\}\}\) & 2 \\ \hline \(C_{12}\) & \(\{\{AKC\},\{FDH\}\}\) & 2 \\ \hline \(C_{13}\) & \(\{\{AKC\},\{DFH\}\}\) & 2 \\ \hline \(C_{14}\) & \(\{\{AKC\},\{DHF\}\}\) & 2 \\ \hline \(C_{15}\) & \(\{\{CAK\},\{FHD\}\}\) & 2 \\ \hline \(C_{16}\) & \(\{\{CAK\},\{HFD\}\}\) & 2 \\ \hline \(C_{17}\) & \(\{\{CAK\},\{HDF\}\}\) & 2 \\ \hline \(C_{18}\) & \(\{\{CAK\},\{DHF\}\}\) & 2 \\ \hline \end{tabular}
The mixing matrix, and the diagonal matrix are given by,
\[R=\frac{1}{12}\left(\begin{array}{cccccccccccccccc}4&-2&-2&-2&4&-2&1&-3&-1&1&2&- 3&1&2&0&1&-1&0\\ -2&4&-2&4&-2&-2&-3&1&1&-1&2&1&-3&2&0&-1&1&0\\ -2&-2&4&-2&-2&4&1&1&1&1&-2&1&1&-2&-2&1&1&-2\\ -2&4&-2&4&-2&-2&-1&1&1&-3&0&1&-1&0&2&-3&1&2\\ 4&-2&-2&-2&4&-2&1&-1&-3&1&0&-1&1&0&2&1&-3&2\\ -2&-2&4&-2&-2&4&1&1&1&1&-2&1&1&-2&-2&1&1&-2\\ 0&0&0&0&0&0&3&-1&1&-3&-2&-1&3&-2&2&-3&1&2\\ 0&0&0&0&0&0&-1&3&-3&1&-2&3&-1&-2&2&1&-3&2\\ 0&0&0&0&0&0&1&-3&3&-1&2&-3&1&2&-2&-1&3&-2\\ 0&0&0&0&0&0&-3&1&-1&3&2&1&-3&2&-2&3&-1&-2\\ 0&0&0&0&0&0&-1&-1&1&1&2&-1&-1&2&-2&1&1&-2\\ 0&0&0&0&0&0&-1&-1&1&0&1&-1&0&0&1&-1&0\\ 0&0&0&0&0&0&1&-1&1&-1&0&-1&1&0&0&-1&1&0\\ 0&0&0&0&0&0&1&1&-1&-1&-2&1&1&-2&2&-1&-1&2\\ 0&0&0&0&0&0&-1&1&-1&1&0&1&-1&0&0&1&-1&0\\ 0&0&0&0&0&0&1&-1&1&-1&-0&-1&1&0&0&-1&1&0\\ 0&0&0&0&0&0&1&1&-1&-1&-2&1&1&-2&2&-1&-1&2\end{array}\right),\]
\[{\cal D}\,= ({\bf 1}_{4},0)\,. \tag{14}\]
The exponentiated colour factors are given by,
\[(YC)_{1} = 0\,,\] \[(YC)_{2} = 0\,,\] \[(YC)_{3} = if^{acn}f^{bkg}f^{cdk}{\bf T}_{1}^{a}{\bf T}_{1}^{n}{\bf T}_{2}^{b }{\bf T}_{3}^{g}{\bf T}_{4}^{d}\,,\] \[(YC)_{4} = if^{acn}f^{bdr}f^{crm}{\bf T}_{1}^{a}{\bf T}_{1}^{n}{\bf T}_{2}^{b}{\bf T }_{3}^{m}{\bf T}_{4}^{d}\,. \tag{15}\]
**2. \({\bf W}_{4}^{(4)}(1,1,2,4)\)**
This is a Boomerang Cweb made up of four two-gluon correlators. It has twenty-four diagrams, one of which is shown below. The table below shows the chosen order of shuffles and their corresponding \(s\)-factors.
\begin{tabular}{|c|c|c|} \hline
**Diagrams** & **Sequences** & **s-factors** \\ \hline \(C_{1}\) & \(\{\{ABKC\},\{FD\}\}\) & 0 \\ \hline \(C_{2}\) & \(\{\{ABKC\},\{DF\}\}\) & 0 \\ \hline \(C_{3}\) & \(\{\{ABCK\},\{FD\}\}\) & 0 \\ \hline \(C_{4}\) & \(\{\{ABCK\},\{DF\}\}\) & 0 \\ \hline \(C_{5}\) & \(\{\{ACKB\},\{FD\}\}\) & 0 \\ \hline \(C_{6}\) & \(\{\{ACKB\},\{DF\}\}\) & 0 \\ \hline \(C_{7}\) & \(\{\{ACBK\},\{FD\}\}\) & 0 \\ \hline \(C_{8}\) & \(\{\{ACBK\},\{DF\}\}\) & 0 \\ \hline \(C_{9}\) & \(\{\{BACK\},\{FD\}\}\) & 0 \\ \hline \(C_{10}\) & \(\{\{BACK\},\{DF\}\}\) & 0 \\ \hline \(C_{11}\) & \(\{\{CABK\},\{FD\}\}\) & 0 \\ \hline \(C_{12}\) & \(\{\{CABK\},\{DF\}\}\) & 0 \\ \hline \(C_{13}\) & \(\{\{AKBC\},\{FD\}\}\) & 1 \\ \hline \(C_{14}\) & \(\{\{BAKC\},\{FD\}\}\) & 1 \\ \hline \(C_{15}\) & \(\{\{CAKB\},\{DF\}\}\) & 1 \\ \hline \(C_{16}\) & \(\{\{CBAK\},\{DF\}\}\) & 1 \\ \hline \(C_{17}\) & \(\{\{AKBC\},\{DF\}\}\) & 2 \\ \hline \(C_{18}\) & \(\{\{AKCB\},\{FD\}\}\) & 2 \\ \hline \(C_{19}\) & \(\{\{AKCB\},\{DF\}\}\) & 2 \\ \hline \(C_{20}\) & \(\{\{BAKC\},\{DF\}\}\) & 2 \\ \hline \(C_{21}\) & \(\{\{BCAK\},\{FD\}\}\) & 2 \\ \hline \(C_{22}\) & \(\{\{BCAK\},\{DF\}\}\) & 2 \\ \hline \(C_{23}\) & \(\{\{CAKB\},\{FD\}\}\) & 2 \\ \hline \(C_{24}\) & \(\{\{CBAK\},\{FD\}\}\) & 2 \\ \hline \end{tabular}
The mixing matrix and the diagonal matrix for this Cweb are given by,
\[R=\frac{1}{12}\left(\begin{array}{
This Cweb has twelve diagrams, one of which is shown below. The table shows the chosen order of shuffles and the corresponding \(s\)-factors.
\begin{tabular}{|c|c|c|} \hline
**Diagrams** & **Sequences** & **s-factors** \\ \hline \(C_{1}\) & \(\{\{ADK\},\{CH\},\{EF\}\}\) & 0 \\ \hline \(C_{2}\) & \(\{\{ADK\},\{CH\},\{FE\}\}\) & 0 \\ \hline \(C_{3}\) & \(\{\{ADK\},\{HC\},\{EF\}\}\) & 0 \\ \hline \(C_{4}\) & \(\{\{ADK\},\{HC\},\{FE\}\}\) & 0 \\ \hline \(C_{5}\) & \(\{\{AKD\},\{HC\},\{FE\}\}\) & 1 \\ \hline \(C_{6}\) & \(\{\{DAK\},\{CH\},\{EF\}\}\) & 1 \\ \hline \(C_{7}\) & \(\{\{AKD\},\{CH\},\{EF\}\}\) & 2 \\ \hline \(C_{8}\) & \(\{\{AKD\},\{HC\},\{EF\}\}\) & 2 \\ \hline \(C_{9}\) & \(\{\{DAK\},\{CH\},\{FE\}\}\) & 2 \\ \hline \(C_{10}\) & \(\{\{DAK\},\{HC\},\{FE\}\}\) & 2 \\ \hline \(C_{11}\) & \(\{\{AKD\},\{CH\},\{FE\}\}\) & 4 \\ \hline \(C_{12}\) & \(\{\{DAK\},\{HC\},\{EF\}\}\) & 4 \\ \hline \end{tabular}
The mixing matrix, and the diagonal matrix are given by,
\[R=\frac{1}{12}\left(\begin{array}{ccccccccc}4&-4&-4&4&-3&-1&-3&3&1&-1&3&1 \\ -2&2&2&-2&1&1&1&-1&-1&1&-1&-1\\ -2&2&2&-2&1&1&1&-1&-1&1&-1&-1\\ 4&-4&-4&4&-1&-3&-1&1&3&-3&1&3\\ 0&0&0&3&-3&3&-3&3&-3&-3&3\\ 0&0&0&0&-3&3&-3&3&3&3&-3\\ 0&0&0&0&1&-1&1&-1&1&-1&-1&1\\ 0&0&0&0&-1&1&-1&1&-1&1&1&-1\\ 0&0&0&0&1&-1&1&-1&1&-1&1\\ 0&0&0&0&-1&1&-1&1&-1&1&1&-1\\ 0&0&0&0&-1&1&-1&1&-1&1&1&-1\\ 0&0&0&0&1&-1&1&-1&1&-1&1&-1\\ \end{array}\right),\mathcal{D}=\left(\mathbf{1}_{2},0\right).\]
The rank of the mixing matrix is 2, which corresponds to the following exponentiated colour factors,
\[(YC)_{1} = 0\] \[(YC)_{2} = i\,f^{adg}f^{bcn}f^{cdk}\,\mathbf{T}_{1}^{a}\mathbf{T}_{1}^{q} \mathbf{T}_{2}^{b}\mathbf{T}_{3}^{k}\mathbf{T}_{4}^{n} \tag{10}\]
4. \({\bf W}\,^{(2,1)}_{4,\rm I}\,(1,1,2,3)\) This is the first of two Cwebs that have the same correlator and attachment content. It has six diagrams, one of which is shown below. The table below shows the chosen order of shuffles and their corresponding \(s\)-factors.
The mixing matrix, and the diagonal matrix are given by,
\[R=\frac{1}{6}\left(\begin{array}{cccccc}3&-3&-1&2&1&-2\\ -3&3&2&-1&-2&1\\ 0&0&2&2&-2&-2\\ 0&0&2&2&-2&-2\\ 0&0&-1&-1&1&1\\ 0&0&-1&-1&1&1\end{array}\right)\,,{\cal D}\,=\,({\bf 1}_{2},0)\,. \tag{100}\]
Finally, the exponentiated colour factors are given by,
\[(YC)_{1} = 0\,,\] \[(YC)_{2} = i\,f^{abn}f^{bcd}f^{dme}\,{\bf T}^{a}_{1}{\bf T}^{n}_{1}{\bf T}^{ c}_{2}{\bf T}^{m}_{3}{\bf T}^{e}_{4}\,. \tag{101}\]
5. \({\bf W}\,^{(2,1)}_{4,\rm II}(1,1,2,3)\) This is the second Cweb with the same correlator and attachment content. It has six diagrams, one of which is shown below. The table shows the chosen order of shuffles and their corresponding \(S\)-factors.
The mixing matrix and the diagonal matrix for this Cweb are given by,
\[R=\frac{1}{6}\left(\begin{array}{cccccc}3&-3&2&-1&-2&1\\ -3&3&-1&2&1&-2\\ 0&0&2&2&-2&-2\\ 0&0&2&2&-2&-2\\ 0&0&-1&-1&1&1\\ 0&0&-1&-1&1&1\end{array}\right),\,\mathcal{D}\,=\,(\mathbf{1}_{2},0)\,. \tag{110}\]
Finally, the exponentiated colour factors are given by,
\[(YC)_{1} = 0\,,\] \[(YC)_{2} = i\,f^{abk}f^{abg}f^{cde}\,\mathbf{T}_{1}^{a}\mathbf{T}_{1}^{g} \mathbf{T}_{2}^{k}\mathbf{T}_{3}^{d}\mathbf{T}_{4}^{e}\,. \tag{111}\]
6. \(\mathbf{W}_{4}^{\,(2,1)}\,(1,1,1,4)\) This Cweb is made up of one three-gluon correlator and two two-gluon correlators. It has 12 diagrams, one of which is shown below. The table shows the chosen order of shuffles on Wilson line 1, and their corresponding \(S\)-factors.
The \(R\), and \(D\) matrices are given by,
\[R=\frac{1}{6}\left(\begin{array}{ccccccccc}6&0&-3&-3&-3&-3&-1&2&2&-1&2&2&2\\ 0&6&-3&-3&-3&-3&2&-1&2&2&2&2&-1\\ 0&0&3&0&0&-3&-1&-1&-1&-1&2&2\\ 0&0&0&3&-3&0&-1&-1&2&2&-1&-1\\ 0&0&0&-3&3&0&-1&2&-1&-1&2&-1\\ 0&0&-3&0&0&3&2&-1&2&-1&-1&-1\\ 0&0&0&0&0&0&2&-1&-1&-1&-1&2\\ 0&0&0&0&0&-1&2&-1&2&-1&-1\\ 0&0&0&0&0&-1&-1&2&-1&2&-1\\ 0&0&0&0&0&0&-1&2&-1&2&-1&-1\\ 0&0&0&0&0&0&-1&-1&2&-1&2&-1\\ 0&0&0&0&0&0&2&-1&-1&-1&-1&2\end{array}\right),\mathcal{D}\,=\,(\mathbf{1}_{6},0)\,. \tag{115}\]
Finally, the exponentiated colour factors are given by,
\[(YC)_{1} = 0\,,\] \[(YC)_{2} = -i\,f^{abg}f^{cde}f^{ckg}\,\mathbf{T}_{1}^{a}\mathbf{T}_{1}^{k} \mathbf{T}_{2}^{b}\mathbf{T}_{3}^{d}\mathbf{T}_{4}^{e}\] \[-i\,f^{abg}f^{ack}f^{cde}\,\mathbf{T}_{1}^{k}\mathbf{T}_{1}^{g} \mathbf{T}_{2}^{b}\mathbf{T}_{3}^{d}\mathbf{T}_{4}^{e},\] \[(YC)_{3} = 0\,,\] \[(YC)_{4} = -i\,f^{abg}f^{ack}f^{cde}\,\mathbf{T}_{1}^{g}\mathbf{T}_{1}^{k} \mathbf{T}_{2}^{b}\mathbf{T}_{3}^{d}\mathbf{T}_{4}^{e},\]
\[(YC)_{5} = i\,f^{akg}f^{bcg}f^{cde}\,\mathbf{T}_{1}^{a}\mathbf{T}_{1}^{k} \mathbf{T}_{2}^{b}\mathbf{T}_{3}^{d}\mathbf{T}_{4}^{e}\,,\] \[(YC)_{6} = -i\,f^{ack}f^{bkg}f^{cde}\mathbf{T}_{1}^{a}\mathbf{T}_{1}^{g} \mathbf{T}_{2}^{b}\mathbf{T}_{3}^{d}\mathbf{T}_{4}^{e}\,. \tag{101}\]
7. \(\mathbf{W}_{4}^{(1,0,1)}\,(1,1,1,3)\) This Cweb has one four-gluon correlator and one two-gluon correlator. It has three diagrams; the figure below shows one of them, and the table gives the chosen order of shuffles and their corresponding \(S\)-factors.
\begin{tabular}{|c|c|c|} \hline
**Diagrams** & **Sequences** & **s-factors** \\ \hline \(C_{1}\) & \(\{\{ABK\}\}\) & \(0\) \\ \hline \(C_{2}\) & \(\{\{AKB\}\}\) & \(1\) \\ \hline \(C_{3}\) & \(\{\{BAK\}\}\) & \(1\) \\ \hline \end{tabular}
The \(R\) and \(D\) matrices are given by,
\[R=\frac{1}{2}\left(\begin{array}{ccc}2&-1&-1\\ 0&1&-1\\ 0&-1&1\end{array}\right),\,\mathcal{D}\,=\!(\mathbf{1}_{2},0)\,. \tag{102}\]
This mixing matrix agrees with the universal form of \(3\times 3\) mixing matrices computed in [75].
Finally, the exponentiated colour factors are given by,
\[(YC)_{1} = 0\,,\] \[(YC)_{2} = -i\,f^{abn}f^{bch}\,f^{deh}\,\mathbf{T}_{1}^{a}\mathbf{T}_{1}^{n} \mathbf{T}_{2}^{c}\mathbf{T}_{3}^{d}\mathbf{T}_{4}^{e}\,. \tag{103}\]
8. \(\mathbf{W}_{4}^{(4)}(1,1,1,5)\) This is the largest Boomerang Cweb that can connect four Wilson lines. It has sixty diagrams, one of which is shown below. The following table shows the chosen order of shuffles and their corresponding \(S\)-factors.
\begin{tabular}{|c|c|c|} \hline
**Diagrams** & **Sequences** & **s-factors** \\ \hline \(C_{1}\) & \(\{\{ABCDK\}\}\) & 0 \\ \hline \(C_{2}\) & \(\{\{ABCDK\}\}\) & 0 \\ \hline \(C_{3}\) & \(\{\{ACBDK\}\}\) & 0 \\ \hline \(C_{4}\) & \(\{\{ACDBK\}\}\) & 0 \\ \hline \(C_{5}\) & \(\{\{ADBCK\}\}\) & 0 \\ \hline \(C_{6}\) & \(\{\{ADCBK\}\}\) & 0 \\ \hline \(C_{7}\) & \(\{\{ABKCD\}\}\) & 0 \\ \hline \(C_{8}\) & \(\{\{ABCDC\}\}\) & 0 \\ \hline \(C_{9}\) & \(\{\{ABCKD\}\}\) & 0 \\ \hline \(C_{10}\) & \(\{\{ABDKC\}\}\) & 0 \\ \hline \(C_{11}\) & \(\{\{ACKBD\}\}\) & 0 \\ \hline \(C_{12}\) & \(\{\{ACKDB\}\}\) & 0 \\ \hline \(C_{13}\) & \(\{\{ACBKD\}\}\) & 0 \\ \hline \(C_{14}\) & \(\{\{ACDKB\}\}\) & 0 \\ \hline \(C_{15}\) & \(\{\{ADKBC\}\}\) & 0 \\ \hline \(C_{16}\) & \(\{\{ADKCB\}\}\) & 0 \\ \hline \(C_{17}\) & \(\{\{ADBKC\}\}\) & 0 \\ \hline \(C_{18}\) & \(\{\{ADCKB\}\}\) & 0 \\ \hline \(C_{19}\) & \(\{\{BACKD\}\}\) & 0 \\ \hline \(C_{20}\) & \(\{\{BACDK\}\}\) & 0 \\ \hline \(C_{21}\) & \(\{\{BADKC\}\}\) & 0 \\ \hline \(C_{22}\) & \(\{\{BADCK\}\}\) & 0 \\ \hline \(C_{23}\) & \(\{\{BCADK\}\}\) & 0 \\ \hline \(C_{24}\) & \(\{\{BDACK\}\}\) & 0 \\ \hline \(C_{25}\) & \(\{\{CABKD\}\}\) & 0 \\ \hline \(C_{26}\) & \(\{\{CABDK\}\}\) & 0 \\ \hline \(C_{27}\) & \(\{\{CADKB\}\}\) & 0 \\ \hline \(C_{28}\) & \(\{\{CADBK\}\}\) & 0 \\ \hline \(C_{29}\) & \(\{\{CBADK\}\}\) & 0 \\ \hline \(C_{30}\) & \(\{\{CDABK\}\}\) & 0 \\ \hline \end{tabular}
There are twenty-four independent exponentiated colour factors for this Cweb given as
\[(YC)_{1} = 0\,,\] \[(YC)_{2} = -i\,f^{abg}f^{cdk}f^{nkg}\,\mathbf{T}_{1}^{a}\mathbf{T}_{1}^{n} \mathbf{T}_{2}^{b}\mathbf{T}_{3}^{c}\mathbf{T}_{4}^{d}-i\,f^{abg}f^{ank}f^{ cdk}\,\mathbf{T}_{1}^{n}\mathbf{T}_{1}^{g}\mathbf{T}_{2}^{b}\mathbf{T}_{3}^{c} \mathbf{T}_{4}^{d},\] \[(YC)_{3} = 0\,,\] \[(YC)_{4} = 0\,,\] \[(YC)_{5} = i\,f^{acg}f^{bdk}f^{nkg}\,\mathbf{T}_{1}^{a}\mathbf{T}_{1}^{a} \mathbf{T}_{2}^{b}\mathbf{T}_{3}^{c}\mathbf{T}_{4}^{d}+i\,f^{acg}f^{ank}f^{bdk} \,\mathbf{T}_{1}^{g}\mathbf{T}_{1}^{n}\mathbf{T}_{2}^{b}\mathbf{T}_{3}^{c} \mathbf{T}_{4}^{d},\]
\[(YC)_{6} = 0\,,\] \[(YC)_{7} = i\,f^{ack}f^{bng}f^{dnk}\,\mathbf{T}_{1}^{a}\mathbf{T}_{1}^{g} \mathbf{T}_{2}^{b}\mathbf{T}_{3}^{c}\mathbf{T}_{4}^{d}-i\,f^{ack}f^{ddg}f^{bng} \,\mathbf{T}_{1}^{n}\mathbf{T}_{1}^{k}\mathbf{T}_{2}^{b}\mathbf{T}_{3}^{c} \mathbf{T}_{4}^{d}\] \[-i\,f^{abg}f^{ack}f^{dnk}\,\mathbf{T}_{1}^{g}\mathbf{T}_{1}^{n} \mathbf{T}_{2}^{n}\mathbf{T}_{3}^{c}\mathbf{T}_{4}^{d}-i\,f^{ack}f^{ddg}f^{bnk} \,\mathbf{T}_{1}^{g}\mathbf{T}_{1}^{n}\mathbf{T}_{2}^{b}\mathbf{T}_{3}^{c} \mathbf{T}_{4}^{d}\,,\] \[(YC)_{8} = i\,f^{abg}f^{cnk}f^{dng}\,\mathbf{T}_{1}^{a}\mathbf{T}_{1}^{k} \mathbf{T}_{2}^{b}\mathbf{T}_{3}^{c}\mathbf{T}_{4}^{d}-i\,f^{abg}f^{adn}f^{ adz}\,\mathbf{T}_{1}^{n}\mathbf{T}_{1}^{k}\mathbf{T}_{2}^{b}\mathbf{T}_{3}^{c} \mathbf{T}_{4}^{d}\] \[-i\,f^{abg}f^{ack}f^{dng}\,\mathbf{T}_{1}^{k}\mathbf{T}_{1}^{n} \mathbf{T}_{2}^{b}\mathbf{T}_{3}^{c}\mathbf{T}_{4}^{d}+i\,f^{abg}f^{adn}f^{ cnk}\,\mathbf{T}_{1}^{k}\mathbf{T}_{1}^{g}\mathbf{T}_{2}^{b}\mathbf{T}_{3}^{c} \mathbf{T}_{4}^{d}\,,\]
\[(YC)_{9} = 0\,,\] \[(YC)_{10} = 0\,,\] \[(YC)_{11} = -i\,f^{adn}f^{akg}f^{bcg}\,\mathbf{T}_{1}^{k}\mathbf{T}_{1}^{n} \mathbf{T}_{2}^{b}\mathbf{T}_{3}^{c}\mathbf{T}_{4}^{d}\,,\]
\[(YC)_{12} = -i\,f^{abg}f^{adk}f^{cng}\,\mathbf{T}_{1}^{n}\mathbf{T}_{1}^{k} \mathbf{T}_{2}^{b}\mathbf{T}_{3}^{c}\mathbf{T}_{4}^{d}-i\,f^{adk}f^{ang}f^{bcg} \,\mathbf{T}_{1}^{n}\mathbf{T}_{1}^{k}\mathbf{T}_{2}^{b}\mathbf{T}_{3}^{c} \mathbf{T}_{4}^{d}\,,\]
\[(YC)_{13} = i\,f^{ang}f^{bkg}f^{cdk}\,\mathbf{T}_{1}^{n}\mathbf{T}_{1}^{a} \mathbf{T}_{2}^{b}\mathbf{T}_{3}^{c}\mathbf{T}_{4}^{d}\,,\] \[(YC)_{14} = -i\,f^{akg}f^{bkg}f^{cdk}\,\mathbf{T}_{1}^{a}\mathbf{T}_{1}^{n} \mathbf{T}_{2}^{b}\mathbf{T}_{3}^{c}\mathbf{T}_{4}^{d}\,,\] \[(YC)_{15} = -i\,f^{abk}f^{bdg}f^{eng}\,\mathbf{T}_{1}^{a}\mathbf{T}_{1}^{k} \mathbf{T}_{2}^{b}\mathbf{T}_{3}^{c}\mathbf{T}_{4}^{d}\,,\] \[(YC)_{16} = i\,f^{akg}f^{bdg}f^{cnk}\,\mathbf{T}_{1}^{a}\mathbf{T}_{1}^{n} \mathbf{T}_{2}^{b}\mathbf{T}_{3}^{c}\mathbf{T}_{4}^{d}\,,\] \[(YC)_{17} = i\,f^{adn}f^{bkg}f^{cnk}\,\mathbf{T}_{1}^{a}\mathbf{T}_{1}^{g} \mathbf{T}_{2}^{b}\mathbf{T}_{3}^{c}\mathbf{T}_{4}^{d}\,,\] \[(YC)_{18} = i\,f^{adn}f^{bnk}f^{adz}\,\mathbf{T}_{1}^{a}\mathbf{T}_{1}^{g} \mathbf{T}_{2}^{b}\mathbf{T}_{3}^{c}\mathbf{T}_{4}^{d}\,,\] \[(YC)_{19} = -i\,f^{acg}f^{ddk}f^{bng}\,\mathbf{T}_{1}^{n}\mathbf{T}_{1}^{k} \mathbf{T}_{2}^{b}\mathbf{T}_{3}^{c}\mathbf{T}_{4}^{d}-i\,f^{acg}f^{ank}f^{bdk} \,\mathbf{T}_{1}^{g}\mathbf{T}_{1}^{n}\mathbf{T}_{2}^{b}\mathbf{T}_{3}^{c} \mathbf{T}_{4}^{d}\,,\]
\[(YC)_{20} = -i\,f^{acg}f^{adk}f^{bng}\,\mathbf{T}_{1}^{n}\mathbf{T}_{1}^{k} \mathbf{T}_{2}^{b}\mathbf{T}_{3}^{c}\mathbf{T}_{4}^{d}-i\,f^{acg}f^{adk}f^{bnk} \,\mathbf{T}_{1}^{g}\mathbf{T}_{1}^{n}\mathbf{T}_{2}^{b}\mathbf{T}_{3}^{c} \mathbf{T}_{4}^{d}\,,\]
\[(YC)_{21} = i\,f^{adz}f^{ank}f^{bck}\,\mathbf{T}_{1}^{n}\mathbf{T}_{1}^{a} \mathbf{T}_{2}^{b}\mathbf{T}_{3}^{c}\mathbf{T}_{4}^{d}-i\,f^{ank}f^{bck}f^{dng} \,\mathbf{T}_{1}^{y}\mathbf{T}_{1}^{a}\mathbf{T}_{1}^{b}\mathbf{T}_{2}^{b} \mathbf{T}_{3}^{c}\mathbf{T}_{4}^{d}\,,\]
\[(YC)_{22} = -i\,f^{abg}f^{cdk}\,\mathbf{T}_{1}^{a}\mathbf{T}_{1}^{n} \mathbf{T}_{2}^{b}\mathbf{T}_{3}^{c}\mathbf{T}_{4}^{d}-i\,f^{abg}f^{ack}f^{dnk} \,\mathbf{T}_{1}^{n}\mathbf{T}_{1}^{g}\mathbf{T}_{2}^{b}\mathbf{T}_{3}^{c} \mathbf{T}_{4}^{d}\] \[-i\,f^{abg}f^{ank}f^{cdk}\,\mathbf{T}_{1}^{n}\mathbf{T}_{1}^{g} \mathbf{T}_{2}^{b}\mathbf{T}_{3}^{c}\mathbf{T}_{4}^{d}-i\,f^{abg}f^{ack}f^{dng} \,\mathbf{T}_{1}^{k}\mathbf{T}_{1}^{n}\mathbf{T}_{2}^{b}\mathbf{T}_{3}^{c} \mathbf{T}_{4}^{d}\,,\]
\[(YC)_{23} = -i\,f^{abn}f^{adk}f^{ckg}\,\mathbf{T}_{1}^{n}\mathbf{T}_{1}^{g} \mathbf{T}_{2}^{b}\mathbf{T}_{3}^{c}\mathbf{T}_{4}^{d}+i\,f^{abn}f^{akg}f^{cdk} \,\mathbf{T}_{1}^{n}\mathbf{T}_{1}^{g}\mathbf{T}_{2}^{b}\mathbf{T}_{3}^{c} \mathbf{T}_{4}^{d}\,,\]
\[(YC)_{24} = -i\,f^{abg}f^{adk}f^{cng}\,\mathbf{T}_{1}^{n}\mathbf{T}_{1}^{k} \mathbf{T}_{2}^{b}\mathbf{T}_{3}^{c}\mathbf{T}_{4}^{d}-i\,f^{adk}f^{ang}f^{bcg} \,\mathbf{T}_{1}^{n}\mathbf{T}_{1}^{k}\mathbf{T}_{2}^{b}\mathbf{T}_{3}^{c} \mathbf{T}_{4}^{d}\] (B.15) \[-i\,f^{abg}f^{adk}f^{cnk}\,\mathbf{T}_{1}^{g}\mathbf{T}_{1}^{n} \mathbf{T}_{2}^{b}\mathbf{T}_{3}^{c}\mathbf{T}_{4}^{d}\,.\]
### Boomerang Cwebs connecting three Wilson lines
1. \(\mathbf{W}^{(4)}_{3,\rm I}(1,3,4)\) This is the first of the two Cwebs with the same correlator and attachment content. It has thirty-six diagrams, one of which is displayed below. The table shows the chosen order of shuffles and their corresponding \(s\)-factors.
\begin{tabular}{|c|c|c|} \hline
**Diagrams** & **Sequences** & **s** \\ \hline \(C_{1}\) & \(\{\{ABFG\},\{LMP\}\}\) & 0 \\ \hline \(C_{2}\) & \(\{\{ABGF\},\{LMP\}\}\) & 0 \\ \hline \(C_{3}\) & \(\{\{BAFG\},\{LMP\}\}\) & 0 \\ \hline \(C_{4}\) & \(\{\{BAGF\},\{PML\}\}\) & 0 \\ \hline \(C_{5}\) & \(\{\{BAGF\},\{LMP\}\}\) & 0 \\ \hline \(C_{6}\) & \(\{\{ABFG\},\{LPM\}\}\) & 0 \\ \hline \(C_{7}\) & \(\{\{ABFG\},\{MLP\}\}\) & 0 \\ \hline \(C_{8}\) & \(\{\{ABGF\},\{PLM\}\}\) & 0 \\ \hline \(C_{9}\) & \(\{\{ABGF\},\{PML\}\}\) & 0 \\ \hline \(C_{10}\) & \(\{\{ABGF\},\{LPM\}\}\) & 0 \\ \hline \(C_{11}\) & \(\{\{ABGF\},\{MPL\}\}\) & 0 \\ \hline \(C_{12}\) & \(\{\{ABGF\},\{MLP\}\}\) & 0 \\ \hline \(C_{13}\) & \(\{\{AGBF\},\{LPM\}\}\) & 0 \\ \hline \(C_{14}\) & \(\{\{AGBF\},\{LMP\}\}\) & 0 \\ \hline \(C_{15}\) & \(\{\{AGBF\},\{MLP\}\}\) & 0 \\ \hline \(C_{16}\) & \(\{\{BAFG\},\{PLM\}\}\) & 0 \\ \hline \(C_{17}\) & \(\{\{BAFG\},\{PML\}\}\) & 0 \\ \hline \(C_{18}\) & \(\{\{BAFG\},\{LPM\}\}\) & 0 \\ \hline \end{tabular}
The mixing matrix and the diagonal matrix for this Cweb are given by,
\[\begin{pmatrix}12&0&0&0&-6&-6&0&0&0&0&2&-6&4&0&0&0&0&0&0&0&4-6&2&-6&-6&-6&2&2&-6&12&4&4 \\ 0&12&0&0&0&0&-2&4&-6&-2&-6&-6&4&-8&0&4&0&0&0&0&4-6&2&-4&-6&2&-2&2&8&4&0\\ 0&0&12&0&0&0&0&4&-8&0&4&0&2&-6&-2&-4&-6&-2&-6&0&0&0&4&-6&2&-4&-4&2&-2&-6&8&0&4\\ 0&0&0&12&0&0&0&2&-4&0&2&0&0&0&0&2&-4&0&2&0&-6&-0&-6&0&0&0&0&-2&-2&-4&2&2&-4&4&2&2\\ 0&0&0&0&12&0&0&2&-4&0&2&0&2&0&2&-6&4&2&-4&0&2&0&0&-6&-6&-6&-6&2&-2&-2&2&-2&4&0&0\\ 0&0&0&0&0&-6&-0&0&0&0&0&-4&0&0&0&0&0&0&0&0&0&-2&0&-2&-2&0&0&0&2&0&0\\ 0&0&0&0&-6&0&0&0&0&0&2&-0&2&0&0&0&0&0&0&0&0&0&0&0&4&-2&-0&0&-2&0&0&2&0&2&-2\\ 0&0&0&0&0&0&0&-2&-0&2&0&0&0&0&-2&-2&0&0&0&0&0&0&0&0&0&2&2&0&0&2&-4&-0&2\\ 0&0&0&0&0&0&-2&-2&6&-4&-6&-4&-4&-2&-2&0&0&0&0&-2&0&2&0&-2&2&2&2&2&-4&0
\begin{tabular}{|c|c|c|} \hline
**Diagrams** & **Sequences** & **s-factors** \\ \hline \(C_{1}\) & \(\{\{ABF\},\{GHPL\}\}\) & 0 \\ \hline \(C_{2}\) & \(\{\{ABF\},\{GHPL\}\}\) & 0 \\ \hline \(C_{3}\) & \(\{\{ABF\},\{HGPL\}\}\) & 0 \\ \hline \(C_{4}\) & \(\{\{ABF\},\{GHLP\}\}\) & 0 \\ \hline \(C_{5}\) & \(\{\{AFB\},\{GHPL\}\}\) & 0 \\ \hline \(C_{6}\) & \(\{\{AFB\},\{GHPL\}\}\) & 0 \\ \hline \(C_{7}\) & \(\{\{AFB\},\{GPLH\}\}\) & 0 \\ \hline \(C_{8}\) & \(\{\{AFB\},\{PGHL\}\}\) & 0 \\ \hline \(C_{9}\) & \(\{\{ABF\},\{HGLP\}\}\) & 0 \\ \hline \(C_{10}\) & \(\{\{ABF\},\{HGPL\}\}\) & 0 \\ \hline \(C_{11}\) & \(\{\{ABF\},\{HPGL\}\}\) & 0 \\ \hline \(C_{12}\) & \(\{\{ABF\},\{GHLP\}\}\) & 0 \\ \hline \(C_{13}\) & \(\{\{ABF\},\{GLHP\}\}\) & 0 \\ \hline \(C_{14}\) & \(\{\{ABF\},\{GLPH\}\}\) & 0 \\ \hline \(C_{15}\) & \(\{\{ABF\},\{GPLH\}\}\) & 0 \\ \hline \(C_{16}\) & \(\{\{ABF\},\{PHGL\}\}\) & 0 \\ \hline \(C_{17}\) & \(\{\{ABF\},\{PGHL\}\}\) & 0 \\ \hline \(C_{18}\) & \(\{\{ABF\},\{PGLH\}\}\) & 0 \\ \hline \end{tabular}
The mixing matrix for this Cweb is given by,
\[\begin{array}{c}\begin{pmatrix}12\ 0&4\ 4\ -6\ 0\ 2\ 2\ \ 4\ -6\ -2\ -6\ -2\ -4\ -6\ -4\ -6\ -4\ 2\ \ 2\ -6\ \ 0\ 4\ \ 4\ -3\ \ -1\ \ -3\ -3\ \ 0\ \ -1\ -2\ -1\ \ -1\ \ \ 1\ \ \ 2\ -2\\ 0\ 12\ 4\ \ 0\ -6\ 2\ \ 2\ \ 4\ -6\ -4\ \ -6\ \ 4\ -2\ -6\ -2\ -6\ \ 4\ \ 2\ \ 2\ \ 0\ -6\ \ 4\ \ 4\ -3\ \ -1\ \ -3\ -2\ \ 1\ \ \ 2\ -1\ \ -1\ \ \ -1\ \ \ -2\ \\ 0\ 0\ 4\ \ 0\ \ 0\ -4\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ \ 0\ \ 0\ \ \ 0\ \ 0\ \ 0\ \ 0\ \ \ 0\ \ 0\ \ \ 0\ \ 0\ \ \ 0\ \ 0\ \ 0\ \ 0\ \ \ 0\ \ 0\ \ 0\ \ \ 0\ \ \ 0\ \ 0\ \ \ 0\ \ \ 0\ \ 0\ \ \ 0\ \ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ \ 0\ \ 0\ \ 0\ \ \ 0\ \ 0\ \ 0\ \ \ 0\ \ 0\ \ 0\ \ \ 0\ \ 0\ \ \ 0\ \ \ 0\ \ 0\ \ 0\ \ \ 0\ \ 0\ \ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ \ 0\ \ 0\ \ \ 0\ \ \ 0\ \ \ 0\ \ \ 0\ \ 0\ \ \ 0\ \ 0\ \ \ 0\ \ 0\ \ \ 0\ \ \ 0\ \ \ 0\ \ \ 0\ \ \ 0\ \ \ 0\ \ \ 0\ \ \ 0\ \ 0\ \ \ 0\ \ 0\ \ \ 0\ \ 0\ \ \ 0\ \ \ 0\ \ \ 0\ \ 0\ \ \ 0\ \ 0\ \ \ 0\ \ \ 0\ \ 0\ \ 0\ \ \ 0\ \ \ 0\ \ \ 0\ \ \ 0\ \ 0\ \ \ 0\ \ \ 0\ \ 0\ \ \ 0\ \ 0\ \ \ 0\ \ 0\ \ \ 0\ \ 0\ \ \ 0\ \ 0\ \ \ 0\ \ 0\ \ \ 0\ \ 0\ \ \ 0\ \ \ 0\ \ 0\ \ \ 0\ \ 0\ \ \ 0\ \ \ 0\ \ 0\ \ \ 0\ \ 0\ \ \ 0\ \ 0\ \ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ \ 0\ \ 0\ \ 0\ \ \ 0\ \ \ 0\ \ 0\ \ \ 0\ \ 0\ \ 0\ \ \ 0\ \ 0\ \ 0\ \ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ \ 0\ \ 0\ \ 0\ \ 0\ \ \ 0\ \ 0\ \ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ \ 0\ \ \ 0\ \ 0\ \ 0\ \ 0\ \ \ 0\ \ 0\ \ \ 0\
\[\mathcal{D}\,=\,(\mathbf{1}_{12},0) \tag{118}\]
3. \(\mathbf{W}_{3}^{\,(4)}(2,2,4)\) This is a Cweb made out of four two-gluon correlators, and has 48 diagrams. We present one of the diagrams below. The table gives the chosen order of shuffle and their corresponding \(s\)-factors.
\begin{tabular}{|c|c|c|} \hline
**Diagrams** & **Sequences** & **s-factors** \\ \hline \(C_{1}\) & \(\{\{ABKC\},\{DE\},\{FH\}\}\) & 0 \\ \hline \(C_{2}\) & \(\{\{ABCK\},\{ED\},\{HF\}\}\) & 0 \\ \hline \(C_{3}\) & \(\{\{ABCK\},\{DE\},\{FH\}\}\) & 0 \\ \hline \(C_{4}\) & \(\{\{ACKB\},\{ED\},\{HF\}\}\) & 0 \\ \hline \(C_{5}\) & \(\{\{ACBK\},\{ED\},\{HF\}\}\) & 0 \\ \hline \(C_{6}\) & \(\{\{ACBK\},\{DE\},\{FH\}\}\) & 0 \\ \hline \(C_{7}\) & \(\{\{BAKC\},\{DE\},\{FH\}\}\) & 0 \\ \hline \(C_{8}\) & \(\{\{BACK\},\{DE\},\{FH\}\}\) & 0 \\ \hline \(C_{9}\) & \(\{\{CAKB\},\{ED\},\{HF\}\}\) & 0 \\ \hline \(C_{10}\) & \(\{\{CABK\},\{ED\},\{HF\}\}\) & 0 \\ \hline \(C_{11}\) & \(\{\{AKBC\},\{DE\},\{FH\}\}\) & 0 \\ \hline \(C_{12}\) & \(\{\{ACKB\},\{ED\},\{HF\}\}\) & 0 \\ \hline \(C_{13}\) & \(\{\{ABK\},\{ED\},\{FH\}\}\) & 0 \\ \hline \(C_{14}\) & \(\{\{ABKC\},\{ED\},\{HF\}\}\) & 0 \\ \hline \(C_{15}\) & \(\{\{ABKC\},\{DE\},\{HF\}\}\) & 0 \\ \hline \(C_{16}\) & \(\{\{ABK\},\{ED\},\{FH\}\}\) & 0 \\ \hline \(C_{17}\) & \(\{\{ABK\},\{DE\},\{HF\}\}\) & 0 \\ \hline \(C_{18}\) & \(\{\{ACKB\},\{ED\},\{FH\}\}\) & 0 \\ \hline \(C_{19}\) & \(\{\{ACKB\},\{DE\},\{FH\}\}\) & 0 \\ \hline \(C_{20}\) & \(\{\{ACKB\},\{DE\},\{HF\}\}\) & 0 \\ \hline \(C_{21}\) & \(\{\{ACKB\},\{ED\},\{FH\}\}\) & 0 \\ \hline \(C_{22}\) & \(\{\{ACKB\},\{DE\},\{HF\}\}\) & 0 \\ \hline \(C_{23}\) & \(\{\{BACK\},\{ED\},\{FH\}\}\) & 0 \\ \hline \(C_{24}\) & \(\{\{BACK\},\{ED\},\{HF\}\}\) & 0 \\ \hline \end{tabular}
The mixing matrix and the diagonal matrix for this Cweb are given by,
\[\begin{pmatrix}12&0&0&0&0&0&0&0&-6&-4&-8&0&0&0&0&0&0&0&0&-6&-4&-8&0&5&-3&-1&-3&-1&-1& 5&-3&-3&-5&-3&-4&2&2&4&6&-2\\ 0&12&0&0&0&0&0&0&0&0&4&-4&-6&-6&-2&4&-2&0&0&4&-8&0&-2&4&-2&0&1&-1&-3&-3&-1&1&1&1&-3&1&0&2&6& 0&-2&2\\ 0&0&12&0&0&0&0&0&0&-6&-0&-2&4&-2&-6&-4&-8&0&-2&4&-2&-6&-4&-8&0&-5&-3&-5&1&-1&-3&-5&-3&-4&-2 &-2&4&6&-2\\ 0&0&0&0&0&0&0&0&0&-6&-4&-8&0&-8&0&-8&0&-8&0&-8&0&-8&0&-8&0&-8&0&-8&-8&-8&-8&-8&-8&-8&-8&-8&-8 &-8
\begin{tabular}{|c|c|c|} \hline
**Diagrams** & **Sequences** & **s-factors** \\ \hline \(C_{1}\) & \(\{\{ABF\},\{GH\},\{LMP\}\}\) & 0 \\ \hline \(C_{2}\) & \(\{\{AFB\},\{GH\},\{MLP\}\}\) & 0 \\ \hline \(C_{3}\) & \(\{\{AFB\},\{GH\},\{LMP\}\}\) & 0 \\ \hline \(C_{4}\) & \(\{\{AFB\},\{GH\},\{LPM\}\}\) & 0 \\ \hline \(C_{5}\) & \(\{\{ABF\},\{GH\},\{MPL\}\}\) & 0 \\ \hline \(C_{6}\) & \(\{\{ABF\},\{GH\},\{MLP\}\}\) & 0 \\ \hline \(C_{7}\) & \(\{\{ABF\},\{GH\},\{PML\}\}\) & 0 \\ \hline \(C_{8}\) & \(\{\{ABF\},\{GH\},\{PLM\}\}\) & 0 \\ \hline \(C_{9}\) & \(\{\{ABF\},\{GH\},\{LPM\}\}\) & 0 \\ \hline \(C_{10}\) & \(\{\{BAF\},\{GH\},\{MLP\}\}\) & 0 \\ \hline \(C_{11}\) & \(\{\{BAF\},\{GH\},\{LMP\}\}\) & 0 \\ \hline \(C_{12}\) & \(\{\{BAF\},\{GH\},\{LPM\}\}\) & 0 \\ \hline \(C_{13}\) & \(\{\{AFB\},\{GH\},\{MPL\}\}\) & 1 \\ \hline \(C_{14}\) & \(\{\{BAF\},\{GH\},\{PLM\}\}\) & 1 \\ \hline \(C_{15}\) & \(\{\{AFB\},\{GH\},\{PML\}\}\) & 2 \\ \hline \(C_{16}\) & \(\{\{AFB\},\{GH\},\{PLM\}\}\) & 2 \\ \hline \(C_{17}\) & \(\{\{BAF\},\{GH\},\{MPL\}\}\) & 2 \\ \hline \(C_{18}\) & \(\{\{BAF\},\{GH\},\{PML\}\}\) & 2 \\ \hline \end{tabular}
\[R=\frac{1}{12}\left(\begin{array}{cccccccccccccccc}12&4&-6&2&2&-6&-4&2&-6&2&-6&4& -2&-2&2&0&0&2\\ 0&4&0&-4&0&0&0&0&0&-4&0&4&-2&-6&-4&6&2&4\\ 0&-2&6&-4&0&0&0&0&2&-6&4&0&-2&-2&2&0&2\\ 0&-2&0&2&0&0&0&0&2&0&-2&2&2&0&-2&-2&0\\ 0&0&0&0&2&0&-4&2&0&0&0&0&-2&0&0&-2&4\\ 0&-2&0&2&-4&6&-4&8&-6&-4&0&4&2&-6&0&-2&2&4\\ 0&0&0&0&-4&0&8&-4&0&0&0&0&2&2&-4&2&2&-4\\ 0&0&0&0&2&0&-4&2&0&0&0&0&-2&0&4&-2&0&0\\ 0&4&0&-4&8&-6&-4&-4&6&2&0&-2&-6&2&4&2&-2&0\\ 0&-2&0&2&0&0&0&0&0&2&0&-2&2&2&0&-2&-2&0\\ 0&4&-6&2&0&0&0&0&0&-4&6&-2&-2&0&2&0&2&-2\\ 0&4&0&-4&0&0&0&0&0&-4&0&4&-6&-2&4&2&6&-4\\ 0&0&0&0&0&0&0&0&0&0&0&2&-2&-4&2&-2&4\\ 0&0&0&0&0&0&0&0&0&0&0&-2&2&4&-2&2&-4\\ 0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&0&2&-2&-4&2&-2&4\\ \end{array}\right)\] (124) \(\mathcal{D}=(\mathbf{1}_{6},0)\)
5. \(\mathbf{W}^{(4)}_{3,\rm II}(2,3,3)\) This is the second Cweb with the same correlator and attachment content. It has eighteen diagrams, one of which is shown below. The table shows the chosen order of shuffles and their corresponding \(s\)-factors.
\[\begin{array}{|c|c|c|}\hline\text{\bf Diagrams}&\text{\bf Sequences}&\text{\bf s-factors} \\ \hline C_{1}&\{\{APB\},\{FG\},\{HLM\}\}&0\\ \hline C_{2}&\{\{APB\},\{GF\},\{HLM\}\}&0\\ \hline C_{3}&\{\{ABP\},\{FG\},\{LHM\}\}&0\\ \hline C_{4}&\{\{ABP\},\{FG\},\{HLM\}\}&0\\ \hline C_{5}&\{\{ABP\},\{FG\},\{HML\}\}&0\\ \hline C_{6}&\{\{ABP\},\{GF\},\{LHM\}\}&0\\ \hline C_{7}&\{\{ABP\},\{GF\},\{HLM\}\}&0\\ \hline C_{8}&\{\{ABP\},\{GF\},\{HML\}\}&0\\ \hline C_{9}&\{\{BAP\},\{FG\},\{HLM\}\}&0\\ \hline C_{10}&\{\{BAP\},\{GF\},\{HLM\}\}&0\\ \hline C_{11}&\{\{APB\},\{FG\},\{LHM\}\}&1\\ \hline C_{12}&\{\{BAP\},\{GF\},\{HML\}\}&1\\ \hline C_{13}&\{\{APB\},\{FG\},\{HML\}\}&2\\ \hline C_{14}&\{\{APB\},\{GF\},\{HML\}\}&2\\ \hline C_{15}&\{\{BAP\},\{FG\},\{LHM\}\}&2\\ \hline C_{16}&\{\{BAP\},\{GF\},\{LHM\}\}&2\\ \hline C_{17}&\{\{APB\},\{GF\},\{LHM\}\}&4\\ \hline C_{18}&\{\{BAP\},\{FG\},\{HML\}\}&4\\ \hline\end{array}\]
\[\begin{array}{|c|cccccccccccc|}\hline 4&-4&0&0&0&0&0&0&-4&4&-1&-3&-3&3&1&-1&1&3 \\ -2&2&0&0&0&0&0&0&2&-2&1&1&1&-1&-1&-1\\ 0&0&4&0&-4&-4&0&4&0&-1&-3&1&-1&-3&3&1&3\\ -2&2&-2&6&-4&2&-6&4&-4&1&-3&1&-1&1&-1&-1&3\\ 0&0&-2&0&2&2&0&-2&0&0&1&1&-1&1&-1&-1&-1\\ 0&0&-2&0&2&2&0&-2&0&0&1&1&-1&1&1&-1&-1\\ 4&-4&4&-6&2&-4&6&-2&2&-2&-3&1&-1&1&-1&1&3&-1\\ 0&0&4&0&-4&-4&0&4&0&0&-3&-1&3&-3&-1&1&3&1\\ -2&2&0&0&0&0&0&2&-2&1&1&1&-1&-1&1&-1&-1\\ 4&-4&0&0&0&0&0&-4&4&-3&-1&-1&1&3&-3&3&1\\ 0&0&0&0&0&0&0&0&0&3&-3&-3&3&-3&3&-3&3\\ 0&0&0&0&0&0&0&0&0&-3&3&3&-3&3&-3&3&-3\\ 0&0&0&0&0&0&0&0&0&-1&1&-1&1&-1&1&-1\\ 0&0&0&0&0&0&0&0&0&1&-1&-1&1&-1&1&-1&1\\ 0&0&0&0&0&0&0&0&0&-1&1&1&-1&1&-1&-1\\ 0&0&0&0&0&0&0&0&0&0&-1&1&-1&1&-1&1&-1\\ 0&0&0&0&0&0&0&0&0&-1&1&-1&1&-1&1&-1\\ 0&0&0&0&0&0&0&0&0&0&1&-1&-1&1&-1&1&-1\\ 0&0&0&0&0&0&0&0&0&-1&1&-1&1&-1&1&-1\\ 0&0&0&0&0&0&0&0&0&1&-1&-1&1&-1&1&-1&1\\ \end{array}\]
\[\mathcal{D}\,=\,(\mathbf{1}_{4},0) \tag{112}\]
6. \(\mathbf{W}^{(4)}_{3,\rm I}(1,2,5)\)
\begin{tabular}{|c|c|c|} \hline
**Diagrams** & **Sequences** & **s-factors** \\ \hline \(C_{1}\) & \(\{\{ABFGH\},\{MP\}\}\) & 0 \\ \hline \(C_{2}\) & \(\{\{ABFGH\},\{PM\}\}\) & 0 \\ \hline \(C_{3}\) & \(\{\{AFBHG\},\{MP\}\}\) & 0 \\ \hline \(C_{4}\) & \(\{\{AFBHG\},\{PM\}\}\) & 0 \\ \hline \(C_{5}\) & \(\{\{AFBGH\},\{MP\}\}\) & 0 \\ \hline \(C_{6}\) & \(\{\{AFBGH\},\{PM\}\}\) & 0 \\ \hline \(C_{7}\) & \(\{\{AFBHG\},\{MP\}\}\) & 0 \\ \hline \(C_{8}\) & \(\{\{AFHBG\},\{PM\}\}\) & 0 \\ \hline \(C_{9}\) & \(\{\{AFHGB\},\{MP\}\}\) & 0 \\ \hline \(C_{10}\) & \(\{\{AFHGB\},\{PM\}\}\) & 0 \\ \hline \(C_{11}\) & \(\{\{AFGBH\},\{MP\}\}\) & 0 \\ \hline \(C_{12}\) & \(\{\{AFGBH\},\{PM\}\}\) & 0 \\ \hline \(C_{13}\) & \(\{\{AFGHB\},\{MP\}\}\) & 0 \\ \hline \(C_{14}\) & \(\{\{AFHB\},\{PM\}\}\) & 0 \\ \hline \(C_{15}\) & \(\{\{AGBFH\},\{MP\}\}\) & 0 \\ \hline \(C_{16}\) & \(\{\{AGBFH\},\{PM\}\}\) & 0 \\ \hline \(C_{17}\) & \(\{\{AGFBH\},\{MP\}\}\) & 0 \\ \hline \(C_{18}\) & \(\{\{AGFBH\},\{PM\}\}\) & 0 \\ \hline \(C_{19}\) & \(\{\{AGFB\},\{MP\}\}\) & 0 \\ \hline \(C_{20}\) & \(\{\{AGFB\},\{PM\}\}\) & 0 \\ \hline \(C_{21}\) & \(\{\{GAFBH\},\{MP\}\}\) & 0 \\ \hline \(C_{22}\) & \(\{\{GAFBH\},\{PM\}\}\) & 0 \\ \hline \(C_{23}\) & \(\{\{GAFB\},\{MP\}\}\) & 0 \\ \hline \(C_{24}\) & \(\{\{GAFB\},\{PM\}\}\) & 0 \\ \hline \(C_{25}\) & \(\{\{ABFBG\},\{MP\}\}\) & 1 \\ \hline \(C_{26}\) & \(\{\{GABFH\},\{PM\}\}\) & 1 \\ \hline \(C_{27}\) & \(\{\{ABFBHG\},\{PM\}\}\) & 2 \\ \hline \(C_{28}\) & \(\{\{ABGFH\},\{MP\}\}\) & 2 \\ \hline \(C_{29}\) & \(\{\{ABGFH\},\{PM\}\}\) & 2 \\ \hline \(C_{30}\) & \(\{\{GABFH\},\{MP\}\}\) & 2 \\ \hline \end{tabular}
\[\mathcal{D}\,=\,(\mathbf{1}_{10},0)\] (B.23)
\begin{tabular}{|c|c|c|} \hline
**Diagrams** & **Sequences** & **s-factors** \\ \hline \(C_{1}\) & \(\{\{ABFGH\},\{MP\}\}\) & 0 \\ \hline \(C_{2}\) & \(\{\{ABHGF\},\{MP\}\}\) & 0 \\ \hline \(C_{3}\) & \(\{\{ABGFH\},\{MP\}\}\) & 0 \\ \hline \(C_{4}\) & \(\{\{ABGHF\},\{MP\}\}\) & 0 \\ \hline \(C_{5}\) & \(\{\{AGBFH\},\{MP\}\}\) & 0 \\ \hline \(C_{6}\) & \(\{\{AGBHF\},\{MP\}\}\) & 0 \\ \hline \(C_{7}\) & \(\{\{BAFGH\},\{MP\}\}\) & 0 \\ \hline \(C_{8}\) & \(\{\{BAHGF\},\{PM\}\}\) & 0 \\ \hline \(C_{9}\) & \(\{\{BAHGF\},\{MP\}\}\) & 0 \\ \hline \(C_{10}\) & \(\{\{BAGFH\},\{MP\}\}\) & 0 \\ \hline \(C_{11}\) & \(\{\{BAGHF\},\{PM\}\}\) & 0 \\ \hline \(C_{12}\) & \(\{\{BAGHF\},\{MP\}\}\) & 0 \\ \hline \(C_{13}\) & \(\{\{BGAFH\},\{MP\}\}\) & 0 \\ \hline \(C_{14}\) & \(\{\{BGAHF\},\{PM\}\}\) & 0 \\ \hline \(C_{15}\) & \(\{\{BGAHF\},\{MP\}\}\) & 0 \\ \hline \(C_{16}\) & \(\{\{ABFHG\},\{MP\}\}\) & 0 \\ \hline \(C_{17}\) & \(\{\{ABHFG\},\{PM\}\}\) & 0 \\ \hline \(C_{18}\) & \(\{\{ABHFG\},\{MP\}\}\) & 0 \\ \hline \(C_{19}\) & \(\{\{ABHGF\},\{PM\}\}\) & 0 \\ \hline \(C_{20}\) & \(\{\{ABGFH\},\{PM\}\}\) & 0 \\ \hline \(C_{21}\) & \(\{\{ABGHF\},\{PM\}\}\) & 0 \\ \hline \(C_{22}\) & \(\{\{ABHFG\},\{MP\}\}\) & 0 \\ \hline \(C_{23}\) & \(\{\{ABBGF\},\{PM\}\}\) & 0 \\ \hline \(C_{24}\) & \(\{\{ABBGF\},\{MP\}\}\) & 0 \\ \hline \(C_{25}\) & \(\{\{ABGHF\},\{MP\}\}\) & 0 \\ \hline \(C_{26}\) & \(\{\{AGBHF\},\{PM\}\}\) & 0 \\ \hline \(C_{27}\) & \(\{\{AGHBF\},\{MP\}\}\) & 0 \\ \hline \(C_{28}\) & \(\{\{BAFHG\},\{PM\}\}\) & 0 \\ \hline \(C_{29}\) & \(\{\{BAFHG\},\{MP\}\}\) & 0 \\ \hline \(C_{30}\) & \(\{\{BAFGH\},\{PM\}\}\) & 0 \\ \hline \end{tabular}
\begin{tabular}{|c|c|c|} \hline
**Diagrams** & **Sequences** & **s-factors** \\ \hline \(C_{31}\) & \(\{\{BAHFG\},\{PM\}\}\) & 0 \\ \hline \(C_{32}\) & \(\{\{BAHFG\},\{MP\}\}\) & 0 \\ \hline \(C_{33}\) & \(\{\{BAHFG\},\{PM\}\}\) & 0 \\ \hline \(C_{34}\) & \(\{\{BFAHG\},\{MP\}\}\) & 0 \\ \hline \(C_{35}\) & \(\{\{BFAGH\},\{MP\}\}\) & 0 \\ \hline \(C_{36}\) & \(\{\{BFAH\},\{MP\}\}\) & 0 \\ \hline \(C_{37}\) & \(\{\{BGAFH\},\{PM\}\}\) & 0 \\ \hline \(C_{38}\) & \(\{\{BGFAH\},\{PM\}\}\) & 0 \\ \hline \(C_{39}\) & \(\{\{BGFAH\},\{MP\}\}\) & 0 \\ \hline \(C_{40}\) & \(\{\{GABFH\},\{MP\}\}\) & 0 \\ \hline \(C_{41}\) & \(\{\{GABHF\},\{PM\}\}\) & 0 \\ \hline \(C_{42}\) & \(\{\{GABHF\},\{MP\}\}\) & 0 \\ \hline \(C_{43}\) & \(\{\{GAHBF\},\{MP\}\}\) & 0 \\ \hline \(C_{44}\) & \(\{\{GBAFH\},\{PM\}\}\) & 0 \\ \hline \(C_{45}\) & \(\{\{GBAFH\},\{MP\}\}\) & 0 \\ \hline \(C_{46}\) & \(\{\{GBAHF\},\{PM\}\}\) & 0 \\ \hline \(C_{47}\) & \(\{\{GBAHF\},\{MP\}\}\) & 0 \\ \hline \(C_{48}\) & \(\{\{GBFAH\},\{MP\}\}\) & 0 \\ \hline \(C_{49}\) & \(\{\{ABFHG\},\{PM\}\}\) & 1 \\ \hline \(C_{50}\) & \(\{\{ABFGH\},\{PM\}\}\) & 1 \\ \hline \(C_{51}\) & \(\{\{AHBFG\},\{PM\}\}\) & 1 \\ \hline \(C_{52}\) & \(\{\{AHGBF\},\{PM\}\}\) & 1 \\ \hline \(C_{53}\) & \(\{\{AGBFH\},\{PM\}\}\) & 1 \\ \hline \(C_{54}\) & \(\{\{AGHBF\},\{PM\}\}\) & 1 \\ \hline \(C_{55}\) & \(\{\{BFAHG\},\{PM\}\}\) & 1 \\ \hline \(C_{56}\) & \(\{\{BFAGH\},\{PM\}\}\) & 1 \\ \hline \(C_{57}\) & \(\{\{BFGAH\},\{PM\}\}\) & 1 \\ \hline \(C_{58}\) & \(\{\{GABFH\},\{PM\}\}\) & 1 \\ \hline \(C_{59}\) & \(\{\{GAHBF\},\{PM\}\}\) & 1 \\ \hline \(C_{60}\) & \(\{\{GBFAH\},\{PM\}\}\) & 1 \\ \hline \end{tabular}
8. \(\mathbf{W}_{3,\rm I}^{(2,1)}(1,2,4)\) This is the first Cweb that connects three Wilson lines and is made out of a three-gluon correlator and two two-gluon correlators. We present one representative diagram of this Cweb. The table shows the chosen order of shuffles and their corresponding \(s\)-factors.
\begin{tabular}{|c|c|c|} \hline
**Diagrams** & **Sequences** & **s-factors** \\ \hline \(C_{1}\) & \(\{\{ABFG\},\{HL\}\}\) & 0 \\ \hline \(C_{2}\) & \(\{\{ABFG\},\{LH\}\}\) & 0 \\ \hline \(C_{3}\) & \(\{\{ABGF\},\{HL\}\}\) & 0 \\ \hline \(C_{4}\) & \(\{\{ABGF\},\{LH\}\}\) & 0 \\ \hline \(C_{5}\) & \(\{\{BAFG\},\{HL\}\}\) & 0 \\ \hline \(C_{6}\) & \(\{\{BAFG\},\{LH\}\}\) & 0 \\ \hline \(C_{7}\) & \(\{\{BAGF\},\{HL\}\}\) & 0 \\ \hline \(C_{8}\) & \(\{\{BAGF\},\{LH\}\}\) & 0 \\ \hline \(C_{9}\) & \(\{\{AFBG\},\{LH\}\}\) & 1 \\ \hline \(C_{10}\) & \(\{\{BGAF\},\{HL\}\}\) & 1 \\ \hline \(C_{11}\) & \(\{\{AFBG\},\{HL\}\}\) & 2 \\ \hline \(C_{12}\) & \(\{\{BGAF\},\{LH\}\}\) & 2 \\ \hline \end{tabular}
The mixing matrix and the diagonal matrix for this Cweb are given by,
\[R=\!\frac{1}{6}\left(\begin{array}{cccccccccc}3&-3&0&0&0&0&0&0&2&-1&-2&1 \\ -3&3&0&0&0&0&0&-1&2&1&-2\\ 0&0&3&-3&0&0&0&0&2&-1&-2&1\\ 0&0&-3&3&0&0&0&-1&2&1&-2\\ 0&0&0&0&3&-3&0&0&2&-1&-2&1\\ 0&0&0&0&-3&3&0&0&-1&2&1&-2\\ 0&0&0&0&0&0&3&-3&2&-1&-2&1\\ 0&0&0&0&0&0&-3&3&-1&2&1&-2\\ 0&0&0&0&0&0&0&2&2&-2&-2\\ 0&0&0&0&0&0&0&0&2&2&-2&-2\\ 0&0&0&0&0&0&0&0&-1&-1&1&1\\ 0&0&0&0&0&0&0&0&-1&-1&1&1\\ \end{array}\right),\mathcal{D}\,=\,(\mathbf{1}_{5},0)\,. \tag{100}\]
9. \(\mathbf{W}\,_{3,\rm II}^{(2,1)}(1,2,4)\) This is the second Cweb with the same correlator and attachment content. It has twenty-four diagrams, one of which is shown below. The table lists all possible shuffles and their corresponding \(s\)-factors.
\begin{tabular}{|c|c|c|} \hline
**Diagrams** & **Sequences** & **s-factors** \\ \hline \(C_{1}\) & \(\{\{ABFG\},\{HL\}\}\) & 0 \\ \hline \(C_{2}\) & \(\{\{ABGF\},\{LH\}\}\) & 0 \\ \hline \(C_{3}\) & \(\{\{ABGF\},\{HL\}\}\) & 0 \\ \hline \(C_{4}\) & \(\{\{AGFB\},\{LH\}\}\) & 0 \\ \hline \(C_{5}\) & \(\{\{AGBF\},\{LH\}\}\) & 0 \\ \hline \(C_{6}\) & \(\{\{AGBF\},\{HL\}\}\) & 0 \\ \hline \(C_{7}\) & \(\{\{BAFG\},\{HL\}\}\) & 0 \\ \hline \(C_{8}\) & \(\{\{BAGF\},\{HL\}\}\) & 0 \\ \hline \(C_{9}\) & \(\{\{GAFB\},\{LH\}\}\) & 0 \\ \hline \(C_{10}\) & \(\{\{GABF\},\{LH\}\}\) & 0 \\ \hline \(C_{11}\) & \(\{\{AFBG\},\{HL\}\}\) & 0 \\ \hline \(C_{12}\) & \(\{\{AFGB\},\{LH\}\}\) & 0 \\ \hline \(C_{13}\) & \(\{\{ABFG\},\{LH\}\}\) & 0 \\ \hline \(C_{14}\) & \(\{\{AGFB\},\{HL\}\}\) & 0 \\ \hline \(C_{15}\) & \(\{\{BAGF\},\{LH\}\}\) & 0 \\ \hline \(C_{16}\) & \(\{\{BGAF\},\{HL\}\}\) & 0 \\ \hline \(C_{17}\) & \(\{\{GABF\},\{HL\}\}\) & 0 \\ \hline \(C_{18}\) & \(\{\{GBAF\},\{LH\}\}\) & 0 \\ \hline \(C_{19}\) & \(\{\{AFBG\},\{LH\}\}\) & 1 \\ \hline \(C_{20}\) & \(\{\{AFGB\},\{HL\}\}\) & 1 \\ \hline \(C_{21}\) & \(\{\{BAFG\},\{LH\}\}\) & 1 \\ \hline \(C_{22}\) & \(\{\{BGAF\},\{LH\}\}\) & 1 \\ \hline \(C_{23}\) & \(\{\{GAFB\},\{HL\}\}\) & 1 \\ \hline \(C_{24}\) & \(\{\{GBAF\},\{HL\}\}\) & 1 \\ \hline \end{tabular}
The mixing matrix \(R\), and the diagonal matrix \(D\) are given by,
10. \(\mathbf{W}^{(2,1)}_{3,\rm III}(1,2,4)\) This is the third Cweb with the same correlator and attachment content. It has twelve diagrams, one of which is displayed below.
The mixing matrix, and the diagonal matrix are given by,
\[R=\frac{1}{6}\left(\begin{array}{cccccccccccc}6&0&-3&-3&-3&-3&-1&2&2&-1&2&2\\ 0&6&-3&-3&-3&-3&2&-1&2&2&2&-1\\ 0&0&3&0&0&-3&-1&-1&-1&2&2\\ 0&0&0&3&-3&0&-1&-1&2&2&-1&-1\\ 0&0&0&-3&3&0&-1&2&-1&-1&2&-1\\ 0&0&-3&0&0&3&2&-1&2&-1&-1&-1\\ 0&0&0&0&0&0&2&-1&-1&-1&-1&2\\ 0&0&0&0&0&0&-1&2&-1&2&-1&-1\\ 0&0&0&0&0&0&-1&-1&2&-1&2&-1\\ 0&0&0&0&0&0&-1&2&-1&2&-1&-1\\ 0&0&0&0&0&0&-1&-1&2&-1&2&-1\\ 0&0&0&0&0&0&-1&-1&2&-1&2&-1\\ 0&0&0&0&0&0&-1&-1&2&-1&2&-1\\ 0&0&0&0&0&0&2&-1&-1&-1&-1&2\end{array}\right),\,{\cal D}\,=\left({\bf 1}_{6},0 \right). \tag{111}\]
11. \({\bf W}^{\,(2,1)}_{\,3,\rm I}(2,2,3)\) This Cweb also has twelve diagrams, one of which is shown below. The table shows the twelve shuffles and their corresponding \(s\)-factors.
The mixing matrix, and the diagonal matrix are given by,
\[R= \frac{1}{6}\left(\begin{array}{cccccccccccc}6&0&-3&0&-3&-3&-3&0&2&2&1&1\\ 0&6&0&-3&-3&-3&0&-3&2&2&1&1\\ 0&0&3&0&0&0&-3&0&-1&2&-2&1\\ 0&0&0&3&0&0&0&-3&-1&2&-2&1\\ 0&0&0&0&3&-3&0&0&-1&2&1&-2\\ 0&0&0&0&-3&3&0&0&2&-1&-2&1\\ 0&0&-3&0&0&0&3&0&2&-1&1&-2\\ 0&0&0&-3&0&0&0&3&2&-1&1&-2\\ 0&0&0&0&0&0&0&2&2&-2&-2\\ 0&0&0&0&0&0&0&0&-1&-1&1&1\\ 0&0&0&0&0&0&0&-1&-1&1&1\end{array}\right)\] \[\mathcal{D}\,= (\mathbf{1}_{6},0) \tag{111}\]
12. \(\mathbf{W}^{\,(2,1)}_{3,\rm II}(2,2,3)\) This Cweb has six diagrams, one of which is shown below. The table shows the chosen order of shuffles and their corresponding \(s\)-factors.
The \(R\) and \(D\) matrices are given by,
\[R= \frac{1}{6}\left(\begin{array}{cccccc}3&-3&-1&2&1&-2\\ -3&3&2&-1&-2&1\\ 0&0&2&2&-2&-2\\ 0&0&2&2&-2&-2\\ 0&0&-1&-1&1&1\\ 0&0&-1&-1&1&1\end{array}\right),\mathcal{D}\,=\,(\mathbf{1}_{2},0)\,. \tag{114}\]
13. \(\mathbf{W}^{\,(2,1)}_{3,\rm I}(1,3,3)\) This is the first Cweb with the same correlator and attachment content. It has nine diagrams, one of which is shown below. The table shows the chosen order of shuffles and their corresponding \(s\)-factors.
\begin{tabular}{|c|c|c|} \hline
**Diagrams** & **Sequences** & **s-factors** \\ \hline \(C_{1}\) & \(\{\{ABF\},\{GH\},\{LM\}\}\) & 0 \\ \hline \(C_{2}\) & \(\{\{ABF\},\{GH\}\}\) & 0 \\ \hline \(C_{3}\) & \(\{\{ABF\},\{GH\}\}\) & 0 \\ \hline \(C_{4}\) & \(\{\{ABF\},\{HGL\}\}\) & 0 \\ \hline \(C_{5}\) & \(\{\{BAF\},\{GH\}\}\) & 0 \\ \hline \(C_{6}\) & \(\{\{AFB\},\{GLH\}\}\) & 1 \\ \hline \(C_{7}\) & \(\{\{BAF\},\{HGL\}\}\) & 1 \\ \hline \(C_{8}\) & \(\{\{AFB\},\{HGL\}\}\) & 2 \\ \hline \(C_{9}\) & \(\{\{BAF\},\{GLH\}\}\) & 2 \\ \hline \end{tabular}
The \(R\) and \(D\) matrices are given by,
\[R=\frac{1}{6}\left(\begin{array}{cccccccc}6&-3&-3&-3&-3&2&2&1&1\\ 0&3&0&0&-3&-1&2&-2&1\\ 0&0&3&-3&0&-1&2&1&-2\\ 0&0&-3&3&0&2&-1&-2&1\\ 0&-3&0&0&3&2&-1&1&-2\\ 0&0&0&0&0&2&2&-2&-2\\ 0&0&0&0&0&2&2&-2&-2\\ 0&0&0&0&0&-1&-1&1&1\\ 0&0&0&0&0&-1&-1&1&1\end{array}\right),\mathcal{D}\,=\,(\mathbf{1}_{4},0)\,. \tag{114}\]
14. \(\mathbf{W}_{3,\rm II}^{\,(2,1)}(1,3,3)\)
\begin{tabular}{|c|c|c|} \hline
**Diagrams** & **Sequences** & **s-factors** \\ \hline \(C_{1}\) & \(\{\{ABF\},\{GHL\}\}\) & 0 \\ \hline \(C_{2}\) & \(\{\{AFB\},\{GHL\}\}\) & 0 \\ \hline \(C_{3}\) & \(\{\{ABF\},\{HGL\}\}\) & 0 \\ \hline \(C_{4}\) & \(\{\{ABF\},\{GLH\}\}\) & 0 \\ \hline \(C_{5}\) & \(\{\{BAF\},\{GHL\}\}\) & 0 \\ \hline \(C_{6}\) & \(\{\{AFB\},\{HGL\}\}\) & 1 \\ \hline \(C_{7}\) & \(\{\{AFB\},\{GLH\}\}\) & 1 \\ \hline \(C_{8}\) & \(\{\{BAF\},\{HGL\}\}\) & 2 \\ \hline \(C_{9}\) & \(\{\{BAF\},\{GLH\}\}\) & 2 \\ \hline \end{tabular}
The \(R\), and \(D\) matrices are given by,
\[R=\frac{1}{6}\left(\begin{array}{cccccccc}6&-3&-3&-3&-3&2&2&1&1\\ 0&3&0&0&-3&-1&2&-2&1\\ 0&0&3&-3&0&-1&2&1&-2\\ 0&0&-3&3&0&2&-1&-2&1\\ 0&-3&0&0&3&2&-1&1&-2\\ 0&0&0&0&0&2&2&-2&-2\\ 0&0&0&0&0&2&2&-2&-2\\ 0&0&0&0&0&-1&-1&1&1\\ 0&0&0&0&0&-1&-1&1&1\end{array}\right),\mathcal{D}\,=\,(\mathbf{1}_{4},0) \tag{115}\]
15. \(\mathbf{W}_{3,\rm III}^{\,(2,1)}(1,3,3)\)
This is the third Cweb with the same correlator and attachment content. It has nine diagrams, one of which is shown below. The table shows the chosen order of all possible shuffles and their corresponding \(s\)-factors.
\begin{tabular}{|c|c|c|} \hline
**Diagrams** & **Sequences** & **s-factors** \\ \hline \(C_{1}\) & \(\{\{ABF\},\{GHL\}\}\) & 0 \\ \hline \(C_{2}\) & \(\{\{AFB\},\{GHL\}\}\) & 0 \\ \hline \(C_{3}\) & \(\{\{ABF\},\{GLH\}\}\) & 0 \\ \hline \(C_{4}\) & \(\{\{ABF\},\{HGL\}\}\) & 0 \\ \hline \(C_{5}\) & \(\{\{BAF\},\{GHL\}\}\) & 0 \\ \hline \(C_{6}\) & \(\{\{AFB\},\{HGL\}\}\) & 1 \\ \hline \(C_{7}\) & \(\{\{BAF\},\{GLH\}\}\) & 1 \\ \hline \(C_{8}\) & \(\{\{AFB\},\{GLH\}\}\) & 2 \\ \hline \(C_{9}\) & \(\{\{BAF\},\{HGL\}\}\) & 2 \\ \hline \end{tabular}
The mixing matrix and the diagonal matrix are given by,
\[R=\frac{1}{6}\left(\begin{array}{cccccccc}6&-3&-3&-3&-3&2&2&1&1\\ 0&3&0&0&-3&-1&2&-2&1\\ 0&0&3&-3&0&2&-1&-2&1\\ 0&0&-3&3&0&-1&2&1&-2\\ 0&-3&0&0&3&2&-1&1&-2\\ 0&0&0&0&0&2&2&-2&-2\\ 0&0&0&0&0&2&2&-2&-2\\ 0&0&0&0&0&-1&-1&1&1\\ 0&0&0&0&0&-1&-1&1&1\end{array}\right),\,\mathcal{D}\,=\,(\mathbf{1}_{4},0). \tag{104}\]
**16. \(\mathbf{W}_{3}^{\,(1,0,1)}(1,2,3)\)**
This Cweb has three diagrams, one of which is displayed below. The mixing matrix for this particular Cweb agrees with the general form of any prime-dimensional mixing matrix [75]. The table shows the chosen order of shuffles and their corresponding \(s\)-factors.
The \(R\) and \(D\) matrices are given by,
\[R=\frac{1}{2}\left(\begin{array}{ccc}2&-1&-1\\ 0&1&-1\\ 0&-1&1\end{array}\right),\,{\cal D}\,=\,({\bf 1}_{2},0)\,.\] (B.33)
17. \({\bf W}_{3,\rm I}^{\,(2,1)}(1,1,5)\)
This is the first Cweb with the same correlator and attachment content. It has fifteen diagrams, one of which is shown below. The table shows the chosen order of shuffles and their corresponding \(s\)-factors.
\begin{tabular}{|c|c|c|} \hline
**Diagrams** & **Sequences** & **s-factors** \\ \hline \(C_{1}\) & \(\{\{ABFGH\}\}\) & 0 \\ \hline \(C_{2}\) & \(\{\{ABHGF\}\}\) & 0 \\ \hline \(C_{3}\) & \(\{\{ABGFH\}\}\) & 0 \\ \hline \(C_{4}\) & \(\{\{ABGFH\}\}\) & 0 \\ \hline \(C_{5}\) & \(\{\{AGBFH\}\}\) & 0 \\ \hline \(C_{6}\) & \(\{\{AGBHF\}\}\) & 0 \\ \hline \(C_{7}\) & \(\{\{AFBGH\}\}\) & 0 \\ \hline \(C_{8}\) & \(\{\{ABFHG\}\}\) & 0 \\ \hline \(C_{9}\) & \(\{\{ABHFG\}\}\) & 0 \\ \hline \(C_{10}\) & \(\{\{AGFBH\}\}\) & 0 \\ \hline \(C_{11}\) & \(\{\{GABFH\}\}\) & 0 \\ \hline \(C_{12}\) & \(\{\{GABHF\}\}\) & 0 \\ \hline \(C_{13}\) & \(\{\{AFBHG\}\}\) & 1 \\ \hline \(C_{14}\) & \(\{\{AFGBH\}\}\) & 1 \\ \hline \(C_{15}\) & \(\{\{GAFBH\}\}\) & 1 \\ \hline \end{tabular}
The mixing matrix and the diagonal matrix for this Cweb are given by,
\[R= \frac{1}{12}\left(\begin{array}{cccccccccccccccc}12&0&0&0&0&0&-6&-6&0&-6&-6& 0&2&2&8\\ 0&12&0&0&0&0&-6&0&-6&-6&0&-6&2&2&8\\ 0&0&12&0&0&0&-12&-6&0&-12&-6&0&8&8&8\\ 0&0&0&12&0&0&-12&0&-6&-12&0&-6&8&8&8\\ 0&0&0&0&12&0&-6&-6&0&-6&-6&0&8&2&2\\ 0&0&0&0&0&12&-6&0&-6&-6&0&-6&8&2&2\\ 0&0&0&0&0&0&6&0&0&-6&0&0&-4&2&2\\ 0&0&0&0&0&0&0&6&0&0&-6&0&-4&-4&8\\ 0&0&0&0&0&0&-6&0&0&6&0&0&2&2&-4\\ 0&0&0&0&0&0&0&-6&0&0&6&0&8&-4&-4\\ 0&0&0&0&0&0&0&0&-6&0&0&6&8&-4&-4\\ 0&0&0&0&0&0&0&0&0&0&0&0&2&-4&2\\ 0&0&0&0&0&0&0&0&0&0&0&-4&8&-4\\ 0&0&0&0&0&0&0&0&0&0&0&2&-4&2\\ \end{array}\right)\] (B.34) \[\mathcal{D}\,= (\mathbbm{1}_{10},0)\]
18. \(\mathbf{W}^{\,(2,1)}_{3,\rm II}\,(1,1,5)\) This is the second Cweb with the same correlator and attachment content. It has thirty diagrams, one of which is shown below. The table gives the chosen order of the thirty shuffles and their corresponding \(s\)-factors.
\begin{tabular}{|c|c|c|} \hline
**Diagrams** & **Sequences** & **s-factors** \\ \hline \(C_{1}\) & \(\{\{ABFG\}\}\) & \(0\) \\ \hline \(C_{2}\) & \(\{\{ABFH\}\}\) & \(0\) \\ \hline \(C_{3}\) & \(\{\{AFGB\}\}\) & \(0\) \\ \hline \(C_{4}\) & \(\{\{AFBG\}\}\) & \(0\) \\ \hline \(C_{5}\) & \(\{\{AFBH\}\}\) & \(0\) \\ \hline \(C_{6}\) & \(\{\{AFHB\}\}\) & \(0\) \\ \hline \(C_{7}\) & \(\{\{FAGB\}\}\) & \(0\) \\ \hline \(C_{8}\) & \(\{\{FABG\}\}\) & \(0\) \\ \hline \(C_{9}\) & \(\{\{FABH\}\}\) & \(0\) \\ \hline \(C_{10}\) & \(\{\{FAHB\}\}\) & \(0\) \\ \hline \(C_{11}\) & \(\{\{FBAG\}\}\) & \(0\) \\ \hline \(C_{12}\) & \(\{\{FBAH\}\}\) & \(0\) \\ \hline \(C_{13}\) & \(\{\{AGFB\}\}\) & \(0\) \\ \hline \(C_{14}\) & \(\{\{ABGF\}\}\) & \(0\) \\ \hline \(C_{15}\) & \(\{\{AFGH\}\}\) & \(0\) \\ \hline \(C_{16}\) & \(\{\{AFHG\}\}\) & \(0\) \\ \hline \(C_{17}\) & \(\{\{BAFG\}\}\) & \(0\) \\ \hline \(C_{18}\) & \(\{\{BAFH\}\}\) & \(0\) \\ \hline \(C_{19}\) & \(\{\{BFAG\}\}\) & \(0\) \\ \hline \(C_{20}\) & \(\{\{BFAH\}\}\) & \(0\) \\ \hline \(C_{21}\) & \(\{\{FAGH\}\}\) & \(0\) \\ \hline \(C_{22}\) & \(\{\{FAHG\}\}\) & \(0\) \\ \hline \(C_{23}\) & \(\{\{FBHA\}\}\) & \(0\) \\ \hline \(C_{24}\) & \(\{\{FHAB\}\}\) & \(0\) \\ \hline \(C_{25}\) & \(\{\{AGBF\}\}\) & \(1\) \\ \hline \(C_{26}\) & \(\{\{AGFH\}\}\) & \(1\) \\ \hline \(C_{27}\) & \(\{\{BAGF\}\}\) & \(1\) \\ \hline \(C_{28}\) & \(\{\{BFHA\}\}\) & \(1\) \\ \hline \(C_{29}\) & \(\{\{FHAG\}\}\) & \(1\) \\ \hline \(C_{30}\) & \(\{\{FHBA\}\}\) & \(1\) \\ \hline \end{tabular}
The mixing matrix and the diagonal matrix for this Cweb are given by,
\[\mathcal{D}\,=\,({\bf 1}_{20},0) \tag{120}\]
19. \({\rm W}_{3}^{(1,0,1)}(1,1,4)\) This is a Cweb made out of one four-gluon correlator and a two-gluon correlator. It has six diagrams, one of which is displayed below. The table shows the chosen order of shuffles and their corresponding \(s\)-factors.
The \(R\), and \(D\) matrices are given by,
\[R=\frac{1}{2}\left(\begin{array}{cccccc}2&0&0&0&-1&-1\\ 0&2&0&0&-1&-1\\ 0&0&2&0&-1&-1\\ 0&0&0&2&-1&-1\\ 0&0&0&0&1&-1\\ 0&0&0&0&-1&1\end{array}\right),\,\mathcal{D}\,=\,(\mathbf{1}_{5},0)\] (B.36)
20. \(\mathbf{W}_{3}^{\,(4)}(1,1,6)\) This is the largest Cweb; it has ninety diagrams. The table shows the chosen order of shuffles and their corresponding \(s\)-factors.
\begin{tabular}{|c|c|c|} \hline
**Diagrams** & **Sequences** & **s-factors** \\ \hline \(C_{1}\) & \(\{\{ABFGHL\}\}\) & 0 \\ \hline \(C_{2}\) & \(\{\{ABFHGL\}\}\) & 0 \\ \hline \(C_{3}\) & \(\{\{ABLGF\}\}\) & 0 \\ \hline \(C_{4}\) & \(\{\{ABLHGF\}\}\) & 0 \\ \hline \(C_{5}\) & \(\{\{ABGFHL\}\}\) & 0 \\ \hline \(C_{6}\) & \(\{\{ABGLHF\}\}\) & 0 \\ \hline \(C_{7}\) & \(\{\{ABGHFL\}\}\) & 0 \\ \hline \(C_{8}\) & \(\{\{ABGHLF\}\}\) & 0 \\ \hline \(C_{9}\) & \(\{\{ABHFGL\}\}\) & 0 \\ \hline \(C_{10}\) & \(\{\{ABHLGF\}\}\) & 0 \\ \hline \(C_{11}\) & \(\{\{ABHGFL\}\}\) & 0 \\ \hline \(C_{12}\) & \(\{\{ABHGLF\}\}\) & 0 \\ \hline \(C_{13}\) & \(\{\{ABFHL\}\}\) & 0 \\ \hline \(C_{14}\) & \(\{\{AGBLHF\}\}\) & 0 \\ \hline \(C_{15}\) & \(\{\{AGBHFL\}\}\) & 0 \\ \hline \(C_{16}\) & \(\{\{AGBHLF\}\}\) & 0 \\ \hline \(C_{17}\) & \(\{\{AGHBFL\}\}\) & 0 \\ \hline \(C_{18}\) & \(\{\{AGHBLF\}\}\) & 0 \\ \hline \(C_{19}\) & \(\{\{AHEFGL\}\}\) & 0 \\ \hline \(C_{20}\) & \(\{\{AHBLGF\}\}\) & 0 \\ \hline \(C_{21}\) & \(\{\{AHBGFL\}\}\) & 0 \\ \hline \(C_{22}\) & \(\{\{AHBGLF\}\}\) & 0 \\ \hline \(C_{23}\) & \(\{\{AHGBLF\}\}\) & 0 \\ \hline \(C_{24}\) & \(\{\{AFBGLH\}\}\) & 0 \\ \hline \(C_{25}\) & \(\{\{AFBGL\}\}\) & 0 \\ \hline \(C_{26}\) & \(\{\{AFBHLG\}\}\) & 0 \\ \hline \(C_{27}\) & \(\{\{AFBHGL\}\}\) & 0 \\ \hline \(C_{28}\) & \(\{\{AFGBHL\}\}\) & 0 \\ \hline \(C_{29}\) & \(\{\{AFHBBGL\}\}\) & 0 \\ \hline \(C_{30}\) & \(\{\{ABFLGH\}\}\) & 0 \\ \hline \end{tabular} |
2303.16702 | Toroidal cavitation by a snapping popper | Cavitation is a phenomenon in which bubbles form and collapse in liquids due
to pressure or temperature changes. Even common tools like a rubber popper can
be used to create cavitation at home. As a rubber popper toy slams a solid wall
underwater, toroidal cavitation forms. As part of this project, we aim to
explain how an elastic shell causes cavitation and to describe the bubble
morphology. High-speed imaging reveals that a fast fluid flow between a
snapping popper and a solid glass reduces the fluid pressure to cavitate.
Cavitation occurs on the popper surface in the form of sheet cavitation. Our
study uses two-dimensional Rayleigh-Plesset equations and the energy balance to
capture the relationship between the bubble lifetime and the popper
deformability. The initial distance between the popper and the wall is an
important parameter for determining the cavitation dynamics. Presented results
provide a deeper understanding of cavitation mechanics, which involves the
interaction between fluid and elastic structure. | Akihito Kiyama, Sharon Wang, Sunghwan Jung | 2023-03-29T13:53:54Z | http://arxiv.org/abs/2303.16702v2 | # Toroidal cavitation by a snapping popper
###### Abstract
Cavitation is a phenomenon in which bubbles form and collapse in liquids due to pressure or temperature changes. Even common tools like a rubber popper can be used to create cavitation at home. As a rubber popper toy slams a solid wall underwater, toroidal cavitation forms. As part of this project, we aim to explain how an elastic shell causes cavitation and to describe the bubble morphology. High-speed imaging reveals that a fast fluid flow between a snapping popper and a solid glass reduces the fluid pressure to cavitate. Cavitation occurs on the popper surface in the form of sheet cavitation. Our study uses two-dimensional Rayleigh-Plesset equations and the energy balance to capture the relationship between the bubble lifetime and the popper deformability. The initial distance between the popper and the wall is an important parameter for determining the cavitation dynamics. Presented results provide a deeper understanding of cavitation mechanics, which involves the interaction between fluid and elastic structure.
## I Introduction
Cavitation is a phase change process from liquid to gas (i.e., vaporization) due to an abrupt decrease in fluid pressure. Cavitation bubbles collapse onto solid objects and walls after nucleation, causing destructive erosion. For example, cavitation in a high-speed flow around a hydrofoil or a propeller damages their structure (e.g., [1; 2; 3]). It can significantly reduce the efficiency of marine propulsion and hydro turbine systems and cause design failures due to the excessive vibration. On the other hand, engineers use the impulsive fluid motion associated with cavitation bubbles in beneficial ways, e.g. for medical [4] and cleaning applications [5; 6].
Researchers have been employing various experimental methods to create a spherical cavitation bubble [7]. For example, a short-pulsed laser was used to study bubble collapse and rebound behaviors [8; 9; 10]. The laser method allows researchers to study the behavior of cavitation bubbles with high precision, down to nanosecond time scales. The electric spark method is used to create a single bubble at a low cost, as in studies of bubble-particle interaction [11] and bubble dynamics in non-Newtonian fluids [12]. Ultrasonic transducers have also been used as an alternative to the electric spark method to create bubble cavitation [13; 14].
Cavitation occurs not only in engineering systems but also in natural systems and everyday items. In nature, pistol shrimp use bubbles to stun their prey [15; 16]. These bubbles are created by the shrimp's claw and can reach high temperatures and high pressures, killing small fish. Even in everyday activities, cavitation can be seen when one cracks a finger joint [17], drops water-filled vials [18] or tubes [19], or performs a party trick with a beer bottle [20]. Such events are caused by the formation of bubbles and their subsequent collapse, which releases energy in the form of shock waves, loud sound, and heat.
Activating a rubber popper underwater can also create cavitation. Upon activation, an inverted rubber popper quickly returns to its original hemispherical shape. This dynamic is known as "snap-through" instability; it has been widely studied (e.g., [21]) and is even used as an actuator for soft robotics [22]. Oftentimes, the popper can jump up to a few meters in the air. Even underwater, the popper dynamics remain very fast. Figure 1 shows the formation of toroidal cavitation in an aqueous solution upon the slamming of the popper onto the substrate. The bubble forms spontaneously within a thin gap between the popper and the substrate and lasts for \(\sim O(1)\) ms. We note that the entire process seems similar to the cavitation reported upon the underwater collision between a solid object and a substrate [23; 24; 25]. In these works, the cavitation onset was explained by either the depressurization upon rebound [26] or the high shear stress [27]. Neither mechanism may be directly applicable to this particular toroidal cavitation resulting from a snapping popper (figure 1).
In this paper, we examine the cavitation phenomena caused by an elastic popper. First, we classify three types of cavitation and discuss the mechanism of cavitation onset for each. Systematic experiments suggest that a fast water flow squeezed out from a thin gap between the popper surface and the glass substrate dominates the toroidal cavitation, as implied by a conventional cavitation number [1]. We then focus on the morphology of the bubble (i.e., the lifetime and radius) and discuss it through the two-dimensional Rayleigh-Plesset equation. We also adopt the energy balance between the inverted popper and the fully expanded cavitation and discuss the physical meaning of the control parameter to provide a theoretical framework. The present paper provides insights into cavitation mechanics that result from fluid-elastic interaction.
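Since the onset argument sketched above rests on the conventional cavitation number \(\sigma=(p_{0}-p_{v})/(\tfrac{1}{2}\rho V^{2})\), it is worth noting the flow speed scale it implies. The short Python sketch below uses the fluid properties quoted in Sec. II and treats \(\sigma\approx 1\) only as a rough, order-of-magnitude onset criterion (an assumption of this illustration, not a result of the paper).

```python
# Order-of-magnitude estimate of the flow speed needed for cavitation onset,
# using the conventional cavitation number sigma = (p0 - pv) / (0.5 * rho * V^2).
rho = 1000.0   # water density [kg/m^3]
p0 = 101e3     # ambient (atmospheric) pressure [Pa]
pv = 2e3       # vapour pressure of water [Pa]

def cavitation_number(V):
    """Cavitation number for a characteristic flow speed V [m/s]."""
    return (p0 - pv) / (0.5 * rho * V**2)

# Speed at which sigma drops to unity (rough threshold for incipient cavitation):
V_crit = (2.0 * (p0 - pv) / rho) ** 0.5
print(f"V_crit ~ {V_crit:.1f} m/s")                       # about 14 m/s
print(f"sigma at 20 m/s: {cavitation_number(20.0):.2f}")  # about 0.5
```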
## II Methods
### Preliminary Experiments and Observations
We performed a preliminary experiment to capture overall dynamics (see Appendix for the details). First, a rubber popper was mounted on a 3D-printed platform with an inner diameter of 3 cm. Since the popper was slightly lighter than water and could float, this platform was used to fix the popper's location. The initial height of the platform determines the parameter \(H\). The stand-off parameter \(H/R_{p}\) is defined as the initial popper location normalized by the popper radius \(R_{p}\). The stand-off parameter is varied as \(0.5\leq H/R_{p}\leq 1.4\). Five experiments were conducted for each condition.
We used commercially available poppers (ArtCreativity com.) of two different radii, \(R_{p}\sim 16\) mm and \(\sim\)22 mm. We assumed that Young's modulus is \(E\sim 25\) MPa based on a previous study [21]. We used deionized water as a working fluid, where the density and the vapour pressure were assumed to be \(\rho\sim 1,000\) kg/m\({}^{3}\) and \(p_{v}\sim 2\) kPa. In figure 1, we used a glycerol-water mixture (\(\approx 50\%\) by volume) for visualization purposes, whose viscosity is expected to be slightly higher than that of water (\(\sim 5\) cSt [29]). The experiments were performed in Ithaca, NY. We assumed the atmospheric pressure to be \(p_{0}\sim 101\) kPa.
The dynamics of the popper were captured by synchronized high-speed cameras. The bottom-view images were recorded by
Figure 1: (a) side-view images of the underwater popper (\(R_{p}\approx 16\) mm) approaching the glass substrate. The platform height was set at \(H\approx 9\) mm (i.e., \(H/R_{p}\approx 0.56\)). (b) bottom-view images of the same popper and cavitation dynamics. We used 50% glycerol-water mixture (\(\approx 5\) cSt) for visualization purpose. The platform height was set at \(H\approx 12\) mm (i.e., \(H/R_{p}\approx 0.75\)). The scale bars represent 10 mm. Both images were edited to enhance the brightness/clarity. The artwork was first presented in [28].
two Photron Fastcam NOVA cameras (5,000 frames per second) either directly or through mirrors. The cavitation onset was manually detected, and the bubble lifetime \(t_{\text{life}}\) and the bubble radii \(R_{\text{in}}\) and \(R_{\text{out}}\) were estimated. The side-view images were filmed by a Photron Fastcam SA-Z (5,000 frames per second).
We first show the general trend of the cavitation bubble radii \(R_{\text{in}}\) and \(R_{\text{out}}\) as a function of the platform position \(H\) (figure 2(a)). Blue and red markers show the data from the preliminary experiment, while the black ones represent the main experiment result as explained in the following section. By definition (see figure 1(b)), the outer radius \(R_{\text{out}}\) (filled markers) is always bigger than the inner one \(R_{\text{in}}\) (open markers), but their trends as a function of \(H\) remain similar for both popper sizes (\(R_{p}=16\) mm and \(R_{p}=22\) mm). It is noted that a larger popper can create a larger bubble while maintaining a similar downward trend against \(H\). The bubble lifetime \(t_{\text{life}}\) follows a similar trend against \(H\) (figure 2(b)).
The preliminary experiment revealed that the toroidal cavitation we presented in figure 1 can be observed only under limited experimental conditions. If a popper is released too close to the substrate (i.e., small \(H\)), the popper cannot be accelerated fast enough to cavitate the fluid. Indeed, we observed no or only a partial toroidal bubble at \(H=7\) mm for \(R_{p}\sim 16\) mm (not shown in figure 2). Also, the cavitation bubbles, except for the ring-type bubble (discussed in the following section), vanish if \(H\) is very large. The dimensionless popper location \(H/R_{p}\) seemed to be the primary parameter to describe the phenomena.
### Main Experiments
After the preliminary experiment, it became evident that close-up observations were necessary. We performed the three-dimensional imaging by employing a simpler setup and higher magnification (figure 3(a)) to estimate the inner shape and slamming speed of the popper right before the cavitation onset. The frame rate of the two Photron NOVA high-speed cameras was set at 6,000 frames per second. The spatial resolutions were almost identical to each other (25.56 and 27.45 pixels/mm). In this experiment, we selected one popper, whose radius was \(R_{p}\sim 16\) mm and whose surface was painted with black dots (see figure 4), as a representative case. Image pairs were cross-correlated through a DLTdwS digitizing tool [30] to estimate the deflection of the inverted popper surface. The software is available for free and can be run as a Matlab application. Dotted patterns were tracked semi-manually to compute values in not only the \(x\) and \(y\) but also the \(z\) coordinates. The vertical speed of the slamming popper, \(U_{\text{popper}}\), can be estimated as \(U_{\text{popper}}=\Delta z_{\text{popper}}/\Delta t_{1}\), where \(\Delta z_{\text{popper}}\) is the vertical displacement of the popper center over a short period of time \(\Delta t_{1}\) before cavitation onset (\(\Delta t_{1}=1\) ms). We tested 10 different \(H\) levels (from 9 mm to 18 mm, \(0.56\leq H/R_{p}\leq 1.13\)) and repeated the measurement 5 times.
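For concreteness, the velocity estimate described above reduces to a finite difference over the frames preceding cavitation onset. The sketch below illustrates this; the array `z_popper` is a hypothetical stand-in for the tracked vertical position of the popper center, and only the frame rate (6,000 frames per second) and the averaging window \(\Delta t_{1}=1\) ms are taken from the text.

```python
import numpy as np

fps = 6000.0   # frame rate used in the main experiment
dt1 = 1.0e-3   # averaging window before cavitation onset [s]

# Hypothetical trace of the popper-center height [m]; the last sample is the
# frame just before cavitation onset. Real values come from the 3D tracking.
z_popper = np.array([10.2, 9.5, 8.6, 7.4, 6.0, 4.8, 4.0]) * 1e-3

n = int(round(dt1 * fps))                # number of frames spanning dt1
dz = z_popper[-1] - z_popper[-1 - n]     # vertical displacement over dt1
U_popper = abs(dz) / dt1                 # slamming speed estimate [m/s]
print(f"U_popper ~ {U_popper:.1f} m/s")  # ~6.2 m/s for this made-up trace
```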
We also performed two side-view measurements (see figure 3(b)). One of them was to measure the flow speed right before
Figure 2: (a) Radius of the cavitation \(R_{\text{in}}\) (open markers) and \(R_{\text{out}}\) (filled markers) as a function of the platform position \(H\). Colors distinguish the popper sizes (blue for \(R_{p}=16\) mm in preliminary experiment, black for \(R_{p}=16\) mm in main experiment, and red for \(R_{p}=22\) mm in preliminary experiment). We note that we have five trials at each condition (marked by dots) to obtain the mean value (square) and standard deviation (error bar). (b) Lifetime of the cavitation \(t_{\text{life}}\) as a function of the platform position \(H\). Trend lines are \(t_{\text{life}}=-0.089H+2.118\) (R-squared value: 0.7306) for a smaller popper (a black dashed line, preliminary and main experiments combined) and \(t_{\text{life}}=-0.094H+2.687\) (R-squared value: 0.5891) for a larger popper (a red dashed line).
the cavitation onset. Silver-coated ceramic particles with a typical diameter of \(d\sim 85\)\(\mu\)m were seeded (see the left-hand side of figure 3(c)). A Photron SA-Z high-speed camera could achieve a spatial resolution of \(\sim 32\) pixels/mm while maintaining a high temporal resolution (100,000 frames per second). We kept using the same popper (\(R_{p}\sim\)16 mm) while changing the release height \(H\) over 5 different levels (from 9 mm to 18 mm, 0.56 \(\leq H/R_{p}\leq 1.13\)). Particle tracking was performed via the free software Tracker (e.g., [31]) for 0.3 ms until cavitation started. As a measure of the flow speed, the radial speed of the particle \(V_{\text{particle}}=\Delta r_{\text{particle}}/\Delta t_{2}\) was estimated, where \(\Delta r_{\text{particle}}\) is the radial displacement of the particle over a short period of time \(\Delta t_{2}\) before cavitation onset (\(\Delta t_{2}=0.1\) ms). We note that we assumed the popper and fluid dynamics to be axisymmetric. While the particle dispersion and popper dynamics are not exactly the same for each trial, the general behavior was confirmed to be similar enough based on 5 trials (see Appendix).
We also filmed side-view images of the same popper (\(R_{p}\sim\)16 mm) slamming the substrate in water without particles, to obtain a better understanding of the thickness of the fluid gap between the popper and the substrate, \(h\), and that of the bubble (see also the right-hand side of figure 3(c)). We used a Photron SA-Z high-speed camera at 100,000 frames per second and 15.6 pixels/mm. The gap \(h\) was measured one frame before the cavitation onset. Experiments were repeated 5 times for each condition (from 9 mm to 18 mm, 0.56 \(\leq H/R_{p}\leq 1.13\)).
In addition, we measured the force \(F\) that the popper can induce, to scale the kinetic energy released. We employed a force sensor (DYLY-106 S-type load cell, measurable range: up to 2 kg) and placed it under the popper, which was activated in the air (Appendix). The force sensor is connected to the amplifier (2310B Signal Conditioner Amplifier, Gain: 2.0\(\times\)10\({}^{2}\)) and the data acquisition system (National Instrument DAQ USB-6001). The data were processed through the Matlab Analog Input Recorder (sampling rate: 20,000 Hz, sampling duration: 10 s) on the PC. We used the same platform to change \(H\) while holding the popper by hand until it was activated. Once the popper is activated, it slams a 3D-printed circular plate that is mounted on top of the force sensor. The calibration information for the force sensor is shown in the Appendix.
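As a rough sketch of how the recorded load-cell signal can be reduced to a force scale, the example below converts a voltage trace to force and extracts the peak force and impulse. The Gaussian pulse and the calibration constant are hypothetical placeholders; the actual calibration is the one reported in the Appendix.

```python
import numpy as np

fs = 20000.0                        # sampling rate [Hz], as in the text
t = np.arange(0.0, 0.02, 1.0 / fs)  # a 20 ms window around the slam

# Hypothetical amplified load-cell voltage: a short spike when the popper slams
# the plate. In practice this trace comes from the Matlab Analog Input Recorder.
v = 0.5 * np.exp(-0.5 * ((t - 0.01) / 0.0005) ** 2)

calib = 20.0                        # hypothetical calibration constant [N/V]
F = calib * v                       # force trace [N]

F_peak = F.max()                    # peak slamming force [N]
impulse = np.sum(F) / fs            # rectangle-rule integral of F dt [N s]
print(f"F_peak ~ {F_peak:.1f} N, impulse ~ {impulse * 1e3:.2f} mN s")
```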
## III Results and Discussion
### Cavitation Mechanism
We observed three different bubble dynamics: the toroidal cavitation, the vortex-ring type bubble, and the transition. The first to be discussed is the toroidal cavitation that was mentioned in the preliminary observation (figure 1). As shown in figure 4(a1), bubbles form from the middle of the popper surface and then expand, maintaining a toroidal shape if \(H/R_{p}\ll\)1. The bubble formation seems to be similar to the cavity formation upon a sphere water entry [32], where the gas phase expands from the three-phase contact point. From the side, it can be clearly seen that when the bubble begins to form, there is a thin gap between the bottom of the popper and the substrate (see \(t=0.1\) ms in figure 4(a2)). We note that the bubble does not form as the symmetric torus when \(H/R_{p}\) is too small, as noted in the preliminary experiment at \(H/R_{p}\sim 0.44\). From the bottom view in the main experiment, we observed this partial cavitation when \(H/R_{p}\sim 0.56\) as well as in some of the \(H/R_{p}\sim 0.63\) trials. The fully-developed toroidal bubble was observed within the range starting from \(H/R_{p}\sim 0.63\) up to \(H/R_{p}\sim 0.88\).
We observed another unique bubble at the other end of parameter space (i.e., \(H/R_{p}\gg 1\)). The bubble formed not from the mid-surface but at the tip of the hole on the popper (see \(t=0.33\) ms in figure 4(c1)). The side-view images indicate that the vortex ring-type bubble is ejected from the hole at some translational speeds and levitates in the gap for a while (figure 4(c2)). The mechanism of bubble onset is different than the toroidal cavitation at a smaller \(H/R_{p}\) and seems to be dominated by the popper dynamics.
Figure 3: (a) Schematic of the bottom-view measurement by employing two high-speed cameras (not to scale). (b) Schematic of the side-view measurements with and without tracer particles (see also the left- and right-hand side portions of figure 3(c), not to scale). Some of the measured quantities (\(H,h,U_{\text{popper}}\), and \(V_{\text{particle}}\)) are presented in figure 3(c), while they were calculated in separate measurements.
The third regime is the intermediate one. The bubble showed a somewhat ring-like shape (figure 4(b1)) but was not as uniform as the toroidal cavitation. The destruction started to appear from \(H/R_{p}\sim 0.88\) as "cracks" on the bubble surface and becomes apparent when \(H/R_{p}\sim 1.0\). The gap \(h\) between the popper and the substrate becomes very thin and almost not visible (\(t=0.1\) ms in figure 4(b2)). The popper perhaps recovered its original shape as \(H/R_{p}\) approaches 1.0, but still generates cavitation near the center of the popper.
The three-dimensional imaging enabled us to visualize the complex inner shape of the popper while it is snapping. Figure 5 shows the estimated shape of the popper at different moments (from \(t=\)-3 ms to \(t=\)1 ms). The reference time \(t=0\) ms represents one frame earlier than the cavitation onset. When \(H/R_{p}\ll 1\) (figure 5(a)), the popper maintains either a flattened or even inverted bottom shape at the time of cavitation. The location of the extreme \(R_{\rm{rim}}\) at \(t=0\) ms, where the distance between the popper and the substrate becomes minimum, was computed and marked by a star in figure 5(a). A similar popper bottom shape was observed for \(0.56\leq H/R_{p}\leq 0.81\). As \(H/R_{p}\) increases, the popper recovers the hemispherical shape when it approaches the
Figure 4: High-speed images of cavitation phenomena upon popper slamming (\(R_{p}\sim\)16 mm). (a1) Angled bottom view of the toroidal bubble at \(H/R_{p}\sim 0.69\). The dark dots and the camera tilt are used to measure 3D shapes. The frame rate was 6,000 frames per second. (a2) Side-view images of the toroidal bubble separately filmed at the same condition with a frame rate of 100,000 frames per second. (b1 & b2) High-speed images of the bubble in the transition regime (\(H/R_{p}\sim 1\)). (c1 & c2) High-speed images of the vortex-ring type bubble formed at the tip of the popper (\(H/R_{p}\sim 1.13\)).
substrate (\(t\sim 0\) ms). A stretched popper for \(H/R_{p}\sim 1.0\) touched (or approached close enough) the substrate as suggested by the slight movement of the popper tip between \(t=0\) ms and \(t=1\) ms (figure 5(b)). When \(H/R_{p}\gg 1\), the popper is still moving when it induces the vortex-ring type bubble (figure 5(c), \(t=0\) ms and \(t=1\) ms). The popper oscillates and travels toward the substrate (see also figure 4(c2)). Comparing the traveling distance of the popper center between \(t=-1\) ms and \(t=0\) ms in figures 5(a & c), it is visible that the speed of the popper increased as \(H/R_{p}\) increased.
Regardless of the bubble dynamics type, the liquid pressure needs to be reduced significantly to cavitate. The cavitation number \(Ca\) is a powerful tool to scale the likelihood of cavitation (e.g., [1]), which compares the pressure threshold \(p_{0}-p_{v}\) and pressure drop \(\Delta p\) as
\[Ca=\frac{p_{0}-p_{v}}{\Delta p}. \tag{1}\]
Here, \(p_{0}\) and \(p_{v}\) are respectively the atmospheric (\(p_{0}=101\) kPa) and liquid vapor (\(p_{v}=2\) kPa for water) pressures. This dimensionless number tells us that the lower the Cavitation number, the higher the chance of cavitation. An appropriate representation of \(\Delta p\) may vary depending on the mechanism of cavitation [20]. In this manuscript, we adopted the conventional dynamic pressure representation to estimate it as \(\Delta p\sim\frac{1}{2}\rho V^{2}\), where \(V\) is the characteristic flow speed of this expansion flow [33].
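For a quick numerical check of equation 1, the short script below evaluates \(Ca\) for a few representative flow speeds using the water properties quoted above; the listed speeds are illustrative values rather than measurements.

```
# Minimal check of the cavitation number Ca = (p0 - pv) / (0.5 * rho * V**2),
# using the fluid properties quoted in the text (water at ambient conditions).
rho = 1000.0   # kg/m^3, water density
p0 = 101e3     # Pa, atmospheric pressure
pv = 2e3       # Pa, vapor pressure of water

def cavitation_number(V):
    """Ca based on the dynamic-pressure estimate dp ~ 0.5*rho*V^2."""
    return (p0 - pv) / (0.5 * rho * V**2)

for V in (5.0, 11.0, 14.1, 30.0):   # m/s, illustrative flow speeds
    print(f"V = {V:5.1f} m/s  ->  Ca = {cavitation_number(V):.2f}")
```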
The popper center might move fast enough to cavitate water when \(H/R_{p}\) is large enough. Circles in figure 6(a) show the speed of the popper center, \(U_{\text{popper}}\), which is estimated from the three-dimensional imaging data (see also figure 5). In general, \(U_{\text{popper}}\) increases as \(H/R_{p}\) increases. It shows a somewhat flat response at larger \(H/R_{p}\) values, perhaps because the popper achieved its maximum stretch. The popper speed reached \(U_{\text{popper}}\sim 11\) m/s, which gives a cavitation number of \(Ca\sim 1.64\). Because this \(U_{\text{popper}}\) is averaged over a relatively long time interval (\(\Delta t_{1}\sim 1\) ms), we can safely assume that the instantaneous speed is faster and might satisfy \(Ca<1\). This suggests that the vortex-ring type bubble (the third regime, figure 4(c)) occurs due to the fast snapping of the tip of the popper center.
The toroidal cavitation was observed when \(H/R_{p}\leq 0.81\), as discussed earlier, where \(U_{\text{popper}}\), i.e., a measure of the highest speed that the popper can achieve, was not fast enough to cavitate water. In such cases, however, the flow near the substrate was fast enough. The particle tracking from the side view shows that the particles below the flattened popper bottom (marked by the triangles in figure 6(a)) can achieve \(Ca<1\) (i.e., speeds \(>14.1\) m/s). The speed of the fastest particle (\(V_{\text{particle}}\)) slowed down as \(H/R_{p}\) increased. Therefore, we conjecture that the toroidal cavitation occurs because the flow in the gap between the popper and substrate (see also figures 1(a) and 4(a2)) is accelerated significantly. Here, let \(\Omega=\pi R_{\text{rim}}^{2}h\) be a cylindrical fluid volume below the flattened popper bottom. \(R_{\text{rim}}\), which is estimated as the location of the extreme in the fitting curve (see figure 5(a)), became smaller as \(H/R_{p}\) became larger for \(H/R_{p}\leq 0.81\) (marked by circles in figure 6(b)). Volume conservation (i.e., \(d\Omega/dt=2\pi R_{\text{rim}}h(dR_{\text{rim}}/dt)+\pi R_{\text{rim}}^{2}(dh/dt)=0\)) gives the scaling \(V_{r}\sim(R_{\text{rim}}/(2h))U_{\text{popper}}\). This simple scaling law indeed captures the extremely fast flow speed expected from that of the particles (figure 6(a)). While the fine scale of the gap \(h\) makes it challenging to discuss its trend, our data show the gap could be \(h\sim 0.4\) mm for \(H/R_{p}\leq 0.81\) (a gray line in figure 6(c)). This implies the flow speed can be as fast as \(V_{r}\sim(R_{\text{rim}}/(2h))U_{\text{popper}}\sim 30-50\) m/s, which is supposed to be fast enough
Figure 5: The bottom shape of the popper (\(R_{p}\sim 16\) mm) is estimated by the three-dimensional imaging at the frame rate of 6,000 frames per second. Colors distinguish the times from \(t=-3\) ms to \(t=1\) ms. The reference time \(t=0\) ms represents the one frame earlier than the cavitation onset. Solid lines are computed based on the popper height at each distance \(r\) (marked by dots) from the center by adopting the fourth-order polynomials while assuming that the popper shape is axis-symmetric. Stars represent the popper center and \(R_{\text{rim}}\) at \(t=0\) ms when available. The height of the substrate \(z=0\) is set arbitrarily but remains the same for all three panels. The experimental conditions were \(H/R_{p}\sim 0.69\) for (a), \(H/R_{p}\sim 1.0\) for (b), and \(H/R_{p}\sim 1.13\) for (c). We note that the arrows in (a) indicate the direction of the speeds of popper center \(U_{\text{popper}}\) and seeded particles \(V_{\text{particle}}\) that we present in figure 6(a) (see also figure 3(c)).
to cavitate water. We note that the particle speed \(V_{\text{particle}}\) is the measure of the lower bound of the flow speed as discussed in Appendix. The inset of figure 6(a) shows the ratio of speeds \(V^{*}=V_{\text{particle}}/U_{\text{popper}}\) as a function of \(H/R_{p}\). It shows the radial flow is enhanced with respect to the vertical popper motion at a small \(H/R_{p}\) while qualitatively obeying the scaling. Observations above agree with our hypothesis that the fast flow squeezed out from the thin gap between the popper and substrate drives the toroidal cavitation.
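The gap-flow scaling can be evaluated in a few lines; \(h\sim 0.4\) mm follows the text, whereas the values of \(R_{\rm rim}\) and \(U_{\rm popper}\) below are placeholder assumptions used only to illustrate the order of magnitude (the measured values appear in figure 6).

```
# Radial gap-flow speed from volume conservation, V_r ~ (R_rim / (2*h)) * U_popper.
# R_rim and U_popper are illustrative placeholders, not measured data.
R_rim = 8e-3       # m, radius of the thinnest-gap rim (assumed for illustration)
h = 0.4e-3         # m, gap thickness quoted in the text
U_popper = 2.0     # m/s, vertical popper speed (assumed for illustration)

V_r = (R_rim / (2.0 * h)) * U_popper
Ca = (101e3 - 2e3) / (0.5 * 1000.0 * V_r**2)
print(f"V_r ~ {V_r:.1f} m/s, Ca ~ {Ca:.2f}")   # Ca < 1 indicates cavitation is likely
```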
We note that the cavitation in the transition region might occur slightly differently from the toroidal cavitation. \(R_{\text{rim}}\) could not be computed, and thus the popper perhaps reached the substrate. The scaling for the water flow in the gap no longer holds. This is in line with our side-view visualization, which showed that the gap \(h\) for \(H/R_{p}\geq 1\) was not visible or was negligibly small (figure 6(c)). In the particle tracking, the fastest particles were found outside of the popper bottom surface (i.e., \(r>R_{\text{rim}}\), marked by the crosses in figure 6(a)). This suggests that the radial removal of surrounding fluid plays a role in cavitation onset in the transition regime, where the detailed mechanism is yet unclear.
It is also important to compare this unique phenomenon to those reported in similar settings. In a previous study, the cavitation bubbles formed immediately after the collision of a rigid sphere were spherically nucleated around the impact point [26]. In contrast, the bubbles in our study nucleate annularly without physical contact between the popper and the substrate. We do not observe bubbles in the central region, indicating that the pressure reduction is localized in the annulus region, which is perhaps assisted by the snap-through dynamics of the popper. The bubbles in the annulus then merge to form a toroidal bubble during the evolving stage (figure 1(b), \(t=0.33\ -\ 0.89\) ms). We also note that our experiment does not provide sufficient evidence to determine the contribution of stress-induced cavitation [27]. The shear stress might scale as \(\sigma\sim\mu(\partial V_{r}/\partial z)\sim\mu(V_{r}/h)\) by assuming a Couette flow. This simple scaling predicts \(\mu(V_{r}/h)\sim 50\) Pa \(\ll(p_{0}-p_{v})\) for the toroidal cavitation cases, where \(\mu\sim 1\) mPa\(\cdot\)s, \(V_{r}\sim 20\) m/s, and \(h\sim\)0.4 mm are assumed. However, it might become dominant in the transition regime (the second regime, \(H/R_{p}\sim 1\)), as the gap \(h\) can be extremely thin (figure 6(c)).
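This order-of-magnitude shear-stress estimate can be reproduced directly with the values quoted above.

```
# Couette-type shear stress estimate sigma ~ mu * V_r / h (values quoted in the text).
mu = 1e-3      # Pa*s, water viscosity
V_r = 20.0     # m/s, radial gap flow speed
h = 0.4e-3     # m, gap thickness

sigma = mu * V_r / h
print(f"sigma ~ {sigma:.0f} Pa, far below p0 - pv ~ 99 kPa")
```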
### Cavitation Morphology
We first discuss how the lifetime of the cavitation bubble is related to its size through the equation of motion (i.e, Rayleigh-Plesset equation [34; 35]). For simplicity, we make a crude assumption that bubble dynamics are two-dimensional and purely radial. We also assume that the inner radius (\(R_{\text{in}}\)) does not move within the lifetime of the bubble. These assumptions imply that we derive a scaling law for cavitation bubbles in the toroidal cavitation and the transition regimes, where the bubble dynamics
Figure 6: (a) The circles show the vertical velocity of the popper, \(U_{\text{popper}}\), measured through the three-dimensional imaging as shown in figure 5. The open circles represent the mean value of five trials at the same condition, while small dots show the individual trials. The triangles show the speed of the particles in the radial direction, \(V_{\text{particle}}\), measured through the particle tracking below the popper bottom surface. The fastest particles for a larger \(H/R_{p}\) were found outside of the popper bottom surface (\(r>R_{\text{rim}}\)), which are marked by crosses. The inset shows the ratio of speeds \(V^{*}=V_{\text{particle}}/U_{\text{popper}}\) computed from the values in figure 6(a) as a function of \(H/R_{p}\). (b) The circles show the location of the extreme of the polynomials that fit the popper bottom shape (e.g., figure 5), \(R_{\text{rim}}\). Markers and error bars respectively show the mean and standard deviation calculated based on five trials. A gray shaded region shows the maximum and minimum values for each condition. A solid line shows the first-order approximation \(R_{\text{rim}}\sim R_{p}\sqrt{1-(H/R_{p})^{2}}\). A discrepancy from the data perhaps indicates that the popper stretches more than its original length due to its elasticity. (c) The thickness of the gap \(h\) was measured through side-view imaging (marked by the diamonds). Markers and error bars respectively show the mean and standard deviation calculated based on five trials. A gray shaded region shows the maximum and minimum values for each condition. A solid line shows \(h\sim 0.4\) mm, which was calculated from the data for \(0.56\leq H/R_{p}\leq 0.81\). Note that we did not measure \(h\) when \(H/R_{p}>1\), as the vortex-ring-type bubbles occurred regardless of the distance from the substrate.
are largely restricted to two dimensions. We note that the vortex-ring-type bubble requires the translational speed to be taken into account [36], which is beyond the scope of this study. We may use the two-dimensional Rayleigh equation (e.g., [37]) in terms of the bubble radius \(R\), as
\[\left[\left(\frac{dR}{dt}\right)^{2}+R\frac{d^{2}R}{dt^{2}}\right]\log\left( \frac{R}{R_{\infty}}\right)+\frac{1}{2}\bigg{(}\frac{dR}{dt}\bigg{)}^{2}= \frac{p_{\nu}-p_{0}}{\rho}. \tag{2}\]
We neglected the influence of viscosity, surface tension, and dissolved gas. With the approximation of \(\log(R/R_{\infty})\approx 1\) and \(R\ (d^{2}R/dt^{2})+3/2(dR/dt)^{2}\approx\ (d^{2}R^{2}/dt^{2})/2\)[38], the equation above can be rewritten as
\[\frac{d^{2}R^{2}}{dt^{2}}\approx\ 2\ \bigg{(}\frac{p_{\nu}-p_{0}}{\rho}\bigg{)}. \tag{3}\]
We solve this equation in terms of the bubble collapse stage to estimate the characteristic timescale \(\tau\). We use the initial conditions \(R=R_{\text{out}}\) and \(dR/dt=0\) at \(t=0\), and then obtain
\[R^{2}=R_{\text{out}}^{2}+\left(\frac{p_{\nu}-p_{0}}{\rho}\right)\,t^{2}. \tag{4}\]
The timescale \(\tau\), that a bubble requires to shrink from \(R=R_{\text{out}}\) at \(t=0\) to \(R=R_{\text{in}}\) at \(t=\tau\), can be scaled as
\[\tau\sim\sqrt{\left(R_{\text{out}}^{2}-R_{\text{in}}^{2}\right)\ \frac{\rho}{p_{0}-p_{\nu}}}. \tag{5}\]
Note that this becomes compatible with the three-dimensional Rayleigh-type bubble lifetime if \(R_{\text{in}}=0\).
Equation 5 indeed describes the experimental data well (figure 7(a)), despite the simplifications we made. Both quantities are measured experimentally and normalized by the Rayleigh-type factor \(R_{p}\sqrt{\rho/(p_{0}-p_{\nu})}\). Colors represent the difference in the popper size \(R_{p}\). Red and blue markers show data from the preliminary experiment, while black markers represent those from the main experiment. Squares represent the mean values over the five trials, while the error bars show the standard deviation. Individual trials are marked by dots. It is visible that the data series collapse well and show an incremental trend. The best fit for these data (solid line), \(t_{\text{life}}/(R_{p}\sqrt{\rho/(p_{0}-p_{\nu})})=1.085\ [(R_{\text{out}}^{2}-R_{\text{in}}^{2})/R_{p}^{2}]^{\frac{1}{2}}\), scales the general behaviour for both popper sizes, while the prefactor differs from 2. This supports our approach of employing the simplified Rayleigh-Plesset-type model to capture the morphology of the toroidal cavitation bubble. We note that a few data points show non-zero \(t_{\text{life}}\) values while \((R_{\text{out}}^{2}-R_{\text{in}}^{2})/R_{p}^{2}=0\) in figure 7. In these cases, the bubbles formed partially but did not develop fully annular shapes at small \(H/R_{p}\), and thus the radii were not identified. We note that the bubble at \(H/R_{p}\sim 1.13\) was clearly the vortex-ring type one (figure 4(c)) and thus excluded from the plot.
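The lifetime scaling of equation 5 can be evaluated numerically as below; \(R_{\rm in}\) and \(R_{\rm out}\) are placeholder radii rather than measured values, and the prefactor 1.085 is the best-fit value from figure 7(a).

```
import math

# Lifetime scaling of the toroidal bubble, Eq. (5):
#   tau ~ sqrt((R_out**2 - R_in**2) * rho / (p0 - pv))
rho, p0, pv = 1000.0, 101e3, 2e3
R_in, R_out = 10e-3, 25e-3      # m, assumed bubble radii (illustration only)

tau = math.sqrt((R_out**2 - R_in**2) * rho / (p0 - pv))
t_life_fit = 1.085 * tau        # empirical prefactor from the best fit in figure 7(a)
print(f"tau ~ {tau*1e3:.2f} ms, fitted t_life ~ {t_life_fit*1e3:.2f} ms")
```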
To connect the bubble and the popper dynamics, we consider the energy balance between them. For simplicity, we assume that an elastic potential energy, \(Y_{\text{elastic}}\), which is stored in the indented hemispherical shell, will be fully used to form a toroidal bubble and thus be balanced with the hydrostatic potential energy of the bubble at its maximum size. A simple energy balance can yield
\[(R_{\text{out}}^{2}-R_{\text{in}}^{2})\sim\bigg{(}\frac{Y_{\text{elastic}}}{ \pi\ \delta(p_{0}-p_{\nu})}\bigg{)}, \tag{6}\]
where \(\delta\) is the characteristic thickness of the toroidal bubble. We note that we were not able to measure \(\delta\) precisely. Here, we arbitrarily set \(\delta=1\) mm, which is slightly larger than the gap thickness \(h\) but still smaller than the outer rim of the fully expanded toroidal bubble. The elastic potential energy \(Y_{\text{elastic}}\) can be approximated through the indentation force \(F\)[39] as
\[Y_{\text{elastic}}\sim\int F\ dx\sim\frac{2}{3}\frac{E\ h_{p}^{\frac{5}{2}}}{ R_{p}}e^{\frac{3}{2}}\sim\frac{2}{3}\frac{E\ h_{p}^{\frac{5}{2}}}{R_{p}}(R_{p}-H)^{ \frac{3}{2}}. \tag{7}\]
Parameters \(E\), \(h_{p}\), and \(e\) are Young's modulus, the characteristic thickness of the popper, and the depth of indentation, respectively. The indentation depth \(e\) is scaled as \(e\sim(R_{p}-H)\) based on the first-order geometrical consideration (neglecting the stretch of the popper) with the initial height of the platform, \(H\). We note that we assumed the Young's modulus \(E\) and the popper thickness \(h_{p}\) to be constants for simplicity. Hereafter, we use the Young's modulus of \(E=25\) MPa from the literature [21] and the popper thickness of \(h_{p}\approx\)3 mm measured at the rim of the cut popper, although \(h_{p}\) varies slightly along its arc. The uncertainty associated with the choice of \(\delta,E\), and \(h_{p}\) would result in the limitation of this approach and thus their influence deserves further
investigation. In equation 7, it is visible that the parameter \(H/R_{p}\) governs the elastic potential energy \(Y_{\rm elastic}\). Plugging equations 6 and 7 into equation 5 finally gives the relationship
\[t_{\rm life}\;\sim\;\sqrt{\frac{Y_{\rm elastic}}{\pi\delta(p_{0}-p_{v})}\frac{ \rho}{p_{0}-p_{v}}}\;\to\;\frac{t_{\rm life}}{R_{p}\sqrt{\rho/(p_{0}-p_{v})}} \;\sim\;\sqrt{\frac{E}{p_{0}-p_{v}}\frac{h_{p}}{\delta}\left(\frac{h_{p}}{R_{p }}\right)^{\frac{3}{2}}\left(1-\frac{H}{R_{p}}\right)^{\frac{3}{2}}}. \tag{8}\]
Equation 8 implies that the lifetime of the bubble \(t_{\rm life}\) is largely determined by the popper geometry \((h_{p}/R_{p})\), which makes sense as the popper size is a parameter that determines both the bubble size and the lifetime (figure 2). It is also implied that the location of the popper \((H/R_{p})\) is another important parameter. This is intuitive, as a larger \(H/R_{p}\) allows the popper to release more energy, which is evidenced by the incremental trend of \(U_{\rm popper}\) over \(H/R_{p}\) (figure 6). Figure 7(b) evaluates equation 8. All the parameters \(E\), \(h_{p}\), \(\delta\), and \(R_{p}\) are chosen as mentioned. The dashed line is the best-fit line \(t_{\rm life}/(R_{p}\sqrt{\rho/(p_{0}-p_{v})})=0.202[E/(p_{0}-p_{v})]^{\frac{1}{2}}(h_{p}/R_{p})^{\frac{1}{2}}(1-H/R_{p})^{\frac{3}{2}}+0.328\). Despite the uncertainties mentioned above, the model captures the incremental trend of the bubble lifetime. This suggests that the bubble lifetime scales as a function of the popper dynamics. In other words, the parameter \(H/R_{p}\), which is a measure of the intensity of the interaction between the popper and the substrate, can dominate the bubble morphology.
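Chaining equations 5-7 gives a rough lifetime estimate from the popper geometry alone. The constants follow the text (E = 25 MPa, \(h_{p}\approx 3\) mm, \(\delta=1\) mm, \(R_{p}=16\) mm), while \(H\) is an example value, so the result should be read as an order-of-magnitude sketch rather than a fit.

```
import math

# Chained estimate of the toroidal-bubble lifetime from the stored elastic energy.
E, h_p, delta, R_p = 25e6, 3e-3, 1e-3, 16e-3
rho, p0, pv = 1000.0, 101e3, 2e3
H = 11e-3                                   # m, example platform height (H/R_p ~ 0.69)

Y_elastic = (2.0 / 3.0) * E * h_p**2.5 * (R_p - H)**1.5 / R_p   # Eq. (7), J
dR2 = Y_elastic / (math.pi * delta * (p0 - pv))                 # Eq. (6): R_out^2 - R_in^2
tau = math.sqrt(dR2 * rho / (p0 - pv))                          # Eq. (5), s
print(f"Y_elastic ~ {Y_elastic:.2f} J, R_out^2 - R_in^2 ~ {dR2*1e4:.1f} cm^2, tau ~ {tau*1e3:.1f} ms")
```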
The inset in figure 7(b) compares the normalized force, \(F^{\prime}\sim\frac{1}{T}\int F\;dt\;R_{p}^{\frac{3}{2}}/(Eh_{p}^{\frac{5}{2 }})\) measured in the air, as a function of the stand-off parameter \(H/R_{p}\). We note that we employ the impulse \((\frac{1}{T}\int F\;dt)\) as a measure of the force to capture the overall behaviour. It is visible that the force decreases as \(H/R_{p}\) increases, and reaches a near-zero value at \(H/R_{p}\sim 1\), which makes sense based on the geometrical constraint. A dotted line shows a trend line \(F^{\prime}\sim 0.111(0.92-H/R_{p})^{\frac{1}{2}}\). Though a threshold 0.92 was an arbitrary choice to fit the data with the slope of 1/2, it is possibly justifiable because both the platform and the popper can deform and the threshold can differ from \(H/R_{p}=1.0\). For comparison purposes, the dashed line denotes the best fit when we restrict the threshold to be 1.0, where we found \(F^{\prime}\sim 0.136(1-H/R)^{0.83}\). In general, the downward trend of the data implies
that \(H/R_{p}\) can control the intensity of the interaction between the popper and the substrate as argued above.
## IV Conclusion
We demonstrated that the underwater slamming of a rubber popper toy against a glass substrate can induce a toroidal cavitation bubble (figures 1 and 4). A series of experiments (figure 3) indicated that the toroidal cavitation occurs due to a fast liquid flow squeezed out from a thin gap between the rubber popper and the glass substrate (figures 4-6). As the initial position of the rubber popper in the experiment (\(H/R_{p}\)) increased, the bubble dynamics transitioned from the toroidal one to the vortex-ring type one (figures 4(b & c)). The bubble lifetime (\(t_{\rm life}\)) and radii (\(R_{\rm in}\) and \(R_{\rm out}\)) for the toroidal cavitation and the transition regime were found to be interrelated through the two-dimensional Rayleigh-Plesset-type model (figure 7(a)). We also discussed an analytical framework for this uniquely formed cavitation through the energy balance between the deformed rubber popper and the fully expanded cavitation bubble. The parameter \(H/R_{p}\), which scales the elastic potential energy used to form cavitation, captured the qualitative trend of both the popper and the bubble dynamics (figure 7(b)). This paper might provide a platform for further studies on bubbles formed in complex systems involving elastic structures, which may include the mantis shrimp fist [40] or the brain [41].
## V Acknowledgments
This work was supported by NSF grant CBET-2002714 and CMMI-1238169.
## VI Author contributions
S.J. conceptualized the work; A.K. designed the experiments; A.K. and S.W. conducted the experiments and analyzed data. A.K. wrote the original draft of the manuscript, and S.W. and S.J. edited the manuscript.
## VII Data availability
Most figure files and matlab figure data are available on the Open Science Framework (DOI 10.17605/OSF.IO/YCNV8).
## Appendix A Details on the experiment
### Preliminary experiment
The preliminary experiment was performed by employing three high-speed cameras synchronized at the frame rate of 5,000 frames per second (figure 8(a)). The depth of deionized water was set at approximately 10 cm. A 3D-printed platform (see figure 8(b)) was submerged and held by hand.
The onset and the collapse of cavitation are visually detected. Thus, the presence of cavitation in this article is defined as the presence of any bubbles, which are larger than a few pixels on the image. The measured data may contain \(\pm 1\) frame uncertainty that depends on the individual. The size and the lifetime of the bubble are estimated by using the "reslice" function implemented in the freely available software ImageJ. We assumed the angle made by the two bottom cameras is small enough. We thus directly analyzed either one of these images. The experimental conditions covered were summarized in table 1. We note that we only focused on the first onset of the bubble, while secondary cavitation was found to occur.
### Main experiments
The three dimensional imaging data were correlated through multiple checkerboard images. The vertical (\(z\)) axis was determined based on the path of the popper center, which was estimated from one of the experimental data for \(H/R_{p}=1.13\). The three dimensional imaging data were also used to estimate the bubble radii \(R_{\rm in}\) and \(R_{\rm out}\). Due to the restricted angle between two high-speed cameras, we manually measured the typical bubble radii for each trial through a DLTdv8 digitizing tool [30]. We calculated radius from the popper center as \(r=\sqrt{x_{i}^{2}+y_{i}^{2}}-\sqrt{x_{0}^{2}+y_{0}^{2}}\) at each time step \(i\) (subscript 0 represents the popper center), while the \(z\)-direction values remained almost constant.
The inner radius \(R_{\rm in}\) (squares) is compared with the lowest \(r\)-direction value of the estimated popper shape, \(R_{\rm rim}\) (circles), in figure 9(a). The black squares and circles are estimated from the same data set, while red and blue squares are shown for comparison purposes. When the bubble maintains the toroidal shape (\(H/R_{p}\leq 0.81\)), the inner radius \(R_{\rm in}\) is close to, or slightly larger than, \(R_{\rm rim}\), suggesting that a fast radial flow developed within a thin gap nucleated the cavitation and expelled it outward. In the transition region, a somewhat ring-like bubble was formed while \(R_{\rm rim}\) was not well identified (\(H/R_{p}\geq 0.88\)). We also show the bubble lifetime \(t_{\rm life}\) as a function of \(H/R_{p}\) for both preliminary and main experiments (figure 9(b)). It shows that the bubble dynamics were in general well reproduced across trials. We note that the trend line here is for comparison purposes and is not based on the theoretical background discussed in the paper.
We adopted the particle tracking approach to estimate the speed of the flow within a narrow gap between the popper and the substrate, which was expected to achieve \(V_{\rm particle}\sim O(10)\) m/s to satisfy \(Ca<1\). We arbitrarily selected the traceable particles out of 5 trials for 5 different \(H\) values. Figure 10 shows the typical images of the particle tracking, where the particles tracked are marked by dots (the height \(H\) was set to \(H=13\) mm). In this example, we tracked particles at 11 different locations as marked. We note that we selected the particles floating on the glass substrate to estimate the radial flow speed. We acknowledge that the
Figure 8: (a) The glass tank is elevated and filled with \(\approx 10\) cm of deionized water. A cantilever structure including a 3D-printed platform is submerged in water to position the rubber popper. Three synchronized high-speed cameras are positioned to capture the side-view and bottom-view videos. The frame rate was set at 5,000 frames per second for all three cameras. (b) 3D-printed platform used to fix the popper height. The initial height of the platform determines the parameter \(H\).
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline \(R_{p}\) (mm) & \(H\) (mm) & \(H/R_{p}\) & Number of conditions \(\times\) trials & Experiment type \\ \hline
16 & \(7-18\) & \(0.438-1.13\) & \(7\times 5\) & Preliminary \\
22 & \(12-22\) & \(0.545-1.00\) & \(5\times 5\) & Main (bottom view) \\ \hline
16 & \(9-18\) & \(0.563-1.13\) & \(10\times 5\) & Main (particle tracking) \\
16 & \(9-18\) & \(0.563-1.13\) & \(10\times 5\) & Main (side view) \\ \hline
16 & \(9-21\) & \(0.563-1.31\) & \(6\times 5\) & Force measurement in air \\
22 & \(13-22\) & \(0.591-1.00\) & \(5\times 5\) & Force measurement in air \\ \hline \end{tabular}
\end{table}
Table 1: Summary of experimental conditions
particle motion might be affected by the viscous boundary layer over the substrate and be decelerated. This analysis reflects the lower bound of the flow speeds. As time progresses, particles move to the right, suggesting that the flow is indeed squeezed away from the popper center. This flow pattern is always the case for multiple trials with different \(H\) values.
Figure 10(e) shows the time series of the particle positions. The vertical axis represents the displacement of the particles with respect to their original location at \(t=-0.3\) ms. The numbers in the legend show the characteristic location of each particle, \(r_{\rm rep}\), which is measured at \(t=-0.05\) ms to reflect both their location and speed. Particles located far away from the popper center (green markers) travel at an almost constant speed. The traveling speed of particles decreases as \(r_{\rm rep}\) increases (see blue and green markers). Interestingly, the trajectory of particles closer to the popper center shows a slightly different trend (red markers). These particles do not move much at first (\(-0.3\) ms \(\leq t\leq-0.1\) ms) and then accelerate rapidly (\(t\geq-0.1\) ms). We set \(\Delta t_{2}=0.1\) ms to calculate \(V_{\rm particle}\). Figure 10(f) shows the particle speeds averaged over this 0.1 ms, \(V_{\rm particle}\), as a function of \(r_{\rm rep}\). The speed of outside particles decreases as the distance increases, while inner particles maintain faster speeds. Figures 10(e) & (f) suggest that the flow reaches at least \(V_{r}\sim\)18 m/s, causing cavitation.
As represented by figure 10(e), the particle speed \(V_{\rm particle}\) reaches the maximum at a certain distance \(r\) from the popper center. It is interesting to note that the peak shifts to a smaller \(r\) as \(H\) increases (figures 11(a-c)). This is intuitive from the trend shown in figure 6(b), i.e., the radius of the thinnest gap \(R_{\rm rim}\) shrinks as \(H/R_{p}\) increases. Solid and dashed gray lines in figure 11(a-c) represent the mean and standard deviation of \(R_{\rm rim}\) estimated by the three-dimensional imaging (figure 6(b)), showing a qualitative agreement. \(R_{\rm rim}\) values for \(H/R_{p}\sim 0.94\) and 1.13 were not available as discussed. They showed the reduction in speed for larger \(H/R_{p}\) and the outward shift of the peak.
### Force upon the popper snap
Calibration was performed using calibration weights to convert the signal output (\(S.O.\), in V) to force (N); the result is shown in figure 12(a). We varied the mass of the calibration weights from 0.2 to 1.2 kg. The calibration curve (a dashed line) is \(S.O.=0.1292F-2.1757\). The dimensional result based on the calibration curve is shown in figure 12(b) for both popper sizes. Two dashed lines demonstrate the downward trends. It is obvious that the impulsive force \(F\) becomes smaller as \(H\) increases. Also, the larger popper (\(R_{p}=22\) mm) produces a larger force compared to the smaller popper (\(R_{p}=16\) mm). Figure 12(b) demonstrates that the height \(H\) and the popper size \(R_{p}\) govern the impulsive force \(F\), as discussed in the main manuscript. Note that we assumed the scaling relationship between \(F\) and \(H/R_{p}\) does not change between air and underwater, in order to apply these findings to the current problem.
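Converting a recorded signal output back to a force simply inverts the stated calibration curve; the example readings below are arbitrary.

```
# Inverting the calibration curve S.O. = 0.1292*F - 2.1757 (signal in V, force in N).
def force_from_signal(signal_output_v):
    return (signal_output_v + 2.1757) / 0.1292

for so in (-2.0, -1.5, -1.0):    # V, example readings
    print(f"S.O. = {so:+.2f} V  ->  F = {force_from_signal(so):.2f} N")
```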
Figure 9: (a) A comparison between the inner radius \(R_{\rm in}\) (squares, see figure 2(a)) and the location of the extreme of the polynomials for the popper bottom shape, \(R_{\rm rim}\) (circles, see figure 5), as a function of the platform height \(H/R_{p}\). Both radii match well as long as the cavitation maintains the toroidal shape (\(H/R_{p}\leq 0.81\)). (b) The lifetime of the bubble as a function of \(H/R_{p}\) with a trend line of \(t_{\rm life}=-1.502\)\((H/R_{p})+2.186\) (R-squared value: 0.683). |
2310.19509 | SparseByteNN: A Novel Mobile Inference Acceleration Framework Based on
Fine-Grained Group Sparsity | To address the challenge of increasing network size, researchers have
developed sparse models through network pruning. However, maintaining model
accuracy while achieving significant speedups on general computing devices
remains an open problem. In this paper, we present a novel mobile inference
acceleration framework SparseByteNN, which leverages fine-grained kernel
sparsity to achieve real-time execution as well as high accuracy. Our framework
consists of two parts: (a) A fine-grained kernel sparsity schema with a
sparsity granularity between structured pruning and unstructured pruning. It
designs multiple sparse patterns for different operators. Combined with our
proposed whole network rearrangement strategy, the schema achieves a high
compression rate and high precision at the same time. (b) Inference engine
co-optimized with the sparse pattern. The conventional wisdom is that this
reduction in theoretical FLOPs does not translate into real-world efficiency
gains. We aim to correct this misconception by introducing a family of
efficient sparse kernels for ARM and WebAssembly. Equipped with our efficient
implementation of sparse primitives, we show that sparse versions of
MobileNet-v1 outperform strong dense baselines on the efficiency-accuracy
curve. Experimental results on Qualcomm 855 show that for 30% sparse
MobileNet-v1, SparseByteNN achieves 1.27x speedup over the dense version and
1.29x speedup over the state-of-the-art sparse inference engine MNN with a
slight accuracy drop of 0.224%. The source code of SparseByteNN will be
available at https://github.com/lswzjuer/SparseByteNN | Haitao Xu, Songwei Liu, Yuyang Xu, Shuai Wang, Jiashi Li, Chenqian Yan, Liangqiang Li, Lean Fu, Xin Pan, Fangmin Chen | 2023-10-30T13:08:48Z | http://arxiv.org/abs/2310.19509v1 | # SparseByteNN: A Novel Mobile Inference Acceleration Framework Based on Fine-Grained Group Sparsity
###### Abstract
To address the challenge of increasing network size, researchers have developed sparse models through network pruning. However, maintaining model accuracy while achieving significant speedups on general computing devices remains an open problem. In this paper, we present a novel mobile inference acceleration framework SparseByteNN, which leverages fine-grained kernel sparsity to achieve real-time execution as well as high accuracy. Our framework consists of two parts: (a) A fine-grained kernel sparsity schema with a sparsity granularity between structured pruning and unstructured pruning. It designs multiple sparse patterns for different operators. Combined with our proposed whole network rearrangement strategy, the schema achieves a high compression rate and high precision at the same time. (b) Inference engine co-optimized with the sparse pattern. The conventional wisdom is that this reduction in theoretical FLOPs does not translate into real-world efficiency gains. We aim to correct this misconception by introducing a family of efficient sparse kernels for ARM and WebAssembly. Equipped with our efficient implementation of sparse primitives, we show that sparse versions of MobileNet-v1 outperform strong dense baselines on the efficiency-accuracy curve. Experimental results on Qualcomm 855 show that for 30% sparse MobileNet-v1, SparseByteNN achieves 1.27\(\times\) speedup over the dense version and 1.29\(\times\) speedup over the state-of-the-art sparse inference engine MNN with a slight accuracy drop of 0.224%. The source code of SparseByteNN will be available at [https://github.com/lsw2juer/SparseByteNN](https://github.com/lsw2juer/SparseByteNN).
## 1 Introduction
Deep convolutional neural networks (CNNs) have achieved extraordinary performance in computer vision tasks and become the fundamental element and core enabler of ubiquitous artificial intelligence. With the fast growth of embedded and mobile applications, executing CNNs on mobile platforms is becoming increasingly attractive, which will improve computing power utilization, enhance data security, and reduce dependence on the network [5][21][22]. However, typical state-of-the-art(SOTA) CNNs models are computation-extensive and memory-hungry. Even mobile devices with advanced CPUs and GPUs are considered resource-constrained when executing them. Thus, achieving efficient inference with real-time performance is still a challenging task.
To achieve this goal, extensive efforts have been made for the optimization of algorithms, software, and hardware. Algorithm optimization includes efficient network backbone design and model compression. Early SOTA CNNs [36][12][37]usually have a backbone stacked by normal 3\(\times\)3 convolution (CONV) layers. Their computationally prohibitive cost makes real-time deployment on mobile devices almost impossible. [15][35][9][38] use depthwise separable convolution instead of normal CONV layers to build efficient models for mobile and embedded vision applications, which become the mainstream of mobile net
Figure 1: SparseByteNN overview
work design. In order to further reduce the redundancy of CNNs, model compression techniques, including model pruning [10][11][42][24][17][26][14][31] and model quantization [8][3] have been proposed and studied intensively for model storage reduction and computation acceleration. Weight quantization is less supported in mobile devices, especially mobile GPUs [33]. Therefore, this paper leverages model pruning as the primary model compression technique. Recent developments in pruning can be mainly divided into weight pruning [10][11][42] and filter pruning [24][31][26][14]. Weight pruning directly removes weight values at any position in the network, which is demonstrated to achieve an extremely high compression rate with high accuracy performance. However, weight pruning is not friendly for hardware or software optimization. Specifically, the compression makes few contributions to memory access saving and calculation acceleration on general CPU(SIMD) and GPU(SIMT) architecture. In contrast, filter pruning directly removes the entire filter in the convolutional neural network, which can generate hardware-efficient regular models but fails to maintain accuracy beyond moderate sparsity ratios. Especially for mobile-oriented lightweight CNNs, such as Mobilenet [15], due to the small redundancy of model parameters, filter pruning encounters severe accuracy loss problems.
We notice that the pruning granularity of weight pruning and filter pruning represent two extremes in the design space, leading to the failure of balancing model accuracy and speedup gains. Besides, these optimization algorithms are isolated and have not been co-optimized with software and hardware optimization. In this paper, we introduce a new pruning strategy called fine-grained kernel group pruning(FKGP), whose sparsity granularity is between weight pruning and structured pruning, revealing a previously unknown point in the design space. In particular, for the core operators in the mobile network, including pointwise convolution(Conv1\(\times\)1) and depthwise convolution(DwConv3\(\times\)3), we designed diverse sparse patterns, which can have a better trade-off between accuracy and hardware efficiency. Our fine-grained kernel sparsification is implemented in groups, which means that kernels in the same group are kept or removed uniformly, and kernels in the kept group have the same sparse pattern. Compared with single kernel sparse, group kernel sparse has less precision loss but is more friendly to parallel acceleration. Based on this, we propose a whole network rearrangement strategy to derive a more influential kernel group for accuracy improvements. The above fine-grained sparse patterns cannot be directly accelerated by a general inference engine, so we introduce a family of efficient sparse kernels for ARM and WebAssembly to translate reduction in theoretical FLOPs to hardware efficiency.
In summary, we propose a novel end-to-end mobile acceleration framework named _SparseByteNN_. Combined with the improved algorithm optimization strategy and sparse engine implementation, _SparseByteNN_ advances SOTA in model pruning and open source Inference engine. The overall framework of _SparseByteNN_ is shown in Fig 1. Our contributions can be summarized as follows:
1. We focus on the acceleration of mobile lightweight CNNs, and design fine-grained kernel group sparse strategies for Conv1\(\times\)1 and DwConv3\(\times\)3 respectively. The co-optimized sparse patterns achieve an extremely high compression rate with high accuracy performance. Moreover, with the high-performance sparse kernel implementation for ARM and WebAssembly, the designed patterns can recover the hardware efficiency lost due to the fine-grained patterns. For Conv1\(\times\)1, we demonstrate a geometric mean of speedups of 26.80% compared to the dense network at 30% sparsity. In particular, we achieve high-performance compression of DwConv3\(\times\)3, which can speed up by up to 49.6% at 33% sparsity.
2. We propose a whole network rearrangement strategy, which divides kernels with similar importance into a group, improves the accuracy of each group's importance evaluation and derives a more influential kernel group for accuracy improvements.
3. We propose an end-to-end model acceleration framework _SparseByteNN_, consisting of three components: a) _compression algorithm component_, which provides out-of-the-box pruning capabilities for pre-trained models b) _model conversion tool_, which converts the model IR of the training framework into Model IR of sparse engine c)_sparse inference engine_, which provides efficient inference implementation compatible with CPUs for fine-grained kernel group sparsity.
## 2 Related Works
### Model Pruning
The improvement of neural network performance is usually accompanied by an increase in resource requirements such as parameters and FLOPs. One popular approach for reducing them at test time is model pruning, which can be categorized into weight pruning and filter pruning. Weight pruning dates back to Optimal Brain Damage [23], which prunes weights based on the Hessian of the loss function. Many recent works [10][11][42] have further optimized the pruning evaluation criteria and pruning methods. For example, Han et al. [11] proposed a three-step strategy including training, pruning, and fine-tuning to remove unimportant connections and restore accuracy. Michael et al. [42] proposed a gradual pruning technique that can be seamlessly incorporated into the training process. Although it is an adaptive in-training pruning strategy, it cannot recover from premature pruning. Lin et al. [25] proposed a dynamic allocation of sparsity patterns and incorporated feedback signals to reactivate pre
maturely pruned weights. Weight pruning focuses on pruning fine-grained weights of filters, leading to unstructured sparsity in models, which cannot be directly accelerated by general computing libraries. In contrast, filter pruning targets pruning the entire filter, which achieves structured sparsity. [16] proposed to explore sparsity in activations for network pruning. [17] uses the \(l_{2}\)-norm to select unimportant filters and explores the sensitivity of layers for filter pruning. [26] introduces sparsity on the scaling parameters of batch normalization (BN) layers to prune the network. [31] proposes a Taylor expansion-based pruning criterion to approximate the change in the cost function induced by pruning. To reduce dependence on pre-trained models and improve model capacity, [13][14] proposed soft filter pruning, which enables the pruned filters to be updated when training the model after pruning. Although the pruned model obtained by filter pruning can take full advantage of high-efficiency Basic Linear Algebra Subprograms (BLAS) libraries to achieve better acceleration, it fails to maintain accuracy beyond moderate sparsity ratios. The pruning granularities of weight pruning and filter pruning represent two extremes in the design space, causing them to fail to balance accuracy and acceleration gains.
Recently, some works have noticed this problem and proposed pattern-based or block-based weight pruning schemes with compiler-based optimizations [33][32][30]. Similar to our work, their pruning granularity lies between weight pruning and filter pruning to balance accuracy and inference efficiency. [30] describes a 2:4 pattern pruning scheme, and the NVIDIA Ampere architecture introduces Sparse Tensor Cores to provide dedicated acceleration capabilities for this sparse mode. Furthermore, PatDNN [33] uses the Alternating Direction Method of Multipliers (ADMM) and a pattern-based weight pruning schema to solve for a fine-grained sparse model and performs compiler optimizations to achieve real-time mobile inference. PatDNN mainly optimized the performance of Conv3\(\times\)3, but the principal layers of mobile networks represented by MobileNet-v1 [15] are Conv1\(\times\)1 and DwConv3\(\times\)3, which means that it suffers difficulties when generalized to mobile networks. In contrast, SparseByteNN focuses on the optimization of mobile networks, designs customized 4\(\times\)4 and 16\(\times\)1 pattern-based sparsity for Conv1\(\times\)1 and DwConv3\(\times\)3 respectively, and replaces compilation optimization with expert-level manual optimization, achieving higher performance.
### Acceleration Frameworks on Mobile
On-mobile neural network deployment relies on the performance of the inference framework, so on-mobile DNN inference frameworks have attracted more and more attention [27]. Representative DNN acceleration frameworks, such as TensorFlow-Lite [6], Pytorch-Mobile [28], and TVM [2], are designed to support inference acceleration of dense neural networks. Although these inference frameworks already incorporate several graph optimization and compilation optimization strategies, including layer fusion, constant folding, and auto-tuning, they lack the ability to further accelerate sparse models. Similar to our work, MNN [19] recognizes the potential of sparse speedup and supports block-based sparse speedup based on expert hand-crafted optimization, with a sparse granularity of N\(\times\)1. In order to improve the optimization efficiency, PatDNN [33] and Auto-PatchNN [41] realize sparse model acceleration based on compiler optimization. Although these frameworks support sparse acceleration, they support limited types of sparse operators and suffer difficulties when generalized to DNN layers other than Conv3\(\times\)3 layers (PatDNN) and Conv1\(\times\)1 layers (MNN). In Section 4.2, we will discuss this issue and compare performance.
## 3 Method
In this section, we first introduce the mathematical representation of FKGP in Section 3.1. Then we introduce the sparsity patterns of Conv3\(\times\)3, Conv1\(\times\)1, and DwConv3\(\times\)3 in Section 3.2, and the co-optimized implementation in Section 3.3. In Section 3.4, we describe a whole network rearrangement strategy, which can improve the performance of
Figure 2: Illustration of the implementation form of fine-grained kernel group sparsity on core operators
the sparse model. Finally, we introduce the overall framework of SparseByteNN in Section 3.5.
### Preliminaries
For an L-layer pre-trained model, the weights and biases for the i-th layer are denoted by \(W^{i}\in\mathbb{R}^{n^{i}\times kh^{i}\times kw^{i}\times c^{i}}\) and \(B^{i}\in\mathbb{R}^{n^{i}}\), where \(n^{i}\), \(kh^{i}\), \(kw^{i}\) and \(c^{i}\) stand for the output channel, kernel height, kernel width, and input channel respectively. The input for the i-th layer is denoted by \(INPUT^{i}\in\mathbb{R}^{ih^{i}\times iw^{i}\times c^{i}}\), where \(ih^{i}\), \(iw^{i}\) stand for the input height and input width. To obtain a sparse model, a general approach is to prune part of \(W^{i}\), i.e., to set some of its weights to zero. This process can be implemented by applying a mask \(M^{i}\in\{0,1\}\) to the weights, resulting in a sparse model \(\bar{W}^{i}=W^{i}\odot M^{i}\). The quality of pruning is defined as the parameter \(\delta\in[0,1]\) such that \(\delta=\frac{\|W^{i}-\bar{W}^{i}\|}{\|W^{i}\|}\). Pruning without information loss corresponds to \(W^{i}=\bar{W}^{i}\), i.e., \(\delta=0\). Thus, the pruning problem can be summarized as minimizing \(\delta\) at the pruning ratio \(\rho\) with the optimal mask,
\[\operatorname*{arg\,max}_{M^{i}}\|W^{i}\odot M^{i}\|,\quad s.t.\frac{\|M^{i} \|_{0}}{K}=1-\rho \tag{1}\]
For weight pruning, the weights can be removed at random locations. In this case, the mask tensor \(M^{i}\) has the same shape as \(W^{i}\), i.e., \(\mathbb{R}^{n^{i}\times kh^{i}\times kw^{i}\times c^{i}}\), so that \(K=n^{i}\times kh^{i}\times kw^{i}\times c^{i}\). For filter pruning, the sparsity granularity is the entire filter. Thus, each mask \(M^{i}\) has the shape \(\mathbb{R}^{n^{i}}\) and \(K=n^{i}\). To facilitate the implementation of pattern-based pruning, we reformat \(W^{i}\in\mathbb{R}^{n^{i}\times kh^{i}\times kw^{i}\times c^{i}}\) as an \(n^{i}\times c^{i}\) array of kernels \(\overline{W}^{i}\). Each member of \(\overline{W}^{i}\) represents a kernel of shape \(kh^{i}\times kw^{i}\). Semantically, a kernel is a connection channel between the input feature map and the output feature map. For our FKGP, we further group the kernels along the input channel \(c^{i}\) and output channel \(n^{i}\) dimensions, and each group is sparsified or removed as a whole. Following the definitions of PatDNN [33], we refer to the two cases of fixed-pattern sparsity and complete removal as pattern group pruning and connectivity group pruning, respectively, such that
\[\operatorname*{arg\,max}_{M^{i}}\;\sum_{i=0}^{\frac{n^{i}}{g_{o}}}\sum_{j=0}^{\frac{c^{i}}{g_{i}}}\|W^{i}_{i*g_{o}:(i+1)*g_{o},\,j*g_{i}:(j+1)*g_{i},:,:}\odot M^{i}_{i*g_{o}:(i+1)*g_{o},\,j*g_{i}:(j+1)*g_{i},:,:}\|,\quad s.t.\;\frac{\|M^{i}\|_{0}}{K}=1-\rho \tag{2}\]
where \(g_{o}\) and \(g_{i}\) represent the output-channel and input-channel group sizes respectively, and \(\rho\) is the sparsity rate.
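The connectivity group pruning of equation 2 can be sketched in a few lines; the weight shape, group sizes, and sparsity ratio below are illustrative, and the snippet only constructs the pruning mask from per-block \(l_{1}\)-norms.

```
import numpy as np

# Sketch of connectivity group pruning (Eq. 2): kernels are grouped into g_o x g_i
# blocks over the (output, input) channel dimensions, blocks are scored by their
# l1-norm, and the lowest-scoring fraction rho of blocks is removed.
def group_prune_mask(W, g_o=4, g_i=4, rho=0.3):
    n, c = W.shape[:2]                                   # W: (n, c, kh, kw)
    blocks = W.reshape(n // g_o, g_o, c // g_i, g_i, -1)
    scores = np.abs(blocks).sum(axis=(1, 3, 4))          # l1-norm per block
    k = int(round(scores.size * rho))                    # number of blocks to remove
    keep = np.ones_like(scores, dtype=bool)
    if k > 0:
        drop = np.argsort(scores, axis=None)[:k]         # indices of the weakest blocks
        keep.flat[drop] = False
    mask = np.repeat(np.repeat(keep, g_o, axis=0), g_i, axis=1)   # back to (n, c)
    return mask[:, :, None, None].astype(W.dtype)        # broadcastable over (kh, kw)

W = np.random.randn(64, 32, 1, 1)                        # e.g. a Conv1x1 weight
M = group_prune_mask(W, g_o=4, g_i=4, rho=0.3)
print("kept fraction:", M.mean())
```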
### Fine-grained Kernel Group Sparsity
As shown in Fig 2, our proposed FKGP strategy designs a customized sparse strategy for Conv3x3, Conv1x1, and DwConv3x3.
**Conv3x3** is less computationally efficient than depthwise separable convolution, so it is not the core operator of mobile lightweight CNNs. For example, MobileNet-v1 [15] contains one layer of Conv3x3, and its calculation amount is only 1.91%. We focus on the acceleration of mobile lightweight CNNs so that the sparse mode of Conv3x3 is not carefully designed but directly adopts the 5:9 sparse mode proposed by PatDNN [33]. As shown in Fig 2a, each kernel is either completely removed called connectivity pruning, or partially removed, and the remaining weights form specific kernel patterns called pattern pruning. Every kernel reserves 4 non-zero weights out of the original 3 \(\times\) 3 kernel, which contains the central weight. PatDNN [33] elaborates on kernel patterns with more details. In order to better balance speed and accuracy, we regard \(g_{i}\times g_{o}(4\times 4)\) kernels as a group, and each group is considered as a whole.
**Conv1x1** and Fc layer are commonly transformed into GEMM, i.e., the multiplication of a weight matrix and an input matrix. Each kernel of these layers contains only one weight, and only connectivity sparsity exists in these layers. As shown in Fig 2c, we divide the weight tensor into \(\frac{n^{i}}{g_{o}}\times\frac{c^{i}}{g_{i}}\) blocks with equal size(\(g_{i}\times g_{o}\)) and apply connectivity group pruning. The importance of each block is evaluated by \(l_{1}\)-norm and the \(\frac{n^{i}}{g_{o}}\times\frac{c^{i}}{g_{i}}\times\rho\) blocks with the lowest importance are removed. The value of \(g_{i}\times g_{o}\) needs to comprehensively consider the model accuracy and acceleration friendliness. The larger the value is, the less sparse patterns exist in the model, which is not conducive to maintaining accuracy but more conducive to acceleration. We perform 30% connectivity group pruning on Conv1x1 in MobileNet-v1(ImageNet) to obtain accuracy with different group sizes. As shown in Table 1, there is only a slight loss of accuracy when the group size is no larger than \(4\times 4\). When the group size is further increased to \(8\times 8\), the accuracy loss increases by 3.72
\begin{table}
\begin{tabular}{c c c c c} \hline \hline Group size \(g_{i}\times g_{o}\) & 2x2 & 4x4 & 8x8 & 16x16 \\ \hline Top1-Acc (\%) & 72.658 & 72.488 & 72.090 & 72.000 \\ \hline Top1-Acc \(\uparrow\) (\%) & +0.024 & -0.146 & -0.544 & -0.634 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Sparse model accuracy under different group size configurations and the accuracy of the baseline model is 72.634%.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline Group size \(g_{i}\times g_{o}\) & 1x4 & 1x8 & 1x16 & 1x32 \\ \hline Top1-Acc (\%) & 72.838 & 72.866 & 72.954 & 72.896 \\ \hline Top1-Acc \(\uparrow\) (\%) & +0.204 & +0.232 & +0.320 & +0.262 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Sparse model accuracy under different group size configurations and the accuracy of the baseline model is 72.634%.
times. Since the group size of \(4\times 4\) is also computationally friendly (discussed in Section 3.3), we finally set the group size of Conv1x1 to \(4\times 4\).
**DwConv3x3** is one of the components of depthwise separable convolution, which is difficult to compress. Due to the loss of accuracy, the previous similar work [33] did not realize the pattern-based pruning of the DW layer. On the contrary, we propose 3:9 sparse patterns for the DwConv3x3 layer, which can achieve near-lossless pattern pruning. As shown in Fig 2, each kernel removes 3 weights from the original \(3\times 3\) kernel, which are taken from the first and third columns and distributed in three rows, in which case there are \(2^{3}\) potential kernel patterns. We regard \(g_{i}\times g_{o}\) kernels as a group and each group selects the best kernel pattern by maximizing the \(l_{1}\)-norm after sparse. It should be noted that the input channel of depthwise convolution is equal to 1, so a single kernel is essentially the entire filter, which means that the connectivity pruning will degenerate into filter pruning. In order to maintain accuracy, we only perform pattern group pruning for DwConv3x3, resulting in 33% sparsity. As For DwConv3x3, the calculation principle determines that \(g_{i}\) is fixed at 1. To determine the best \(g_{o}\) value, we study the impact of different \(g_{o}\) on the accuracy when only pruning the DwConv3x3. As shown in Table 2, DwConv3x3 pruning is not sensitive to the group size. Considering the calculation friendliness, we set \(g_{o}\) equal to 16.
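A minimal sketch of this per-group pattern selection is shown below; the random kernels are placeholders, and the snippet only illustrates how the \(2^{3}\) candidate patterns are enumerated and scored by \(l_{1}\)-norm for a group of 16 kernels.

```
import numpy as np
from itertools import product

# 3:9 pattern pruning for DwConv3x3: in each 3x3 kernel, one weight per row is
# removed, taken from either the first or the third column (2**3 = 8 patterns).
# A group of g_o = 16 kernels shares the pattern maximizing the retained l1-norm.
PATTERNS = []
for cols in product((0, 2), repeat=3):          # which column to drop in each row
    m = np.ones((3, 3))
    for row, col in enumerate(cols):
        m[row, col] = 0.0
    PATTERNS.append(m)

def best_pattern_mask(kernels):
    """kernels: (g_o, 3, 3) depthwise kernels that share one pattern."""
    scores = [np.abs(kernels * m).sum() for m in PATTERNS]
    return PATTERNS[int(np.argmax(scores))]

group = np.random.randn(16, 3, 3)
print(best_pattern_mask(group))                 # 6 ones and 3 zeros per kernel
```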
### Co-design Inference Engine
Unstructured Conv1\(\times\)1 pruning provides unique accuracy advantages compared with structured sparsity. However, the discontinuous weights pose a problem for vectorized parallel computing and lead to increased cache misses. Random connectivity in both the \(n\) and \(c\) dimensions results in negligible or even negative performance effects due to irregular memory accesses. To guarantee the effectiveness of random pruning, the sparse patterns are co-designed with the inference kernels described in this section. The results are shown in Table 3.
In our study, the pruned weight in each row of the kernel is taken from either the first or the third pixel. From the perspective of memory access, the input data corresponding to the middle weight positions can be reused twice for every two output values, as shown in Fig 3(a). In contrast, pruning the middle pixel, as shown in Fig 3(b), cannot reuse the input data, which increases the time spent on memory accesses to cache and DDR. The 8 sparsity patterns used in our study reduce memory accesses by 25% compared to a sparsity scheme that prunes the middle pixel.
The pseudo-code of the depthwise computing process is shown in Algorithm 2. The calculation in this study uses a sliding-window method, and the input data are packed in an NHWC16 format to match the sparsity-pattern computation. We treat 16 output channels as a group; output channel counts that are not multiples of 16 use groups of 4 or 8 instead. In addition, to minimize the training loss of the depthwise layers, this study further adds a full-1 pattern, i.e., a dense mode. The number of output channels assigned to the full-1 pattern can be dynamically increased according to the specific network's training accuracy, while still following the 16-block rule.
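A rough sketch of such channel-blocked packing is given below; the exact dimension ordering used by the engine is an assumption here (channels split into contiguous blocks of 16), so this only illustrates the idea.

```
import numpy as np

# Illustrative NHWC16-style packing: split the channel axis into blocks of 16 so
# that each 16-channel block (one sparsity pattern per block) is contiguous.
# Assumes C is a multiple of 16 for simplicity.
def pack_nhwc16(x):
    n, h, w, c = x.shape
    assert c % 16 == 0
    return np.ascontiguousarray(x.reshape(n, h, w, c // 16, 16).transpose(0, 3, 1, 2, 4))

x = np.random.randn(1, 8, 8, 32).astype(np.float32)
print(pack_nhwc16(x).shape)    # (1, 2, 8, 8, 16)
```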
```
Data: Weight \(W\in\mathbb{R}^{oc\times kh\times kw\times ic}\), Input feature map \(I\in\mathbb{R}^{n\times ih\times iw\times ic}\) and PatternInfo
Result: Output feature \(O\in\mathbb{R}^{n\times oh\times ow\times oc}\)
Set the block sizes corresponding to the \(ih\ast iw\) dimension and the \(oc\) dimension to \(M_{p}\) and \(N_{p}\);
for \(i\gets 1\) to \(ih\ast iw/M_{p}\) do
  for \(j\gets 1\) to \(oc/N_{p}\) do
    compute the \(M_{p}\times N_{p}\) output block \(O_{i,j}\);
```
**Algorithm 1** Block-size Sparsity of Depthwise
In this study, a 2x16 block of output data is calculated in every computing cycle, where 2 and 16 refer to the feature-map dimension and the output-channel dimension, respectively. Considering the usage of registers, as shown in Fig 3(a), the weight data requires a 6x16 block which occupies 12 NEON registers. The input data requires a 9x16 block which occupies 18 NEON registers. The output data is 2x16 and occupies 4 NEON registers. We reuse two registers for the input data and keep the remaining data in dedicated registers.
For the pruning of the Conv3x3 operator, this study applies the method proposed in PatDNN [33], in which 56 sparsity patterns are implemented. The same sparsity pattern is shared by several adjacent filters (4, 2, or 1), selected dynamically during network training to balance training accuracy and performance. In this paper, we mainly focus on weight pruning for lightweight networks, so the sparsity of the Conv3x3 operator is not described in detail.
```
Data: Weight \(W\in\mathbb{R}^{oc\times kh\times kw\times ic}\), Input feature map \(I\in\mathbb{R}^{n\times ih\times iw\times ic}\) and PatternInfo
Result: Output feature \(O\in\mathbb{R}^{n\times oh\times ow\times oc}\)
Set the block size corresponding to the \(ow\) dimension to \(ow_{p}\), and let \(N=oc\), \(N_{p}=16,8,4\);
for \(nBlock\gets 1\) to \(N/N_{p}\) do
  \(curSparsityPattern\leftarrow\) PatternInfo[\(nBlock\)];
  if \(curSparsityPattern==0\) then
    for \(ohIndex\gets 1\) to \(oh\) do
      for \(owBlock\gets 1\) to \(ow/ow_{p}\) do
        compute every \(N_{p}\) channels of the \(ow_{p}\times N_{p}\) output block;
```
**Algorithm 2** Block-size Sparsity of Depthwise
### Whole network rearrangement
For pattern group pruning and connectivity group pruning, we observed that when the importance of the kernels within a group differs greatly, the group-level evaluation becomes inaccurate, meaning that relatively important kernels are affected
Figure 5: Illustration of network rearrangement. We rearrange the weight tensor by filter rearrangement index and channel rearrangement index to preserve weight magnitude.
by unimportant kernels. This observation motivates us to change the layout of the weight tensor before pruning to reduce the importance variance of the kernels within a group. As shown in Fig 5, we propose the whole network rearrangement strategy to derive more influential blocks for accuracy improvements. When the example matrix (top-left) is pruned by 50% with a group size of \(2\times 2\), it results in a sparse weight (top-right) with an \(l_{1}\)-norm of 57. If we change the order of the input channel dimension and output channel dimension (bottom-left), the resulting sparse weight (bottom-right) has a total weight magnitude of 70. In order to avoid changing the output of the network, the rearrangement index needs to be propagated throughout the network graph, which means that the filter rearrangement index calculated by the "parent" layer is used as the channel rearrangement index of its "children" layers. Searching for good filter permutations for the target layer is challenging because for a layer with \(n^{i}\) filters there exist \(n^{i}!\) permutations, which is intractable for large \(n^{i}\). However, the number of unique permutations can be reduced to \(\frac{n^{i}!}{(g_{o}!)^{n^{i}/g_{o}}\,(n^{i}/g_{o})!}\) in group pruning, because neither the order of filters within a group nor the order of groups in the matrix affects the accuracy improvement. Each unique permutation represents \((g_{o}!)^{n^{i}/g_{o}}\,(n^{i}/g_{o})!\) permutations that all lead to the same sparse-matrix \(l_{1}\)-norm. To quickly search and evaluate unique permutations, we define a canonical form: a permutation is unique only if the filters within each of its groups are in sorted order and the groups are sorted with respect to each other (e.g., by the first index value of each group). We then use the bounded regression method [34] to solve the above problem quickly.
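A toy NumPy sketch of the idea behind this search is given below; it brute-forces all row orderings of a tiny matrix, which is only feasible at this scale, whereas the bounded-regression search of [34] is used in practice. All names are illustrative.

```
import numpy as np
from itertools import permutations

def group_prune_l1(w, g_o, g_i, sparsity=0.5):
    """l1-norm kept after pruning the lowest-norm g_o x g_i kernel groups."""
    oc, ic = w.shape
    blocks = w.reshape(oc // g_o, g_o, ic // g_i, g_i)
    scores = np.abs(blocks).sum(axis=(1, 3)).ravel()
    k = int(scores.size * sparsity)
    return np.sort(scores)[k:].sum()          # drop the k weakest groups

def best_filter_permutation(w, g_o=2, g_i=2, sparsity=0.5):
    """Exhaustive search over filter (row) orderings of a small weight matrix,
    keeping the ordering that preserves the most l1-norm after group pruning."""
    best_perm, best_norm = None, -np.inf
    for perm in permutations(range(w.shape[0])):
        kept = group_prune_l1(w[list(perm)], g_o, g_i, sparsity)
        if kept > best_norm:
            best_norm, best_perm = kept, perm
    return best_perm, best_norm

w = np.arange(1, 17, dtype=np.float32).reshape(4, 4)   # toy 4x4 weight
perm, kept = best_filter_permutation(w)
print("best row order:", perm, "l1 kept:", kept)
```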
### Overview of SparseByteNN Framework
As shown in Fig 1, the classic neural network pruning process consists of three steps: training from scratch, pruning, and fine-tuning. Before the second step (pruning), we first rearrange the entire network to further reduce the impact of pruning. Then, we apply FKGP pruning to obtain the sparse model and use fine-tuning to recover its accuracy. Similar to NNI [29], we encapsulate the above process into an algorithm compression component to provide users with out-of-the-box sparse fine-tuning capabilities. The model conversion tool converts the ONNX model exported by the sparse fine-tuning process into a sparse model IR [39]. Finally, based on the sparse model IR, the sparse inference engine completes the forward pass on the target hardware platform.
Figure 6: Acceleration performance of Conv1x1 under different configurations. The experiment is conducted on a Qualcomm 855 CPU with a 30% sparse rate. Best viewed in color.
Figure 7: Acceleration performance of DwConv3x3 under different configurations. The experiment is conducted on a Qualcomm 855 CPU with a 33% sparse rate. Best viewed in color.
## 4 Experiments
In this section, we first show that SparseByteNN offers a better accuracy-speed trade-off than filter pruning, weight pruning, and other sparse engines in the industry through comparisons along different dimensions. Then, we demonstrate the acceleration benefit of DwConv3x3 and Conv1x1 pruning, and the accuracy gain brought by the whole network rearrangement, through a series of ablation studies. Finally, we extend FKGP to WebAssembly and achieve remarkable performance.
### Implementation Settings
In order to make the comparison fair and sufficient, we use the filter pruning and weight pruning algorithms contained in NNI [29] to construct comparable experiments. The sparse rate is the real sparse rate of the entire network, which accounts for inter-layer coupling. All the experiments based on resnet20 [18] on CIFAR10 [20] use the same hyperparameters, in which epochs, batch size, learning rate, and weight decay are set to 250, 128, 1e-2, and 1e-5 respectively, and the optimizer and scheduler are sgd [1] and mstep respectively. The other experiments on ImageNet [4] are based on TIMM [40]. The pre-training and sparse training of MobileNet-v1 use the same hyperparameters, in which epochs, batch size, learning rate, and weight decay are 300, 128, 0.045 and 1e-5 respectively, and the optimizer and scheduler are rmsproptf [7] and stepdecay respectively, where decay-epochs is set to 2.4 and decay-rate is set to 0.973.
### Performance Comparison
Firstly, we compare the performance of our FKGP with weight pruning and filter pruning on resnet20, which is composed of stacked Conv3x3 operators. As shown in Fig 8, this comparison covers state-of-the-art filter pruning, including Apoz [16], Fpgm [14], L1 [17], L2 [17], ActivationMeanRank [31], ActivationTaylor [31], and the weight pruning of Agp [42]. Fig 8 shows that filter pruning suffers the largest performance degradation and that the FKGP strategy surpasses all filter pruning methods. Although the accuracy of weight pruning at the same FLOPs exceeds that of FKGP, the former cannot obtain actual acceleration benefits. We measured the actual latency of the three types of pruning algorithms on mobile CPUs and found that FKGP exhibits a better speed-accuracy trade-off. Specifically, at a classification accuracy of 90.6%, FKGP achieves a 34% (0.91 ms vs 1.34 ms) acceleration compared to FPGM on Qualcomm 855 and 29.6% (6.16 ms vs 8.75 ms) on Qualcomm 625.
Then, we compare the accuracy and latency on a lightweight neural network consisting of only Conv1x1 and DwConv3x3. Since there is no obvious performance difference between different structured pruning algorithms, we choose FPGM [14] as a representative. As shown in Table 3, compared with the baseline MobileNet-v1 [15], when the pruning rate is 20%, FKGP speeds up inference by 13% and the accuracy increases by 0.264%. When the pruning rate is 40%, it speeds up inference by 29.6% while the accuracy decreases by only 0.78%. Compared with filter pruning, FKGP has an accuracy advantage of 0.878% at a comparable latency of about 25.3 ms.
Finally, to further illustrate the acceleration advantages of SparseByteNN, taking MobileNet-v1 as the baseline network and Qualcomm 855 as the test platform, we compare the performance of SparseByteNN against the SOTA on-mobile inference framework MNN [19]. Since MNN only supports sparse Conv1x1, for the fairness of the comparison, SparseByteNN turns off the sparse acceleration of DwConv3x3. As shown in Table 4, SparseByteNN is 3.21% faster than MNN for the dense model. As the sparse rate increases, the performance advantage of SparseByteNN becomes more pronounced; when the sparse rate is 30%, the advantage reaches a maximum of 22.30%. Based on the identical experimental configuration, we obtained the accuracy at this sparse rate. The results show that although SparseByteNN has a larger sparse granularity, the accuracy
\begin{table}
\begin{tabular}{c c c c} \hline \hline \multirow{2}{*}{Methods} & \multirow{2}{*}{FLOPs(M)} & Top-1 & Latency \\ & & Acc(\%)\(\uparrow\) & (ms) \\ \hline baseline & 568 & 0 & 32.22 \\ \hline WP [42] & 339 & +0.022 & 32.22 \\ \hline \multirow{5}{*}{FP [14]} & 511 & -0.523 & 29.34 \\ & 449 & -1.102 & 25.34 \\ & 397 & -2.043 & 22.41 \\ & 339 & -3.021 & 19.33 \\ & 284 & -5.830 & 16.04 \\ \hline \multirow{5}{*}{FKGP} & 514 & +0.568 & 30.56 \\ & 460 & +0.264 & 27.96 \\ \cline{1-1} & 406 & -0.224 & 25.29 \\ \cline{1-1} & 352 & -0.782 & 22.68 \\ \cline{1-1} & 299 & -2.386 & 19.68 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Performance comparison on MobileNet-v1(ImageNet) and the accuracy of the baseline model is 72.634%
Figure 8: Performance comparison on resnet20(CIFAR10). 855 and 625 are Qualcomm chip models respectively
drop is close to that of MNN (0.224% vs 0.213%). The experimental results and MNN's technical documentation show that MNN achieves a significant acceleration over the dense model only when the sparse rate exceeds 30%, and that it is difficult to generalize to DNN layers other than Conv1x1.
### Ablation study
#### 4.3.1 Effectiveness of Sparse Patterns
One of our main contributions is to design different pattern-based group pruning strategies for Conv1x1 and DwConv3x3 respectively, taking into account both accuracy and speed. We will prove the effectiveness of the fine-grained sparse model based on experiments.
**Conv1x1:** As described in Section 3.2 and Section 3.3, we only perform connectivity group pruning with a group size of 4x4 on Conv1x1. Table 3 and Table 4 show that SparseByteNN has performance advantages over SOTA pruning algorithms and sparse inference engines when only Conv1x1 pruning is considered. In order to further illustrate the acceleration performance of the Conv1x1 operator, we conducted a comprehensive benchmark on common input configurations. As shown in Fig 6, when the sparsity rate is 30%, the speedup of a single operator ranges from 11.50% to 39.70%, with a median of 26.80% and an average of 25.38%. Test results at more sparsity rates can be found in the appendix.
**DwConv3x3:** To balance accuracy and speed, we only perform pattern group pruning with a group size of 1x16 for DwConv3x3. As shown in Table 5, compared with the 5:9 sparse patterns proposed by PatDNN, the 3:9 sparse patterns we designed for DwConv3x3 have lower accuracy loss and achieve approximately lossless pruning. Specifically, on MobileNet v1 to v3, the accuracy of 3:9 sparse exceeds 5:9 sparse by 0.52%, 0.636% and 0.73% respectively. Furthermore, we perform benchmarks under common configurations to demonstrate the acceleration performance of a single operator. It should be noted that 3:9 sparse means that all DwConv3x3 have a fixed 33% sparse rate. As shown in Fig 7, when the sparsity rate is 33%, the median speedup of a single operator is 24.8%, and the average is 25.5%. Under the configuration of a large feature map and small channels, the speedup can reach up to 49.6%.
#### 4.3.2 Effectiveness of Network Rearrangement
In order to derive more influential blocks in group pruning, we propose the whole network rearrangement strategy. Table 6 presents the experimental results, which are conducted to further explore the impact of rearrangement under different Conv1x1 sparse rates on MobileNet-v1(ImageNet). From Table 6, we observe that the whole network rearrangement can effectively improve network accuracy.
### WebAssembly Acceleration
The above experimental results demonstrate the excellent performance of our proposed fine-grained kernel group sparsity on ARM CPUs. To illustrate the generality of this strategy, we implemented efficient sparse kernels for Conv1x1 and DwConv3x3 based on WebAssembly, which can be used to accelerate neural network applications on the Web. As shown in Fig 9, when the input feature maps are 64x64, 96x96, and 128x128, under the channel configurations commonly used on the web side, a sparsity of 30% achieves average speedups of 22.3%, 24.15%, and 27.7%, respectively. Test results at more sparsity rates can be found in the appendix.
\begin{table}
\begin{tabular}{c c c c} \hline \hline \multicolumn{1}{c}{MobileNet-v1} & Sp 0.1(\%) & Sp 0.2(\%) & Sp 0.3(\%) \\ \hline w/o Rearrangement & 72.692 & 72.236 & 71.74 \\ \hline Rearrangement & 72.970 & 72.371 & 72.124 \\ \hline \hline \end{tabular}
\end{table}
Table 6: Performance studies with and without network rearrangement. Sp represents the sparse rate of Conv1x1, while DwConv3x3 is sparse.
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline Framework & Sp 0(ms) & Sp 0.1(ms) & Sp 0.2(ms) & Sp 0.3(ms) & Sp 0.5(ms) \\ \hline MNN & 33.29 & 33.37 & 33.48 & 32.55 & 20.23 \\ \hline SparseByteNN & 32.22 & 30.56 & 27.96 & 25.29 & 19.68 \\ \hline Speedup & 3.21\% & 8.42\% & 16.48\% & 22.30\% & 2.71\% \\ \hline \hline \end{tabular}
\end{table}
Table 4: Comparison of inference time under different sparsity rates. Sp represents the sparse rate.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline Model & Params(M) & Baseline(\%) & 3:9 Patterns(\%) & 5:9 Patterns(\%) \\ \hline MobileNet-v1 & 4.2 & 72.634 & 72.954 & 72.434 \\ \hline MobileNet-v2 & 3.5 & 72.944 & 72.892 & 72.256 \\ \hline MobileNet-v3 & 5.5 & 75.776 & 75.703 & 74.973 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Top1 accuracy comparison between the 3:9 patterns and the 5:9 patterns.
Figure 9: Speedup of Conv1x1 at a 30% sparse rate. The test platform is a MacBook Pro 16, and the Chrome version is 111.0.5563.64.
## 5 Conclusion and Future Work
This work proposed a novel mobile inference acceleration framework named SparseByteNN, which provides end-to-end neural network acceleration capabilities from algorithms to engines on general CPUs. It contains a fine-grained kernel group sparsity schema and a family of co-optimized efficient sparse kernels. Combined with a customized network rearrangement strategy, SparseByteNN achieves real-time execution as well as high accuracy. Experiments on MobileNet models and CPU platforms demonstrate that SparseByteNN has a better speed-accuracy trade-off than the current SOTA pruning algorithms and sparse inference engines. In the future, we will further extend pattern-based software-hardware collaborative sparse acceleration to more architectures, including mobile GPUs (OpenCL) and server GPUs (CUDA).
|
2301.08345 | Complexity of linearized augmented Lagrangian for optimization with
nonlinear equality constraints | In this paper, we consider a nonconvex optimization problem with nonlinear
equality constraints. We assume that both, the objective function and the
functional constraints are locally smooth. For solving this problem, we propose
a linearized augmented Lagrangian method, i.e., we linearize the functional
constraints in the augmented Lagrangian at the current iterate and add a
quadratic regularization, yielding a subproblem that is easy to solve, and
whose solution is the next iterate. Under a dynamic regularization parameter
choice, we prove global asymptotic convergence of the iterates to a critical
point of the problem. We also derive convergence guarantees for the iterates of
our method to an $\epsilon$ first-order optimal solution in
$\mathcal{O}(1/{\epsilon^2})$ outer iterations. Finally, we show that, when the
problem data are e.g., semialgebraic, the sequence generated by our algorithm
converges and we derive convergence rates. We validate the theory and the
performance of the proposed algorithm by numerically comparing it with the
existing methods from the literature. | Lahcen El Bourkhissi, Ion Necoara | 2023-01-19T22:38:14Z | http://arxiv.org/abs/2301.08345v1 | # Complexity of linearized augmented Lagrangian for optimization with nonlinear equality constraints
###### Abstract
In this paper, we consider a nonconvex optimization problem with nonlinear equality constraints. We assume that both, the objective function and the functional constraints are locally smooth. For solving this problem, we propose a linearized augmented Lagrangian method, i.e., we linearize the functional constraints in the augmented Lagrangian at the current iterate and add a quadratic regularization, yielding a subproblem that is easy to solve, and whose solution is the next iterate. Under a dynamic regularization parameter choice, we prove global asymptotic convergence of the iterates to a critical point of the problem. We also derive convergence guarantees for the iterates of our method to an \(\epsilon\) first-order optimal solution in \(\mathcal{O}(1/\epsilon^{2})\) outer iterations. Finally, we show that, when the problem data are e.g., semialgebraic, the sequence generated by our algorithm converges and we derive convergence rates. We validate the theory and the performance of the proposed algorithm by numerically comparing it with the existing methods from the literature.
Keywords:Nonconvex optimization linearized augmented Lagrangian nonlinear functional constraints convergence analysis.
## 1 Introduction

For nonconvex optimization problems with nonlinear constraints, augmented Lagrangian and ADMM-type methods are among the most widely used solution approaches, and in the nonconvex setting their convergence is typically established under the KL property. Similar convergence results are obtained in [8] for an algorithm solving problems having nonlinear constraints in the first block (for which the corresponding objective function is smooth) and linear constraints in the second block. The algorithm proposed in [8], called DAM (Dynamic linearized Alternating direction method of Multipliers), linearizes the smooth part of the augmented Lagrangian and adds a proximal regularization, where the proximal parameter is dynamically generated.
A different approach, that does not rely on the augmented Lagrangian framework, is presented in [20; 27] and is called Sequential Convex Programming (SCP). This method solves a sequence of convex approximations of the original problem by linearizing the nonconvex parts of the objective and of the functional constraints and preserving the structures that can be exploited by convex optimization techniques. In this case the subproblem has a (strongly) convex objective and linear constraints, for which efficient solution methods exist, e.g., [13; 21]. However, to the best of our knowledge, SCP methods converge under mild assumptions only locally [20; 27].
Due to the high cost of solving the nonconvex subproblem in Proximal AL [29] and the fact that SCP-type schemes [20; 27] have only local convergence guarantees, in this paper we propose a novel Linearized Augmented Lagrangian method (called Linearized AL) for solving smooth nonconvex problems with nonlinear equality constraints that overcomes these limitations. More precisely, in our algorithm we linearize the functional constraints in the augmented Lagrangian at the current iterate and add a dynamic regularization term, yielding a subproblem that is easy to solve and whose solution is the next iterate. Indeed, if the objective function is (weakly) convex, our subproblem is strongly convex and thus can be solved efficiently. Hence, our method borrows the advantages of both approaches, Proximal AL and SCP, since it converges globally and the subproblem is strongly convex and thus easy to solve. We prove that the sequence generated by our algorithm is bounded, that each limit point of this sequence is a stationary point and, under the KL property, that the whole sequence converges. More precisely, under a dynamic regularization parameter choice, we prove global asymptotic convergence of the iterates to a critical point of the problem. We also derive convergence guarantees for the iterates of our method to an \(\epsilon\) first-order optimal solution in \(\mathcal{O}(1/\epsilon^{2})\) outer iterations. Finally, we show that under the KL property the whole sequence generated by the proposed algorithm converges and we derive convergence rates depending on the KL parameter. Unlike [29], in our method the penalty parameter is independent of the required accuracy \(\epsilon\). Moreover, in comparison with [20], our convergence results here are global, while only local convergence has been proved for SCP. The preliminary numerical results confirm the efficiency of our Linearized AL algorithm.
The paper is structured as follows. In Section 2, we introduce our problem of interest and some notions necessary for our analysis. In Section 3, we present our algorithm, followed in Section 4 by its convergence analysis. Finally, in Section 5, we compare numerically our method with existing algorithms.
## 2 Problem formulation and preliminaries
In this paper, we consider the following nonlinear optimization problem:
\[\min_{x\in\mathbb{R}^{n}} f(x)\] (1) s.t. \[F(x)=0,\]
where \(f:\mathbb{R}^{n}\rightarrow\mathbb{R}\) and \(F(x)\triangleq\left(f_{1}(x),...,f_{m}(x)\right)^{T}\), with \(f_{i}:\mathbb{R}^{n}\rightarrow\mathbb{R}\) for all \(i=1:m\). We assume the functions \(f,f_{i}\in\mathcal{C}^{2}\) for all \(i=1:m\), \(f\) is (weakly) convex and \(F\) is nonlinear. For this problem we consider the following assumptions:
**Assumption 1**: _Assume that there exists \(\rho_{0}\geq 0\) such that \(f(x)+\frac{\rho_{0}}{2}\|F(x)\|^{2}\) has compact level sets, i.e., for all \(\alpha\in\mathbb{R}\), the following set is empty or compact:_
\[\mathcal{S}_{\alpha}^{0}\triangleq\{x:\,f(x)+\frac{\rho_{0}}{2}\|F(x)\|^{2} \leq\alpha\}.\]
**Assumption 2**: _Given a compact set \(\mathcal{S}\subseteq\mathbb{R}^{n}\), there exist positive constants \(M_{f},M_{F},\sigma,L_{f},L_{F}\) such that \(f\) and \(F\) satisfy the following conditions:_
1. \(\|\nabla f(x)\|\leq M_{f}\) and \(\|\nabla f(x)-\nabla f(y)\|\leq L_{f}\|x-y\|\) for all \(x,y\in\mathcal{S}\).
2. \(\|\nabla F(x)\|\leq M_{F}\) and \(\sigma_{\min}(\nabla F(x))\geq\sigma>0\) for all \(x\in\mathcal{S}\).
3. \(\|\nabla F(x)-\nabla F(y)\|\leq L_{F}\|x-y\|\) for all \(x,y\in\mathcal{S}\).
**Assumption 3**: _There exists finite \(\bar{U}\) such that \(f(x)\leq\bar{U}\) for all \(x\in\{x:\|F(x)\|\leq 1\}\)._
Note that these assumptions are standard in the nonconvex optimization literature, see e.g., [8; 29]. In fact, these assumptions are not restrictive because they need to hold only locally. Indeed, large classes of problems satisfy these assumptions as discussed below.
Remark 1: Assumption 1 holds e.g., when \(f(x)+\frac{\rho_{0}}{2}\|F(x)\|^{2}\) is coercive for some \(\rho_{0}\geq 0\), when \(f(x)\) is strongly convex, or when \(f(x)\) is bounded below and \(F(x)=x^{T}x-1\), as is the case in dictionary learning applications. It also holds when \(f(x)=\frac{1}{2}x^{T}Qx-p^{T}x\), \(F(x)=Ax-b\) and \(Q\) is a positive definite matrix on \(\text{null}(A):=\{x:\,Ax=0\}\).
Remark 2: Assumption 2 allows general classes of problems. In particular, conditions _(i)_ hold if \(f(x)\) is differentiable and \(\nabla f(x)\) is _locally_ Lipschitz continuous on a neighborhood of \(\mathcal{S}\). Conditions _(ii)_ hold when \(F(x)\) is differentiable on a neighborhood of \(\mathcal{S}\) and satisfies an LICQ condition over \(\mathcal{S}\). Finally, condition _(iii)_ holds if \(\nabla F(x)\) is _locally_ Lipschitz continuous on \(\mathcal{S}\).
Remark 3: For Assumption 3 to hold, it is sufficient the set \(\{x:\,\|F(x)\|\leq 1\}\) to be compact. In fact, we do not need this assumption if we can choose the starting point \(x_{0}\) such that \(F(x_{0})=0\), that is, the initial point is feasible.
The following lemma is an immediate consequence of Assumption 1.
**Lemma 1**: _If Assumption 1 holds, then \(f(x)+\frac{\rho_{0}}{2}\|F(x)\|^{2}\) is lower bounded:_
\[\bar{L}\triangleq\inf_{x\in\mathbb{R}^{n}}\{f(x)+\frac{\rho_{0}}{2}\|F(x)\|^{2} \}>-\infty. \tag{2}\]
_Proof_ See Appendix.
Further, let us introduce the following definitions (see also [29]):
**Definition 1**: [First-order solution and \(\epsilon\) first-order solution of (1)] The vector \(x^{*}\) is said to be first-order solution of (1) if \(\exists\lambda^{*}\in\mathbb{R}^{m}\) such that:
\[\nabla f(x^{*})+\nabla F(x^{*})^{T}\lambda^{*}=0\ \ \ \text{and}\ \ \ F(x^{*})=0.\]
Moreover, \(x^{*}\) is an \(\epsilon\) first-order solution of (1) if \(\exists\lambda^{*}\in\mathbb{R}^{m}\) such that:
\[\|\nabla f(x^{*})+\nabla F(x^{*})^{T}\lambda^{*}\|\leq\epsilon\ \ \ \text{and}\ \ \ \|F(x^{*})\|\leq\epsilon.\]
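As a small numerical illustration (ours, not from the paper), these conditions can be verified directly for a candidate pair \((x^{*},\lambda^{*})\):

```
import numpy as np

def is_eps_first_order(grad_f, jac_F, F, x, lam, eps):
    # Check the eps first-order conditions of Definition 1 for a candidate
    # primal-dual pair (x, lam); grad_f, jac_F and F are user-supplied callables.
    stationarity = np.linalg.norm(grad_f(x) + jac_F(x).T @ lam)
    feasibility = np.linalg.norm(F(x))
    return stationarity <= eps and feasibility <= eps

# Toy check: min 0.5*||x||^2  subject to  x_1 + x_2 - 1 = 0.
grad_f = lambda x: x
F = lambda x: np.array([x[0] + x[1] - 1.0])
jac_F = lambda x: np.array([[1.0, 1.0]])
print(is_eps_first_order(grad_f, jac_F, F,
                         np.array([0.5, 0.5]), np.array([-0.5]), eps=1e-8))  # True
```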
Let \(\Phi:\mathbb{R}^{d}\rightarrow\bar{\mathbb{R}}\) be a proper lower semicontinuous function. For \(-\infty<\tau_{1}<\tau_{2}\leq+\infty\), we define \([\tau_{1}<\Phi<\tau_{2}]=\{x\in\mathbb{R}^{d}:\tau_{1}<\Phi(x)<\tau_{2}\}\). Let \(\tau\in(0,+\infty]\). We denote by \(\Psi_{\tau}\) the set of all continuous concave functions \(\varphi:[0,\tau]\rightarrow[0,+\infty)\) such that \(\varphi(0)=0\) and \(\varphi\) is continuously differentiable on \((0,\tau)\), with \(\varphi^{\prime}(s)>0\) over \((0,\tau)\).
**Definition 2**: Let \(\Phi:\mathbb{R}^{d}\rightarrow\bar{\mathbb{R}}\) be a proper lower semicontinuous function that takes a constant value on \(\Omega\). We say that \(\Phi\) satisfies the KL property on \(\Omega\) if there exist \(\epsilon>0,\tau>0\), and \(\varphi\in\Psi_{\tau}\) such that for every \(x^{*}\in\Omega\) and every element \(x\) in the intersection \(\{x\in\mathbb{R}^{d}:\ \text{dist}(x,\Omega)<\epsilon\}\cap[\Phi(x^{*})<\Phi(x)<\Phi(x^{*})+\tau]\), we have:
\[\varphi^{\prime}\big{(}\Phi(x)-\Phi(x^{*})\big{)}\text{dist}\big{(}0,\partial \Phi(x)\big{)}\geq 1.\]
This definition covers many classes of functions arising in practical optimization problems. For example, if \(f\) is a proper closed semialgebraic function, then \(f\) is a KL function with exponent \(\nu\in[0,1)\)[1]. The function \(g(Ax)\), where \(g\) is strongly convex on a compact set and twice differentiable, and \(A\in\mathbb{R}^{m\times n}\), is a KL function. Convex piecewise linear quadratic functions such as \(\|x\|_{1},\|x\|_{0},\gamma\sum_{i=1}^{k}|x_{[i]}|\), where \(|x_{[i]}|\) is the \(i\)th largest entry in \(x,\ k\leq n\) and \(\gamma\in(0,1]\); the indicator function \(\delta_{\Delta}(x)\), where \(\Delta=\{x\in\mathbb{R}^{n}:e^{T}x=1,x\geq 0\}\); least-squares problems with the Smoothly Clipped Absolute Deviation (SCAD) [9]; Minimax Concave Penalty (MCP) regularized functions [31] are all KL functions. The KL property characterizes the local geometry of a function around the set of critical points.
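For instance (a standard calculation included here only for illustration, not a result of this paper), a \(\mu\)-strongly convex differentiable function \(\Phi\) with minimizer \(x^{*}\) satisfies the Polyak-Łojasiewicz inequality, which is precisely the KL property with \(\varphi(s)=\sqrt{2s/\mu}\), i.e., with exponent \(\nu=1/2\):

\[\|\nabla\Phi(x)\|^{2}\geq 2\mu\left(\Phi(x)-\Phi(x^{*})\right)\quad\Longleftrightarrow\quad\varphi^{\prime}\left(\Phi(x)-\Phi(x^{*})\right)\,\|\nabla\Phi(x)\|\geq 1.\]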
## 3 A linearized augmented Lagrangian method
In this section, we propose a new algorithm for solving problem (1) using the augmented Lagrangian framework. Let us first introduce a few notations. The augmented Lagrangian associated with problem (1) is:
\[\mathcal{L}_{\rho}(x,\lambda)=f(x)+\langle\lambda\;,\;F(x)\rangle+\frac{\rho}{2 }\|F(x)\|^{2}=f(x)+\psi(x,\lambda),\]
where \(\psi(x,\lambda)=\langle\lambda\;,\;F(x)\rangle+\frac{\rho}{2}\|F(x)\|^{2}\). The gradient of \(\psi\) is: \(\nabla_{x}\psi(x,\lambda)=\nabla F(x)^{T}(\lambda+\rho F(x))\) and \(\nabla_{\lambda}\psi(x,\lambda)=F(x)\). In the rest of this paper, for the sake of clarity, we provide the proofs of all the lemmas in Appendix. The following lemma shows that \(\psi\) is locally smooth, (i.e., it has Lipschitz continuous gradient locally).
Lemma 2: _[Smoothness of \(\psi\)] If Assumption 2 holds, then for any compact set \(\Lambda\subset\mathbb{R}^{m}\) there exists \(L_{\psi}>0\) such that:_
\[\|\nabla\psi(x,\lambda)-\nabla\psi(x^{\prime},\lambda^{\prime})\|\leq L_{\psi }\left\|\begin{pmatrix}x-x^{\prime}\\ \lambda-\lambda^{\prime}\end{pmatrix}\right\|\;\;\;\;\;\forall(x,\lambda),(x^ {\prime},\lambda^{\prime})\in\mathcal{S}\times\Lambda,\]
_where \(L_{\psi}=\sup_{(x,\lambda)\in\mathcal{S}\times\Lambda}\{L_{F}\|\lambda+\rho F (x)\|+M_{F}(2+\rho M_{F})\}\)._
Proof: See Appendix.
Further, let us denote the following function derived from linearization of the functional constraints, at a given point \(\bar{x}\), in the augmented Lagrangian:
\[\begin{split}&\tilde{\mathcal{L}}_{\rho}(x,\lambda;\bar{x})\\ &=f(x)+\langle\lambda\;,\;F(\bar{x})+\nabla F(\bar{x})(x-\bar{x}) \rangle+\frac{\rho}{2}\|F(\bar{x})+\nabla F(\bar{x})(x-\bar{x})\|^{2}.\end{split}\]
To solve the optimization problem (1) we propose the following _Linearized augmented Lagrangian_ (Linearized AL) algorithm, i.e., we linearize the functional constraints in the augmented Lagrangian at the current iterate and add a quadratic regularization.
```
1:Initialization:\(x_{0},\lambda_{0},\;\text{and}\;\rho>0\);
2:\(k\gets 0\)
3:while stopping criterion is not satisfied do
4: generate a proximal parameter \(\beta_{k+1}>0\)
5:\(x_{k+1}\leftarrow\arg\min_{x}\tilde{\mathcal{L}}_{\rho}(x,\lambda_{k};x_{k})+ \frac{\beta_{k+1}}{2}\|x-x_{k}\|^{2}\)\(\triangleright\) The subproblem
6:\(\lambda_{k+1}\leftarrow\lambda_{k}+\rho\Big{(}F(x_{k})+\nabla F(x_{k})(x_{k+1 }-x_{k})\Big{)}\)
7:\(k\gets k+1\)
8:endwhile
```
**Algorithm 1** Linearized augmented Lagrangian
_To the best of our knowledge Linearized AL algorithm is new and its convergence behaviour has not been analyzed before in the literature._ Note that if
\(f\) is a (weakly) convex function, the objective function in the subproblem of step 5 of Algorithm 1 is strongly convex, provided that \(\beta_{k+1}\) is chosen adequately. In particular, if \(f\) is a quadratic function, then finding a solution of the subproblem in step 5 is equivalent to solving a linear system of equations. In these cases, efficient solution methods exist for solving the subproblem, see e.g., [13; 21]. It is also important to note that our update of the dual multipliers is different from the literature, i.e., instead of evaluating the functional constraints at the new test point \(x_{k+1}\) and updating \(\lambda_{k+1}=\lambda_{k}+\rho F(x_{k+1})\) [8; 29], we evaluate their linearization around \(x_{k}\) at the new point \(x_{k+1}\) and update \(\lambda_{k+1}=\lambda_{k}+\rho(F(x_{k})+\nabla F(x_{k})(x_{k+1}-x_{k}))\). In the sequel, we denote:
\[\Delta x_{k}=x_{k}-x_{k-1}\ \ \ \mbox{and}\ \ \ \Delta\lambda_{k}=\lambda_{k}- \lambda_{k-1}\ \ \ \forall k\geq 1.\]
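To make steps 5-6 concrete, the following sketch (ours, not from the paper) runs Algorithm 1 on a toy quadratic objective with a single nonlinear equality constraint; in this case the regularized subproblem of step 5 reduces to a linear system, and the proximal parameter is simply kept fixed.

```
import numpy as np

def linearized_al_step(Q, p, F, jacF, x, lam, rho=10.0, beta=1.0):
    # One iteration of the sketch for f(x) = 0.5 x^T Q x - p^T x: with quadratic f,
    # the strongly convex subproblem of step 5 is solved exactly as a linear system.
    Fk, J = F(x), jacF(x)
    A = Q + rho * J.T @ J + beta * np.eye(len(x))
    b = p - J.T @ lam - rho * J.T @ (Fk - J @ x) + beta * x
    x_new = np.linalg.solve(A, b)                    # step 5
    lam_new = lam + rho * (Fk + J @ (x_new - x))     # step 6
    return x_new, lam_new

# Toy problem: min 0.5*||x||^2 - x_1  subject to  x_1^2 + x_2^2 - 1 = 0.
Q, p = np.eye(2), np.array([1.0, 0.0])
F = lambda x: np.array([x @ x - 1.0])
jacF = lambda x: 2.0 * x.reshape(1, -1)
x, lam = np.array([2.0, 1.0]), np.zeros(1)
for _ in range(50):
    x, lam = linearized_al_step(Q, p, F, jacF, x, lam)
print(x, np.abs(F(x)))   # x should approach (1, 0) and the residual |F(x)| should become small
```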
Let \(\alpha\in(0,1)\). Then, from Lemma 2 we can choose \(\beta_{k+1}\) such that:
\[\psi(x_{k+1},\lambda_{k})-\psi(x_{k},\lambda_{k})-\left\langle \nabla_{x}\psi(x_{k},\lambda_{k})\;,\;x_{k+1}-x_{k}\right\rangle\] \[\leq\frac{(1-\alpha)\beta_{k+1}}{2}\|x_{k+1}-x_{k}\|^{2}\ \ \ \forall k\geq 0. \tag{3}\]
Note that for any \(k\geq 1,\beta_{k}\) is well defined since \(\psi\) is smooth, see Lemma 2. This regularization parameter \(\beta_{k}\) can be found using e.g., a backtracking scheme as in [8]. In the sequel, we assume that \(\beta_{k}\) satisfies the following:
**Assumption 4**: _[Proximal parameter sequence is bounded] Assume that the generated parameter sequence \(\{\beta_{k}\}_{k\geq 1}\) is bounded, i.e.,_
\[\beta:=\sup_{k\geq 1}\beta_{k}<\infty.\]
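A minimal backtracking loop of this kind could look as follows (an illustrative sketch in the spirit of [8]; the callables psi, grad_psi_x and x_trial_fn are assumed to be supplied by the user):

```
import numpy as np

def find_beta(psi, grad_psi_x, x_k, lam_k, x_trial_fn, alpha=0.5,
              beta0=1.0, grow=2.0, max_tries=30):
    # Backtracking search for beta_{k+1} satisfying condition (3):
    #   psi(x_+, l) - psi(x_k, l) - <grad_x psi(x_k, l), x_+ - x_k>
    #       <= (1 - alpha) * beta_{k+1} / 2 * ||x_+ - x_k||^2,
    # where x_+ = x_trial_fn(beta) solves the step-5 subproblem for the current
    # proximal parameter. Purely illustrative; the paper only assumes such a
    # bounded beta_{k+1} exists (Assumption 4).
    beta = beta0
    for _ in range(max_tries):
        x_new = x_trial_fn(beta)
        dx = x_new - x_k
        lhs = psi(x_new, lam_k) - psi(x_k, lam_k) - np.dot(grad_psi_x(x_k, lam_k), dx)
        if lhs <= 0.5 * (1.0 - alpha) * beta * np.dot(dx, dx):
            return beta, x_new
        beta *= grow    # increase the proximal parameter and solve again
    return beta, x_trial_fn(beta)
```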
## 4 Convergence analysis
In this section, we derive the asymptotic convergence of Linearized AL algorithm (Algorithm 1) and its efficiency to obtain an \(\epsilon\) first-order solution for problem (1). Moreover, we provide convergence rates under the KL condition. Note that for some intermediate steps in the convergence analysis we follow a similar methodology as in [29]. Let us start by proving the decrease with respect to the first argument for the augmented Lagrangian function.
Lemma 3: _[Descent of \(\mathcal{L}_{\rho}\) w.r.t. primal variables] If Assumption 2 holds, then we have the following descent:_
\[\mathcal{L}_{\rho}(x_{k+1},\lambda_{k})\leq\mathcal{L}_{\rho}(x_{k},\lambda_{ k})-\frac{\rho\sigma^{2}+\alpha\beta_{k+1}}{2}\|x_{k+1}-x_{k}\|^{2}\ \ \ \forall k\geq 0.\]
Proof: See Appendix.
Let us define the following Lyapunov function (inspired from [29]):
\[P(x,\lambda,y,\gamma)=\mathcal{L}_{\rho}(x,\lambda)+\frac{\gamma}{2}\|x-y\|^{2}. \tag{4}\]
The evaluation of the Lyapunov function along the iterates of Linearized AL algorithm is denoted by:
\[P_{k}=P(x_{k},\lambda_{k},x_{k-1},\gamma_{k})\ \ \forall k\geq 1, \tag{5}\]
with \(\gamma_{k}>0\) to be defined later. We prove next that \(\{P_{k}\}_{k\geq 1}\) is decreasing and bounded from below. Let us first prove that it is decreasing. Indeed:
\[P_{k+1}-P_{k}\] \[= \mathcal{L}_{\rho}(x_{k+1},\lambda_{k+1})-\mathcal{L}_{\rho}(x_{k+1},\lambda_{k})+\mathcal{L}_{\rho}(x_{k+1},\lambda_{k})-\mathcal{L}_{\rho}(x_{k},\lambda_{k})\] \[+\frac{\gamma_{k+1}}{2}\|x_{k+1}-x_{k}\|^{2}-\frac{\gamma_{k}}{2}\|x_{k}-x_{k-1}\|^{2}\] \[\leq \big{\langle}F(x_{k+1})-F(x_{k})-\nabla F(x_{k})(x_{k+1}-x_{k})\;,\;\Delta\lambda_{k+1}\big{\rangle}+\frac{1}{\rho}\|\Delta\lambda_{k+1}\|^{2}\] \[-\frac{\rho\sigma^{2}+\alpha\beta_{k+1}-\gamma_{k+1}}{2}\|\Delta x_{k+1}\|^{2}-\frac{\gamma_{k}}{2}\|\Delta x_{k}\|^{2}\] \[\leq \left(\frac{1}{2\sqrt[\eta]{\rho}^{\eta-1}}+\frac{1}{\rho}\right)\|\Delta\lambda_{k+1}\|^{2}-\frac{\gamma_{k}}{2}\|\Delta x_{k}\|^{2}\] \[-\left(\frac{\rho\sigma^{2}+\alpha\beta_{k+1}-\gamma_{k+1}}{2}-2M_{F}^{2}\sqrt[\eta]{\rho}^{\eta-1}\right)\|\Delta x_{k+1}\|^{2}, \tag{6}\]
where the first inequality follows from Lemma 3 and from the update of the dual multipliers in Step 6 of Algorithm 1. The second inequality holds for any \(\eta>1\) and follows from Assumption 2 and the inequality:
\[\langle a\;,\;b\rangle\leq\frac{1}{2r}\|a\|^{2}+\frac{r}{2}\|b\|^{2}\ \ \ \ \forall a,b\in\mathbb{R}^{m}\mbox{ and }r>0. \tag{7}\]
Next, let us bound \(\|\Delta\lambda_{k+1}\|^{2}\).
Lemma 4: _[Bound for \(\|\Delta\lambda_{k+1}\|\)] If Assumption 2 holds on some compact set \(\mathcal{S}\) and the sequence generated by Algorithm 1 is in \(\mathcal{S}\), then we have:_
\[\|\Delta\lambda_{k+1}\|^{2}\leq c_{1}(\beta_{k+1})\|\Delta x_{k+1}\|^{2}+c_{2} (\beta_{k})\|\Delta x_{k}\|^{2}, \tag{8}\]
_where \(c_{1}(\beta_{k+1})=2\frac{(L_{f}+\beta_{k+1})^{2}}{\sigma^{2}}\) and \(c_{2}(\beta_{k})=2\frac{(M_{f}L_{F}+(2M_{F}+\sigma)\beta_{k})^{2}}{\sigma^{4}}\)._
Proof: See Appendix.
Subsequently, using (8) in (6), we obtain that:
\[P_{k+1}-P_{k}\leq \Bigg{[}\left[\left(\frac{1}{2\sqrt[\eta]{\rho}^{\eta-1}}+\frac{1}{\rho}\right)c_{1}(\beta_{k+1})-\frac{\gamma_{k+1}}{2}\right]\] \[-\frac{\sqrt[\eta]{\rho}^{\eta-1}(\sqrt[\eta]{\rho}\sigma^{2}-4M_{F}^{2})+\alpha\beta_{k+1}-2\gamma_{k+1}}{2}\Bigg{]}\|\Delta x_{k+1}\|^{2}\] \[+\left[\left(\frac{1}{2\sqrt[\eta]{\rho}^{\eta-1}}+\frac{1}{\rho}\right)c_{2}(\beta_{k})-\frac{\gamma_{k}}{2}\right]\|\Delta x_{k}\|^{2}.\]
Hence, if we choose \(\gamma_{k}\leq\frac{\alpha\beta_{k}}{2}\) and \(\rho\geq(\frac{4M_{F}^{2}}{\sigma^{2}})^{\eta}\), we get:
\[P_{k+1}-P_{k}\leq \left[\left(\frac{1}{2\sqrt[\eta]{\rho}^{\eta-1}}+\frac{1}{\rho}\right)c_{1}(\beta_{k+1})-\frac{\gamma_{k+1}}{2}\right]\|\Delta x_{k+1}\|^{2}\] \[+\left[\left(\frac{1}{2\sqrt[\eta]{\rho}^{\eta-1}}+\frac{1}{\rho}\right)c_{2}(\beta_{k})-\frac{\gamma_{k}}{2}\right]\|\Delta x_{k}\|^{2}.\]
Therefore, in order to obtain a decreasing Lyapunov function along iterates, we must take into account when choosing \(\gamma_{k}\) the following:
\[\gamma_{k}>2\left(\frac{1}{2\sqrt[\eta]{\rho}^{\eta-1}}+\frac{1}{\rho}\right)\max\{c_{1}(\beta_{k}),c_{2}(\beta_{k})\}.\]
Hence, we want the parameter \(\gamma_{k}\) to satisfy the following requirements:
\[2\left(\frac{1}{2\sqrt[\eta]{\rho}^{\eta-1}}+\frac{1}{\rho}\right)\max\{c_{1}(\beta_{k}),c_{2}(\beta_{k})\}<\gamma_{k}\leq\frac{\alpha\beta_{k}}{2}\ \ \ \ \forall k\geq 1. \tag{9}\]
Choosing \(\gamma_{k}=4\left(\frac{1}{2\sqrt[\eta]{\rho}^{\eta-1}}+\frac{1}{\rho}\right)\max\{c_{1}(\beta_{k}),c_{2}(\beta_{k})\}\), one can see that it already satisfies the left inequality in (9). It remains to check the right inequality in (9). To do so, we have to check if there exists a \(\beta_{k}>0\) which satisfies the following inequality:
\[8\left(\frac{1}{2\sqrt[\eta]{\rho}^{\eta-1}}+\frac{1}{\rho}\right)\max\{c_{1}(\beta_{k}),c_{2}(\beta_{k})\}\leq\alpha\beta_{k}. \tag{10}\]
Let us determine \(\max\{c_{1}(\beta_{k}),c_{2}(\beta_{k})\}\). Using the definitions of \(c_{1}(\beta_{k})\) and \(c_{2}(\beta_{k})\), we have:
\[c_{1}(\beta_{k})-c_{2}(\beta_{k})\] \[=\frac{2}{\sigma^{2}}\left[\left(L_{f}-\frac{M_{f}L_{F}}{\sigma} \right)-\frac{2M_{F}}{\sigma}\beta_{k}\right]\left[\left(L_{f}+\frac{M_{f}L_ {F}}{\sigma}\right)+\left(\frac{2M_{F}}{\sigma}+2\right)\beta_{k}\right],\]
which leads to two different cases.
**Case 1**: If \(\beta_{k}\geq\frac{L_{f}\sigma-M_{f}L_{F}}{2M_{F}}\), then \(\max\{c_{1}(\beta_{k}),c_{2}(\beta_{k})\}=c_{2}(\beta_{k})\). Let us now check if \(\beta_{k}\), which satisfies (10), is well-defined in this case. To do this, we replace the expression of \(c_{2}(\beta_{k})\) in (10) and reformulate (10) as follows:
\[\frac{16}{\sigma^{4}}\left(\frac{1}{2\sqrt[\eta]{\rho}^{\eta-1}}+\frac{1}{\rho}\right)\left[\left(2M_{F}+\sigma\right)^{2}\beta_{k}^{2}+2\left(2M_{F}+\sigma\right)M_{f}L_{F}\beta_{k}+(M_{f}L_{F})^{2}\right]\] \[\leq\alpha\beta_{k}. \tag{11}\]
Denoting:
\[h_{1}(\beta_{k})= \frac{16}{\sigma^{4}}\left(\frac{1}{2\sqrt[\eta]{\rho}^{\eta-1}}+\frac{1}{\rho}\right)\left(2M_{F}+\sigma\right)^{2}\beta_{k}^{2}+\frac{16}{\sigma^{4}}\left(\frac{1}{2\sqrt[\eta]{\rho}^{\eta-1}}+\frac{1}{\rho}\right)(M_{f}L_{F})^{2}\] \[+\left[\frac{32}{\sigma^{4}}\left(\frac{1}{2\sqrt[\eta]{\rho}^{\eta-1}}+\frac{1}{\rho}\right)\left(2M_{F}+\sigma\right)M_{f}L_{F}-\alpha\right]\beta_{k},\]
the inequality (11) is then equivalent to \(h_{1}(\beta_{k})\leq 0\). Since the roots of the second order equation \(h_{1}(\beta_{k})=0\) are:
\[\underline{\beta}_{1}=\delta\left(1-\sqrt{1-1/\delta^{2}}\right)\frac{M_{f}L_{ F}}{2M_{F}+\sigma}\quad\text{and}\quad\bar{\beta}_{1}=\delta\left(1+\sqrt{1-1/ \delta^{2}}\right)\frac{M_{f}L_{F}}{2M_{F}+\sigma},\]
it follows that (9) and (11) are verified for any \(\beta_{k}\) satisfying \(\underline{\beta}_{1}\leq\beta_{k}\leq\bar{\beta}_{1}\). Also, it is clear that when \(\delta\to\infty\), the interval \([\underline{\beta}_{1},\bar{\beta}_{1}]\to(0,\infty)\). Therefore, we should choose \(\delta\) large enough so that \(\bar{\beta}_{1}\geq\beta\). If it happens that for some \(k\geq 1\) we obtain from (3) \(\frac{L_{f}\sigma-M_{f}L_{F}}{2M_{F}}\leq\beta_{k}<\underline{\beta}_{1}\), then we take \(\beta_{k}=\underline{\beta}_{1}\). Note that the choice of \(\alpha\) involves a trade-off: for \(\alpha\to 0\) we get \(\rho\to\infty\), whereas for \(\alpha\to 1\) the proximal parameter \(\beta_{k}\), and hence \(\beta\) from Assumption 4, might be large.
**Case 2**: Similarly, if \(\beta_{k}<\frac{L_{f}\sigma-M_{f}L_{F}}{2M_{F}}\), then \(\max\{c_{1}(\beta_{k}),c_{2}(\beta_{k})\}=c_{1}(\beta_{k})\). Let us check if \(\beta_{k}\), that satisfies (10), is well-defined. Replacing \(c_{1}(\beta_{k})\) by its expression in (10), it allows us to recast (10) as:
\[\frac{16}{\sigma^{2}}\left(\frac{1}{2\sqrt[\eta]{\rho}^{\eta-1}}+\frac{1}{\rho}\right)\left[\beta_{k}^{2}+2L_{f}\beta_{k}+L_{f}^{2}\right]\leq\alpha\beta_{k}. \tag{12}\]
Let us denote:
\[h_{2}(\beta_{k})= \frac{16}{\sigma^{2}}\left(\frac{1}{2\sqrt[\eta]{\rho}^{\eta-1}}+\frac{1}{\rho}\right)\beta_{k}^{2}+\frac{16}{\sigma^{2}}\left(\frac{1}{2\sqrt[\eta]{\rho}^{\eta-1}}+\frac{1}{\rho}\right)L_{f}^{2}\] \[+\left[\frac{32}{\sigma^{2}}\left(\frac{1}{2\sqrt[\eta]{\rho}^{\eta-1}}+\frac{1}{\rho}\right)L_{f}-\alpha\right]\beta_{k},\]
then the inequality (12) is equivalent to \(h_{2}(\beta_{k})\leq 0\). Since the roots of the equation \(h_{2}(\beta_{k})=0\) are:
\[\underline{\beta}_{2}=\delta^{\prime}\left(1-\sqrt{1-1/\delta^{\prime 2}} \right)L_{f}\quad\text{and}\quad\bar{\beta}_{2}=\delta^{\prime}\left(1+\sqrt{1 -1/\delta^{\prime 2}}\right)L_{f},\]
it follows that (9) and (12) are valid for all \(\underline{\beta}_{2}\leq\beta_{k}\leq\bar{\beta}_{2}\). We choose \(\delta^{\prime}\) such that \(\bar{\beta}_{2}=\frac{L_{f}\sigma-M_{f}L_{F}}{2M_{F}}\) (otherwise, if it happens that \(\beta_{k}>\frac{L_{f}\sigma-M_{f}L_{F}}{2M_{F}}\), we have \(c_{2}(\beta_{k})=\max\{c_{1}(\beta_{k}),c_{2}(\beta_{k})\}\) and then we are in the first case discussed previously). If for some \(k\geq 1\), we get from (3) \(\beta_{k}<\underline{\beta}_{2}\), then we take \(\beta_{k}=\underline{\beta}_{2}\). In conclusion, from the two cases above, if we choose:
\[\rho\geq\max\Bigg{\{}\!\left(\frac{4M_{F}^{2}}{\sigma^{2}}\right)^{\eta}\!, \!\left(\frac{48(\delta^{\prime}+1)}{\alpha\sigma^{2}}L_{f}\right)^{\frac{ \eta}{\eta-1}}\!,\left(\frac{48(\delta+1)}{\alpha\sigma^{4}}(2M_{F}+\sigma)M_ {f}L_{F}\right)^{\frac{\eta}{\eta-1}}\!\Bigg{\}},\]
\[\gamma_{k}=4\left(\frac{1}{2\sqrt[\eta]{\rho}^{\eta-1}}+\frac{1}{\rho}\right)\max\{c_{1}(\beta_{k}),c_{2}(\beta_{k})\},\]
then we have:
\[\gamma_{k}\leq\frac{\alpha\beta_{k}}{2}\ \ \ \ \forall k\geq 1. \tag{13}\]
This, in turn, implies that the Lyapunov function decreases along the iterates:
\[P_{k+1}-P_{k}\leq-\frac{\gamma_{k+1}}{4}\|\Delta x_{k+1}\|^{2}-\frac{\gamma_{ k}}{4}\|\Delta x_{k}\|^{2}\ \ \ \forall k\geq 1. \tag{14}\]
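For illustration, the parameter choices above can be assembled directly from the problem constants; the helper below (ours, with arbitrary constants in the usage line) encodes the lower bound on \(\rho\) and the map \(\beta_{k}\mapsto\gamma_{k}\), with \(c_{1},c_{2}\) as in Lemma 4:

```
def choose_rho_and_gamma(M_F, sigma, L_f, L_F, M_f, alpha, eta, delta, delta_prime):
    # Penalty parameter rho from the bound above and the map beta_k -> gamma_k.
    # Illustrative helper, not part of the paper.
    rho = max((4 * M_F**2 / sigma**2) ** eta,
              (48 * (delta_prime + 1) * L_f / (alpha * sigma**2)) ** (eta / (eta - 1)),
              (48 * (delta + 1) * (2 * M_F + sigma) * M_f * L_F
               / (alpha * sigma**4)) ** (eta / (eta - 1)))
    K = 1.0 / (2.0 * rho ** ((eta - 1) / eta)) + 1.0 / rho

    def gamma(beta_k):
        c1 = 2 * (L_f + beta_k) ** 2 / sigma**2
        c2 = 2 * (M_f * L_F + (2 * M_F + sigma) * beta_k) ** 2 / sigma**4
        return 4 * K * max(c1, c2)

    return rho, gamma

# Arbitrary illustrative constants.
rho, gamma = choose_rho_and_gamma(M_F=2.0, sigma=0.5, L_f=1.0, L_F=1.0,
                                  M_f=1.0, alpha=0.5, eta=2.0,
                                  delta=4.0, delta_prime=4.0)
```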
In the sequel, we assume that \(x_{0}\) is chosen such that:
\[\|F(x_{0})\|^{2}\leq\min\left\{1,\frac{c_{0}}{\rho}\right\}\ \ \ \ \ \ \ \mbox{for some $c_{0}>0$}, \tag{15}\]
and that \(f(x_{0})\leq\bar{U}\). Let us define:
\[\hat{\alpha}\triangleq 4\bar{U}+4c_{0}-3\bar{L}+8\|\lambda_{0}\|^{2}+3, \tag{16}\]
and \(D_{S}:=\max\{\|x-y\|:\,x,y\in\mathcal{S}^{0}_{\hat{\alpha}}\}\). Moreover, let us choose:
\[\rho\geq\max\Bigg{\{}\!\left(\frac{4M_{F}^{2}}{\sigma^{2}}\right)^{\eta}\!, \!\left(\frac{48(\delta^{\prime}+1)}{\alpha\sigma^{2}}L_{f}\right)^{\frac{ \eta}{\eta-1}}\!,\left(\frac{48(\delta+1)}{\alpha\sigma^{4}}(2M_{F}+\sigma)M_{ f}L_{F}\right)^{\frac{\eta}{\eta-1}}\!,\]
\[3\rho_{0},\rho_{0}+\left(\frac{M_{f}(2M_{F}+\sigma)+2\delta M_{f}L_{F}D_{S}}{ \sqrt{2}\sigma(2M_{F}+\sigma)}\right)^{2}\Bigg{\}}. \tag{17}\]
Note that, if we choose \(\delta\) large enough so that \(\bar{\beta}_{1}\geq\beta\), we get:
\[\frac{(M_{f}(2M_{F}+\sigma)+2\delta M_{f}L_{F}D_{S})^{2}}{2\sigma^{2}(2M_{F}+ \sigma)^{2}}\geq\frac{(M_{f}+\beta D_{S})^{2}}{2\sigma^{2}}.\]
It follows that \(\rho\geq\rho_{0}+\frac{(M_{f}+\beta D_{S})^{2}}{2\sigma^{2}}\) and \(\frac{4M_{F}^{2}}{\sigma^{2}}>1\), which imply that \(\rho\geq 1\). Using the definition of \(\mathcal{L}_{\rho}\), we have:
\[\mathcal{L}_{\rho}(x_{0},\lambda_{0}) = f(x_{0})+\langle\lambda_{0}\;,\;F(x_{0})\rangle+\frac{\rho}{2}\|F(x_{0})\|^{2} \tag{18}\] \[\stackrel{{(7)}}{{\leq}} \!\!f(x_{0})+\frac{\|\lambda_{0}\|^{2}}{2\rho}+\frac{\rho}{2}\|F(x_{0})\|^{2}+\frac{\rho}{2}\|F(x_{0})\|^{2}\] \[\stackrel{{(15)}}{{\leq}} \!\!\bar{U}+\frac{1}{2\rho}\|\lambda_{0}\|^{2}+c_{0}.\]
It then follows, after some re-arrangements, that:
\[\bar{U}+c_{0}-\bar{L} \geq f(x_{0})+\langle\lambda_{0}\;,\;F(x_{0})\rangle+\frac{\rho}{2} \|F(x_{0})\|^{2}-\frac{1}{2\rho}\|\lambda_{0}\|^{2}-\bar{L}\] \[\stackrel{{(\rho\geq 3\rho_{0})}}{{\geq}} f(x_{0})+\frac{\rho_{0}}{2}\|F(x_{0})\|^{2}-\bar{L}+\langle \lambda_{0}\;,\;F(x_{0})\rangle+\frac{\rho}{3}\|F(x_{0})\|^{2}-\frac{\|\lambda_ {0}\|^{2}}{2\rho}\] \[\stackrel{{(2)}}{{\geq}} \frac{\rho}{3}\|F(x_{0})+\frac{3\lambda_{0}}{2\rho}\|^{2}-\frac{3} {4\rho}\|\lambda_{0}\|^{2}-\frac{\|\lambda_{0}\|^{2}}{2\rho}\] \[\stackrel{{(\rho\geq 1)}}{{\geq}}-\frac{5}{4}\| \lambda_{0}\|^{2}. \tag{19}\]
The following lemma shows that the sequence \(\{(x_{k},\lambda_{k})\}_{k\geq 1}\) generated by Algorithm 1, is bounded.
Lemma 5: _Consider Algorithm 1 and let \(\{P_{k}\}_{k\geq 1}\) be defined as in (5). Suppose Assumptions 1, 2, 3 and 4 hold with \(\mathcal{S}=\mathcal{S}^{0}_{\hat{\alpha}}\), where \(\hat{\alpha}\) is defined in (16) for any fixed constant \(c_{0}\), and let \(D_{S}\) be the radius of \(\mathcal{S}^{0}_{\hat{\alpha}}\). If \(\rho\) is chosen as in (17) and \(x_{0}\) is chosen to satisfy (15), then we have the following:_
\[x_{k}\in\mathcal{S}^{0}_{\hat{\alpha}}, \tag{20a}\] \[\|\lambda_{k}\|^{2}\leq\frac{(M_{f}+\beta D_{S})^{2}}{\sigma^{2} }\leq 2(\rho-\rho_{0}),\] (20b) \[P_{k}\leq 4\bar{U}+4c_{0}-3\bar{L}+8\|\lambda_{0}\|^{2}+2\;\;\;\;\; \forall k\geq 1. \tag{20c}\]
Proof: See Appendix.
Next, we show that the Lyapunov sequence \(\{P_{k}\}_{k\geq 1}\) is bounded from below.
Lemma 6: _Consider Algorithm 1 and let \(\{P_{k}\}_{k\geq 1}\) be defined as in (5). Suppose Assumptions 1, 2, 3 and 4 hold with \(\mathcal{S}=\mathcal{S}^{0}_{\hat{\alpha}}\), where \(\hat{\alpha}\) is defined in (16) for any fixed constant \(c_{0}\), and let \(D_{S}\) be the radius of \(\mathcal{S}^{0}_{\hat{\alpha}}\). If \(\rho\) is chosen as in (17) and \(x_{0}\) is chosen to satisfy (15), then we have the following:_
\[P_{k}\geq\bar{L}-1\;\;\;\;\;\forall k\geq 1 \tag{21}\]
_where \(\bar{L}\) is defined in (2)._
Proof: See Appendix.
Let us now bound the gradient of the augmented Lagrangian function.
Lemma 7: _[Boundedness of \(\nabla\mathcal{L}_{\rho}\)] Let \(\{(x_{k},\lambda_{k})\}_{k\geq 1}\) be a sequence generated by the Algorithm 1. If Assumption 2 holds, then:_
\[\|\nabla\mathcal{L}_{\rho}(x_{k+1},\lambda_{k+1})\|\leq\Gamma(\beta_{k+1})\|x _{k+1}-x_{k}\|+\Gamma(\beta_{k})\|x_{k}-x_{k-1}\|,\]
_where the function_
\[\Gamma(\beta_{k})=\left(M_{F}+\frac{1}{\rho}\right)\frac{M_{f}L_{F}+M_{F}L_{f} +(3M_{F}+\sigma)\beta_{k}}{\sigma^{2}}+2M_{F}(\rho M_{F}+1).\]
Proof: See Appendix.
Note that if Assumption 4 is satisfied, then \(\{\Gamma(\beta_{k})\}_{k\geq 1}\) is finite. Hence, in the sequel we define this bound as follows:
\[\Gamma_{\max}:=\sup_{k\geq 1}\{\Gamma(\beta_{k})\}. \tag{22}\]
### Global convergence
In this section we prove global convergence for the iterates generated by Algorithm 1 and also convergence rates to an \(\epsilon\) first-order solution. Based on the previous lemmas, we are now ready to present the global asymptotic convergence of the iterates of Algorithm 1.
Theorem 1: _[Limit points are stationary points] Suppose Assumptions 1, 2, 3 and 4 hold with \(\mathcal{S}=\mathcal{S}^{0}_{\hat{\alpha}}\), where \(\hat{\alpha}\) is defined in (16) for any fixed constant \(c_{0}\), and let \(D_{S}\) be the radius of \(\mathcal{S}^{0}_{\hat{\alpha}}\). If \(\rho\) is chosen as in (17) and \(x_{0}\) is chosen to satisfy (15), then any limit point \((x^{*},\lambda^{*})\) of the sequence \(\{(x_{k},\lambda_{k})\}_{k\geq 1}\) generated by Algorithm 1 is a stationary point, i.e., \(\nabla\mathcal{L}_{\rho}(x^{*},\lambda^{*})=0\). Equivalently:_
\[\nabla f(x^{*})+\nabla F(x^{*})^{T}\lambda^{*}=0,\qquad F(x^{*})=0.\]
Proof: Using (14) and the fact that \(\gamma_{k}>0\), we have:
\[\frac{\gamma_{k+1}}{4}\|\Delta x_{k+1}\|^{2}+\frac{\gamma_{k}}{4}\|\Delta x_{ k}\|^{2}\leq P_{k}-P_{k+1}\;\;\;\forall k\geq 1.\]
Let \(k\geq 1\), by summing up the above inequality from \(i=1\) to \(i=k\), we obtain:
\[\sum_{i=1}^{k}\Big{(}\frac{\gamma_{i+1}}{4}\|\Delta x_{i+1}\|^{2}+\frac{\gamma_{i}}{4}\|\Delta x_{i}\|^{2}\Big{)} \leq P_{1}-P_{k+1}\overset{\text{Lemma 6}}{\leq}P_{1}-(\bar{L}-1)\] \[\overset{(20c)}{\leq}4\bar{U}+4c_{0}-3\bar{L}+8\|\lambda_{0}\|^{2}+3-\bar{L}\] \[=\hat{\alpha}-\bar{L}. \tag{23}\]
Since (23) holds for any \(k\geq 1\), we have:
\[\sum_{i=1}^{\infty}\Big{(}\frac{\gamma_{i+1}}{4}\|\Delta x_{i+1}\|^{2}+\frac{ \gamma_{i}}{4}\|\Delta x_{i}\|^{2}\Big{)}<\infty.\]
This, together with the fact that \(\gamma_{k}>0\), yields that:
\[\lim_{k\to\infty}\|\Delta x_{k}\|=0. \tag{24}\]
From (20b), (20a) and the fact that \(\mathcal{S}^{0}_{\hat{\alpha}}\) is compact, it follows that the sequence \(\{(x_{k},\lambda_{k})\}_{k\geq 1}\) is bounded and there exists a convergent subsequence, let us say \(\{(x_{k},\lambda_{k})\}_{k\in\mathcal{K}}\), with the limit \((x^{*},\lambda^{*})\). From Lemma 7 and (22), we have:
\[\|\nabla\mathcal{L}_{\rho}(x^{*},\lambda^{*})\|=\lim_{k\in\mathcal{K}}\|\nabla \mathcal{L}_{\rho}(x_{k},\lambda_{k})\|\leq\Gamma_{\max}\lim_{k\in\mathcal{K}} (\|\Delta x_{k}\|+\|\Delta x_{k-1}\|)=0.\]
Therefore, \(\nabla\mathcal{L}_{\rho}(x^{*},\lambda^{*})=0\), which completes our proof.
For the remainder of this paper, we define:
\[\bar{\gamma}:=\sup_{k\geq 1}\{\gamma_{k}\}\overset{(13),\text{ Assumption 4}}{<}\infty\quad\text{ and }\quad\gamma:=\inf_{k\geq 1}\{\gamma_{k}\}>0. \tag{25}\]
Let us now present one of the main results of this paper, which derives the outer complexity of Algorithm 1 to find an \(\epsilon\) first-order solution of problem (1).
Theorem 2: _[First-order complexity] Consider Algorithm 1 and let \(\{P_{k}\}_{k\geq 1}\) be defined as in (5). If Assumptions 1, 2, 3 and 4 hold with \(\mathcal{S}=\mathcal{S}_{\hat{\alpha}}^{0}\) and \(\hat{\alpha}\) defined in (16) and \(\rho\) is chosen as in (17), then for any \(\epsilon>0\), Algorithm 1 yields an \(\epsilon\) first-order solution of (1) after \(K=16\Gamma_{\max}^{2}\left(\frac{\hat{\alpha}-\bar{L}}{\gamma}\right)\frac{1}{\epsilon^{2}}\) outer iterations._
Proof: Let \(K\geq 1\), then from (23) and (25), we have:
\[\frac{\gamma}{4}\sum_{i=1}^{K}\left(\|\Delta x_{i+1}\|^{2}+\|\Delta x_{i}\|^{ 2}\right)\leq\sum_{i=1}^{K}\left(\frac{\gamma_{i+1}}{4}\|\Delta x_{i+1}\|^{2} +\frac{\gamma_{i}}{4}\|\Delta x_{i}\|^{2}\right)\leq\hat{\alpha}-\bar{L}.\]
Therefore, there exists \(k^{*}\in\{1,...,K\}\) such that:
\[\|\Delta x_{k^{*}+1}\|^{2}+\|\Delta x_{k^{*}}\|^{2}\leq 4\frac{\hat{\alpha}- \bar{L}}{K\gamma}.\]
It implies that: \(\|\Delta x_{k^{*}+1}\|\leq 2\sqrt{\frac{\hat{\alpha}-\bar{L}}{K\gamma}}\quad \text{ and }\quad\|\Delta x_{k^{*}}\|\leq 2\sqrt{\frac{\hat{\alpha}-\bar{L}}{K \gamma}}\). Hence, from Lemma 7 and (22), we get:
\[\|\nabla\mathcal{L}_{\rho}(x_{k^{*}+1},\lambda_{k^{*}+1})\|\leq\Gamma_{\text{ max}}\Big{(}\|\Delta x_{k^{*}+1}\|+\|\Delta x_{k^{*}}\|\Big{)}\leq 4\Gamma_{\text{ max}}\sqrt{\frac{\hat{\alpha}-\bar{L}}{K\gamma}}.\]
It follows that for any \(\epsilon>0\), \(\|\nabla\mathcal{L}_{\rho}(x_{k^{*}+1},\lambda_{k^{*}+1})\|\leq\epsilon\) when \(K\geq 16\Gamma_{\text{max}}{}^{2}\left(\frac{\hat{\alpha}-\bar{L}}{\gamma \epsilon^{2}}\right)\). Consequently, after \(K=16\Gamma_{\text{max}}{}^{2}(\frac{\hat{\alpha}-\bar{L}}{\gamma})\frac{1}{ \epsilon^{2}}\) outer iterations Algorithm 1 yields an \(\epsilon\) first-order solution of (1). This concludes our proof.
### Improved rates under KL
In this section, under the KL condition, we provide better convergence rates for the iterates of Algorithm 1. In particular, we prove that the whole sequence \(\{(x_{k},\lambda_{k})\}_{k\geq 1}\) converges. In order to show these results, we first bound the full gradient \(\nabla P(\cdot)\) (recall that \(P(\cdot)\) is the function defined in (4)).
Lemma 8: _[Boundedness of \(\nabla P\)] Let \(\{(x_{k},\lambda_{k})\}_{k\geq 1}\) be the sequence generated by Algorithm 1. If Assumptions 1, 2, 3 and 4 hold with \(\mathcal{S}=\mathcal{S}_{\hat{\alpha}}^{0}\) and \(\hat{\alpha}\) defined in (16) for any fixed constant \(c_{0}\), \(D_{S}\) is the radius of \(\mathcal{S}_{\hat{\alpha}}^{0}\) and \(\rho\) is chosen as in (17), then we have for any \(k\geq 1\):_
\[\|\nabla P(x_{k+1},\lambda_{k+1},x_{k},\gamma_{k+1})\|\leq\left(\Gamma_{\text{ max}}+D_{S}+2\bar{\gamma}\right)\left(\|x_{k+1}-x_{k}\|+\|x_{k}-x_{k-1}\|\right),\]
_where, \(\Gamma_{\text{max}}\) and \(\bar{\gamma}\) are defined in (22) and (25), respectively._
Proof: See Appendix.
The above lemma directly implies the following:
\[\|\nabla P(x_{k+1},\lambda_{k+1},x_{k},\gamma_{k+1})\|^{2}\leq 2(\Gamma_{\max}+D_{S} +2\bar{\gamma})^{2}\left(\|\Delta x_{k+1}\|^{2}+\|\Delta x_{k}\|^{2}\right). \tag{26}\]
Then, it follows from (26) and (14), that:
\[P_{k+1}-P_{k}\leq-\frac{\gamma}{8(\Gamma_{\max}+D_{S}+2\bar{\gamma})^{2}}\left\| \nabla P(x_{k+1},\lambda_{k+1},x_{k},\gamma_{k+1})\right\|^{2}. \tag{27}\]
Let us denote \(z_{k}=(x_{k},\lambda_{k})\) and \(u_{k}=(x_{k},\lambda_{k},x_{k-1},\gamma_{k})\). Moreover, crit \(P\) denotes the set of critical points of the function \(P(\cdot)\) defined in (4). Furthermore, we denote \(\mathcal{E}_{k}=P_{k}-P^{*}\), where \(P^{*}=\lim_{k\to\infty}P_{k}\) (recall that the sequence \(\{P_{k}\}_{k\geq 1}\) is decreasing and bounded from below according to (14) and Lemma 6, respectively, hence it is convergent). Let us denote the set of limit points of \(\{u_{k}\}_{k\geq 1}\) by:
\[\Omega:=\{u^{*}\;:\;\exists\text{ a convergent subsequence }\{u_{k}\}_{k\in \mathcal{K}}\text{ such that }\lim_{k\in\mathcal{K}}u_{k}=u^{*}\}.\]
Let us now prove the following lemma.
Lemma 9: _Consider Algorithm 1 and let \(\{P_{k}\}_{k\geq 1}\) be defined as in (4). If Assumptions 1, 2, 3 and 4 hold, with \(\mathcal{S}=\mathcal{S}_{\hat{\alpha}}^{0}\) and \(\hat{\alpha}\) defined in (16) for any fixed constant \(c_{0}\), \(D_{S}\) is the radius of \(\mathcal{S}_{\hat{\alpha}}^{0}\) and \(\rho\) is chosen as in (17), then the following statements hold:_
1. \(\Omega\) _is a compact subset of crit_ \(P\) _and_ \(\lim_{k\to\infty}\text{dist}(u_{k},\Omega)=0\)_._
2. _For any_ \(u\in\Omega,\) _we have_ \(P(u)=P^{*}\)_._
3. _For any_ \((x,\lambda,y,\gamma)\in\text{crit }P,\) _we have_ \((x,\lambda)\) _a stationary point of (_1_)._
Proof: See Appendix.
Let us now prove that the sequence \(\left\{\|\Delta x_{k}\|+\|\Delta\lambda_{k}\|\right\}_{k\geq 1}\) has finite length, provided that a KL condition holds.
Lemma 10: _Let \(\{(x_{k},\lambda_{k})\}_{k\geq 1}\) be the sequence generated by Algorithm 1. Let Assumptions 1, 2, 3 and 4 hold, with \(\mathcal{S}=\mathcal{S}_{\hat{\alpha}}^{0}\) and \(\hat{\alpha}\) defined in (16) for any fixed constant \(c_{0}\), and \(D_{S}\) is the radius of \(\mathcal{S}_{\hat{\alpha}}^{0}\). Moreover, assume that \(P(\cdot)\) defined in (4) satisfies the KL property on \(\Omega\). Then, \(\{z_{k}\}_{k\geq 1}=\{(x_{k},\lambda_{k})\}_{k\geq 1}\) satisfies the finite length property, i.e.,_
\[\sum_{k=1}^{\infty}\|\Delta x_{k}\|+\|\Delta\lambda_{k}\|<\infty,\]
_and consequently converges to a stationary point of (1)._
Proof: See Appendix.
Lemma 10 shows that the set of limit points of the sequence \(\{(x_{k},\lambda_{k})\}_{k\geq 1}\) is a singleton. Let us denote its limit by \((x^{*},\lambda^{*})\). We are now ready to present the convergence rates of the whole sequence generated by Algorithm 1 (see also [30] for a similar reasoning).
Theorem 3: _[Convergence rates of \(\{(x_{k},\lambda_{k})\}_{k\geq 1}\)] If Assumptions 1, 2, 3 and 4 hold and \(z^{*}:=(x^{*},\lambda^{*})\) is a limit point of the sequence \(\{z_{k}:=(x_{k},\lambda_{k})\}_{k\geq 1}\) generated by Algorithm 1, then:_
1. _If_ \(P(\cdot)\) _satisfies KL property at_ \(u^{*}:=(x^{*},\lambda^{*},x^{*},\gamma^{*})\)_, where_ \(\gamma^{*}\) _is a limit point of the sequence_ \(\{\gamma_{k}\}_{k\geq 1}\)_, then there exists_ \(k_{1}\geq 1\) _such that for all_ \(k\geq k_{1}\) _we have:_ \[\|z_{k}-z^{*}\|\leq C\max\{\varphi(\mathcal{E}_{k}),\sqrt{\mathcal{E}_{k-1}}\},\] _where_ \(C>0\) _and_ \(\varphi\in\Psi_{\tau}\)_, with_ \(\tau>0\)_, denotes a desingularizing function._
2. _Moreover, if_ \(P(\cdot)\) _satisfies KL property with desingularizing function_ \[\varphi:[0,\tau)\to[0,+\infty),\;\varphi(s)=s^{1-\nu},\;\text{where }\nu\in[0,1),\] _then the following rates hold_ 1. _If_ \(\nu=0\)_, then_ \(z_{k}\) _converges to_ \(z^{*}\) _in a finite number of iterations._ 2. _If_ \(\nu\in(0,\frac{1}{2})\)_, then for all_ \(k\geq k_{1}\)_, we have:_ \[\|z_{k}-z^{*}\|\leq\frac{\sqrt{\mathcal{E}_{k_{1}}}}{\sqrt{(1+\bar{c} \mathcal{E}_{k_{1}}^{2\nu-1})^{k-k_{1}}}},\quad\text{where }\;\bar{c}=\frac{\gamma}{8(\Gamma_{max}+D_{S}+2\bar{\gamma})^{2}}.\]
3. _If_ \(\nu\in(\frac{1}{2},1)\)_, then for all_ \(k>k_{1}\)_, we have:_ \[\|z_{k}-z^{*}\|\leq\left(\frac{1}{\mu(k-k_{1})+\mathcal{E}_{k_{1}}^{1-2\nu}} \right)^{\frac{1-\nu}{2\nu-1}}.\]
Proof:
1. Using Lemma 4, we get: \[\|\Delta\lambda_{k+1}\|^{2} \leq c_{1}(\beta)\|\Delta x_{k+1}\|^{2}+c_{2}(\beta)\|\Delta x_{k} \|^{2}\] \[\leq\max\{c_{1}(\beta),c_{2}(\beta)\}\Big{[}\|\Delta x_{k+1}\|^{2 }+\|\Delta x_{k}\|^{2}\Big{]}.\] (28)
Adding the term \(\|\Delta x_{k+1}\|^{2}+\|\Delta x_{k}\|^{2}\) on both sides in (28), we obtain:
\[\|z_{k+1}-z_{k}\|^{2} =\|\Delta x_{k+1}\|^{2}+\|\Delta\lambda_{k+1}\|^{2}\] \[\leq\|\Delta x_{k+1}\|^{2}+\|\Delta\lambda_{k+1}\|^{2}+\|\Delta x _{k}\|^{2}\] \[\stackrel{{\eqref{eq:L1}}}{{\leq}}\big{(}\max\{c_{1}( \beta),c_{2}(\beta)\}+1\big{)}\Big{[}\|\Delta x_{k+1}\|^{2}+\|\Delta x_{k}\|^ {2}\Big{]}. \tag{29}\]
Considering (25), we can then rewrite (14) as follows:
\[P_{k+1}-P_{k}\stackrel{{\eqref{eq:14}}}{{\leq}}-\frac{ \gamma}{4}\Big{[}\|\Delta x_{k+1}\|^{2}+\|\Delta x_{k}\|^{2}\Big{]}\] \[\stackrel{{\eqref{eq:29}}}{{\leq}}-\frac{\gamma}{4 \big{(}\max\{c_{1}(\beta),c_{2}(\beta)\}+1\big{)}}\|z_{k+1}-z_{k}\|^{2}. \tag{30}\]
Based on our choice of \(\rho\) and \(\gamma_{k}\), the sequence \(\{P_{k}\}_{k\geq 1}\) is monotonically decreasing, see (14), and consequently \(\{\mathcal{E}_{k}\}_{k\geq 1}\) is monotonically decreasing. Using (30) and the fact that \(\{\mathcal{E}_{k}\}_{k\geq 1}\) is non-negative, we have for all \(k\geq 1\):
\[\|\Delta x_{k+1}\|+\|\Delta\lambda_{k+1}\|\leq 2\sqrt{\frac{2\big{(}\max\{c_{1}( \beta),c_{2}(\beta)\}+1\big{)}}{\gamma}}\sqrt{\mathcal{E}_{k}}. \tag{31}\]
Without loss of generality, we assume that \(\gamma^{*}\) is unique. Since \(P_{k}\to P^{*}\), \(u_{k}\to u^{*}\) and \(P(\cdot)\) satisfies the KL property at \(u^{*}\), then there exists \(k_{1}=k_{1}(\epsilon,\tau)\geq 1\) such that \(\forall k>k_{1}\), we have \(\|u_{k}-u^{*}\|\leq\epsilon\) and \(P^{*}<P_{k}<P^{*}+\tau\), and the following KL property holds:
\[\varphi^{\prime}(\mathcal{E}_{k})\|\nabla P(x_{k},\lambda_{k},x_{k-1},\gamma_ {k})\|\geq 1. \tag{32}\]
Since \(\varphi\) is a concave function, we have \(\varphi(\mathcal{E}_{k})-\varphi(\mathcal{E}_{k+1})\geq\varphi^{\prime}(\mathcal{E}_{k})(\mathcal{E}_{k}-\mathcal{E}_{k+1})\).
Therefore, from (30) and (32), we get:
\[\|z_{k+1}-z_{k}\|^{2}\] \[\leq\varphi^{\prime}(\mathcal{E}_{k})\|z_{k+1}-z_{k}\|^{2}\| \nabla P(x_{k},\lambda_{k},x_{k-1},\gamma_{k})\|\] \[\leq\frac{4\big{(}\max\{c_{1}(\beta),c_{2}(\beta)\}+1\big{)}}{ \gamma}\varphi^{\prime}(\mathcal{E}_{k})(\mathcal{E}_{k}-\mathcal{E}_{k+1}) \|\nabla P(x_{k},\lambda_{k},x_{k-1},\gamma_{k})\|\] \[\leq\frac{4\big{(}\max\{c_{1}(\beta),c_{2}(\beta)\}+1\big{)}}{ \gamma}\Big{(}\varphi(\mathcal{E}_{k})-\varphi(\mathcal{E}_{k+1})\Big{)}\| \nabla P(x_{k},\lambda_{k},x_{k-1},\gamma_{k})\|.\]
One can notice the following trivial inequality: for any \(a,b,c,d\geq 0\), if \(a^{2}+b^{2}\leq c\times d\), then \((a+b)^{2}\leq 2a^{2}+2b^{2}\leq 2c\times d\leq c^{2}+d^{2}\leq(c+d)^{2}\). Using this relation and the fact that \(\|z_{k+1}-z_{k}\|^{2}=\|\Delta x_{k+1}\|^{2}+\|\Delta\lambda_{k+1}\|^{2}\), we have for any \(\theta>0\):
\[\|\Delta x_{k+1}\|+\|\Delta\lambda_{k+1}\|\leq \frac{4\big{(}\max\{c_{1}(\beta),c_{2}(\beta)\}+1\big{)}\theta}{ \gamma}\Big{(}\varphi(\mathcal{E}_{k})-\varphi(\mathcal{E}_{k+1})\Big{)}\] \[+\frac{1}{\theta}\|\nabla P(x_{k},\lambda_{k},x_{k-1},\gamma_{k} )\|. \tag{33}\]
Furthermore, we have:
\[\|\nabla P(x_{k},\lambda_{k},x_{k-1},\gamma_{k})\|\leq \|\nabla\mathcal{L}_{\rho}(x_{k},\lambda_{k})\|+2\bar{\gamma}\|x _{k}-x_{k-1}\|\] \[\leq (\Gamma_{\max}+D_{S}+2\bar{\gamma})\left(\|\Delta x_{k}\|+\| \Delta\lambda_{k}\|\right),\]
where the last inequality follows from the definition of \(\mathcal{L}_{\rho}\) and the properties of the derivative together with Lemma 7 and (22). Then, (33) becomes:
\[\|\Delta x_{k+1}\|+\|\Delta\lambda_{k+1}\|\leq \frac{4\big{(}\max\{c_{1}(\beta),c_{2}(\beta)\}+1\big{)}\theta}{ \gamma}\Big{(}\varphi(\mathcal{E}_{k})-\varphi(\mathcal{E}_{k+1})\Big{)}\] \[+\frac{\Gamma_{\max}+D_{S}+2\bar{\gamma}}{\theta}\Big{(}\|\Delta x _{k}\|+\|\Delta\lambda_{k}\|\Big{)}.\]
Let us now choose \(\theta>0\) such that \(0<\frac{\Gamma_{\max}+D_{S}+2\bar{\gamma}}{\theta}<1\) and define a parameter \(\delta_{0}\) as \(\delta_{0}=1-\frac{\Gamma_{\max}+D_{S}+2\bar{\gamma}}{\theta}>0\). Then, summing up the above inequality over \(k>k_{1}\), we get:
\[\sum_{k\geq k_{1}}\|\Delta x_{k+1}\|+\|\Delta\lambda_{k+1}\|\leq \frac{4\big{(}\max\{c_{1}(\beta),c_{2}(\beta)\}+1\big{)}\theta}{ \gamma\delta_{0}}\varphi(\mathcal{E}_{k_{1}})\] \[+\frac{\Gamma_{\max}+D_{S}+2\bar{\gamma}}{\theta\delta_{0}}\Big{(} \|\Delta x_{k_{1}}\|+\|\Delta\lambda_{k_{1}}\|\Big{)}.\]
Hence, using the triangle inequality, we get for any \(k\geq k_{1}\):
\[\|z_{k}-z^{*}\|\leq\sum_{l\geq k}\|z_{l}-z_{l+1}\|\leq\sum_{l\geq k }\|\Delta x_{l+1}\|+\|\Delta\lambda_{l+1}\|\] \[\leq\frac{4\big{(}\max\{c_{1}(\beta),c_{2}(\beta)\}+1\big{)} \theta}{\gamma\delta_{0}}\varphi(\mathcal{E}_{k})+\frac{\Gamma_{\max}+D_{S}+ 2\bar{\gamma}}{\theta\delta_{0}}\Big{(}\|\Delta x_{k}\|+\|\Delta\lambda_{k}\| \Big{)}.\]
Further, using (31), it follows that:
\[\|z_{k}-z^{*}\|\leq\frac{4\big{(}\max\{c_{1}(\beta),c_{2}(\beta) \}+1\big{)}\theta}{\gamma\delta_{0}}\varphi(\mathcal{E}_{k})\] \[\qquad\qquad+\frac{2(\Gamma_{\max}+D_{S}+2\bar{\gamma})}{\theta \delta_{0}}\sqrt{\frac{2\big{(}\max\{c_{1}(\beta),c_{2}(\beta)\}+1\big{)}}{ \gamma}}\sqrt{\mathcal{E}_{k-1}}\] \[\leq C\max\{\varphi(\mathcal{E}_{k}),\sqrt{\mathcal{E}_{k-1}}\},\]
where
\[C=\max\Bigg{\{}\frac{4\big{(}\max\{c_{1}(\beta),c_{2}(\beta)\}+1 \big{)}\theta}{\gamma\delta_{0}},\] \[\qquad\qquad\frac{2(\Gamma_{\max}+D_{S}+2\bar{\gamma})}{\theta \delta_{0}}\sqrt{\frac{2\big{(}\max\{c_{1}(\beta),c_{2}(\beta)\}+1\big{)}}{ \gamma}}\Bigg{\}}.\]
2. Let \(\nu\in[0,1)\) and for all \(s\in[0,\tau),\varphi(s)=s^{1-\nu}\) and \(\varphi^{\prime}(s)=(1-\nu)s^{-\nu}\). It follows that \(\forall k\geq k_{1}\), we have: \[\|z_{k}-z^{*}\|\leq C\max\{\mathcal{E}_{k}^{1-\nu},\sqrt{\mathcal{E}_{k-1}}\}.\] (34)
Furthermore, (32) yields:
\[{\mathcal{E}_{k}}^{\nu}\leq\|\nabla P(x_{k},\lambda_{k},x_{k-1},\gamma_{k})\|\ \ \ \ \forall k\geq k_{1}.\]
Moreover, from (27), we have for any \(k\geq 1\):
\[\|\nabla P(x_{k},\lambda_{k},x_{k-1},\gamma_{k})\|^{2}\leq\frac{8(\Gamma_{ \max}+D_{S}+2\bar{\gamma})^{2}}{\gamma}({\mathcal{E}_{k-1}}-{\mathcal{E}_{k}}).\]
Hence,
\[{\mathcal{E}_{k}}^{2\nu}\leq\frac{8(\Gamma_{\max}+D_{S}+2\bar{\gamma})^{2}}{ \gamma}({\mathcal{E}_{k-1}}-{\mathcal{E}_{k}})\ \ \ \ \forall k>k_{1}.\]
Setting \(\bar{c}=\frac{\gamma}{8(\Gamma_{\max}+D_{S}+2\bar{\gamma})^{2}}>0\), we get the recurrence
\[\bar{c}{\mathcal{E}_{k}}^{2\nu}\leq{\mathcal{E}_{k-1}}-{\mathcal{E}_{k}}\ \ \ \ \forall k>k_{1}.\]
1. Let \(\nu=0\). If \({\mathcal{E}_{k}}>0\) for all \(k>k_{1}\), we have \(\bar{c}\leq{\mathcal{E}_{k-1}}-{\mathcal{E}_{k}}\). As \(k\) goes to infinity, the right-hand side approaches zero. Then, \(0<\bar{c}\leq 0\), which is a contradiction. Hence, there exists \(k>k_{1}\) such that \({\mathcal{E}_{k}}=0\). Then, \({\mathcal{E}_{k}}\to 0\) in a finite number of steps and, from (34), \(z_{k}\to z^{*}\) in a finite number of steps.
2. Let \(\nu\in(0,\frac{1}{2})\). Then, \(2\nu-1<0\). Let \(k>k_{1}\). Since \(\{{\mathcal{E}_{i}}\}_{i\geq k_{1}}\) is monotonically decreasing, then \({\mathcal{E}_{i}}\leq{\mathcal{E}_{k_{1}}}\) for any \(i\in\{k_{1}+1,k_{1}+2,...,k\}\) and \[\bar{c}{\mathcal{E}_{k_{1}}}^{2\nu-1}{\mathcal{E}_{k}}\leq{\mathcal{E}_{k-1}} -{\mathcal{E}_{k}}\ \ \ \ \forall k>k_{1}.\] Rearranging this, we get for all \(k>k_{1}\): \[{\mathcal{E}_{k}}\leq\frac{{\mathcal{E}_{k-1}}}{1+\bar{c}{\mathcal{E}_{k_{1}} }^{2\nu-1}}\leq\frac{{\mathcal{E}_{k-2}}}{(1+\bar{c}{\mathcal{E}_{k_{1}}}^{2 \nu-1})^{2}}\leq...\leq\frac{{\mathcal{E}_{k_{1}}}}{(1+\bar{c}{\mathcal{E}_{ k_{1}}}^{2\nu-1})^{k-k_{1}}}.\] Then, we have \(\max\{{\mathcal{E}_{k}}^{1-\nu},\sqrt{{\mathcal{E}_{k-1}}}\}=\sqrt{{ \mathcal{E}_{k-1}}}\). It then follows that: \[\|z_{k}-z^{*}\|\leq\frac{\sqrt{{\mathcal{E}_{k_{1}}}}}{\sqrt{(1+\bar{c}{ \mathcal{E}_{k_{1}}}^{2\nu-1})^{k-k_{1}}}},\]
3. Let \(\nu\in(1/2,1)\). Then, we have: \[\bar{c}\leq({\mathcal{E}_{k-1}}-{\mathcal{E}_{k}}){\mathcal{E}_{k}}^{-2\nu} \ \ \ \ \forall k>k_{1}.\] (35) Let \(h:{\mathbb{R}}_{+}\to{\mathbb{R}}\) be defined as \(h(s)=s^{-2\nu}\) for any \(s\in{\mathbb{R}}_{+}\). It is clear that \(h\) is monotonically decreasing, since \(h^{\prime}(s)=-2\nu s^{-(1+2\nu)}<0\) for all \(s\in{\mathbb{R}}_{+}\). Since \({\mathcal{E}_{k}}\leq{\mathcal{E}_{k-1}}\) for all \(k>k_{1}\), we have \(h({\mathcal{E}_{k-1}})\leq h({\mathcal{E}_{k}})\) for all \(k>k_{1}\). We consider two cases:
**Case 1**: Suppose there exists \(r_{0}\in(1,+\infty)\) such that \(h(\mathcal{E}_{k})\leq r_{0}h(\mathcal{E}_{k-1})\) for all \(k>k_{1}\). Then, from (35) we get:
\[\bar{c}\leq r_{0}(\mathcal{E}_{k-1}-\mathcal{E}_{k})h(\mathcal{E}_ {k-1})\leq r_{0}h(\mathcal{E}_{k-1})\int_{\mathcal{E}_{k}}^{\mathcal{E}_{k-1}} 1\,ds\] \[\leq r_{0}\int_{\mathcal{E}_{k}}^{\mathcal{E}_{k-1}}h(s)\,ds=r_{0 }\int_{\mathcal{E}_{k}}^{\mathcal{E}_{k-1}}s^{-2\nu}\,ds=\frac{r_{0}}{1-2\nu}( \mathcal{E}_{k-1}^{\phantom{k-1}1-2\nu}-\mathcal{E}_{k}^{\phantom{k-1}1-2\nu}).\]
Since \(\nu>\frac{1}{2}\), it follows that:
\[0<\frac{\bar{c}(2\nu-1)}{r_{0}}\leq\mathcal{E}_{k}^{\phantom{k}1-2\nu}- \mathcal{E}_{k-1}^{\phantom{k-1}1-2\nu}.\]
Let us define \(\hat{c}=\frac{\bar{c}(2\nu-1)}{r_{0}}\) and \(\hat{\nu}=1-2\nu<0\). We then get:
\[0<\hat{c}\leq\mathcal{E}_{k}^{\phantom{k}\hat{\nu}}-\mathcal{E}_{k-1}^{ \phantom{k-1}\hat{\nu}}\;\;\;\;\;\forall k>k_{1}. \tag{36}\]
**Case 2**: Suppose instead that \(h(\mathcal{E}_{k})>r_{0}h(\mathcal{E}_{k-1})\) for all \(k>k_{1}\), for some \(r_{0}\in(1,+\infty)\). We then have \(\mathcal{E}_{k}^{-2\nu}\geq r_{0}\mathcal{E}_{k-1}^{-2\nu}\). This leads to
\[q\mathcal{E}_{k-1}\geq\mathcal{E}_{k},\]
where \(q=r_{0}^{-\frac{1}{2\nu}}\in(0,1)\). Since \(\hat{\nu}=1-2\nu<0\) we have \(q^{\hat{\nu}}\mathcal{E}_{k-1}^{\phantom{k-1}\hat{\nu}}\leq\mathcal{E}_{k}^{ \phantom{k}\hat{\nu}}\) and then, it follows that:
\[(q^{\hat{\nu}}-1)\mathcal{E}_{k-1}^{\phantom{k-1}\hat{\nu}}\leq\mathcal{E}_{k -1}^{\phantom{k-1}\hat{\nu}}-\mathcal{E}_{k}^{\phantom{k}\hat{\nu}}.\]
Since \(q^{\hat{\nu}}-1>0\) and \(\mathcal{E}_{k}\to 0^{+}\) as \(k\to\infty\), there exists \(\tilde{c}\) such that \((q^{\hat{\nu}}-1)\mathcal{E}_{k-1}^{\phantom{k-1}\hat{\nu}}\geq\tilde{c}\) for all \(k>k_{1}\). Therefore, we obtain:
\[0<\tilde{c}\leq\mathcal{E}_{k}^{\phantom{k}\hat{\nu}}-\mathcal{E}_{k-1}^{ \phantom{k-1}\hat{\nu}}\;\;\;\;\forall k>k_{1}. \tag{37}\]
By choosing \(\mu=\min\{\hat{c},\tilde{c}\}>0\), one can combine (36) and (37) to obtain
\[0<\mu\leq\mathcal{E}_{k}^{\phantom{k}\hat{\nu}}-\mathcal{E}_{k-1}^{\phantom {k-1}\hat{\nu}}\;\;\;\;\forall k>k_{1}.\]
Summing the above inequality from \(k_{1}+1\) to some \(k>k_{1}\) gives
\[\mu(k-k_{1})+\mathcal{E}_{k_{1}}^{\phantom{k}\hat{\nu}}\leq\mathcal{E}_{k}^{ \phantom{k}\hat{\nu}}.\]
Hence,
\[\mathcal{E}_{k}\leq(\mu(k-k_{1})+\mathcal{E}_{k_{1}}^{\hat{\nu}})^{\frac{1}{\hat{\nu}}}=(\mu(k-k_{1})+\mathcal{E}_{k_{1}}^{1-2\nu})^{\frac{1}{1-2\nu}}.\]
Since \(\nu\in(\frac{1}{2},1)\), then \(\max\{\mathcal{E}_{k-1}^{\phantom{k-1}1-\nu},\sqrt{\mathcal{E}_{k-1}}\}= \mathcal{E}_{k-1}^{\phantom{k-1}1-\nu}\). Then, (34) becomes:
\[\|z_{k}-z^{*}\|\leq\left(\frac{1}{\mu(k-k_{1})+\mathcal{E}_{k_{1}}^{1-2\nu}} \right)^{\frac{1-\nu}{2\nu-1}},\;\;\;\;\;\forall k>k_{1}.\]
This concludes our proof.
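To make the three regimes in Theorem 3.3 concrete, the short Python sketch below iterates the worst-case recurrence \(\bar{c}\,\mathcal{E}_{k}^{2\nu}=\mathcal{E}_{k-1}-\mathcal{E}_{k}\) appearing in the proof and reports how fast \(\mathcal{E}_{k}\) decays. The constants \(\bar{c}\) and \(\mathcal{E}_{0}\) are hypothetical and the snippet is purely illustrative; it is not part of the analysis above.

```python
import numpy as np
from scipy.optimize import brentq

# Illustrative only: iterate c_bar * E_k^{2*nu} = E_{k-1} - E_k (with equality) and
# observe the decay of E_k for the three KL-exponent regimes of Theorem 3.3.
def decay(nu, c_bar=0.5, E0=1.0, iters=100):
    E = [E0]
    for _ in range(iters):
        prev = E[-1]
        if prev == 0.0:
            E.append(0.0)
        elif nu == 0.0:
            E.append(max(prev - c_bar, 0.0))      # regime 1: finite termination
        else:
            # solve e + c_bar * e^{2*nu} = prev for e in (0, prev]
            E.append(brentq(lambda e: e + c_bar * e ** (2 * nu) - prev, 0.0, prev))
    return np.array(E)

for nu in (0.0, 0.3, 0.8):
    E = decay(nu)
    print(f"nu={nu}: E_5={E[5]:.2e}, E_20={E[20]:.2e}, E_100={E[100]:.2e}")
# nu = 0.0 -> E_k hits exactly zero after finitely many steps (regime 1);
# nu = 0.3 -> at least geometric decay, consistent with the linear rate of regime 2;
# nu = 0.8 -> slow sublinear decay of order k^{1/(1-2*nu)}, consistent with regime 3.
```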
## 5 Numerical results
In this section we numerically compare Algorithm 1 (Linearized AL) with the SCP algorithm [20] and IPOPT [28] on quadratic problems with quadratic equality constraints (QPQCs). Note that the Proximal Augmented Lagrangian method presented in [29] is very slow compared to these three methods, hence we decided not to include it in our comparisons. The simulations are implemented in MATLAB and executed on a PC (CPU 2.70GHz, 16GB RAM). For the implementation of our method, we choose \(\beta_{k}\) constant and equal to 1 and the penalty parameter \(\rho=10^{3}\), unless specified otherwise. Since one cannot guarantee that the SCP iterates converge to a KKT point, we choose the following stopping criterion: we stop the algorithms when the difference between two consecutive values of the objective function is less than a tolerance \(\epsilon_{1}=10^{-3}\) and the norm of the constraints is less than a tolerance \(\epsilon_{2}=10^{-5}\). The numerical results are illustrated in Table 1 and Figure 1.
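To complement the implementation details above, the following minimal Python/NumPy sketch shows one way to realize the iteration studied in this paper: linearize the constraints inside the augmented Lagrangian, add the proximal term \(\frac{\beta}{2}\|x-x_{k}\|^{2}\), solve the resulting quadratic subproblem, and update the multipliers as \(\lambda_{k+1}=\lambda_{k}+\rho\big{(}F(x_{k})+\nabla F(x_{k})\Delta x_{k+1}\big{)}\), together with the stopping rule described above. It is not the MATLAB code used for the experiments; the synthetic QCQP instance is hypothetical, and convergence on such a random instance is plausible but not guaranteed by this sketch.

```python
import numpy as np

# Illustrative sketch of the linearized augmented Lagrangian iteration on a small synthetic
# QCQP with quadratic equality constraints (hypothetical data, convex objective for simplicity).
rng = np.random.default_rng(0)
n, m = 10, 3
M = rng.standard_normal((n, n)); Q = M @ M.T + np.eye(n)          # objective: 0.5 x'Qx + q'x
q = rng.standard_normal(n)
A = [0.5 * (S + S.T) for S in rng.standard_normal((m, n, n))]      # constraint Hessians
b = [rng.standard_normal(n) for _ in range(m)]
x_feas = rng.standard_normal(n)                                    # make the constraints feasible
c = [0.5 * x_feas @ A[i] @ x_feas + b[i] @ x_feas for i in range(m)]

f = lambda x: 0.5 * x @ Q @ x + q @ x
F = lambda x: np.array([0.5 * x @ A[i] @ x + b[i] @ x - c[i] for i in range(m)])
J = lambda x: np.array([A[i] @ x + b[i] for i in range(m)])        # Jacobian of F

rho, beta = 1e3, 1.0                                               # parameters used in this section
x, lam, f_prev = x_feas + 0.1 * rng.standard_normal(n), np.zeros(m), np.inf
for k in range(1, 501):
    Fk, Jk = F(x), J(x)
    # x-update: minimizer of the linearized augmented Lagrangian plus (beta/2)||x - x_k||^2
    H = Q + rho * Jk.T @ Jk + beta * np.eye(n)
    rhs = -(q + Jk.T @ (lam + rho * Fk)) + rho * Jk.T @ (Jk @ x) + beta * x
    x_new = np.linalg.solve(H, rhs)
    # multiplier update: lam_{k+1} = lam_k + rho * (F(x_k) + J(x_k) (x_{k+1} - x_k))
    lam = lam + rho * (Fk + Jk @ (x_new - x))
    x = x_new
    # stopping rule from this section: |f_k - f_{k-1}| < 1e-3 and ||F(x_k)|| < 1e-5
    if abs(f(x) - f_prev) < 1e-3 and np.linalg.norm(F(x)) < 1e-5:
        break
    f_prev = f(x)
print(f"stopped after {k} iterations: f = {f(x):.4f}, ||F(x)|| = {np.linalg.norm(F(x)):.2e}")
```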
In Table 1, we compare the number of iterations, cpu time (sec), objective value and feasibility violation for Linearized AL, SCP and IPOPT on QPQCs from the CUTEst collection (the first part of the table) and on randomly generated QCQPs (the last 10 test cases in Table 1). Note that for the test cases from CUTEst our algorithm, Linearized AL, is the best in terms of cpu time. The "-" indicates that an algorithm is unable to solve the corresponding problem in less than two hours. As one can see from Table 1, IPOPT could not find solutions of some random problems within the fixed time. Note that the last five random cases (in the second part of the table) have sparse data, while the first five random problems have dense data. This explains why the algorithms are faster on the sparse problems than on the first five (dense) ones in this part of the table, even though the dense problems have much smaller dimensions. Table 1 shows that our method is faster than SCP and IPOPT (on average, it is 4 times faster than SCP and 10 times faster than IPOPT). For the optimal objective value, we can see that sometimes the Linearized AL and SCP methods obtain a worse optimal value than IPOPT. However, by increasing the penalty parameter \(\rho\), our method can also recover the same optimal value as IPOPT, whereas this is not the case for SCP.
Figure 1 shows the performance profiles for computation time (left) and number of iterations (right) for the 3 algorithms. In these profiles, the vertical axis \(P(r_{p,s}\leq\tau)\) (respectively, \(P(r_{p,s}\leq k)\)) represents the proportion of problems in the numerical experiments for which \(r_{p,s}\) does not exceed \(\tau\) (respectively, \(k\)), where \(r_{p,s}\) is the ratio of the computational time (number of iterations) that solver \(s\) takes to solve problem \(p\) to the shortest computational time (number of iterations) among the three algorithms on problem \(p\). It is clear from the computational time profile, Figure 1 (left), that the proposed algorithm, Linearized AL, approaches 1 faster than SCP and IPOPT. However, for the number of iterations this is not always the case. From these preliminary experiments we can conclude that our new algorithm, Linearized AL, is an efficient method to solve optimization problems with nonlinear equality
\begin{table}
\begin{tabular}{|c|c c|c c|c c|} \hline (n,m) / Alg & \multicolumn{2}{c|}{Linearized AL} & \multicolumn{2}{c|}{SCP} & \multicolumn{2}{c|}{IPOPT} \\ \cline{2-7} & \# iter & cpu & \# iter & cpu & \# iter & cpu \\ & \(f^{*}\) & \(\|F\|\) & \(f^{*}\) & \(\|F\|\) & \(f^{*}\) & \(\|F\|\) \\ \hline OPTCTRL3 & 5 & **0.12** & 5 & 0.17 & 7 & 7.40 \\ (119,80) & 2048.01 & 5.42e-14 & 2048.01 & 4.18e-12 & 2048.01 & 1.84e-08 \\ \hline OPTCTRL3; \(\rho=10^{3}\) & 15 & **1.38** & 16 & 3.34 & 10 & 11.99 \\ (1199,800) & 19877.75 & 1.10e-08 & 19877.75 & 1.56e-09 & 18460.22 & 6.33e-09 \\ \hline OPTCTRL3; \(\rho=7\times 10^{5}\) & 87 & 9.44 & 16 & **3.34** & 10 & 11.99 \\ (1199,800) & 18460.23 & 4.09e-07 & 19877.75 & 1.56e-09 & 18460.22 & 6.33e-09 \\ \hline OPTCTRL3 & 44 & **15.44** & 44 & 209.41 & 11 & 26.95 \\ (4499,3000) & 74465.03 & 8.43e-09 & 74465.03 & 2.82e-09 & 74465.03 & 1.09e-08 \\ \hline DTOC4 & 3 & **0.05** & 3 & 0.24 & 3 & 7.61 \\ (299,198) & 2.95 & 3.65e-07 & 2.95 & 2.75e-07 & 2.94 & 1.02e-08 \\ \hline DTOC4 & 3 & **0.35** & 3 & 0.91 & 3 & 12.45 \\ (1497,998) & 2.88 & 4.26e-07 & 2.88 & 7.85e-09 & 2.88 & 1.20e-08 \\ \hline DTOC4 & 3 & **4.35** & 3 & 16.74 & 3 & 29.02 \\ (4497,2998) & 2.87 & 2.06e-07 & 2.87 & 4.87e-10 & 2.87 & 3.66e-08 \\ \hline DTOC5; \(\rho=10^{3}\) & 4 & **0.33** & 4 & 0.51 & 3 & 12.06 \\ (998,499) & 1.61 & 1.67e-07 & 1.61 & 6.75e-08 & 1.53 & 7.76e-07 \\ \hline DTOC5; \(\rho=2\times 10^{4}\) & 120 & 7.63 & 4 & **0.51** & 3 & 12.06 \\ (998,499) & 1.54 & 5.79e-07 & 1.61 & 6.75e-08 & 1.53 & 7.76e-07 \\ \hline DTOC5 & 5 & **53.05** & 4 & 138.74 & 3 & 75.25 \\ (9998,4999) & 1.62 & 3.92e-08 & 1.62 & 1.68e-10 & 1.53 & 2.49e-07 \\ \hline ORTHREGA & 37 & **0.91** & 39 & 1.73 & 76 & 10.14 \\ (517,256) & 1414.05 & 1.23e-06 & 1664.80 & 1.24e-06 & 1414.05 & 6.19e-10 \\ \hline ORTHREGA & 53 & **13.27** & 67 & 31.78 & 14 & 23.99 \\ (2053,1024) & 5661.43 & 7.90e-07 & 6654.78 & 2.07e-06 & 5661.43 & 9.25e-07 \\ \hline ORTHREGA & 38 & **65.72** & 193 & 3798.87 & 20 & 71.78 \\ (8197,4096) & 22647.84 & 1.83e-07 & 22647.84 & 3.12e-06 & 22647.84 & 1.86e-09 \\ \hline \hline (10,9) & 27 & **0.07** & 34 & 0.14 & 7 & 0.21 \\ (20,13) & -2.61 & 6.01e-06 & -2.61 & 6.01e-06 & -2.61 & 3.85e-15 \\ \hline \multirow{2}{*}{(50,43)} & 62 & **3.90** & 61 & 4.68 & 12 & 87.75 \\ & -12.95 & 8.43e-06 & -12.95 & 9.86e-06 & -1.98 & 2.92e-14 \\ \hline \multirow{2}{*}{(100,91)} & 629 & 22.53 & 204 & **6.42** & 21 & 143.74 \\ & -742.21 & 4.16e-06 & -300.72 & 9.70e-06 & -113.52 & 5.44e-12 \\ \hline \multirow{2}{*}{(150,140)} & 1165 & 75.53 & 505 & **24.89** & 37 & 417.87 \\ & -1027.24 & 1.51e-06 & -358.67 & 9.79e-06 & -231.26 & 3.38e-11 \\ \hline \multirow{2}{*}{(1000,500)} & 17 & **1.37** & 25 & 1.76 & 5 & 140.34 \\ & -0.27 & 3.13e-06 & -0.27 & 1.76e-08 & -0.27 & 1.36e-10 \\ \hline \multirow{2}{*}{(1000,500)} & 7 & **3.67** & 7 & 11.64 & - & - \\ & 1.46 & 9.48e-07 & 1.46 & 3.40e-07 & - & - \\ \hline \multirow{2}{*}{(10000,500)} & 7 & **34.12** & 7 & 73.43 & - & - \\ & 2.47 & 6.59e-06 & 2.47 & 5.87e-06 & - & - \\ \hline \multirow{2}{*}{(10000,1000)} & 7 & **76.21** & 7 & 134.07 & - & - \\ & 33.81 & 1.59e-06 & 33.81 & 1.54e-09 & - & - \\ \hline \multirow{2}{*}{(300000,700)} & 7 & **164.54** & 7 & 178.35 & - & - \\ & 33.59 & 2.55e-06 & 33.59 & 2.44e-07 & - & - \\ \hline \end{tabular}
\end{table}
Table 1: Comparison between Linearized AL, SCP and IPOPT on QCQPs from CUTEst (top) and randomly generated QCQPs (bottom).
constraints, usually much faster than well established solvers such as IPOPT and SCP.
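As a side note, the performance profiles in Figure 1 follow the standard construction recalled above: for each problem \(p\) and solver \(s\), \(r_{p,s}\) is the cost of solver \(s\) divided by the best cost among the three solvers on that problem, and the profile reports the fraction of problems with \(r_{p,s}\leq\tau\). A minimal Python sketch of this computation is given below; the small timing matrix reuses only a few of the cpu times from Table 1 and is meant purely as an illustration.

```python
import numpy as np

# Sketch of a performance profile P(r_{p,s} <= tau), as plotted in Figure 1 (left).
# Rows are problems, columns are the solvers (Linearized AL, SCP, IPOPT); the entries
# below are a small subset of the cpu times reported in Table 1.
times = np.array([[0.12,   0.17,   7.40],   # OPTCTRL3 (119,80)
                  [1.38,   3.34,  11.99],   # OPTCTRL3 (1199,800)
                  [15.44, 209.41,  26.95],  # OPTCTRL3 (4499,3000)
                  [0.05,   0.24,   7.61]])  # DTOC4 (299,198)
ratios = times / times.min(axis=1, keepdims=True)                # r_{p,s}
taus = np.linspace(1.0, ratios.max(), 200)
profiles = np.array([(ratios <= t).mean(axis=0) for t in taus])  # one column per solver
for s, name in enumerate(["Linearized AL", "SCP", "IPOPT"]):
    first = np.argmax(profiles[:, s] >= 1.0)
    print(f"{name:13s}: solves every problem within a factor tau = {taus[first]:.1f} of the best")
```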
## 6 Conclusion
In this paper, we have proposed a linearized augmented Lagrangian method for solving (locally) smooth optimization problems with nonlinear equality constraints. In this method we have linearized the functional constraints within the augmented Lagrangian function and added a regularization term. By dynamically generating the regularization (proximal) parameter, we have proved global asymptotic convergence, a convergence rate to an \(\epsilon\) first-order optimal solution, and improved convergence rates under the KL condition. Moreover, we have numerically shown that the proposed algorithm is efficient by comparing it with two established solvers, SCP and IPOPT.
## Conflict of interest
The authors declare that they have no conflict of interest.
## Data availability
It is not applicable.
Figure 1: Performance profiles for the computation time (left) and the number of iterations (right).
## Appendix
**Proof of Lemma 2** Using the definition of \(\psi\) and \(\nabla_{x}\psi\), we have:
\[\|\nabla_{x}\psi(x,\lambda)-\nabla_{x}\psi(x^{\prime},\lambda^{ \prime})\|\] \[=\|\nabla F(x)^{T}(\lambda+\rho F(x))-\nabla F(x^{\prime})^{T}( \lambda^{\prime}+\rho F(x^{\prime}))\|\] \[=\left\|\left(\nabla F(x)-\nabla F(x^{\prime})\right)^{T}\left( \lambda+\rho F(x)\right)+\nabla F(x^{\prime})^{T}\left(\lambda-\lambda^{ \prime}+\rho\big{(}F(x)-F(x^{\prime})\big{)}\right)\right\|\] \[\leq\|\nabla F(x)-\nabla F(x^{\prime})\|\|\lambda+\rho F(x)\|+\| \nabla F(x^{\prime})\|\left(\|\lambda-\lambda^{\prime}\|+\rho\|F(x)-F(x^{ \prime})\|\right)\] \[\stackrel{{\text{Ass. }2}}{{\leq}}\left(L_{F}\|\lambda+ \rho F(x)\|+\rho M_{F}^{2}\right)\|x-x^{\prime}\|+M_{F}\|\lambda-\lambda^{ \prime}\|\] \[\leq\left(L_{F}\|\lambda+\rho F(x)\|+M_{F}(1+\rho M_{F})\right) \left\|\left(\begin{matrix}x\\ \lambda\end{matrix}\right)-\left(\begin{matrix}x^{\prime}\\ \lambda^{\prime}\end{matrix}\right)\right\|\] \[\leq\sup_{x,\lambda}\{L_{\psi}(x,\lambda)\}\left\|\left( \begin{matrix}x\\ \lambda\end{matrix}\right)-\left(\begin{matrix}x^{\prime}\\ \lambda^{\prime}\end{matrix}\right)\right\|,\]
where \(L_{\psi}(x,\lambda)=L_{F}\|\lambda+\rho F(x)\|+M_{F}(1+\rho M_{F})\). Similarly, using the expression of \(\nabla_{\lambda}\psi\), we get:
\[\|\nabla_{\lambda}\psi(x,\lambda)-\nabla_{\lambda}\psi(x^{\prime},\lambda^{\prime})\| =\|F(x)-F(x^{\prime})\|\] \[\stackrel{{\text{Ass. }2}}{{\leq}}M_{F}\|x-x^{\prime}\|\leq M _{F}\left\|\left(\begin{matrix}x\\ \lambda\end{matrix}\right)-\left(\begin{matrix}x^{\prime}\\ \lambda^{\prime}\end{matrix}\right)\right\|.\]
Then, using basic properties of the Euclidean norm, we obtain:
\[\|\nabla\psi(x,\lambda)-\nabla\psi(x^{\prime},\lambda^{\prime})\|\] \[\leq\|\nabla_{x}\psi(x,\lambda)-\nabla_{x}\psi(x^{\prime}, \lambda^{\prime})\|+\|\nabla_{\lambda}\psi(x,\lambda)-\nabla_{\lambda}\psi(x^ {\prime},\lambda^{\prime})\|\] \[\leq\sup_{(x,\lambda)\in\mathcal{S}\times\lambda}\left\{L_{F}\| \lambda+\rho F(x)\|+M_{F}(2+\rho M_{F})\right\}\left\|\left(\begin{matrix}x-x^ {\prime}\\ \lambda-\lambda^{\prime}\end{matrix}\right)\right\|.\]
This concludes our proof.
**Proof of Lemma 3** Using the optimality of \(x_{k+1}\), we have:
\[\bar{\mathcal{L}}_{\rho}(x_{k+1},\lambda_{k};x_{k})+\frac{\beta_{k+1}}{2}\|x_ {k+1}-x_{k}\|^{2}\leq\bar{\mathcal{L}}_{\rho}(x_{k},\lambda_{k};x_{k})= \mathcal{L}_{\rho}(x_{k},\lambda_{k}).\]
Further, from definitions of \(\bar{\mathcal{L}}_{\rho}\) and \(\mathcal{L}_{\rho}\), we get:
\[f(x_{k+1})+\langle\lambda_{k}\,\ F(x_{k})+\nabla F(x_{k}) \Delta x_{k+1}\rangle+\frac{\rho}{2}\|F(x_{k})+\nabla F(x_{k})\Delta x_{k+1} \|^{2}\] \[\leq f(x_{k})+\langle\lambda_{k}\,\ F(x_{k})\rangle+\frac{\rho}{2}\|F (x_{k})\|^{2}-\frac{\beta_{k+1}}{2}\|\Delta x_{k+1}\|^{2}.\]
Rearranging this inequality, it follows:
\[f(x_{k+1})-f(x_{k})\] \[\leq -\frac{\rho}{2}\langle\nabla F(x_{k})\Delta x_{k+1}\,\ 2F(x_{k})+ \nabla F(x_{k})\Delta x_{k+1}\rangle-\langle\nabla F(x_{k})\Delta x_{k+1}\,\ \lambda_{k}\rangle\] \[-\frac{\beta_{k+1}}{2}\|\Delta x_{k+1}\|^{2}\] \[\leq -\frac{\rho}{2}\|\nabla F(x_{k})\Delta x_{k+1}\|^{2}-\langle \nabla F(x_{k})^{T}(\lambda_{k}+\rho F(x_{k}))\,\ \Delta x_{k+1}\rangle-\frac{\beta_{k+1}}{2}\| \Delta x_{k+1}\|^{2}\] \[\stackrel{{\text{Ass. }2}}{{\leq}}-\frac{\rho}{2}\sigma^{2}\| \Delta x_{k+1}\|^{2}-\frac{\beta_{k+1}}{2}\|\Delta x_{k+1}\|^{2}-\langle \nabla F(x_{k})^{T}(\lambda_{k}+\rho F(x_{k}))\,\ \Delta x_{k+1}\rangle.\]
Using the definitions of \(\mathcal{L}_{\rho}\) and \(\psi\), we further obtain:
\[\mathcal{L}_{\rho}(x_{k+1},\lambda_{k})-\mathcal{L}_{\rho}(x_{k}, \lambda_{k}) =f(x_{k+1})-f(x_{k})+\psi(x_{k+1},\lambda_{k})-\psi(x_{k},\lambda_{k})\] \[\stackrel{{(\ref{eq:2})}}{{\leq}}-\frac{\rho\sigma^{2 }+\alpha\beta_{k+1}}{2}\|x_{k+1}-x_{k}\|^{2}.\]
This proves our statement.
**Proof of Lemma 4** Using the optimality condition for \(x_{k+1}\), we have:
\[\nabla f(x_{k+1}) +\nabla F(x_{k})^{T}\lambda_{k}+\rho\nabla F(x_{k})^{T}\Big{(}F( x_{k})+\nabla F(x_{k})(x_{k+1}-x_{k})\Big{)}\] \[+\beta_{k+1}(x_{k+1}-x_{k})=0.\]
Combining this with the update in Step 6 of Algorithm 1, we get:
\[\nabla f(x_{k+1})+\nabla F(x_{k})^{T}\lambda_{k+1}+\beta_{k+1}(x_{k+1}-x_{k})=0. \tag{38}\]
By replacing \(k\) with \(k-1\), we obtain:
\[\nabla f(x_{k})+\nabla F(x_{k-1})^{T}\lambda_{k}+\beta_{k}(x_{k}-x_{k-1})=0. \tag{39}\]
Subtracting (39) from (38), we have:
\[\nabla f(x_{k+1})-\nabla f(x_{k}) +\nabla F(x_{k})^{T}\Delta\lambda_{k+1}+\big{(}\nabla F(x_{k})- \nabla F(x_{k-1})\big{)}^{T}\lambda_{k}\] \[+\beta_{k+1}\Delta x_{k+1}-\beta_{k}\Delta x_{k}=0\ \ \ \ \ \forall k\geq 1.\]
Further, using Assumption 2, we have:
\[\|\Delta\lambda_{k+1}\|\leq \frac{1}{\sigma}\Big{(}\|\nabla f(x_{k+1})-\nabla f(x_{k})\|+\| \nabla F(x_{k})-\nabla F(x_{k-1})\|\|\lambda_{k}\|\] \[+\beta_{k+1}\|\Delta x_{k+1}\|+\beta_{k}\|\Delta x_{k}\|\Big{)} \ \ \ \ \forall k\geq 1. \tag{40}\]
From (39), we further have:
\[\|\lambda_{k}\|\leq\frac{1}{\sigma}\Big{(}\|\nabla f(x_{k})\|+\beta_{k}\| \Delta x_{k}\|\Big{)}\leq\frac{1}{\sigma}\Big{(}M_{f}+\beta_{k}\|\Delta x_{k} \|\Big{)}. \tag{41}\]
Moreover, from Assumption 2, we have:
\[\|\nabla F(x_{k})-\nabla F(x_{k-1})\|\leq L_{F}\|\Delta x_{k}\|\ \ \ \text{and}\ \ \ \|\nabla F(x_{k})-\nabla F(x_{k-1})\|\leq 2M_{F}.\]
By replacing, the above inequalities and (41) in (40), we obtain:
\[\|\Delta\lambda_{k+1}\|\] \[\leq\frac{1}{\sigma}\left(L_{f}\|\Delta x_{k+1}\|+\frac{M_{f}L_{F }+2M_{F}\beta_{k}}{\sigma}\|\Delta x_{k}\|+\beta_{k+1}\|\Delta x_{k+1}\|+ \beta_{k}\|\Delta x_{k}\|\right)\] \[=\frac{L_{f}+\beta_{k+1}}{\sigma}\|\Delta x_{k+1}\|+\frac{M_{f}L_ {F}+(2M_{F}+\sigma)\beta_{k}}{\sigma^{2}}\|\Delta x_{k}\|. \tag{42}\]
Since \((a+b)^{2}\leq 2a^{2}+2b^{2}\), we finally get (8).
**Proof of Lemma 5** We prove this result using induction arguments. From Lemma 3 for \(k=0\), we have the following:
\[f(x_{1})+\langle\lambda_{0}\,\ F(x_{1})\rangle+\frac{\rho}{2}\|F(x_ {1})\|^{2}+\frac{\rho\sigma^{2}+\alpha\beta_{1}}{2}\|x_{1}-x_{0}\|^{2}\] \[\leq f(x_{0})+\langle\lambda_{0}\,\ F(x_{0})\rangle+\frac{\rho}{2}\|F(x_ {0})\|^{2}\] \[\stackrel{{(\ref{eq:2})}}{{\leq}}\tilde{U}+\frac{1}{ 2\rho}\|\lambda_{0}\|^{2}+c_{0}. \tag{43}\]
For \(i=0\) and \(i=1\), we obtain:
\[f(x_{i})+\frac{\rho_{0}}{2}\|F(x_{i})\|^{2}\overset{(\rho\geq 3 \rho_{0})}{\leq}f(x_{i})+\frac{\rho}{6}\|F(x_{i})\|^{2}\] \[\overset{(\ref{eq:f1}),(\ref{eq:f2})}{\leq}\bar{U}+\frac{1}{2\rho }\|\lambda_{0}\|^{2}+c_{0}-\langle\lambda_{0}\,\ F(x_{i})\rangle-\frac{\rho}{3}\|F(x_{i})\|^{2}\] \[\leq\bar{U}+\frac{1}{2\rho}\|\lambda_{0}\|^{2}+c_{0}-\frac{\rho}{ 3}\|F(x_{i})+\frac{3\lambda_{0}}{2\rho}\|^{2}+\frac{3\|\lambda_{0}\|^{2}}{4\rho}\] \[\leq\bar{U}+c_{0}+\frac{5\|\lambda_{0}\|^{2}}{4\rho}\] \[\overset{(\ref{eq:f1})}{\leq}\bar{U}+c_{0}+\frac{5\|\lambda_{0} \|^{2}}{4\rho}+3(\bar{U}+c_{0}-\bar{L}+\frac{5\|\lambda_{0}\|^{2}}{4\rho})\] \[\overset{(\rho\geq 1)}{\leq}4\bar{U}+4c_{0}-3\bar{L}+5\| \lambda_{0}\|^{2}\leq\hat{\alpha}.\]
Therefore, we find that \(x_{0},x_{1}\in\mathcal{S}^{0}_{\hat{\alpha}}\). Moreover, using the optimality condition (39) for \(k=1\), we have:
\[\nabla f(x_{1})+\nabla F(x_{0})^{T}\lambda_{1}+\beta_{1}(x_{1}-x_{0})=0.\]
Since \(x_{0},x_{1}\in\mathcal{S}^{0}_{\hat{\alpha}}\) and \(D_{S}\) is the diameter of \(\mathcal{S}^{0}_{\hat{\alpha}}\), then from Assumption 2:
\[\|\lambda_{1}\|\leq\frac{1}{\sigma}(\|\nabla f(x_{1})\|+\beta_{1}\|\Delta x_{1 }\|)\leq\frac{1}{\sigma}(M_{f}+\beta D_{S})\leq 2(\rho-\rho_{0}).\]
Furthermore, exploiting the definition of \(P_{k}\) for \(k=1\), we have:
\[P_{1} =\mathcal{L}_{\rho}(x_{1},\lambda_{1})+\frac{\gamma_{1}}{2}\|x_{ 1}-x_{0}\|^{2}\] \[=\mathcal{L}_{\rho}(x_{1},\lambda_{1})-\mathcal{L}_{\rho}(x_{1}, \lambda_{0})+\mathcal{L}_{\rho}(x_{1},\lambda_{0})-\mathcal{L}_{\rho}(x_{0}, \lambda_{0})+\mathcal{L}_{\rho}(x_{0},\lambda_{0})\] \[\quad+\frac{\gamma_{1}}{2}\|x_{1}-x_{0}\|^{2}\] \[\overset{\text{Lemma \ref{lem:f1}}}{\leq}\langle\lambda_{1}- \lambda_{0}\,\ F(x_{1})\rangle-\frac{\rho\sigma^{2}+\alpha\beta_{1}-\gamma_{1}}{2}\|x_{ 1}-x_{0}\|^{2}+\mathcal{L}_{\rho}(x_{0},\lambda_{0})\] \[\overset{(\ref{eq:f1})}{\leq}\frac{\rho}{2}\|F(x_{1})\|^{2}+ \frac{1}{2\rho}\|\lambda_{1}-\lambda_{0}\|^{2}+\mathcal{L}_{\rho}(x_{0}, \lambda_{0}). \tag{44}\]
From (43), it follows that:
\[\frac{\rho}{6}\|F(x_{1})\|^{2}\] \[\leq\bar{U}+\frac{1}{2\rho}\|\lambda_{0}\|^{2}+c_{0}-\frac{\rho} {6}\|F(x_{1})\|^{2}-\langle\lambda_{0}\,\ F(x_{1})\rangle-f(x_{1})-\frac{\rho}{6}\|F(x_{1})\|^{2}\] \[=\bar{U}+\frac{1}{2\rho}\|\lambda_{0}\|^{2}+c_{0}-\frac{\rho}{6} \|F(x_{1})+\frac{3\lambda_{0}}{\rho}\|^{2}+\frac{3}{2\rho}\|\lambda_{0}\|^{2}- f(x_{1})-\frac{\rho}{6}\|F(x_{1})\|^{2}\] \[\overset{(\rho\geq 3\rho_{0})}{\leq}\bar{U}+\frac{1}{2\rho}\| \lambda_{0}\|^{2}+c_{0}+\frac{3}{2\rho}\|\lambda_{0}\|^{2}-f(x_{1})-\frac{\rho _{0}}{2}\|F(x_{1})\|^{2}\] \[\overset{(\ref{eq:f1})}{\leq}\bar{U}+\frac{2}{\rho}\|\lambda_{0} \|^{2}+c_{0}-\bar{L}\overset{(\rho\geq 1)}{\leq}\bar{U}+c_{0}-\bar{L}+2\| \lambda_{0}\|^{2}. \tag{45}\]
Using (45) in (44), we obtain:
\[P_{1} \leq 3(\bar{U}+c_{0}-\bar{L}+2\|\lambda_{0}\|^{2})+\frac{1}{\rho} \|\lambda_{1}\|^{2}+\frac{1}{\rho}\|\lambda_{0}\|^{2}+\bar{U}+c_{0}+\frac{1}{2 \rho}\|\lambda_{0}\|^{2}\] \[\leq 4\bar{U}+4c_{0}-3\bar{L}+8\|\lambda_{0}\|^{2}+2.\]
It then follows that for \(k=1\), (20) is verified. Now, assume that (20) holds for some \(k\geq 1\) (induction hypothesis) and we will prove that it continues to hold for \(k+1\). Using Lemma 3 together with the definition of \(P_{k}\), we have:
\[\mathcal{L}_{\rho}(x_{k+1},\lambda_{k})\leq\mathcal{L}_{\rho}(x_{k},\lambda_{k })\leq P_{k}.\]
By using the expression of \(\mathcal{L}_{\rho}\), we have that:
\[f(x_{k+1})+\langle\lambda_{k}\,\ F(x_{k+1})\rangle+\frac{\rho}{2}\|F(x_{k+1}) \|^{2}\leq P_{k}.\]
Thus, using (7), it follows that:
\[f(x_{k+1})-\frac{\|\lambda_{k}\|^{2}}{2(\rho-\rho_{0})}-\frac{(\rho-\rho_{0}) \|F(x_{k+1})\|^{2}}{2}+\frac{\rho}{2}\|F(x_{k+1})\|^{2}\leq P_{k},\]
which yields the following:
\[f(x_{k+1})+\frac{\rho_{0}}{2}\|F(x_{k+1})\|^{2} \leq P_{k}+\frac{\|\lambda_{k}\|^{2}}{2(\rho-\rho_{0})}\leq P_{k}+1\] \[\leq 4\bar{U}+4c_{0}-3\bar{L}+8\|\lambda_{0}\|^{2}+3=\hat{\alpha},\]
where the last two inequalities are due to the induction hypothesis. Therefore, \(x_{k+1}\in\mathcal{S}^{0}_{\hat{\alpha}}\). Using the same arguments as for \(k=1\), the optimality condition (38) and the fact that \(x_{k}\in\mathcal{S}^{0}_{\hat{\alpha}}\) from the induction hypothesis, it follows that:
\[\|\lambda_{k+1}\|^{2}\leq\frac{(M_{f}+\beta D_{S})^{2}}{\sigma^{2}}\leq 2( \rho-\rho_{0}).\]
Since \(x_{k},x_{k+1}\in\mathcal{S}^{0}_{\hat{\alpha}}\) and from (14), we have:
\[P_{k+1}-P_{k}\leq-\frac{\gamma_{k+1}}{4}\|\Delta x_{k+1}\|^{2}-\frac{\gamma_{ k}}{4}\|\Delta x_{k}\|^{2}\leq 0.\]
Together with the induction hypothesis, we obtain:
\[P_{k+1}\leq P_{k}\leq 4\bar{U}+4c_{0}-3\bar{L}+8\|\lambda_{0}\|^{2}+2.\]
Finally, (20) is proved, which completes our proof.
**Proof of Lemma 6** Using (5), we have:
\[P_{k} \geq f(x_{k})+\frac{\rho}{2}\|F(x_{k})\|^{2}+\langle\lambda_{k}\,\ F(x_{k})\rangle\] \[\geq f(x_{k})+\frac{\rho}{2}\|F(x_{k})\|^{2}-\frac{\|\lambda_{k}\|^{2}}{2(\rho-\rho_{0})}-\frac{\rho-\rho_{0}}{2}\|F(x_{k})\|^{2}\] \[\stackrel{{\eqref{eq:P_k}}}{{\geq}}f(x_{k})+\frac{\rho_{0}}{2}\|F(x_{k})\|^{2}-1\stackrel{{\eqref{eq:P_k}}}{{\geq}}\bar{L}-1.\]
It follows that the sequence \(\{P_{k}\}_{k\geq 1}\) is bounded from below.
**Proof of Lemma 7** Using the optimality condition (38), we have:
\[\nabla f(x_{k+1})=-\nabla F(x_{k})^{T}\lambda_{k+1}-\beta_{k+1}(x_{k+1}-x_{k}).\]
It then follows, by exploiting the definition of \(\mathcal{L}_{\rho}\) and the properties of the derivative, that:
\[\nabla_{x}\mathcal{L}_{\rho}(x_{k+1},\lambda_{k+1})=\nabla f(x_{k +1})+\nabla F(x_{k+1})^{T}\big{(}\lambda_{k+1}+\rho F(x_{k+1})\big{)}\] \[= \big{(}\nabla F(x_{k+1})-\nabla F(x_{k})\big{)}^{T}\lambda_{k+1} +\nabla F(x_{k+1})^{T}\Delta\lambda_{k+1}-\beta_{k+1}\Delta x_{k+1}\] \[+\rho\nabla F(x_{k+1})^{T}\big{(}F(x_{k+1})-F(x_{k})-\nabla F(x_{k })\Delta x_{k+1}\big{)}.\]
Using basic properties of the Euclidean norm, we further get:
\[\|\nabla_{x}\mathcal{L}_{\rho}(x_{k+1},\lambda_{k+1})\|\] \[\leq \|\nabla F(x_{k+1})-\nabla F(x_{k})\|\|\lambda_{k+1}\|+\|\nabla F(x _{k+1})\|\|\Delta\lambda_{k+1}\|\] \[+\beta_{k+1}\|\Delta x_{k+1}\|+\rho\|\nabla F(x_{k+1})\|\|F(x_{k +1})-F(x_{k})-\nabla F(x_{k})\Delta x_{k+1}\|\] \[\stackrel{{\text{Ass.\ref{eq:2.4}},\eqref{eq:2.4}}}{{ \leq}} \frac{M_{f}L_{F}+2M_{F}\beta_{k+1}}{\sigma}\|\Delta x_{k+1}\|+M_{F}\| \Delta\lambda_{k+1}\|\] \[+\beta_{k+1}\|\Delta x_{k+1}\|+2\rho M_{F}^{2}\|\Delta x_{k+1}\|\] \[= \left(\frac{M_{f}L_{F}+(2M_{F}+\sigma)\beta_{k+1}}{\sigma}+2 \rho M_{F}^{2}\right)\|\Delta x_{k+1}\|+M_{F}\|\Delta\lambda_{k+1}\| \tag{46}\] \[\stackrel{{\eqref{eq:2.4}}}{{\leq}} \left(\frac{M_{f}L_{F}+M_{F}L_{f}+(3M_{F}+\sigma)\beta_{k+1}}{ \sigma}+2\rho M_{F}^{2}\right)\|\Delta x_{k+1}\|\] \[+\frac{M_{F}}{\sigma}\ \frac{M_{f}L_{F}+(2M_{F}+\sigma)\beta_{k}}{ \sigma}\|\Delta x_{k}\|\] \[\leq \left(\frac{M_{F}}{\sigma}\ \frac{M_{f}L_{F}+M_{F}L_{f}+(3M_{F}+ \sigma)\beta_{k+1}}{\sigma}+2\rho M_{F}^{2}\right)\|\Delta x_{k+1}\|\] \[+\left(\frac{M_{F}}{\sigma}\ \frac{M_{f}L_{F}+M_{F}L_{f}+(3M_{F}+ \sigma)\beta_{k}}{\sigma}+2\rho M_{F}^{2}\right)\|\Delta x_{k}\|\]
Similarly, we have:
\[\|\nabla_{\lambda}\mathcal{L}_{\rho}(x_{k+1},\lambda_{k+1})\|=\|F (x_{k+1})\|\] \[\leq \|F(x_{k+1})-F(x_{k})-\nabla F(x_{k})\Delta x_{k+1}\|+\frac{1}{ \rho}\|\Delta\lambda_{k+1}\|\] \[\stackrel{{\text{Ass.\ref{eq:2.4}}}}{{\leq}} 2M_{F}\|\Delta x_{k+1}\|+\frac{1}{\rho}\|\Delta\lambda_{k+1}\| \tag{47}\] \[\stackrel{{\eqref{eq:2.4}}}{{\leq}} \left(2M_{F}+\frac{1}{\rho}\frac{L_{f}+\beta_{k+1}}{\sigma} \right)\|\Delta x_{k+1}\|+\frac{1}{\rho}\ \frac{M_{f}L_{F}+(2M_{F}+\sigma)\beta_{k}}{ \sigma^{2}}\|\Delta x_{k}\|\] \[\leq \left(2M_{F}+\frac{1}{\rho}\ \frac{L_{f}\sigma+M_{f}L_{F}+(2M_{F}+ \sigma)\beta_{k+1}}{\sigma^{2}}\right)\|\Delta x_{k+1}\|\] \[+\left(2M_{F}+\frac{1}{\rho}\ \frac{L_{f}\sigma+M_{f}L_{F}+(2M_{F}+ \sigma)\beta_{k}}{\sigma^{2}}\right)\|\Delta x_{k}\|\]
where the first inequality is obtained from the multipliers update in Step 6 of Algorithm 1. Hence, it follows that:
\[\|\nabla\mathcal{L}_{\rho}(x_{k+1},\lambda_{k+1})\| \leq \|\nabla_{x}\mathcal{L}_{\rho}(x_{k+1},\lambda_{k+1})\|+\|\nabla _{\lambda}\mathcal{L}_{\rho}(x_{k+1},\lambda_{k+1})\|\] \[\leq \Gamma(\beta_{k+1})\|x_{k+1}-x_{k}\|+\Gamma(\beta_{k})\|x_{k}-x_{ k-1}\|,\]
where
\[\Gamma(\beta_{k})=\left(M_{F}+\frac{1}{\rho}\right)\frac{M_{f}L_{F}+M_{F}L_{f} +(3M_{F}+\sigma)\beta_{k}}{\sigma^{2}}+2M_{F}(\rho M_{F}+1).\]
This proves our claim.
**Proof of Lemma 8** By exploiting the definition of \(P(\cdot)\) defined in (4), we have that for any \(k\geq 1\):
\[\nabla_{x}P(x,\lambda,y,\gamma)=\nabla_{x}\mathcal{L}_{\rho}(x, \lambda)+\gamma(x-y),\ \ \ \ \ \nabla_{\lambda}P(x,\lambda,y,\gamma)=\nabla_{\lambda}\mathcal{L}_{\rho}(x,\lambda)\] \[\nabla_{y}P(x,\lambda,y,\gamma)=\gamma(y-x)\ \ \ \text{and}\ \ \ \nabla_{\gamma}P(x,\lambda,y,\gamma)=\frac{1}{2}\|x-y\|^{2}.\]
Hence,
\[\|\nabla P(x_{k+1},\lambda_{k+1},x_{k},\gamma_{k+1})\| \leq\|\nabla\mathcal{L}_{\rho}(x_{k+1},\lambda_{k+1})\|+2\gamma_{k+1}\|x_{k+1}-x_{k}\|+\frac{1}{2}\|\Delta x_{k+1}\|^{2}\] \[\leq (\Gamma_{\max}+D_{S}+2\bar{\gamma})\left(\|\Delta x_{k+1}\|+\|\Delta x_{k}\|\right),\]
where the last inequality follows from Lemma 7, (22) and (25). This proves our claim.
Proof of Lemma 9.: (i) From Lemma 5 and Assumption 4, it follows that \(\{u_{k}\}_{k\geq 1}\) is bounded and therefore, there exists a convergent subsequence \(\{u_{k}\}_{k\in\mathcal{K}}\) such that \(\lim_{k\in\mathcal{K}}u_{k}=u^{*}\). Hence \(\Omega\) is nonempty. Moreover, \(\Omega\) is compact since it is bounded and closed. On the other hand, for any \(u^{*}\in\Omega\), there exists a sequence of increasing integers \(\mathcal{K}\) such that \(\lim_{k\in\mathcal{K}}u_{k}=u^{*}\) and using Lemma 8 and (24), it follows that:
\[\|\nabla P(u^{*})\|=\lim_{k\in\mathcal{K}}\|\nabla P(u_{k})\|=0.\]
Hence, \(u^{*}\in\mathrm{crit}\ P\) and \(0\leq\lim_{k\to\infty}\mathrm{dist}(u_{k},\Omega)\leq\lim_{k\in\mathcal{K}} \mathrm{dist}(u_{k},\Omega)=\mathrm{dist}(u^{*},\Omega)=0\).
(ii) Since \(P(\cdot)\) is continuous and \(\{P(u_{k})=P_{k}\}_{k\geq 1}\) converges to \(P^{*}\), then any subsequence \(\{P(u_{k})=P_{k}\}_{k\in\mathcal{K}}\) that converges, it converges to the same limit \(P^{*}\).
(iii) Let \((x,\lambda,y,\gamma)\in\mathrm{crit}\ P\) that is \(\nabla P(x,\lambda,y,\gamma)=0\). It then follows that:
\[\nabla_{x}P(x,\lambda,y,\gamma)=\nabla_{x}\mathcal{L}_{\rho}(x, \lambda)+\gamma(x-y)=0,\quad\nabla_{\lambda}P(x,\lambda,y,\gamma)=\nabla_{ \lambda}\mathcal{L}_{\rho}(x,\lambda)=0\] \[\nabla_{y}P(x,\lambda,y,\gamma)=\gamma(y-x)=0\quad\text{and} \quad\nabla_{\gamma}P(x,\lambda,y,\gamma)=\frac{1}{2}\|x-y\|^{2}=0.\]
With some minor rearrangements, we obtain:
\[\nabla f(x)+\nabla F(x)^{T}\lambda=0,\quad\quad F(x)=0.\]
Hence, \((x,\lambda)\) is a stationary point of (1). This concludes our proof.
Proof of Lemma 10.: From the boundedness of \(\|\Delta\lambda_{k+1}\|^{2}\) derived in (8), we have:
\[\|\Delta\lambda_{k+1}\|^{2} \leq c_{1}(\beta)\|\Delta x_{k+1}\|^{2}+c_{2}(\beta)\|\Delta x_{k} \|^{2}\] \[\leq \max\{c_{1}(\beta),c_{2}(\beta)\}\Big{[}\|\Delta x_{k+1}\|^{2}+ \|\Delta x_{k}\|^{2}\Big{]}. \tag{48}\]
Adding the term \(\|\Delta x_{k+1}\|^{2}+\|\Delta x_{k}\|^{2}\) on both sides in (48), we have:
\[\|z_{k+1}-z_{k}\|^{2} =\|\Delta x_{k+1}\|^{2}+\|\Delta\lambda_{k+1}\|^{2}\] \[\leq\|\Delta x_{k+1}\|^{2}+\|\Delta\lambda_{k+1}\|^{2}+\|\Delta x _{k}\|^{2}\] \[\stackrel{{\eqref{eq:def_def_def_def}}}{{\leq}}\big{(} \max\{c_{1}(\beta),c_{2}(\beta)\}+1\big{)}\Big{[}\|\Delta x_{k+1}\|^{2}+\| \Delta x_{k}\|^{2}\Big{]}. \tag{49}\]
Considering (25), we can then rewrite (14) as follows:
\[P_{k+1}-P_{k}\stackrel{{\eqref{eq:def_def_def_def}}}{{\leq}} -\frac{\gamma}{4}\Big{[}\|\Delta x_{k+1}\|^{2}+\|\Delta x_{k}\|^{2} \Big{]}\] \[\stackrel{{\eqref{eq:def_def_def_def}}}{{\leq}} -\frac{\gamma}{4\big{(}\max\{c_{1}(\beta),c_{2}(\beta)\}+1\big{)} }\|z_{k+1}-z_{k}\|^{2}. \tag{50}\]
Since \(P_{k}\to P^{*}\) and \(\{P_{k}\}_{k\geq 1}\) is monotonically decreasing to \(P^{*}\), it follows that the error sequence \(\{\mathcal{E}_{k}\}_{k\geq 1}\) is non-negative, monotonically decreasing and converges to \(0\). We distinguish two cases.
**Case 1**: There exists \(k_{1}\geq 1\) such that \(\mathcal{E}_{k_{1}}=0\). Then, \(\mathcal{E}_{k}=0\ \forall k\geq k_{1}\) and using (50), we have:
\[\|z_{k+1}-z_{k}\|^{2}\leq\frac{4\big{(}\max\{c_{1}(\beta),c_{2}(\beta)\}+1 \big{)}}{\gamma}(\mathcal{E}_{k}-\mathcal{E}_{k+1})=0\quad\forall k\geq k_{1}.\]
From Lemma 5 the sequence \(\{z_{k}\}_{k\geq 1}\) is bounded, and thus:
\[\sum_{k=1}^{\infty}\|\Delta x_{k}\|+\|\Delta\lambda_{k}\|=\sum_{k=1}^{k_{1}}\| \Delta x_{k}\|+\|\Delta\lambda_{k}\|{<}\infty.\]
**Case 2**: The error \(\mathcal{E}_{k}>0\)\(\forall k\geq 1\). Then, there exists \(k_{1}=k_{1}(\epsilon,\tau)\geq 1\) such that \(\forall k\geq k_{1}\) we have \(\mathrm{dist}(u_{k},\Omega)\leq\epsilon\), \(P^{*}<P(u_{k})<P^{*}+\tau\) and
\[\varphi^{\prime}(\mathcal{E}_{k})\|\nabla P(x_{k},\lambda_{k},x_{k-1},\gamma_{ k})\|\geq 1, \tag{51}\]
where \(\epsilon>0\), \(\tau>0\) and \(\varphi\in\Psi_{\tau}\) are well defined and correspond to those in Definition 2 (recall that \(P(\cdot)\) satisfies the KL property on \(\Omega\)). Since \(\varphi\) is concave, we have \(\varphi(\mathcal{E}_{k})-\varphi(\mathcal{E}_{k+1})\geq\varphi^{\prime}(\mathcal{E}_{k})(\mathcal{E}_{k}-\mathcal{E}_{k+1})\). Then, from (50) and (51) we get:
\[\|z_{k+1}-z_{k}\|^{2}\] \[\leq\varphi^{\prime}(\mathcal{E}_{k})\|z_{k+1}-z_{k}\|^{2}\| \nabla P(x_{k},\lambda_{k},x_{k-1},\gamma_{k})\|\] \[\leq\frac{4\big{(}\max\{c_{1}(\beta),c_{2}(\beta)\}+1\big{)}}{ \gamma}\varphi^{\prime}(\mathcal{E}_{k})(\mathcal{E}_{k}-\mathcal{E}_{k+1}) \|\nabla P(x_{k},\lambda_{k},x_{k-1},\gamma_{k})\|\] \[\leq\frac{4\big{(}\max\{c_{1}(\beta),c_{2}(\beta)\}+1\big{)}}{ \gamma}\Big{(}\varphi(\mathcal{E}_{k})-\varphi(\mathcal{E}_{k+1})\Big{)}\| \nabla P(x_{k},\lambda_{k},x_{k-1},\gamma_{k})\|.\]
Since \(\|z_{k+1}-z_{k}\|^{2}=\|\Delta x_{k+1}\|^{2}+\|\Delta\lambda_{k+1}\|^{2}\), and using the fact that for any \(a,b,c,d\geq 0\), if \(a^{2}+b^{2}\leq c\times d\), then \((a+b)^{2}\leq 2a^{2}+2b^{2}\leq 2c\times d\leq c^{2}+d^{2}\leq(c+d)^{2}\), it follows that for any \(\theta>0\) we have:
\[\|\Delta x_{k+1}\|+\|\Delta\lambda_{k+1}\|\leq \frac{4\big{(}\max\{c_{1}(\beta),c_{2}(\beta)\}+1\big{)}\theta}{ \gamma}\Big{(}\varphi(\mathcal{E}_{k})-\varphi(\mathcal{E}_{k+1})\Big{)}\] \[+\frac{1}{\theta}\|\nabla P(x_{k},\lambda_{k},x_{k-1},\gamma_{k} )\|. \tag{52}\]
Furthermore, we have:
\[\|\nabla P(x_{k},\lambda_{k},x_{k-1},\gamma_{k})\|\leq\|\nabla\mathcal{L}_{\rho}(x_{k},\lambda_{k})\|+2\bar{\gamma}\|x_{k}-x_{k-1}\|\] \[\overset{\eqref{eq:2.1},\eqref{eq:2.2}}{\leq}(\Gamma_{\max}+D_{S}+2\bar{\gamma})\left(\|\Delta x_{k}\|+\|\Delta\lambda_{k}\|\right).\]
Then, (52) becomes:
\[\|\Delta x_{k+1}\|+\|\Delta\lambda_{k+1}\|\leq \frac{4\big{(}\max\{c_{1}(\beta),c_{2}(\beta)\}+1\big{)}\theta}{ \gamma}\Big{(}\varphi(\mathcal{E}_{k})-\varphi(\mathcal{E}_{k+1})\Big{)}\] \[+\frac{\Gamma_{\max}+D_{S}+2\bar{\gamma}}{\theta}\Big{(}\|\Delta x _{k}\|+\|\Delta\lambda_{k}\|\Big{)}.\]
Let us now choose \(\theta>0\) so that \(0<\frac{\Gamma_{\max}+D_{S}+2\bar{\gamma}}{\theta}<1\) and define the parameter \(\delta_{0}\) as: \(\delta_{0}=1-\frac{\Gamma_{\max}+D_{S}+2\bar{\gamma}}{\theta}>0\). Then, by summing up the above inequality from \(k=\underline{k}\) to \(k=K\) and using the property: \(\sum_{k=\underline{k}}^{K}\|\Delta x_{k}\|=\sum_{k=\underline{k}}^{K}\|\Delta x _{k+1}\|+\|\Delta x_{\underline{k}}\|-\|\Delta x_{K+1}\|\), we get:
\[\sum_{k=\underline{k}}^{K}\|\Delta x_{k+1}\|+\|\Delta\lambda_{k+1}\|\leq\frac{4\big{(}\max\{c_{1}(\beta),c_{2}(\beta)\}+1\big{)}\theta}{\gamma\delta_{0}}\Big{(}\varphi(\mathcal{E}_{\underline{k}})-\varphi(\mathcal{E}_{K+1})\Big{)}\] \[\qquad+\frac{\Gamma_{\max}+D_{S}+2\bar{\gamma}}{\theta\delta_{0}}\Big{(}\|\Delta x_{\underline{k}}\|+\|\Delta\lambda_{\underline{k}}\|\Big{)}-\frac{\Gamma_{\max}+D_{S}+2\bar{\gamma}}{\theta\delta_{0}}\Big{(}\|\Delta x_{K+1}\|+\|\Delta\lambda_{K+1}\|\Big{)}.\]
Using the fact that \(\{\mathcal{E}_{k}\}_{k\geq k_{1}}\) is monotonically decreasing and that the function \(\varphi\) is positive and increasing, which yields \(\varphi(\mathcal{E}_{k})\geq\varphi(\mathcal{E}_{k+1})>0\), then:
\[\sum_{k=\underline{k}}^{K}\|\Delta x_{k+1}\|+\|\Delta\lambda_{k+1}\|\leq \frac{4\big{(}\max\{c_{1}(\beta),c_{2}(\beta)\}+1\big{)}\theta}{\gamma\delta_{0}}\varphi(\mathcal{E}_{\underline{k}})\] \[+\frac{\Gamma_{\max}+D_{S}+2\bar{\gamma}}{\theta\delta_{0}}\Big{(}\|\Delta x_{\underline{k}}\|+\|\Delta\lambda_{\underline{k}}\|\Big{)}.\]
It is clear that the right-hand side of the above inequality is bounded for any \(K\geq\underline{k}\). Letting \(K\to\infty\), we get that:
\[\sum_{k=\underline{k}}^{\infty}\|\Delta x_{k+1}\|+\|\Delta\lambda_{k+1}\|<\infty.\]
From Lemma 5, the sequence \(\{(x_{k},\lambda_{k})\}_{k\geq 1}\) is bounded. Then, it follows that:
\[\sum_{k=1}^{\underline{k}}\|\Delta x_{k}\|+\|\Delta\lambda_{k}\|<\infty.\]
Hence, \(\sum_{k=1}^{\infty}\|\Delta x_{k}\|+\|\Delta\lambda_{k}\|<\infty\). Let \(m,n\in\mathbb{Z}_{+}\) be such that \(n\geq m\). Then, we have:
\[\|z_{n}-z_{m}\|=\|\sum_{k=m}^{n-1}\Delta z_{k+1}\|\leq\sum_{k=m}^{n-1}\|\Delta z _{k+1}\|\leq\sum_{k=m}^{n-1}\|\Delta x_{k+1}\|+\|\Delta\lambda_{k+1}\|.\]
Since \(\sum_{k=1}^{\infty}\|\Delta x_{k+1}\|+\|\Delta\lambda_{k+1}\|<\infty\), it follows that for all \(\varepsilon>0\) there exists \(N\in\mathbb{Z}_{+}\) such that for all \(m,n\in\mathbb{Z}_{+}\) with \(n\geq m\geq N\), we have \(\|z_{n}-z_{m}\|\leq\varepsilon\). This implies that \(\{z_{k}\}_{k\geq 1}\) is a Cauchy sequence and converges. Moreover, by Theorem 1, \(\{z_{k}\}_{k\geq 1}\) converges to a stationary point of (1). This concludes our proof.
|
2308.13508 | CECILIA: The Faint Emission Line Spectrum of z~2-3 Star-forming Galaxies | We present the first results from CECILIA, a Cycle 1 JWST NIRSpec/MSA program
that uses ultra-deep ~30 hour G235M/F170LP observations to target multiple
electron temperature-sensitive auroral lines in the spectra of 33 galaxies at
z~1-3. Using a subset of 23 galaxies, we construct two ~600 object-hour
composite spectra, both with and without the stellar continuum, and use these
to investigate the characteristic rest-optical (5700-8500 Angstrom) spectrum of
star-forming galaxies at the peak epoch of cosmic star formation. Emission
lines of eight different elements (H, He, N, O, Si, S, Ar, and Ni) are
detected, with most of these features observed to be <3% the strength of
H-alpha. We report the characteristic strength of three auroral lines
([NII]5756, [SIII]6313, and [OII]7322,7332), as well as other semi-strong and
faint emission lines, including forbidden [NiII]7380,7414 and the OI 8449
recombination line, some of which have never before been observed outside of
the local universe. Using these measurements, we find T_e[NII]=13630+/-2540 K,
representing the first measurement of electron temperature using [NII] in the
high-redshift universe. We also see evidence for broad line emission with a
FWHM of ~536 km/s; the broad component of H-alpha is 6.01-28.31% the strength
of the narrow component and likely arises from star-formation driven outflows.
Finally, we briefly comment on the feasibility of obtaining large samples of
faint emission lines using JWST in the future. | Allison L. Strom, Gwen C. Rudie, Ryan F. Trainor, Gabriel B. Brammer, Michael V. Maseda, Menelaos Raptis, Noah S. J. Rogers, Charles C. Steidel, Yuguang Chen, David R. Law | 2023-08-25T17:38:31Z | http://arxiv.org/abs/2308.13508v2 | # CECILIA: The Faint Emission Line Spectrum of \(z\sim 2-3\) Star-forming Galaxies
###### Abstract
We present the first results from CECILIA, a Cycle 1 JWST NIRSpec/MSA program that uses ultra-deep \(\sim 30\) hour G235M/F170LP observations to target multiple electron temperature-sensitive auroral lines in the spectra of 33 galaxies at \(z\sim 1-3\). Using a subset of 23 galaxies, we construct two \(\sim 600\) object-hour composite spectra, both with and without the stellar continuum, and use these to investigate the characteristic rest-optical (\(\lambda_{\rm rest}\approx 5700-8500\) Å) spectrum of star-forming galaxies at the peak epoch of cosmic star formation. Emission lines of eight different elements (H, He, N, O, Si, S, Ar, and Ni) are detected, with most of these features observed to be \(\lesssim 3\%\) the strength of H\(\alpha\). We report the characteristic strength of three auroral lines ([N ii]\(\lambda 5756\), [S iii]\(\lambda 6313\), and [O ii]\(\lambda\lambda 7322,7332\)), as well as other semi-strong and faint emission lines, including forbidden [Ni ii]\(\lambda\lambda 7380\), 7414 and the O i \(\lambda 8449\) recombination line, some of which have never before been observed outside of the local universe. Using these measurements, we find \(T_{\rm e}\)[N ii] \(=13630\pm 2540\) K, representing the first measurement of electron temperature using [N ii] in the high-redshift universe. We also see evidence for broad line emission with a FWHM of \(544^{+45}_{-164}\) km s\({}^{-1}\); the broad component of H\(\alpha\) is \(6.01-28.31\%\) the strength of the narrow component and likely arises from star-formation driven outflows.
emission lines (e.g., Masters et al., 2014; Steidel et al., 2014; Shapley et al., 2015; Sanders et al., 2015; Wisnioski et al., 2015; Strom et al., 2017), allowing their metallicity, ionization and excitation properties, and gas density to be studied in comparable detail to large samples of \(z\sim 0\) galaxies (e.g., Kauffmann et al., 2003; Brinchmann et al., 2004; Tremonti et al., 2004; Belfiore et al., 2015; Mingozzi et al., 2020). In the last year, JWST/NIRSpec and JWST/NIRCam grism observations have extended these efforts to even higher redshifts (\(z\gtrsim 3-6\)) by enabling IR spectroscopy out to longer wavelengths (e.g., Kashino et al., 2023; Kocevski et al., 2023; Oesch et al., 2023; Shapley et al., 2023; Sun et al., 2023).
Other lines--including metal recombination lines and the \(T_{e}\)-sensitive auroral lines of heavy elements, which are both key probes of chemical enrichment--are faint enough that they are not routinely detected even in spectra of nearby galaxies. Despite significant investment of observing time on some of the largest ground-based telescopes in the world, measurements of auroral [O iii]\(\lambda 4364\) were only possible for a handful of individual galaxies at \(z\gtrsim 2\) prior to the launch of JWST (Christensen et al., 2012; James et al., 2014; Sanders et al., 2020).
Spectroscopic observations with JWST promise to yield unprecedented numbers of auroral emission line measurements in high-\(z\) galaxies. The first analyses of the early release observations (ERO) in the SMACS J0723.37327 field reinforced this expectation, with significant detections of [O iii]\(\lambda 4364\) in several \(z\sim 8\) galaxies (Arellano-Cordova et al., 2022; Schaerer et al., 2022; Taylor et al., 2022; Brinchmann, 2023; Curti et al., 2023; Katz et al., 2023; Rhoads et al., 2023; Trump et al., 2023; Trussler et al., 2023). At the same time, many of these studies reported conflicting gas-phase oxygen abundance (O/H) measurements in the same objects, and it was unclear how representative this early, very high-\(z\) sample might be. Subsequent work has revisited the issue of auroral line detections in JWST observations of tens of high-\(z\) galaxies (albeit primarily at low to moderate O/H), confirming suspicions that locally-calibrated metallicity diagnostics are likely unsuitable for the majority of high-\(z\) galaxies (Laseter et al., 2023; Sanders et al., 2023). To date, however, consensus regarding how best to leverage these measurements to, e.g., understand the overall distribution of chemical enrichment in the early universe has not yet been achieved.
In spite of these challenges, the community has collectively recognized the goal of using auroral line measurements and the resulting direct-method metallicities to construct more accurate methods of measuring high-\(z\) galaxy enrichment _in situ_. This is evidenced by the selection of three separate Cycle 1 JWST programs (PIDs 1879, 1914, and 2593) by the time allocation committee, with a total investment of over 150 hrs, or \(\sim 2.5\%\) of all the GO time available in Cycle 1. Here, we report the first results from PID 2593, also known as CECILIA (Chemical Evolution Constrained Using Ionized Lines in Interstellar Aurorae; Strom et al., 2021).
CECILIA was designed to measure auroral [S iii]\(\lambda 6313\) and [O ii]\(\lambda\lambda 7322,7332\) in the spectra of a carefully selected sample of \(z\sim 2-3\) star-forming galaxies, using \(\sim 30\) hr G235M/F170LP observations. Owing to the unique depth of these data, CECILIA is also able to detect myriad other lines in the galaxies' rest-optical spectra, some of which are stronger than any auroral emission line and, thus, more likely to be observed in more typical integration times with JWST. Consequently, it is important to understand the expected strength of these faint and semi-strong emission lines, in order to guide future studies using JWST as well as with other current and future facilities.
The remainder of this letter focuses on two \(\sim 600\) object-hour rest-optical composite spectra of \(z\sim 2-3\) galaxies observed as part of the CECILIA survey, with the aim of providing an "atlas" of the characteristic faint emission line spectrum of high-\(z\) galaxies. We describe the CECILIA survey--including the galaxy sample, the JWST program, and the data reduction--in Section 2. Section 3 outlines the construction of the composite spectra and their key features, with a more in-depth discussion of individual emission lines in Section 4. In Section 5, we close with a summary of our findings and a brief discussion of implications for future observations of faint emission lines in \(z\gtrsim 2\) galaxies. Throughout the text, we refer to specific spectral features using their vacuum wavelengths.
## 2 The CECILIA survey
The principal goal of CECILIA is to measure multiple faint rest-optical auroral lines in the spectra of \(z\sim 2-3\) galaxies, which can then be used to calibrate new high-\(z\) metallicity diagnostics. Some of the galaxies observed as part of CECILIA have preexisting rest-optical spectra obtained using Keck/MOSFIRE (Steidel et al., 2014; Trainor et al., 2015; Strom et al., 2017), but even the strongest auroral lines are not routinely detected for individual galaxies in deep (\(\sim 8-10\) hr) observations. Although JWST/NIRSpec provides greater sensitivity and spectral coverage than ground-based NIR spectrographs, achieving this goal still pushes the limits of the observatory. To make the best use of JWST observing time, we first used detailed photoionization models and existing ground-based rest-ultraviolet (UV) and
rest-optical spectra of the same galaxies to robustly predict the auroral line strengths. We then used these predictions together with the (pre-flight) JWST Exposure Time Calculator (ETC)1 to identify the depth needed to detect the auroral lines in individual galaxies. Below, we describe the parent galaxy sample, the emission line predictions, the design of the NIRSpec program, including exposure time requirements and microshutter assembly (MSA) design, and the reduction of the JWST data.
Footnote 1: [https://jwst.etc.stsci.edu/](https://jwst.etc.stsci.edu/)
s\({}^{-1}\) cm\({}^{-2}\)). All of these properties increase the ease of detection with the NIRSpec/MSA. The highest priority targets were galaxies with detailed emission line models (Section 2.2) whose predicted auroral line surface brightnesses exceeded the detection threshold of the planned observations, and galaxies with models predicting non-detections were de-prioritized. Narrow-band selected Ly\(\alpha\) emitters (LAEs) from Trainor et al. (2016) with spectroscopic detections of Ly\(\alpha\) and [O iii]\(\lambda\)5008 or H\(\alpha\) were also prioritized as a way of extending the galaxy sample to lower stellar masses (M\({}_{*}\)) and SFRs.
Of the 15 KBSS fields, we selected the Q2343+125 field due to its high density of high-priority sources and large catalog of LAEs at \(z\approx 2.55\) with spectroscopic redshifts. Further, this field also has an existing HST/WFC3 F140W mosaic (Figure 1) that provided both the precision astrometry required for mask design and the galaxy size measurements needed for target prioritization--without requiring additional (pre-)imaging from space using JWST or HST.
The CECILIA JWST/NIRSpec observations contain a total sample of 34 galaxies.2 We include 23 of these objects here (Figure 1), omitting the Ly\(\alpha\)-selected galaxies that do not have secure SED models (4 galaxies), galaxies at \(z<2\) (4 galaxies), and sources that were severely impacted by shutter failures in the NIRSpec/MSA (3 galaxies). The final sample is reasonably typical of KBSS galaxies, with masses spanning log(M\({}_{*}\)/M\({}_{\odot}\)) = \(8.5-10.7\) and a median value of log(M\({}_{*}\)/M\({}_{\odot}\)) = 9.7 (assuming a Chabrier 2003 stellar initial mass function). Based on H\(\alpha\) and H\(\beta\) measurements from ancillary MOSFIRE spectra, the included galaxies have SFRs ranging from \(16-42\) M\({}_{\odot}\) yr\({}^{-1}\), with a median SFR\({}_{\rm H\alpha}\) = 21 M\({}_{\odot}\) yr\({}^{-1}\). These are slightly lower than median values reported in Strom et al. (2017), which were determined in the same manner, but similar in terms of M\({}_{*}\) to the subsample of KBSS galaxies used to construct the deep "LM1" composite in Steidel et al. (2016).
Footnote 2: One target, Q2343-D27, appears to be a \(z=0.0890\) interloper in the JWST/NIRSpec observations.
### Emission Line Predictions
The expected strengths of the auroral emission lines targeted by CECILIA were determined using photoionization models designed to reconcile the rest-UV and rest-optical spectra of \(z\sim 2-3\) galaxies. We used a combination of the Binary Population and Spectral Synthesis models (BPASSv2; Stanway et al. 2016; Eldridge et al. 2017) and Cloudy photoionization models (Cloudy13; Ferland et al. 2013) to predict line strengths as a function of gas-phase metallicity (O/H). We matched the model parameters (gas density \(n_{\rm H}\), stellar Fe/H, and ionization parameter \(U\)) to the properties of \(z\sim 2-3\) KBSS galaxies reported by Strom et al. (2018), which are consistent with the values reported for other \(z\sim 2-3\) samples (e.g., Topping et al., 2020). The model outputs were then converted to line fluxes using a representative range of SFRs and dust extinction.

Figure 2: The hatched regions show the model predictions for auroral lines in the rest-optical spectra of typical \(z\sim 2-3\) galaxies, separated on the basis of whether they fall in the G140M bandpass (top panel) or the G235M bandpass (bottom panel). The width of the hatched regions reflects the typical range of ionization parameter \(U\) in high-\(z\) galaxies. The predicted line fluxes for [O iii]\(\lambda\)4364 (blue hatched region) are \(\sim 1\) dex fainter than the depth of typical ground-based spectra of individual galaxies, represented by the distribution of 3\(\sigma\) upper limits on [O iii]\(\lambda\)4364 from KBSS (dashed black histogram). Sanders et al. (2020) reported four ground-based detections of unlensed [O iii]\(\lambda\)4364 (blue points, shifted up by 0.24 dex to match the photoionization model abundance scale), but these galaxies appear atypical. Estimates using the pre-flight ETC indicated that detecting [O iii]\(\lambda\)4364 in a representative sample of high-\(z\) galaxies would be prohibitively expensive; the 3\(\sigma\) limiting line flux of \(6\times 10^{-19}\) erg s\({}^{-1}\) cm\({}^{-2}\) achievable in a combined 29 hr G140M exposure (red line) probes \(\lesssim 30\%\) of the \(z\sim 2-3\) sample from Strom et al. (2018). Fortunately, the typical predicted line fluxes for the sum of the [O ii]\(\lambda\lambda\)7322, 7332 lines (purple hatched region) and [S iii]\(\lambda\)6313 (orange hatched region) could be detected for galaxies with a wider range of \(U\) and O/H in the same exposure time, due to the higher sensitivity of NIRSpec in G235M. A 3\(\sigma\) limiting line flux of \(4.1\times 10^{-19}\) erg s\({}^{-1}\) cm\({}^{-2}\) reaches \(\sim 90\%\) of typical galaxies.
Figure 2 presents the predictions for three of the brightest rest-optical auroral emission lines as a function of O/H, with the width of the hatched regions corresponding to the typical range of \(U\) in high-\(z\) galaxies; lower ionization galaxies have fainter lines at fixed metallicity. Re-calibrating strong line metallicity diagnostics and photoionization models at \(z\gtrsim 2\) requires measuring auroral lines in galaxies spanning both O/H and \(U\), as both directly influence the strength of nebular emission lines. The top panel in Figure 2 shows the steep decline in [O iii]\(\lambda 4364\) with increasing O/H and implies a limited ability to detect typical \(z\sim 2\) galaxies at high O/H and/or low \(U\) using JWST/NIRSpec, even with long exposures in G140M.
In contrast, the bottom panel of Figure 2 shows that a 3\(\sigma\) line flux sensitivity of 4.1\(\times 10^{-19}\) cgs in G235M (corresponding to a total \(\approx 30\) hr exposure time using the pre-flight ETC; see Section 2.3.1) enables the detection of [S iii]\(\lambda 6313\)_and_ [O ii]\(\lambda\lambda 7322,7332\) at virtually all \(U\), even in galaxies with relatively high gas-phase O/H. It is comparatively easier to detect [S iii]\(\lambda 6313\) and [O ii]\(\lambda\lambda 7322,7332\) not only because they are predicted to be intrinsically brighter than [O iii]\(\lambda 4364\) in the same galaxies, but also because of the increasing sensitivity of JWST/NIRSpec at longer wavelengths. On the basis of these predictions, we elected to obtain deep spectra of galaxies in a _single_ configuration, in order to maximize the overall number of auroral lines detected for individual galaxies with a range of O/H and \(U\).
### JWST/NIRSpec Program Design
To optimize the efficiency of the JWST program, we generated a large grid of ETC simulations comparing a range of galaxy sizes, limiting line fluxes, MSA centering constraints, and redshifts, as well as a comparable grid of MSA Planning Tool (MPT) simulations that considered the full range of available centering constraints. In this section, we describe the most salient elements of the program design.
#### 2.3.1 Exposure time requirements
NIRSpec G235M observations of [S iii]\(\lambda 6313\) and [O ii]\(\lambda\lambda 7322,7332\) in CECILIA galaxies were modeled using an exponential surface brightness profile (Sersic index \(n=1\)) with a projected semi-major axis of 0\(\farcs\)26 and an axis ratio of \(b/a=0.6\), consistent with the measured morphologies and median sizes of galaxies in our parent sample (Law et al., 2012).
Pre-flight ETC simulations showed that reaching the required 3\(\sigma\) limiting line flux of \(4.1\times 10^{-19}\) erg s\({}^{-1}\) cm\({}^{-2}\) for a median-sized galaxy at \(z=2.3\) at the edge of the midpoint tolerance (see Section 2.3.2) required 29.5 hours of exposure time (20 groups \(\times\) 6 integrations \(\times\) 12 exposures) using NRS IRS2 readouts. Our exposure time calculations assumed "MSA Full Shutter Extraction" and assumed we would need pixel-level subtraction from A-B pairs. As we discuss below in Section 2.4.1, we have instead implemented a global background model drawn from slits across the full MSA, which reduces the overall noise in the final combined data compared to the conservative assumptions in our original calculations. For the majority of the sources in our catalog, ETC calculations demonstrated that some of the background region in each spectrum would be contaminated with light from the source, and the derived exposure time requirements took this effect into account.
#### 2.3.2 MSA design
The MSA configuration is central to the success of CECILIA, and considerable experience with ground-based multi-object mask design led us to conduct extensive trials using different mask parameters in the MSA Planning Tool (MPT). We experimented with all possible centering constraints, dithering and nodding options, and three- and five-slitlet length slits. We ran trial masks spanning the full range of allowable PAs, using small steps in both position and PA to understand the sensitivity of the optimal configuration to changes in PA. Based on more than 100 runs of the MPT considering more than 70 million unique configurations, we determined that many PAs have \(<60\%\) as many high-priority targets as the best masks.
We optimized the MSA centering constraint, which trades exposure time against sample size, by considering a grid of ETC and MPT runs. Our ETC calculations spanned the redshift range of source galaxies and sizes ranging between the 1st and 3rd quartile of the KBSS size distribution. We considered the S/N penalty for galaxies at maximal offset in the dispersion direction for each of the three possible centering restrictions.3 For galaxies with the median size in our sample, the S/N penalties compared to a perfectly centered target are
\(7-13\%\) for "constrained," \(11-19\%\) for "midpoint," and \(14-26\%\) for "entire open shutter," where the reported ranges represent different relative angles between the short axis of the slit and the major axis of the galaxies. MPT runs showed that "constrained" configurations allowed for only 60% of the high-priority targets to be placed on a mask compared to the "midpoint" criteria. Relaxing the centering further via "entire open shutter" constraint only increased the number of high-priority targets by 7%. Therefore, we selected the optimal "midpoint" centering constraint for CECILIA observations.
We designed custom software that processed the MPT MSA configurations to check the wavelength coverage4 (using MSAViz5) and confirmed that primary targets assigned to a slit on the MSA would have spectral coverage of the required auroral and nebular lines. This software also considers the known emission line properties, M\({}_{*}\), and SFRs of target galaxies, which we used to select a final mask configuration that appropriately sampled the parent sample to enable an effective metallicity calibration. We selected a default three-shutter slitlet shape with a three point nod pattern within the slitlet.
Footnote 4: The post-flight version of MPT now has the ability to output the wavelength coverage of individual slitlets.
Upon scheduling, we were assigned an Aperture Position Angle, APA = 20.0, with values in the range \(18.5<\) APA \(<20.0\) able to be accommodated within the scheduling window. At this point, we completed a second set of MPT simulations, including PA steps of 0.1 degree and \(0\farcs 025-0\farcs 01\) position steps to optimize the PA and final mask. We did not reach convergence,6 even with angle and position steps much finer than suggested by JDox, suggesting that significant computational resources would be required to fully optimize NIRSpec/MSA observations. Based on our simulations, we ultimately selected an APA = 19.3. Over the 1.5 degree range allowable within the plan window, the MPT resulted in more than a 30% variation in the number of high-priority targets, and we advocate for conducting similar PA optimization to maximize the efficiency of other NIRSpec/MSA programs with a low to moderate density of high-priority targets.
Footnote 5: [https://github.com/spacetelescope/msaviz](https://github.com/spacetelescope/msaviz)
Footnote 6: We define a converged mask as one where the same optimal mask is returned even when the step size is decreased.
Following the selection of pointing and PA, we ran MPT with an expanded catalog to (1) check for contamination in any of the shutters known to be stuck open as of June 2022 and (2) open shutters on dark regions of the sky to sample the background light across the field. These sky slitlets are described in our modeling of the global sky background in Section 2.4.1. Once the automated MSA configuration was determined by MPT, the solution was hand-edited using the MSA configuration editor to (1) elongate slits for high priority targets where possible, (2) add more background shutters close to high priority targets to better sample relevant wavelength or field-position changes in the background, and (3) add high priority targets that did not meet our centering constraints but could be placed on a mask without conflicting with other high priority targets. The final MSA design included 34 sources, 23 of which are included in the stacked spectra presented in this letter.
### JWST/NIRSpec Data Reduction
The uncalibrated raw G235M data (uncal) frames were processed using the jwst_level1 pipeline in the grizli7 package version 1.8.9 from Brammer (2023). The level 1 pipeline in grizli uses the calwebb JWST Detector1 pipeline (version 1.10.0, CRDS_CONTEXT = jwst_1100.pmap) for the group_scale correction, initial flagging of bad pixels and saturated pixels, bias subtraction (including corrections to the bias using reference pixels), as well as corrections for detector linearity and persistence and subtraction of the dark current. Following these calwebb steps, clusters of pixels affected by snowball cosmic ray events were flagged and the ramp fit was calculated, including additional processing to detect and remove the effects of cosmic rays and detector defects. Finally, the gain correction was applied, resulting in the level 1 processed rate files.
Footnote 7: [https://github.com/gbrammer/grizli](https://github.com/gbrammer/grizli)
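For orientation, the equivalent standalone stage-1 call with the STScI jwst package looks roughly like the sketch below. This is not the grizli wrapper used here: the file name is a placeholder and the step options shown are illustrative rather than the settings adopted for CECILIA.

```python
# Hedged sketch: standalone JWST stage-1 (detector-level) processing with the
# STScI `jwst` package, approximating what grizli's level-1 wrapper performs.
# The input file name is a placeholder; step options are illustrative only.
from jwst.pipeline import Detector1Pipeline

result = Detector1Pipeline.call(
    "jw02593001001_01101_00001_nrs1_uncal.fits",  # placeholder uncal exposure
    save_results=True,                            # writes the *_rate.fits product
    steps={
        "jump": {"expand_large_events": True},    # flag snowball events
        "ramp_fit": {"save_opt": False},
    },
)
```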
Next, the level 1 processed files were corrected for correlated read noise, which manifests as vertical banding in the rate files. This \(1/f\) noise, driven by small temperature variations in the ASIC readout electronics, was modeled and removed using the NSClean algorithm from Rauscher (2023). NSClean requires the user to create a mask that identifies areas on each of the two NIRSpec detectors that are unilluminated by source light; these areas are thus relatively clean tracers of the correlated readout noise. We tested many different mask-design strategies in order to remove as much of the large-scale vertical banding as possible while also limiting the introduction of additional high-frequency noise, which we found to be a side effect of the NSClean algorithm in many cases. We determined the most effective masks for our program omitted entire rows of pixels in the rectified full-detector image if any portion of that row was illuminated. Mask designs that omitted only the limited range of pixels that were illuminated in a given row
resulted in higher levels of high-frequency noise being introduced in the regions of the detectors that were illuminated by source light.
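The row-based mask strategy described above can be expressed in a few lines of NumPy. The sketch below is schematic: the array names are ours and the mask would ultimately be passed to NSClean through whatever interface that code expects.

```python
import numpy as np

# illum: boolean detector-sized array (e.g., 2048 x 2048), True wherever a pixel
# in the rectified full-detector image receives source light (placeholder input).
illum = np.zeros((2048, 2048), dtype=bool)  # ...filled from the slit traces

# Mask design adopted here: discard *entire* rows that contain any illuminated
# pixel, so that only fully dark rows are used to trace the 1/f noise.
row_has_source = illum.any(axis=1)                       # one flag per detector row
background_ok = np.tile(~row_has_source[:, None], (1, illum.shape[1]))

# background_ok marks the pixels treated as clean tracers of the correlated
# read noise when running NSClean.
```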
Following the \(1/f\) noise correction by NSClean, we applied the preprocessing routine steps from msaexp8 version 0.6.11 from Brammer (2022), omitting its own \(1/f\) noise correction since that step had already been performed by NSClean. This routine repeated the search for snowballs and additional detector defects, which were also masked. We applied a bias offset correction calculated from the median of unilluminated pixels in each frame and rescaled the read noise array associated with each exposure so that it reflected the distribution of the same unilluminated pixels.
Footnote 8: [https://github.com/gbrammer/msaexp](https://github.com/gbrammer/msaexp)
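One plausible implementation of the per-exposure pedestal and read-noise adjustments described above is sketched here; the array names are hypothetical, and the rescaling shown (matching a robust scatter estimate of the dark pixels) is only one reasonable reading of the procedure.

```python
import numpy as np

def rescale_exposure(rate, rnoise, unillum):
    """rate: 2D count-rate image; rnoise: its read-noise array;
    unillum: boolean mask of pixels receiving no source light (all placeholders)."""
    dark_vals = rate[unillum & np.isfinite(rate)]
    rate = rate - np.median(dark_vals)                 # remove residual bias offset
    # Rescale the read-noise array so it reflects the measured scatter of the
    # same unilluminated pixels (robust sigma from the MAD).
    sigma_meas = 1.4826 * np.median(np.abs(dark_vals - np.median(dark_vals)))
    rnoise = rnoise * sigma_meas / np.median(rnoise[unillum])
    return rate, rnoise
```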
Next, we used msaexp to call the calwebb_spec2 JWST Spectroscopic pipeline (version 1.10.0), which computed the world coordinate system reference frame for the data (including the wavelength calibration), extracted the individual 2D spectra for each slit, and flat-fielded each 2D spectral cutout. Each spectral cutout was corrected for path loss assuming the sources uniformly illuminate the slit (i.e., using the PATHLOSS_UN correction). Note that the current calwebb_spec2 pipeline does not apply path loss corrections for slits more than three shutters in length (of which there are many in CECILIA, see Section 2.3.2), so we modified the pipeline to apply the uniform source path loss correction to all slits. The pipeline correction for the bar shadows produced by the discretized MSA slitlets was then applied to the data, although, as described in Section 2.4.1, the pipeline correction left residual bar shadows on the data and background illumination. The calwebb_spec2 photom step then provided a final correction to the photometric calibration of the data, resulting in flux-calibrated 2D spectra for each slit and exposure. Finally, we used the msaexp drizzle routine to resample the individual 2D spectra onto a common rectified pixel grid and combine the exposures for each slit with outlier rejection, using a threshold of 100.
#### 2.4.1 Background subtraction and extraction
To correct the data for background light, we opted to use a full-MSA background solution, rather than a paired exposure differencing algorithm, for several reasons. First, subtracting a global background model maximizes the S/N in the final spectra by excluding the shot noise that would be added by using a low-S/N measure of the background available in single adjoining shutters. Second, the CECILIA targets are extended objects with light from each galaxy contaminating the shutters above and below the primary shutter. As such, the typical background algorithms that directly subtract the detected signal above and below the primary shutter inevitably subtract some source light as well. This over-subtraction poses a particular issue at the wavelengths of bright emission lines, which our background-subtracted 2D spectra showed to frequently extend well beyond the typical 0\(\farcs\)6 dither spacing of our observations. Finally, as described below we found that a single global background model provided a good description of the background across the field, while also enabling useful checks on the systematics of our observations.
We constructed the global background model by combining data from all the illuminated shutters in the MSA. Each rectified and drizzled 2D science spectrum was masked to omit rows corresponding to continuum emission from target galaxies or from other sources identified in the slit. Pixels illuminated by extended emission lines were also masked. The full set of masked science spectra (including those from dedicated sky slitlets, which were not masked unless they included coincidental sources) were then median-combined into a single 2D background model. The 2D background was averaged in the spectral direction in order to model the residual bar shadows that were not fully corrected by the calwebb_spec2 pipeline, and these residual bar shadows were then removed from the 2D background model. We then averaged the resulting 2D background model in the spatial direction, weighting each pixel by the number of spectra contributing to the 2D median at the corresponding point in order to construct a 1D average background model as a function of wavelength.
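In outline, this construction can be written as follows. The array names are hypothetical stand-ins for the rectified 2D spectra, their source/emission-line masks, and the per-pixel contribution counts, and the bar-shadow removal is shown as a multiplicative correction, which is one plausible implementation.

```python
import numpy as np

def build_global_background(spectra_2d, masks):
    """spectra_2d: (N_slit, N_spatial, N_wave) rectified 2D spectra;
    masks: same shape, True where a pixel is free of source light."""
    data = np.where(masks, spectra_2d, np.nan)

    # 1) Median-combine all masked science + sky slit spectra into one 2D model.
    bkg2d = np.nanmedian(data, axis=0)

    # 2) Model residual bar shadows as the wavelength-averaged spatial profile,
    #    then divide them out of the 2D background model.
    bar_shadow = np.nanmean(bkg2d, axis=1, keepdims=True)
    bar_shadow /= np.nanmedian(bar_shadow)
    bkg2d_flat = bkg2d / bar_shadow

    # 3) Collapse to a 1D background vs. wavelength, weighting each pixel by the
    #    number of spectra that contributed to the 2D median at that point.
    weights = np.nansum(masks, axis=0).astype(float)
    bkg1d = np.nansum(bkg2d_flat * weights, axis=0) / np.nansum(weights, axis=0)
    return bkg1d, bar_shadow
```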
As a cross-check on the consistency of our global background model, we created similar models from subsets of the observed slits grouped by their position on the sky (quartiles in right ascension and declination), on the MSA (quadrants 1, 2, 3, and 4), as well as by separating the portions of spectra falling on each detector (NRS1 and NRS2). The estimated 1D background was consistent across the field, but we found a small additive offset9 between the NRS1 and NRS2 detectors. We therefore applied a compensatory offset to the portions of each object's spectrum falling on NRS2 before constructing the final background model. The 1D background model was then subtracted from each 2D science spectrum, with the additive constant mentioned above being removed before being subtracted from the NRS2 portion of a given spectrum. Notably, the background model we derived differs in both normalization and shape with respect to the predictions of the JWST Background Tool10 (JBT) for our observations; the JBT prediction requires both an additive offset and a \(\sim\)1/2\(\times\) multiplicative scaling to match our empirical background model.
Footnote 10: [https://jwst-docs.stsci.edu/jwst-other-tools/jwst-backgrounds-tool](https://jwst-docs.stsci.edu/jwst-other-tools/jwst-backgrounds-tool)

Footnote 11: [https://jwst-docs.stsci.edu/jwst-calibration-pipeline-caveats/known-issues-with-jwst-data-products](https://jwst-docs.stsci.edu/jwst-calibration-pipeline-caveats/known-issues-with-jwst-data-products)
Optimal extraction of the 1D spectra was performed using routines from msaexp. A spatial profile of the continuum emission for each background-subtracted 2D spectrum was created by averaging along the wavelength dimension after weighting by the pipeline-produced 2D weight mask and applying a sigma-clipping algorithm to mask contaminated pixels and bright emission lines. An analogous spatial profile of the nebular emission was also created for each source by averaging the 2D spectrum over small wavelength ranges centered at the locations of bright emission lines. Each resulting 1D spatial profile was then fit independently with a Gaussian model. The resulting fits were typically similar for the continuum and emission line profiles, with the median profile being 20% wider for the emission lines than the continuum. We used the Gaussian emission-line spatial model to provide the weights for the optimal extraction, except in one case where the continuum profile was used owing to a visibly-poor fit to the emission lines.
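Per wavelength column, this profile-weighted (Horne-style) extraction reduces to the following; the arrays and the Gaussian profile parameters are placeholders for the quantities described above.

```python
import numpy as np

def optimal_extract(spec2d, var2d, center, sigma):
    """Profile-weighted (Horne 1986-style) extraction of a 2D spectrum.
    spec2d, var2d: (N_spatial, N_wave) background-subtracted data and variance;
    center, sigma: parameters of the fitted Gaussian spatial profile (pixels)."""
    y = np.arange(spec2d.shape[0])[:, None]
    prof = np.exp(-0.5 * ((y - center) / sigma) ** 2)
    prof /= prof.sum(axis=0)                        # normalized spatial profile

    w = prof / var2d                                # profile * inverse variance
    flux1d = np.nansum(w * spec2d, axis=0) / np.nansum(w * prof, axis=0)
    err1d = np.sqrt(1.0 / np.nansum(prof**2 / var2d, axis=0))
    return flux1d, err1d
```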
Despite the efforts described above, there are still unresolved issues in the data reduction resulting from known issues with JWST data products,11 including uncertain variations in the spectral response as a function of slit position. Likewise, the unexplained additive offset between the NRS1 and NRS2 detectors, the residual bar shadows in the pipeline-processed 2D spectra, and the disagreement between our estimated background and the JBT predictions suggest that there are systematic effects (perhaps related to detector bias) that are incorrectly handled by the current pipeline tools and have uncertain downstream effects. While these uncertainties are not tolerable for the primary goal of CECILIA--precise abundance determinations of individual galaxies--we expect the stacking and normalization procedures described in Section 3 likely mitigate any systematic effects on our composite spectra.
## 3 The Characteristic Rest-Optical Spectrum of \(\langle z\rangle\sim 2.3\) Galaxies
CECILIA contains some of the deepest spectra obtained during Cycle 1, with \(\sim 30\) hr observations of individual galaxies using the NIRSpec/MSA and the G235M/F170LP configuration. These data offer a unique opportunity to investigate the spectra of high-\(z\) star-forming galaxies, revealing features that have long remained out of reach of ground-based observations. Given the uncertainties in the data reduction at the present time, we use composite spectra as a tool to investigate the nebular emission lines observed in our data. We have two principal aims: (1) to illustrate the archetypal red rest-optical (\(\lambda_{\rm rest}\approx 5700-8500\) A) spectrum of a \(z\sim 2\) galaxy and (2) determine the typical range of emission line strengths. To achieve these goals, we construct two composite spectra, one including the stellar continuum and one only including the nebular emission. In this section, we describe how the two composite spectra are created, as well as their key features.
### The Total Composite Spectrum
The flux scale of each reduced 1D spectrum is adjusted by comparing the observed continuum with the best-fit spectral energy distribution (SED) model of the same galaxy. This strategy has become common practice in analyzing JWST spectra of high-\(z\) galaxies as a way of accounting for uncertainties in the flux calibration. Specifically, we mask regions of the spectra with large deviations from the median flux level (\(\geq 2\times\) the median absolute deviation), which excludes not only strong emission lines but also any serious artifacts remaining in the data due to bad pixels and cosmic rays. We then use a low-order polynomial to define a multiplicative "slit loss" function for each object that forces the observed continuum to match the best-fit SED.
After this additional flux correction step, the spectra are shifted into the rest frame and normalized by the median observed continuum flux in the region between \(\lambda_{\rm rest}=6800-7000\) A, where there are no emission lines; this portion of the spectrum is also approximately centered with respect to the auroral [S iii]\(\lambda 6313\) and [O ii]\(\lambda \lambda 7322,7332\) lines. The spectra are then interpolated onto a common rest-frame wavelength array and median-combined. The final stack is subsequently re-scaled to match the median rest-frame continuum of all the constituent galaxies between \(\lambda_{\rm rest}=6800-7000\) A. Uncertainties are estimated by generating 1000 bootstrap-resampled composite spectra and calculating the 68% confidence interval (CI; analogous to asymmetric error bars) at each wavelength.
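The stacking procedure can be summarized in the following sketch; the variable names and the polynomial order are illustrative rather than the exact choices made for CECILIA.

```python
import numpy as np

def build_total_composite(spectra, seds, waves, redshifts, wave_grid, nboot=1000):
    rest_stack = []
    for flux, sed, wave, z in zip(spectra, seds, waves, redshifts):
        resid = flux - np.nanmedian(flux)
        mask = np.abs(resid) < 2 * np.nanmedian(np.abs(resid))   # 2x MAD cut
        # Low-order polynomial "slit loss" correction forcing the observed
        # continuum to match the best-fit SED model.
        coeff = np.polyfit(wave[mask], (sed / flux)[mask], deg=3)
        flux = flux * np.polyval(coeff, wave)
        # Shift to rest frame and normalize at 6800-7000 A (line-free region).
        rest = wave / (1 + z)
        norm = np.nanmedian(flux[(rest > 6800) & (rest < 7000)])
        rest_stack.append(np.interp(wave_grid, rest, flux / norm))
    stack = np.nanmedian(rest_stack, axis=0)

    # Bootstrap the galaxy sample to estimate a 68% confidence interval per pixel.
    boots = [np.nanmedian([rest_stack[i] for i in
                           np.random.randint(0, len(rest_stack), len(rest_stack))],
                          axis=0)
             for _ in range(nboot)]
    lo, hi = np.percentile(boots, [16, 84], axis=0)
    return stack, lo, hi
```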
Figure 3 shows this composite spectrum (in medium blue) and the corresponding uncertainties (in light blue) over the range of rest-wavelengths with continuum S/N \(\gtrsim 15\), where we define the S/N as the ratio of the composite spectrum to half the 68% CI. This requirement
results in \(\gtrsim 75\%\) (\(\geq 17/23\) galaxies) contributing to the final composite at each wavelength. At the center of the wavelength range where the targeted auroral lines are found, the stack represents \(\sim 690\) object-hours of exposure time.
Aside from the "strong" H\(\alpha\), [N ii]\(\lambda\lambda 6550,6585\), and [S ii]\(\lambda\lambda 6718,6733\) lines (highlighted in the inset panel in Figure 3), no other emission lines are routinely detected in ground-based spectra of individual \(z\sim 2\) galaxies. Lines longward of \(\sim 7000\) A are virtually inaccessible from the ground at \(z\gtrsim 2\), due to a combination of the rising thermal background in \(K\)-band and declining atmospheric transparency. In more recent studies of high-\(z\) galaxies using JWST/NIRSpec, emission lines in this wavelength range that are fainter than nebular [N ii] and [S ii] are only infrequently observed in individual galaxy spectra (e.g., Cameron et al., 2023; Shapley et al., 2023; Sanders et al., 2023b)--and even these relatively strong lines are not always visible in the spectra of some distant galaxies. In the composite spectrum shown in Figure 3, we identify emission lines from eight different elements (H, He, N, O, Si, S, Ar, and Ni, denoted by the vertical dotted grey lines). Many of these have only rarely, if ever, been observed outside of the nearby universe.
### The Nebular Composite Spectrum
To quantify the strength of these emission lines, we construct a second, continuum-subtracted composite spectrum. In this case, after each spectrum is flux-corrected to match the best-fit SED, the model continuum is subtracted before the spectrum is shifted into the rest frame. To remove any remaining irregular wavelength-dependent errors in the continuum subtraction, we subtract a running median, using a large window (\(\Delta\lambda_{\rm rest}\sim 200\) A) to avoid over-correcting near the emission lines. The spectra are then normalized by the measured flux in H\(\alpha\) and median-combined. Because H\(\alpha\) falls in the detector gap for three galaxies, the nebular composite only includes 18 of the 23 galaxies used to construct the total composite spectrum. Finally, the resulting composite spectrum is converted to flux units (\(\lambda F_{\lambda}\)) and re-scaled so that the peak flux of (narrow) H\(\alpha\) is 100. Figure 4 shows this composite spectrum (in medium blue), with the flux limit chosen to facilitate inspection of the semi-strong and faint features. As before, uncertainties are estimated using bootstrapping (shown by the light blue shading). The same lines identified in Figure 3 are marked by dashed grey lines here.

Figure 3: The median-combined composite spectrum for the CECILIA sample is shown by the medium blue line, where the individual galaxy spectra are scaled by the observed continuum at \(6800-7000\) Å before being combined. The stack is then re-scaled so that the continuum in the same wavelength interval matches the median rest-frame continuum for the constituent galaxies. The 68% CI for the composite is indicated by the light blue shading, and detected emission lines of eight different elements (H, He, N, O, Si, S, Ar, and Ni) are identified by dotted grey lines. The inset panel shows a zoomed-out version of the composite, centered on H\(\alpha\), [N ii]\(\lambda\lambda 6550,6585\), and [S ii]\(\lambda\lambda 6718,6733\), which are the only lines routinely observed in ground-based observations and shallower JWST spectra of individual high-\(z\) galaxies; the grey shaded region indicates the flux range shown in the full figure, where many fainter lines are visible.

Figure 4: The nebular composite spectrum of the CECILIA sample is shown in medium blue, with the light blue shading representing the 68% CI determined via bootstrapping. The dark blue curve shows the best-fit model, which includes emission lines of eight different elements, identified by the dashed grey lines. The strengths of these lines relative to the narrow component of H\(\alpha\) are reported in Table 1. The residuals from the model are shown in the bottom panels (medium blue), compared to the uncertainties on the median stack (light blue).
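The continuum-removal steps used for this nebular composite (SED subtraction, a wide running median, and H\(\alpha\) normalization) can be sketched as follows; the window size comes from the text above, while the implementation details are hypothetical.

```python
import numpy as np
from scipy.ndimage import median_filter

def nebular_spectrum(flux, sed, rest_wave, f_halpha, window_aa=200.0):
    """Continuum-subtract one rest-frame spectrum and normalize by H-alpha."""
    resid = flux - sed                              # remove the best-fit SED model
    dlam = np.median(np.diff(rest_wave))            # pixel scale in Angstroms
    npix = int(window_aa / dlam) | 1                # odd window, ~200 A wide
    resid -= median_filter(resid, size=npix)        # wide running median
    return resid / f_halpha                         # normalize by the H-alpha flux
```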
We determine the typical strength of these emission lines by first fitting the median composite with a model containing 73 emission lines, drawn from the catalog reported by Esteban et al. (2004), who conducted a detailed analysis of Very Large Telescope (VLT) UVES (Dekker et al., 2000) echelle spectrophotometry of the Orion nebula. We select those lines in the wavelength range sampled by the CECILIA nebular composite that are measured to have a flux \(>0.01\)% of H\(\alpha\) in the Esteban et al. (2004) spectrum. All of the lines are modeled as single Gaussians, have fixed relative wavelengths (i.e., the line centers are not allowed to move relative to one another), and are required to have the same width. For the strong [N ii]\(\lambda\lambda 6550,6585\) and semi-strong [O i]\(\lambda\lambda 6302,6365\) doublets, which have relative strengths set by atomic physics, the ratios are fixed at 1:2.96 and 3.15:1, respectively (Baluja & Zeippen, 1988a,b; Tachiev & Froese Fischer, 2001). A second Gaussian is included to account for broad components under the strongest lines (H\(\alpha\), [N ii]\(\lambda\lambda 6550,6585\), and [S ii]\(\lambda\lambda\)6718,6733) and allowed to be offset in velocity relative to the narrow components of the same lines; all of the broad components are required to have the same line width and velocity offset. The addition of these components significantly improves the residuals from the model by accounting for excess flux detected near the H\(\alpha+\) [N ii] complex.
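A stripped-down version of such a constrained multi-Gaussian model, with a shared line width, fixed relative centroids, and a locked doublet ratio, might look like the following; the line subset and the [N ii] ratio come from the text, while everything else is illustrative.

```python
import numpy as np

# Rest wavelengths (A); the [N II] 6585/6550 amplitude ratio is fixed at 2.96.
LINES = {"NII_6550": 6549.86, "NII_6585": 6585.27, "Halpha": 6564.62}

def line_model(wave, amp_6550, amp_ha, sigma, dv):
    """All narrow lines share one width (sigma, in A) and one velocity shift."""
    shift = 1.0 + dv / 2.998e5
    amps = {"NII_6550": amp_6550,
            "NII_6585": 2.96 * amp_6550,   # ratio fixed by atomic physics
            "Halpha": amp_ha}
    model = np.zeros_like(wave, dtype=float)
    for name, lam0 in LINES.items():
        mu = lam0 * shift
        model += amps[name] * np.exp(-0.5 * ((wave - mu) / sigma) ** 2)
    return model

# The model can be fit with, e.g., scipy.optimize.curve_fit on the composite.
# Broad components would add a second Gaussian per strong line, sharing their
# own width and velocity offset, as described in the text.
```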
The 1000 bootstrap samples are fit using the same model, and the 68% highest density interval (HDI) for the distributions of measured fluxes are used to determine uncertainties on the reported line fluxes. Lines are considered well-detected when they have a nonzero flux in \(>99\)% of fits _and_ the maximum _a posteriori_ (MAP) value for the line flux is \(>3\sigma\). Nineteen emission lines satisfy these criteria and are listed in Table 1. We also include Si ii \(\lambda 6373\) (2.9\(\sigma\)), the weaker [Ni ii] line at 7414 A (2.7\(\sigma\)), the Paschen line at 8470 A (3.1\(\sigma\), but only nonzero in 97.5% of the bootstrap stacks), and all of the broad components.
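Applied to the bootstrap results, the detection criterion amounts to a check like the one below (array names hypothetical, with the HDI approximated here by a central 68% interval):

```python
import numpy as np

def is_detected(boot_fluxes, map_flux):
    """boot_fluxes: flux of one line from each of the 1000 bootstrap fits."""
    boot_fluxes = np.asarray(boot_fluxes)
    frac_nonzero = np.mean(boot_fluxes > 0)
    lo, hi = np.percentile(boot_fluxes, [16, 84])     # 68% interval
    sigma = 0.5 * (hi - lo)
    return (frac_nonzero > 0.99) and (map_flux > 3 * sigma)
```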
The dark blue curve in the top panels of Figure 4 represents the best-fit model containing the emission lines in Table 1, with the fit residuals shown by the medium blue line in the bottom panels. Recall that the peak of (narrow) H\(\alpha\) is set to 100 in the nebular composite, so that the peak of the semi-strong and faint emission lines corresponds to their strengths relative to the narrow component of H\(\alpha\). The MAP values are also reported in Table 1, alongside the 68% HDI for each line. Because the uncertainties are calculated via bootstrap, note that these ranges reflect contributions from both observational uncertainties on the individual line measurements as well as physical variation among the objects in our sample.
## 4 Faint Emission Lines in High-Redshift Star-Forming Galaxies
Table 1: Observed Line Fluxes Relative to H\(\alpha\) [F(H\(\alpha_{\rm narrow}\)) = 100].

| Ion | \(\lambda_{\rm vac}\) (Å) | F(\(\lambda\)) (%) | Range (%) | Notes |
| --- | --- | --- | --- | --- |
| _Narrow components_ | | | | |
| [N II] | 5756.24 | 0.20 | 0.15-0.25 | Auroral line |
| He I | 5877.27 | 4.12 | 3.74-4.48 | |
| [O I] | 6302.04 | 2.80 | 2.36-3.08 | |
| [S III] | 6313.85 | 0.34 | 0.29-0.41 | Auroral line |
| [O I] | 6365.54 | 0.91 | 0.77-1.01 | |
| Si II | 6373.12 | 0.12 | 0.09-0.21 | |
| [N II] | 6549.86 | 2.55 | 1.86-3.02 | |
| H\(\alpha\) | 6564.62 | 100.00 | | |
| [N II] | 6585.27 | 7.59 | 5.54-8.99 | |
| He I | 6679.99 | 1.21 | 1.09-1.34 | |
| [S II] | 6718.29 | 7.33 | 5.55-7.76 | |
| [S II] | 6732.68 | 6.49 | 4.94-6.93 | |
| He I | 7067.23 | 1.25 | 1.07-1.51 | |
| [Ar III] | 7137.75 | 2.63 | 2.22-2.86 | |
| [O II] | 7321.94 | 1.21 | 1.10-1.38 | Auroral line |
| [O II] | 7332.21 | 0.95 | 0.88-1.12 | Auroral line |
| [Ni II] | 7379.86 | 0.22 | 0.15-0.30 | |
| [Ni II] | 7413.65 | 0.17 | 0.11-0.24 | |
| [Ar III] | 7753.23 | 0.64 | 0.57-0.72 | |
| P18 | 8440.28 | 0.20 | 0.13-0.27 | |
| O I | 8448.57 | 0.73 | 0.64-0.87 | |
| P17 | 8469.58 | 0.13 | 0.09-0.18 | |
| _Broad components_ | | | | |
| [N II] | 6549.86 | 0.86 | 0.15-1.56 | |
| H\(\alpha\) | 6564.62 | 12.63 | 6.01-28.31 | |
| [N II] | 6585.27 | 2.55 | 0.45-4.65 | |
| [S II] | 6718.29 | 0.95 | 0.00-2.48 | |
| [S II] | 6732.68 | 0.81 | 0.00-1.50 | |
In this section, we highlight individual semi-strong (\(\approx 2-3\%\) of H\(\alpha\)) and faint (\(\lesssim 1\%\) of H\(\alpha\)) emission lines detected in the CECILIA composite spectra and briefly comment on how they may be used to study high-\(z\) galaxies.
### Auroral Lines
Of all the faint lines present in the rest-optical spectra of star-forming regions, the auroral12 lines that can be used to implement the direct method of measuring metallicities have received the most attention in studies of high-\(z\) galaxies. Foremost among these is [O iii]\(\lambda 4364\), which falls at \(\lambda_{\rm obs}\approx 1.3-1.7\)\(\mu\)m for the \(z\sim 2-3\) CECILIA galaxies. As described in Section 2.2, CECILIA instead targets two auroral lines at longer wavelengths that are not only predicted to be stronger than [O iii]\(\lambda 4364\), but also fall at \(\lambda_{\rm obs}\gtrsim 2.0\)\(\mu\)m, where JWST is more sensitive. In total, three auroral lines fall in the wavelength range sampled by the composite spectra shown in Figures 3 and 4 and form the basis of our discussion here: [N ii]\(\lambda 5756\), [S iii]\(\lambda 6313\), and [O ii]\(\lambda\lambda 7322,7332\).
Footnote 12: “Auroral” lines are forbidden transitions from the second excited state to the first excited state of ions of heavy elements and can be paired with observations of the corresponding “nebular” (first excited state to ground state) lines to determine \(T_{\rm e}\) in low-density gas where collisional de-excitation does not play a significant role.
The strongest of these is [O ii]\(\lambda\lambda 7322,7332\), with both lines observed to be \(\sim 1\%\) the strength of the narrow component of H\(\alpha\) (right panel of Figure 5). Sanders et al. (2023c) recently reported the detection of this feature (actually a quadruplet) in two \(z=2.18\) galaxies, which had each been observed for \(\sim 15\) hr using Keck/MOSFIRE. Using their measurements to calculate direct-method oxygen abundances, they find moderate \(12+\log({\rm O/H})=7.89\pm 0.20\) and \(12+\log({\rm O/H})=8.24\pm 0.27\). Comparing these abundances to the predictions in the bottom panel of Figure 2, we see that they lie near the broad peak of the predicted line strengths and, thus, likely represent only the "tip of the iceberg": other deep spectroscopic studies should uncover auroral [O ii] lines in galaxies with a wider range of O/H. This is one of the main goals of the CECILIA program (Section 2.2), which includes many high-confidence detections of these lines in individual galaxy spectra that will be investigated in a subsequent paper.
The other auroral line specifically targeted by CECILIA is [S iii]\(\lambda 6313\) (middle panel of Figure 5), which samples a higher ionization zone than auroral [O ii]. Whereas this line is routinely used to measure abundances in nearby extragalactic H ii regions, it has never been reported in observations of galaxies outside the local universe. It is significantly detected in the CECILIA composites and we have preliminary evidence of its presence in spectra of individual CECILIA galaxies, but because of its faintness (\(0.29-0.41\%\) the strength of H\(\alpha\)) and proximity to the comparatively stronger [O i]\(\lambda 6302\) line, it is unlikely to be accessible in shallower or lower resolution JWST observations of objects similar to the CECILIA sample.
The third auroral line observed in the composite spectra is [N ii]\(\lambda 5756\), at \(0.15-0.25\%\) the strength of H\(\alpha\) (left panel of Figure 5). It is formally detected at \(4.6\sigma\) in the stack but is likely too faint to be detected in the individual spectra of high-\(z\) galaxies, even using long exposure times with JWST. Still, for the sample of galaxies where it _is_ possible to detect, it may serve as an important tool for calibrating the \(T_{\rm e}\) relation between different ionization zones (e.g., Garnett, 1992; Esteban et al., 2009; Croxall et al., 2016; Yates et al., 2020; Rogers et al., 2021).
We use our measurement of [N ii]\(\lambda 5756\) to calculate \(T_{\rm e}\) and the corresponding direct-method ionic abundance. First, we use the line strengths for [S ii]\(\lambda\lambda 6718,6733\) to determine the electron density and find \(n_{e}\approx 285\) cm\({}^{-3}\), which is consistent with values previously reported for KBSS galaxies (Strom et al., 2017) and other \(z\sim 2-3\) galaxy samples (e.g., Sanders et al., 2016). This density is then combined with the measurements of nebular and auroral [N ii] to calculate \(T_{\rm e}\) using the PyNeb package (Luridiana et al., 2015). However, because the nebular stack does not contain both H\(\alpha\) and H\(\beta\), which are required to determine the Balmer decrement and robustly constrain the reddening, we adopt the interquartile range in E(B\(-\)V) for the KBSS parent sample, E(B\(-\)V)\(=0.06-0.47\) (Strom et al., 2017). Using these values, we find \(T_{\rm e}\)[N ii]\(=13630\pm 2540\) K, where the reported uncertainties also capture the likely range in reddening for \(z\sim 2-3\) galaxies. We ultimately calculate \(12+\log({\rm N^{+}/H^{+}})=6.33^{+0.18}_{-0.30}\) and \(12+\log({\rm S^{+}/H^{+}})=5.70^{+0.16}_{-0.26}\) using this \(n_{e}\) and \(T_{\rm e}\)[N ii].
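For orientation, a direct-method calculation of this kind can be sketched with PyNeb as below. The input ratios are taken from Table 1, the reddening correction adopted in the text is omitted here, and the exact call signatures should be checked against the PyNeb documentation; the sketch is illustrative, not the analysis code used for CECILIA.

```python
import numpy as np
import pyneb as pn

# Electron density from the [S II] 6718/6733 doublet (fluxes from Table 1),
# assuming T_e ~ 1e4 K for the density diagnostic.
S2 = pn.Atom('S', 2)
ne = S2.getTemDen(7.33 / 6.49, tem=1.0e4, wave1=6716, wave2=6731)

# Electron temperature from the nebular/auroral [N II] ratio at that density.
N2 = pn.Atom('N', 2)
Te = N2.getTemDen((2.55 + 7.59) / 0.20, den=ne,
                  to_eval='(L(6548) + L(6584)) / L(5755)')

# Ionic abundance N+/H+ from [N II]6585 relative to H-beta, where H-beta is
# approximated from H-alpha assuming Case B (H-alpha/H-beta = 2.86) with no
# reddening; the paper instead adopts the KBSS E(B-V) range.
f_hbeta = 100.0 / 2.86
n_plus = N2.getIonAbundance(int_ratio=100.0 * 7.59 / f_hbeta, tem=Te, den=ne,
                            wave=6584, Hbeta=100.0)
print(ne, Te, 12 + np.log10(n_plus))
```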
### Other Semi-strong and Faint Lines
In addition to the three auroral emission lines, we also detect eight semi-strong and faint emission lines from four different heavy elements. Cutouts of the nebular composite spectrum near these features are shown in Figure 6, in the same manner as Figures 4 and 5.
The strongest of the lines is [O i]\(\lambda 6302\) (upper left panel of Figure 6), which is observed to be \(2.36-3.08\%\) the strength of the narrow component of H\(\alpha\) and is significantly stronger than the auroral [S iii]\(\lambda 6313\) line in its red wing. Its partner line at 6365 A is \(\approx 3.15\times\) weaker as set by atomic physics and is blended with Si ii \(\lambda 6373\), which also probes mostly neutral and low-ionization gas. Rather than being an abundance diagnostic, this line is most commonly used as a way to identify the principal ionization mechanism in emission line galaxies using a form of the Baldwin-Phillips-Terlevich (BPT) diagram (Baldwin et al., 1981; Veilleux and Osterbrock, 1987). It can also be useful for identifying contributions from diffuse ionized gas and shocks (e.g., Tullmann and Dettmar, 2000; Moy and Rocca-Volmerange, 2002; Zhang et al., 2017). Although widely studied in the local universe, including in large samples such as the Sloan Digital Sky Survey (SDSS; Kewley et al., 2006; Law et al., 2021), [O i]\(\lambda 6302\) is not commonly detected in ground-based observations of individual \(z\gtrsim 2\) galaxies. More recently, however, it has been observed in a handful of high-\(z\) galaxies with JWST spectroscopy (e.g., Cameron et al., 2023). At \(\sim 2-3\%\) the strength of H\(\alpha\), this semi-strong line should provide an accessible and promising method for discriminating between AGN and star formation and probing low-ionization gas in high-\(z\) galaxies.

Figure 5: The three auroral lines observed in the CECILIA stack are shown in separate panels with the same flux scale, in the same manner as Figure 4. Auroral [N ii]\(\lambda 5756\) (left panel) is the weakest of the three and is estimated to be \(0.15-0.25\%\) the strength of H\(\alpha\). Both [S iii]\(\lambda 6313\) (center panel) and [O ii]\(\lambda\lambda 7322,7332\) (right panel) are noticeably stronger, and the oxygen lines are clearly the brightest auroral features in the \(\lambda=5700-8500\) Å range, with each being \(\sim 1\%\) the strength of H\(\alpha\).

Figure 6: Semi-strong and faint emission lines of four elements—O, Si, Ar, and Ni—are shown in the same manner as Figures 4 and 5, but with a different flux range used in each panel. The top row shows forbidden [O i]\(\lambda\lambda 6302,6365\) (upper left), with [S iii]\(\lambda 6313\) and Si ii \(\lambda 6373\) in the red wings of each line, respectively, and permitted O i at 8449 Å, blended with Pa18 (upper right). The bottom row shows forbidden lines of [Ar iii]\(\lambda\lambda 7138,7753\) (lower left) and [Ni ii]\(\lambda\lambda 7380,7414\) (lower right). The stronger [O i] and [Ar iii] lines have now been observed in deep spectra of individual high-\(z\) galaxies (e.g., Cameron et al., 2023; Sanders et al., 2023c), but the O i \(\lambda 8449\) recombination line and [Ni ii] are only rarely observed, even in observations of nearby galaxies and H ii regions.
The lower left panel of Figure 6 shows the widely-spaced [Ar iii]\(\lambda\lambda 7138,7753\) lines, which, like lines of O\({}^{++}\), trace the gas in the high ionization zone of star-forming regions. In low-\(z\) galaxies, the stronger [Ar iii]\(\lambda 7138\) line has been used to determine absolute argon abundances and relative abundance ratios, such as Ar/O (e.g., Berg et al., 2015; Croxall et al., 2016; Rogers et al., 2021), after accounting for unseen ionization states of Ar. At \(2.22-2.86\%\) the strength of H\(\alpha\), comparable to [O i]\(\lambda 6302\), this line is one of the strongest heavy metal lines present in the CECILIA nebular composite spectrum, aside from the familiar strong lines; Sanders et al. (2023c) find similar ratios for their two \(z=2.18\) galaxies. Thus, given the early evidence that [O i]\(\lambda 6302\) may be more easily accessible with JWST for \(z\sim 2-3\) galaxies, [Ar iii]\(\lambda 7138\) is also an attractive target for spectroscopic studies of galaxy enrichment. Although both Ar and O are nominally produced by the same mechanism in massive stars, differences in Ar/O as a function of overall enrichment could reflect a dependence of stellar nucleosynthesis on metallicity (e.g., Kennicutt et al., 2003; Izotov et al., 2006).
Permitted O i at 8449 A (upper right panel of Figure 6) is one of the most commonly used recombination lines for measuring metallicity, in astrophysical environments where such transitions can be observed (Maiolino and Mannucci, 2019). Typically, however, metal recombination lines are too weak to be useful diagnostics, even in \(z\sim 0\) galaxies and H ii regions, so its presence in the CECILIA composite is unexpected. We consulted the database maintained by the Atomic Spectroscopy Data Center at the National Institute of Standards and Technology (NIST), but no other likely candidates for emission lines at the same rest wavelength were identified. O i \(\lambda 8449\) is blended with the Paschen series line at 8440 A (P18) in its left wing, and the neighboring Paschen line at 8470 A (P17) is also detected in the nebular composite, relieving concerns that poor wavelength calibration may have led to misidentifying the line. Garcia-Rojas et al. (2006) and Garcia-Rojas et al. (2007) both use O i \(\lambda 7771\) and O i \(\lambda 8449\) to derive O\({}^{+}\) abundances but find much higher O\({}^{+}\)/H\({}^{+}\) using O i \(\lambda 8449\) than O i \(\lambda 7771\). Garcia-Rojas et al. (2007) suggest that this may be due to the fact that starlight can contribute significantly to the strength of this line (Grandi, 1975), warranting further investigation of its origin in the high-\(z\) stack.
Also puzzling is the detection of the [Ni ii]\(\lambda\lambda 7380,7414\) lines (shown in the lower right panel of Figure 6), which are \(0.15-0.30\%\) and \(0.11-0.24\%\) the strength of H\(\alpha\), respectively. There are comparatively few references to the observation of these lines in astrophysical objects, but a handful of studies have reported measurements of [Ni ii]\(\lambda 7380\) and the corresponding Ni\({}^{+}\) abundances in gaseous nebulae in the Milky Way (Dennefeld, 1982; Fesen and Kirshner, 1982; Henry and Fesen, 1988; Esteban et al., 1999). In many of these cases, [Ni ii]\(\lambda 7380\) was seen to be much stronger than expected relative to the associated [Ni ii]\(\lambda 7414\) line, which is only marginally detected in the CECILIA composite spectra. Other authors have explained this by invoking fluorescence by the UV continuum, similar to [Fe ii] lines (Lucy, 1995), and/or collisional excitation in very high density (\(n_{e}\approx 10^{6}\) cm\({}^{-3}\)) gas (Bautista et al., 1996). As with O i \(\lambda 8449\), no reasonable alternative identifications were found in the NIST database; however, without a significant detection of the [Fe ii] lines that may be affected by the same physical mechanisms, it is difficult to speculate about the origin of the [Ni ii] emission here.
### Broad H\(\alpha\)
Broad line emission has been observed in both spectra of individual galaxies and composite spectra of galaxies at \(z\sim 2\) and is attributed to galaxy-scale ionized gas outflows (see Section 4.6 of the review by Forster Schreiber and Wuyts, 2020, and references therein); in contrast, the frequently brighter, narrow components of emission lines trace galaxies' star-forming regions. Both active galactic nuclei (AGN; e.g., Nesvadba et al., 2008; Genzel et al., 2014; Forster Schreiber et al., 2014; Cresci et al., 2015) and star-formation (e.g., Genzel et al., 2011; Davies et al., 2019) can generate these outflows, resulting
in differences in inferred outflow velocity (i.e., emission line width), with AGN typically driving higher velocity outflows than feedback from massive stars.
In order to achieve a good fit to the nebular composite spectrum, we include two Gaussian components for the strongest lines to account for excess flux that results in large residuals from a model with only a single (narrow) component. Based on the results from fitting the 1000 bootstrap samples, these broad components have a FWHM of \(544^{+45}_{-164}\) km s\({}^{-1}\) and are consistent with no velocity offset relative to the narrow components, which have a FWHM of \(303^{+14}_{-19}\) km s\({}^{-1}\). The broad H\(\alpha\) line is \(6.01-28.31\)% the strength of the narrow component of H\(\alpha\), with significantly weaker broad components observed for nebular [N ii] and [S ii]. If these components do indeed reflect the presence of ionized gas outflows, the evidence for broad (albeit weak) line emission in forbidden transitions and the moderate velocity width suggest that they are likely driven by star formation. Comparable FWHM velocities of \(\sim 400-500\) km s\({}^{-1}\) are observed in deep VLT/SINFONI (Eisenhauer et al., 2003; Bonnet et al., 2004) spectra of star-forming clumps at \(z\sim 2\) (e.g., Newman et al., 2012; Forster Schreiber et al., 2019). However, because of the additional median-filtering required to remove remaining fluctuations in the continuum of individual galaxy spectra (Section 3.2), the detailed properties of any broad line emission in the CECILIA stack could be systematically biased.
## 5 Conclusions
We have reported the first results from the CECILIA program (JWST PID 2593), which obtained ultra-deep \(\sim 30\) hr NIRSpec/G235M observations of 33 star-forming galaxies at \(z\sim 1-3\). Using data for 23 of these galaxies, we constructed rest-optical composite spectra, both with and without the stellar continuum, corresponding to exposure times of 690 object-hours and 540 object-hours, respectively. These composites, shown in Figures 3 and 4, provide one of the most detailed views to date of star-forming galaxies in the early universe and function as an atlas of their characteristic rest-optical emission line spectra.
The principal findings based on our analysis of the stacked spectra are as follows:
* We significantly detect emission lines of eight different elements (H, He, N, O, Si, S, Ar, and Ni), including evidence for broad line emission under H\(\alpha\), [N ii]\(\lambda\lambda 6550,6585\), and [S ii]\(\lambda\lambda 6718,6733\). The strengths of these lines relative to the narrow component of H\(\alpha\) are reported in Table 1.
* Aside from strong [N ii], H\(\alpha\), and [S ii], which have previously been studied in large ground-based spectroscopic samples, the majority of emission lines are \(\lesssim 3\)% the strength of H\(\alpha\). Some of these features, such as [O i]\(\lambda 6302\) (shown in the upper left panel of Figure 6), are now being detected in JWST spectra of individual high-\(z\) galaxies, and we expect other lines with strengths \(\gtrsim 2-3\)% that of H\(\alpha\) to be good candidates for spectroscopic follow-up of large samples. In addition to the stronger forbidden [O i] line, these semi-strong lines include the He i line at \(\lambda 5877\) and [Ar iii]\(\lambda 7138\) (shown in the bottom left panel of Figure 6).
* The three auroral emission lines present at \(\lambda_{\rm rest}\approx 5700-8500\) Å ([N ii]\(\lambda 5756\), [S iii]\(\lambda 6313\), [O ii]\(\lambda\lambda 7322,7332\), shown in Figure 5) are \(\lesssim 1\)% the strength of H\(\alpha\). Using our measurements of auroral and nebular [N ii], we find \(T_{e}\)[N ii] = \(13630\pm 2540\) K, which is the first time a \(T_{e}\) has been reported for high-redshift galaxies using this ion. Although we have not reported the significance of detections in individual galaxy spectra in this work, it seems likely that these auroral lines will remain out of reach of typical observations of high-\(z\) galaxies, particularly those with low SFRs, low ionization, and/or high metallicity. This only underscores the need for more accurate line ratio diagnostics for metallicity that make use of the strong and semi-strong emission lines present in galaxies' rest-optical spectra.
* We measure broad (\(544^{+45}_{-164}\) km s\({}^{-1}\) FWHM) line emission under the strongest lines, and a broad component of H\(\alpha\) that is \(6.01-28.31\%\) the strength of the narrow component (Figure 7). These results appear indicative of star-formation driven outflows; however, we caution that, owing to remaining uncertainties in the flux calibration (see the discussion in Section 2.4), the appearance of this component should not be over-interpreted. We defer a more detailed discussion of broad line emission and its connection to galaxy outflows to a future paper.

Figure 7: The same nebular composite spectrum from Figure 4 is shown near the strong H\(\alpha\), [N ii]\(\lambda\lambda 6550,6585\), and [S ii]\(\lambda\lambda 6718,6733\) lines, with a more extended flux range to show the peaks of the lines. The inset panel is a zoom-in on the grey shaded region, highlighting the broad components of H\(\alpha\) and [N ii] (in red). The narrow components of all three lines are illustrated by the dot-dashed navy curves.
JWST is delivering on its promise to provide access to faint emission lines in the spectra of \(z\gtrsim 2\) galaxies, evidenced not only by what we have presented in this letter, but also by the many exciting results based on NIRSpec/MSA and NIRCam grism spectroscopy that have been published over the last year. Deep observations, such as those obtained as part of CECILIA and outlined here, will be critical for developing and testing the new tools necessary to accurately interpret this wealth of data. As known issues with JWST data products continue to be resolved, it will benefit the extragalactic community to revisit some of the earliest observations--with the benefit of hindsight and these new tools--in order to maximize the scientific impact of these data. To aid in this effort, forthcoming work with CECILIA will focus on (1) \(T_{e}\) measurements and direct-method metallicities for the sample of galaxies introduced here, as well as (2) new line ratio diagnostics for gas-phase oxygen abundance.
The authors thank Jane Rigby and Marcia Rieke for their advice regarding the reduction of the JWST data, as well as Jenny Greene for her input regarding the scope of the discussion. We are also grateful to the JWST/NIRSpec team for their ongoing work to support this complex and powerful instrument.
ALS, GCR and RFT acknowledge partial support from the JWST-GO-02593.001-A, JWST-GO-02593.004-A, and JWST-GO-02593.006-A grants, respectively. RFT also acknowledges support from the Pittsburgh Foundation (grant ID UN2021-121482) and the Research Corporation for Scientific Advancement (Cottrell Scholar Award, grant ID 28289).
This work is primarily based on observations made with NASA/ESA/CSA JWST, associated with PID 2593. The data were obtained from the Mikulski Archive for Space Telescopes (MAST) at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-03127 for JWST. Some of the data used in generating the original line flux predictions were obtained at the W.M. Keck Observatory, which is operated as a scientific partnership between the California Institute of Technology, the University of California, and NASA. The Observatory was made possible by the generous financial support of the W.M. Keck Foundation, and the authors wish to recognize and acknowledge the significant cultural role and reverence that the summit of Maunakea has within the indigenous Hawaiian community.

Facilities: JWST (NIRSpec)

Software: BPASSv2 (Stanway et al., 2016; Eldridge et al., 2017), Cloudy (Ferland et al., 2013), GalDNA (Strom et al., 2018), grizli (Brammer, 2023), msaexp (Brammer, 2022), PyNeb (Luridiana et al., 2015)
|
2308.10001 | AltNeRF: Learning Robust Neural Radiance Field via Alternating
Depth-Pose Optimization | Neural Radiance Fields (NeRF) have shown promise in generating realistic
novel views from sparse scene images. However, existing NeRF approaches often
encounter challenges due to the lack of explicit 3D supervision and imprecise
camera poses, resulting in suboptimal outcomes. To tackle these issues, we
propose AltNeRF -- a novel framework designed to create resilient NeRF
representations using self-supervised monocular depth estimation (SMDE) from
monocular videos, without relying on known camera poses. SMDE in AltNeRF
masterfully learns depth and pose priors to regulate NeRF training. The depth
prior enriches NeRF's capacity for precise scene geometry depiction, while the
pose prior provides a robust starting point for subsequent pose refinement.
Moreover, we introduce an alternating algorithm that harmoniously melds NeRF
outputs into SMDE through a consistence-driven mechanism, thus enhancing the
integrity of depth priors. This alternation empowers AltNeRF to progressively
refine NeRF representations, yielding the synthesis of realistic novel views.
Extensive experiments showcase the compelling capabilities of AltNeRF in
generating high-fidelity and robust novel views that closely resemble reality. | Kun Wang, Zhiqiang Yan, Huang Tian, Zhenyu Zhang, Xiang Li, Jun Li, Jian Yang | 2023-08-19T12:41:35Z | http://arxiv.org/abs/2308.10001v2 | # AltNeRF: Learning Robust Neural Radiance Field
###### Abstract
Neural Radiance Fields (NeRF) have shown promise in generating realistic novel views from sparse scene images. However, existing NeRF approaches often encounter challenges due to the lack of explicit 3D supervision and imprecise camera poses, resulting in suboptimal outcomes. To tackle these issues, we propose AltNeRF--a novel framework designed to create resilient NeRF representations using self-supervised monocular depth estimation (SMDE) from monocular videos, without relying on known camera poses. SMDE in AltNeRF masterfully learns depth and pose priors to regulate NeRF training. The depth prior enriches NeRF's capacity for precise scene geometry depiction, while the pose prior provides a robust starting point for subsequent pose refinement. Moreover, we introduce an alternating algorithm that harmoniously methods NeRF outputs into SMDE through a consistence-driven mechanism, thus enhancing the integrity of depth priors. This alternation empowers AltNeRF to progressively refine NeRF representations, yielding the synthesis of realistic novel views. Additionally, we curate a distinctive dataset comprising indoor videos captured via mobile devices. Extensive experiments showcase the compelling capabilities of AltNeRF in generating high-fidelity and robust novel views that closely resemble reality.
## 1 Introduction
Neural rendering has achieved unprecedented progress on the long-standing view synthesis task in computer vision communities [11, 13, 14, 15]. One prominent exemplar of this task is NeRF [16], which comprehensively captures the continuous volumetric essence of real-world scenes using multi-view images alongside precise camera poses, consequently generating lifelike new perspectives. Nonetheless, NeRF often grapples with suboptimal outcomes that compromise novel view synthesis and distort scene geometry, as evidenced in Fig. 1. We identify two primary catalysts for this issue: 1) _The lack of explicit 3D supervision_. NeRF solely hinges on 2D image supervision, which may furnish inadequate geometric constraints for textureless or view-limited scenes. Introducing explicit 3D supervision holds potential to shepherd NeRF towards superior convergence. 2) _Inaccurate camera poses_. NeRF's reliance on precise camera poses for constructing accurate volumetric scenes becomes a stumbling block in the face of pose inaccuracies or noise. Such errors in camera poses compound the optimization challenges for NeRF.
Although existing methods have endeavored to tackle either of these issues, they remain encumbered by certain limitations. Firstly, some methods [1, 13] leverage depth priors to facilitate NeRF's convergence towards improved solutions. These methods derive depth priors from structure-from-motion methodologies or depth completion techniques, employing them as rigid constraints for NeRF. However, these depth priors might not attain the requisite accuracy, potentially skewing NeRF's optimization trajectory and yielding deteriorated performance. Secondly, some methods [23, 14, 15] adopt alternative strategies and undertake the
joint optimization of NeRF and camera poses to mitigate the ramifications of imprecise camera poses. Nevertheless, this combined task encompasses a non-convex optimization conundrum that is acutely sensitive to the initialization of camera poses. Consequently, these approaches necessitate initial camera poses that closely approximate the optimal values; otherwise, they frequently converge towards unfavorable local minima. Illustratively, Fig. 2 (b) depicts the scenario where BARF [12] employs identity matrices as green-hued initializations, eventually converging to nonsensical poses after numerous iterations.
To address the above problems, we propose AltNeRF--a novel framework designed to generate robust neural radiance fields from unposed images. The core concept involves a cyclic process of self-supervised monocular depth estimation (SMDE) and NeRF optimization, synergistically enhancing both methodologies. Leveraging SMDE from monocular videos (as described in [23, 24]), we infer depth and pose for each frame without the need for manual annotations. The estimated pose serves as an effective initialization, facilitating smoother optimization akin to the orange poses depicted in Fig. 2 (b). Furthermore, the estimated depth provides an initial objective that steers NeRF away from optimizing inaccurate scene geometries. After further optimizing NeRF, we obtain better poses and depths to refine the depth estimates of SMDE. This alternation continually updates the depth objective to converge towards actual scene depths, as illustrated in Fig. 2 (a). Our AltNeRF harnesses the complementary strengths of SMDE and NeRF, leading to more robust scene representations. Overall, our contributions can be summarized as:
* We introduce depth-pose priors learned from monocular videos to simultaneously regularize the scene geometries and initialize the camera poses to enhance the novel view synthesis of NeRF.
* To the best of our knowledge, we are the first to propose AltNeRF--a novel framework that alternately optimizes self-supervised monocular depth estimation and NeRF, synergistically boosting both components.
* We also collect a new dataset of indoor videos captured with a cellphone. Extensive experiments on LLFF, ScanNet, CO3D and our dataset demonstrate that our AltNeRF can synthesize realistic novel views with high fidelity and robustness, and outperforms the related NeRF methods.
## 2 Related Work
Self-supervised Monocular Depth Estimation.The learning of SMDE is an image reconstruction problem. It is supervised by the photometric loss that measures the difference between a target frame and frames warped from nearby views. SfM-Learner [23] is a seminal work that proposed to jointly predict scene depth and relative camera poses. Follow-up works enhanced SfM-Learner by enforcing depth scale consistency [25, 26], introducing more powerful neural networks [27, 28, 29], and applying iterative refinement [2]. Furthermore, MonoDepth [2] proposed a minimum reprojection loss to handle occlusions, and some works addressed the dynamic object problem by compensating and masking pixels within dynamic areas using optical flow [23, 28] and pretrained segmentation models [20, 21]. Some other works boosted the performance of self-supervised depth estimation by introducing a feature-metric loss [26], proposing a resolution adaptive framework [10], and exploring the knowledge distilling approaches [11, 12]. Recently, some works have focused on challenging environments, such as indoor [13, 14, 15] and nighttime [25, 26, 27] scenes and shown impressive performance.
View Synthesis with NeRF. NeRFs [13] are a powerful technique for novel view synthesis, but they face several challenges in different scenarios. Many works have extended NeRFs to handle dynamic [28, 29, 10], unbounded [24, 25], and large-scale scenes [26, 27, 28], as well as to optimize NeRFs from in-the-wild [12] and dark images [13]. Some works have also improved the generalization [29, 14, 28, 15], bundle sampling [26, 25], initialization [20, 21] and data structure [28, 29, 27] of NeRFs. However, these methods still rely on accurate camera poses, which are not always available or realistic. To address this problem, recent works [26, 27, 28, 29] have studied the joint task of optimizing the NeRF model and camera poses. However, they are restricted to simple or known pose distributions. Moreover, some methods use depth priors [27, 28] from external sources, which may be noisy or inaccurate and lead to suboptimal NeRF representations. In contrast, we propose a novel framework that can learn robust NeRF representations from monocular videos. Our framework leverages self-supervised depth estimation to obtain depth and pose priors that regularize NeRF learning.
Figure 2: (a) Existing methods establish a rigid target (the blue dot) using inaccurate depth prior, whereas we leverage valuable intermediate results from NeRF to dynamically adjust the objective (the green dots) towards the real depth (the red dot). (b) Pose refinement starting from different initial poses. The experiment is conducted on Vasedeck scene.
Furthermore, we devise an alternating algorithm that refines the depth and pose priors with consistent NeRF outputs, leading to better 3D geometry and camera poses.
## 3 Preliminary
In this section, we review the key concepts and techniques of Self-supervised Monocular Depth Estimation (SMDE) and Neural Radiance Field (NeRF) to provide the necessary background for our method.
Self-supervised Monocular Depth Estimation. SMDE is a training method that only requires monocular videos \(\mathcal{V}\) and known camera intrinsics \(K\). It employs two neural networks, \(f_{d}:I\to D\) and \(f_{p}:(I_{t},I_{s})\to P_{t\to s}\), to predict the depth map \(D\) of an input image \(I\) and the relative camera pose \(P_{t\to s}\) between frames \(I_{t}\) and \(I_{s}\). The training objective is to reconstruct the target frame \(I_{t}\) from nearby views \(I_{s}\) by mapping pixels \(x_{t}\) in the target image to pixels \(x_{s}\) in the source image based on the predicted depth and camera pose: \(x_{s}\sim KP_{t\to s}D(x_{t})K^{-1}x_{t}\). The photometric loss is used to supervise this process, which consists of the structural similarity term and the \(\ell_{1}\) term:
\[\begin{split} L_{p}(I_{t},\hat{I}_{t})=&\frac{ \alpha}{2}(1-SSIM(I_{t},\hat{I}_{t}))+\\ &(1-\alpha)\|I_{t}-\hat{I}_{t}\|_{1},\end{split} \tag{1}\]
where \(\alpha\) is often set to 0.85. An edge-aware smoothness loss is also added to ensure smoothness in predicted depth maps. This loss is based on the image gradients \(\partial_{x}\) and \(\partial_{y}\) along the horizontal and vertical axes, and is weighted by an exponential function of the image gradients to preserve edges:
\[L_{s}=|\partial_{x}D|e^{-|\partial_{x}I|}+|\partial_{y}D|e^{-|\partial_{y}I|}, \tag{2}\]
where \(|\cdot|\) returns the absolute value.
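To make the supervision concrete, the following is a minimal NumPy sketch of the photometric loss (Eq. 1) and the edge-aware smoothness loss (Eq. 2) for single-channel images; the image-global SSIM below is a simplification of the windowed SSIM typically used, and all helper names and constants other than \(\alpha=0.85\) are illustrative assumptions.

```python
import numpy as np

def ssim_global(a, b, c1=0.01 ** 2, c2=0.03 ** 2):
    # Simplified image-global SSIM; practical implementations use local windows.
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / (
        (mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2))

def photometric_loss(I_t, I_hat, alpha=0.85):
    # Eq. (1): SSIM term plus L1 term.
    return alpha / 2 * (1 - ssim_global(I_t, I_hat)) + (1 - alpha) * np.abs(I_t - I_hat).mean()

def smoothness_loss(D, I):
    # Eq. (2): depth gradients damped where the image itself has strong gradients.
    dD_x, dD_y = np.abs(np.diff(D, axis=1)), np.abs(np.diff(D, axis=0))
    dI_x, dI_y = np.abs(np.diff(I, axis=1)), np.abs(np.diff(I, axis=0))
    return (dD_x * np.exp(-dI_x)).mean() + (dD_y * np.exp(-dI_y)).mean()
```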
Neural Radiance Field.NeRF represents a scene as a continuous volumetric field. NeRF takes in a 3D point \(\mathrm{p}\in\mathbb{R}^{3}\) and a unit viewing direction \(\mathrm{d}\in\mathbb{R}^{3}\), and returns the corresponding density \(\sigma\) and color \(c\): \(f_{n}:(\mathrm{p},\mathrm{d})\rightarrow(\sigma,c)\). The volumetric field can be rendered to 2D images using volume rendering techniques [11]:
\[\hat{C}(r)=\int_{t_{n}}^{t_{f}}T(t)\sigma(t)c(t)dt. \tag{3}\]
Similarly, the scene depths are created by computing the mean terminating distance of a ray \(r=\mathrm{o}+t\mathrm{d}\) parameterized by camera origin \(\mathrm{o}\) and viewing direction \(\mathrm{d}\):\(\hat{D}(r)=\int_{t_{n}}^{t_{f}}T(t)\sigma(t)tdt\), where \(T(t)=exp(-\int_{t_{n}}^{t}\sigma(s)ds)\) handles occlusions, and \(t_{n}\) and \(t_{f}\) are near and far depth bounds, respectively. The optimization objective of NeRF is to minimize the reconstruction loss, which is computed as the squared differences between the rendered and ground truth colors for all rays:
\[L_{c}=\|\hat{C}(r)-C(r)\|_{2}. \tag{4}\]
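For reference, a small NumPy sketch of the discretised quadrature that approximates Eq. (3) and the expected ray depth \(\hat{D}(r)\); the sample densities, colours, and distances along a single ray are assumed to be given.

```python
import numpy as np

def render_ray(sigmas, colors, t_vals):
    # sigmas: (N,) densities, colors: (N, 3) RGB, t_vals: (N,) sorted distances along the ray.
    deltas = np.diff(t_vals, append=t_vals[-1] + 1e10)               # spacing between samples
    alphas = 1.0 - np.exp(-sigmas * deltas)                          # per-segment opacity
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alphas)))[:-1]   # transmittance T(t)
    weights = trans * alphas
    color = (weights[:, None] * colors).sum(axis=0)                  # approximates \hat{C}(r)
    depth = (weights * t_vals).sum()                                 # approximates \hat{D}(r)
    return color, depth
```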
## 4 AltNeRF Framework
In this section, we introduce our AltNeRF framework, which comprises two components: the Scene Prior Module (SPM) and the Scene Representation Module (SRM). These modules work together under an alternating algorithm. In the following sections, we will delve into the details.
### Scene Prior Module
Pretraining. SPM leverages SMDE to provide initial scene depths and camera poses. For better robustness, it is first pretrained on many monocular videos to learn prior knowledge. To improve the generalization ability of the model on unseen scenes, we employ the knowledge distillation strategy introduced in [23] and distill knowledge from an off-the-shelf relative depth estimator, DPT [12], via
\[\begin{split} L_{r}=1-SSIM(D,D_{r})+\\ 0.1\times(E_{r}\oplus E/\mathit{size}(E)),\end{split} \tag{5}\]
where \(D_{r}\) is the reference depth map produced by DPT, \(\oplus\) denotes XOR operation, \(size(\cdot)\) returns the size of a set, and
Figure 3: The overall pipeline of our AltNeRF. The scene prior module learns depth and pose priors, which serves as the depth reference and initial poses, respectively. The scene representation module simultaneously refines the initial poses with \(\Delta P_{i}\) and learns 3D scene representation, which is regularized by \(D_{i}\), and produces more accurate poses \(\hat{P}_{i+1}\) and finer depth maps \(\hat{D}_{i+1}\). These materials are then fed back to the scene prior module as guidance to improve its performance.
\(E_{r}\) and \(E\) are occluding boundary maps of \(D_{r}\) and \(D\), respectively. Overall, the loss function for pretraining is:
\[L_{pt}=L_{p}+L_{r}+1.0e^{-3}\times L_{s}. \tag{6}\]
#### 4.1.2 Test-time Adaptation
SPM predicts relative depths, which are defined up to an unknown scale factor, leading to potential inconsistencies across frames. Moreover, the data distribution of the target video often differs from that of the training data. To mitigate these challenges, we employ self-supervised finetuning to adapt SPM to the video before generating predictions. To ensure scale-consistent depth-pose estimates, we additionally introduce the geometry consistency loss from Bian et al. (2019):
\[L_{g}=\frac{\|D_{s}(x_{s})-D_{t}(x_{t})\|_{1}}{D_{s}(x_{s})+D_{t}(x_{t})}, \tag{7}\]
where \(D_{s}\) and \(D_{t}\) are predicted depth map of \(I_{s}\) and \(I_{t}\), respectively. Finally, the loss used in adaptation step is
\[L_{ad}=L_{pt}+0.5\times L_{g}. \tag{8}\]
#### 4.1.3 Pose Conversion
SPM predicts relative 3D transformations between frames, while SRM requires absolute camera poses. To reconcile these requirements, we establish a world coordinate system that aligns with the camera coordinate system of the first frame \(I_{0}\), whose pose matrix is an identity matrix. We then use the chain rule to calculate the camera poses \(P_{i}\) of subsequent frames based on their relative pose \(P_{i\to i+1}\) predicted by SPM: \(P_{i+1}=P_{i\to i+1}\times P_{i}\).
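A short NumPy sketch of this chain rule, assuming each relative pose is a \(4\times 4\) rigid transform mapping frame \(i\) to frame \(i+1\):

```python
import numpy as np

def accumulate_poses(relative_poses):
    # Fix the world frame to the first camera (identity pose), then chain:
    # P_{i+1} = P_{i->i+1} * P_i.
    poses = [np.eye(4)]
    for P_rel in relative_poses:
        poses.append(P_rel @ poses[-1])
    return poses
```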
### Scene Representation Module
SRM serves a dual purpose of learning 3D scene representation and refining camera poses simultaneously. It extends the BARF approach Lin et al. (2021) by introducing three improvements: depth regularization, improved pose initialization, and warmup learning. These enhancements will be discussed in more detail below.
#### 4.2.1 Depth Regularization
Learning the NeRF representation only from 2D images is intrinsically a non-convex problem, which can result in a multitude of incorrect solutions that fit the training images well but fail to generate plausible novel views. These degenerate solutions are more likely to occur when the training image set is small Deng et al. (2022); Roessle et al. (2022) or the image texture is weak. Typically, such solutions show up as inaccurate scene depths, as shown in Fig. 1. To overcome this issue, we propose introducing depth prior from SPM as explicit 3D supervision to regularize the learned depth \(\hat{D}\). However, the scene depths produced by SPM are inaccurate and may provide incorrect guidance. To address this, we introduce an error-tolerant depth loss that specifies the possible depth range and uses Huber loss Huber (1964) \(H(\cdot)\) to prevent the model from being significantly affected by large gradients resulting from SPM's inaccurate predictions:
\[L_{e}=H\left(\max\left(\frac{\|\hat{D}(r)-D(r)\|_{1}}{\hat{D}(r)+D(r)}-\epsilon,0\right)\right), \tag{9}\]
where \(\epsilon\) is a tolerance coefficient and \(D\) is the depth prior.
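A minimal NumPy sketch of Eq. (9); the Huber threshold and the default tolerance value are illustrative assumptions rather than the paper's settings.

```python
import numpy as np

def huber(x, delta=1.0):
    # Quadratic near zero, linear in the tails.
    return np.where(np.abs(x) <= delta, 0.5 * x ** 2, delta * (np.abs(x) - 0.5 * delta))

def error_tolerant_depth_loss(d_pred, d_prior, eps=0.1):
    # Only relative depth errors exceeding the tolerance eps are penalised.
    rel_err = np.abs(d_pred - d_prior) / (d_pred + d_prior)
    return huber(np.maximum(rel_err - eps, 0.0)).mean()
```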
#### 4.2.2 Improved Pose Initialization
Without a pose prior, BARF initializes the camera poses with identity matrices and refines them via bundle adjustment. However, this fails to capture complex camera motions, as shown in Fig. 2 (b). In our framework, we use SPM to obtain a better initial pose \(P\) for each camera, which is closer to the true pose. Then, we optimize a residual pose \(\Delta P\) that represents the difference between the initial and refined poses. We update the camera poses as \(\hat{P}=\Delta P\times P\). This way, our framework can efficiently estimate plausible camera poses for scenes with complex camera motions.
#### 4.2.3 Warmup Learning
While SRM learns the scene representation from scratch, it refines camera poses using a good initialization that is already close to the ideal ones. We discovered that this asynchrony in the learning process results in an incorrect update direction for camera poses. To address this issue, we propose a warmup learning strategy that synchronizes the learning process for these two tasks. Specifically, we set the learning rate of \(\Delta P\) to a small value \(l_{s}\) at the beginning of training and gradually increase it to the original learning rate \(l_{t}\) after 1K iterations.
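A sketch of this schedule as a plain Python function; the rates \(l_{s}=10^{-5}\), \(l_{t}=2\times 10^{-3}\) and the 1K-iteration warmup follow the implementation details reported in Section 5, while the decay horizon and final rate are assumptions.

```python
def pose_learning_rate(step, l_s=1e-5, l_t=2e-3, warmup=1000, decay_steps=150_000, l_end=1e-5):
    # Linear warmup from l_s to l_t, then exponential decay towards l_end.
    if step < warmup:
        return l_s + (l_t - l_s) * step / warmup
    frac = min((step - warmup) / max(decay_steps - warmup, 1), 1.0)
    return l_t * (l_end / l_t) ** frac
```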
#### 4.2.4 Overall Loss
The learning of SRM is supervised by both reconstruction loss and depth regularization, with a scalar hyper-parameter \(\gamma\) that balances these two terms of losses:
\[L_{sr}=L_{c}+\gamma\cdot L_{e}. \tag{10}\]
### Alternating Algorithm
Optimizing the scene representations and camera poses jointly is highly underdetermined, so the model can easily converge to a "bad" local optimum. However, we show that our model can converge to a reasonable solution by combining the prior knowledge of SPM and the scene-dependent optimization of SRM. We propose a novel alternating algorithm for this, as shown in Fig. 3. Next, we explain the workflow and introduce the multi-view consistency check that can extract confident scene depths from SPM and SRM.
Figure 4: The intermediate results of depth maps and color images from SPM (the first row) and SRM (the last two rows) after 0, 1 and 2 alternating steps.
Workflow. We denote the alternating step as \(i\), SPM at step \(i\) as \(\Phi_{i}\), and SRM at step \(i\) as \(\Psi_{i}\). The alternating process when \(i>0\) can be formulated as:
\[\begin{split}&\Psi_{i}:D_{i}\rightarrow(\hat{D}_{i+1},\hat{P}_{i+1}), \\ &\Phi_{i}:(\hat{D}_{i+1},\hat{P}_{i+1})\to D_{i+2},\end{split} \tag{11}\]
The process begins at \(i=0\), when SPM generates the initial depth maps \(D_{0}\) and camera poses \(P_{0}\) using Eq. 8: \(\Phi_{0}:\mathcal{V}\rightarrow(D_{0},P_{0})\). Next, \(\Psi_{i}\) takes the depth maps \(D_{i}\) predicted by \(\Phi_{i-1}\) as input to regularize the scene representation learning via Eq. 9, while simultaneously optimizing the residual poses \(\Delta P_{i}\). After \(S_{r}\) iterations, \(\Psi_{i}\) produces finer depth maps \(\hat{D}_{i+1}\) and more accurate camera poses \(\hat{P}_{i+1}=\Delta P_{i}\times P_{0}\), which are fed back to \(\Phi_{i}\). Since the poses \(\hat{P}_{i+1}\) are relatively accurate after refinement, \(\Phi_{i}\) directly uses them instead of predicting new ones. The relative camera pose \(P_{i+1}^{t\to s}\) is calculated by converting \(P_{i+1}^{t}\) and \(P_{i+1}^{s}\) through \(P_{i+1}^{t\to s}=(P_{i+1}^{s})^{-1}P_{i+1}^{t}\). Furthermore, we apply Eq. 5 to distill knowledge from the finer depth maps by treating \(\hat{D}_{i+1}\) as the reference. As a result, \(\Phi_{i}\) is fine-tuned for \(S_{p}\) iterations with Eq. 6, producing more accurate depth maps \(D_{i+2}\), which are fed into the next alternating step. By repeating these steps, the performance of both SPM and SRM is improved, as shown in Fig. 4.
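The alternation can be summarised by the following Python-style pseudocode; the `spm`/`srm` method names are assumptions introduced only for readability, not the actual interface of the implementation.

```python
def altnerf_training(video, spm, srm, n_alt=2, S_r=50_000, S_p=500):
    # Phi_0: initial depth and pose priors from the test-time-adapted SPM.
    depths, poses = spm.predict(video)
    for i in range(n_alt + 1):
        # Psi_i: learn the NeRF representation under depth regularisation while
        # optimising residual poses; returns finer depths and refined poses.
        depths_hat, poses_hat = srm.train(video, depths, poses, iters=S_r)
        if i == n_alt:
            break
        # Phi_i: finetune SPM by distilling the finer NeRF depths under the refined poses.
        spm.finetune(video, ref_depths=depths_hat, poses=poses_hat, iters=S_p)
        depths, poses = spm.predict_depths(video), poses_hat
    return srm
```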
Multi-view Consistency CheckTo account for the potential unreliability of the depth predictions from SPM and SRM, we use a multi-view consistency check to assess the uncertainty of the predicted depth. Specifically, we denote the depth map of a target image \(I_{t}\) as \(D_{t}\), then we compute the depth maps \(D_{s\to t}\) warped from nearby source views \(I_{s}\) using camera poses \(P_{t\to s}\) from SPM or \(P\) from SRM. Since \(D_{t}\) and \(D_{s\to t}\) are expected to be identical without considering the occlusions, we define the uncertainty \(U_{t}\) of depth map \(D_{t}\) as the difference between \(D_{t}\) and \(D_{s\to t}\): \(U_{t}=\|D_{t}-D_{s\to t}\|_{1}\). In practice, we compute the mean from the four views with the smallest differences to account for occlusions. To incorporate this depth uncertainty into our loss functions (_i.e_. Eq. 5 and Eq. 9), we weight them with the \(Softmin(\cdot)\) function. This helps to mitigate the impact of unreliable depth predictions on our optimization process.
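A sketch of the uncertainty computation and the softmin weighting in NumPy; the warped depth maps \(D_{s\to t}\) are taken as given, and the temperature `tau` is an assumed knob.

```python
import numpy as np

def depth_uncertainty(d_target, warped_depths, k=4):
    # U_t: mean |D_t - D_{s->t}| over the k most consistent source views, per pixel.
    diffs = np.stack([np.abs(d_target - d_w) for d_w in warped_depths])  # (S, H, W)
    return np.sort(diffs, axis=0)[:k].mean(axis=0)

def softmin_weights(uncertainties, tau=1.0):
    # Lower uncertainty -> larger weight on the corresponding depth term.
    z = np.exp(-np.asarray(uncertainties) / tau)
    return z / z.sum()
```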
Discussion. Our alternating algorithm is a general method that can actually leverage any depth-pose priors, not just those learned from SMDE. By using the valuable intermediate results of SPM and SRM, the algorithm can tolerate imprecise priors and still create high-quality NeRF representations, which helps reduce the cost of creating NeRF models.
## 5 Experiment
In this section, we evaluate AltNeRF on 16 scenes of four datasets and compare it with existing methods to demonstrate its state-of-the-art (SOTA) performance. We first describe the datasets and implementation details, and then report the experiment results.
### Dataset
We evaluate AltNeRF on LLFF Mildenhall et al. (2019), CO3D Reizenstein et al. (2021), ScanNet Dai et al. (2017) and our collected dataset, Captures. **LLFF**: we include five scenes from original LLFF: Fern, Flower, Fortress, Orchids, and Room. We also use the Vasedeck scene from the NeRF dataset. We follow the data split strategy of BARF Lin et al. (2021), which uses the first 90% of frames for training and the remaining 10% for testing. **CO3D**: we select three scenes from the Couch category: _193_20797_40499, _349_36504_68102_ and _415_57184_110444_ These scenes have more than 80 frames per scene and exhibit complex camera motions with simultaneous panning and rotation. **ScanNet**: we choose three scenes, _scene0079_00, scene0553_00_ and _scene0653_00_, to evaluate the depth estimation performance of AltNeRF. We use the data processed by NerfingMVS Wei et al. (2021) and downsample each scene to 20 frames. **Captures**: we collect four scenes using a cellphone, which form our Captures dataset. The captured data contains two types of challenging scenes: 1) scenes with few frames and weak textures (Scene_01 and Scene_02), and 2) scenes with complex camera motions (Scene_03 and Scene_04). Please refer to the supplement for more details.
### Implementation Detail
The depth estimation network \(f_{d}(\cdot)\) in SPM is based on the U-Net Ronneberger, Fischer, and Brox (2015) architecture. The encoder is a ResNet-50 He et al. (2016) with the fully-connected layer removed, and the decoder consists of ten \(3\times 3\) convolutional layers, two for each scale, and uses bilinear up-sampling. The pose estimation network \(f_{p}(\cdot,\cdot)\) is structured with a ResNet-34 and outputs a vector of nine element length, where the first six elements are continuous rotation representation Zhou et al. (2019) and the last three elements denote translations. The scene representation function \(f_{n}(\cdot,\cdot)\) in SRM employs the same network structure as NeRF, _i.e_. eight fully-connected layers with skip connections for density output, and one linear layer for color output. The \(\gamma\) in Eq. 10 is set to \(0.08\) for LLFF and CO3D, and \(0.15\) for ScanNet and Captures. We pretrain the SPM with a learning rate of \(1.0e-4\), and finetune it with \(5.0e-5\). For SRM, the initial learning rate to learn NeRF model is set to \(1.0e-3\), and exponentially decays to \(1.0e-4\) throughout the training process. The initial learning rate for pose refinement is set to \(1.0e-5\), and linearly increases to \(2.0e-3\) after 1K iterations before exponentially decaying to \(1.0e-5\). We employ a hierarchical sampling strategy similar to NeRF, with \(64\) coarse samples and \(64\) fine samples, but we do not add coarse samples to the fine pass to save training time. The number of
\begin{table}
\begin{tabular}{c||c c|c c c} \hline \hline Method & Rot (\({}^{\circ}\)) \(\downarrow\) & Trans (\(10^{-2}\)) \(\downarrow\) & PSNR \(\uparrow\) & SSIM \(\uparrow\) & LPIPS \(\downarrow\) \\ \hline NeRF & - & - & 26.009 & 0.821 & 0.127 \\ BARF & 33.082 & 21.311 & 22.613 & 0.715 & 0.327 \\ NeRFmm & 65.254 & 28.625 & 20.083 & 0.602 & 0.432 \\ DS-NeRF & - & - & 26.605 & 0.821 & 0.148 \\ DDP-NeRF & - & - & 23.081 & 0.753 & 0.205 \\ NoPe-NeRF & 32.197 & 19.343 & 24.262 & 0.743 & 0.252 \\ \hline Our & 1.317 & 0.725 & 28.333 & 0.851 & 0.110 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Quantitative results of camera pose estimation (middle) and novel view synthesis (right) tasks. The best is in red, and the second is in orange. The reported results are average over ten scenes of LLFF and Captures.
iterations \(S_{r}\) and \(S_{p}\) are set to 50K and \(500\), respectively, and we perform two alternating steps in all experiments unless otherwise specified. Our method is trained for 150K-200K iterations according to the number of frames, which takes around 4.0-6.4 hours in total on a single RTX 3090.
### Comparing with Existing Method
Here, we evaluate AltNeRF on novel view synthesis, camera pose estimation, and depth estimation tasks, and compare it with seven existing methods.
Evaluation on LLFF and CapturesWe evaluate AltNeRF and six SOTA methods from related fields on camera pose estimation and novel view synthesis tasks. The compared methods are NeRF [13], BARF [14], NeRFmm [15], DS-NeRF [16], DDP-NeRF [17] and NoPe-NeRF [18]. We use ten scenes from LLFF and Captures datasets for this comparison. Tab. 1 shows the mean quantitative results for each method and task. We use _Rot_ and _Trans_ to present the rotation and translation error between the estimated camera poses and the pseudo ground truth poses from COLMAP [15], and PSNR, SSIM [15], and LPIPS [16] to measure the quality of the synthesized images. We initialize the camera poses of BARF and NeRFmm with identity matrices. AltNeRF significantly outperforms the methods that do not use pose priors, _i.e_. BARF, NeRFmm and NoPe-NeRF, on camera pose estimation task. For example, it reduces the Rot and Trans by 95.91% and 96.15%, respectively, compared to NoPe-NeRF. This demonstrates the importance of pose priors for accurate camera pose estimation. AltNeRF also surpasses the COLMAP assisted methods, _i.e_. NeRF, DS-NeRF, and DDP-NeRF, on novel view synthesis task. For example, it improves DS-NeRF by 6.50%, 3.65%, and 13.36%, respectively, on PSNR, SSIM, and LPIPS metrics.
Fig. 5 shows the qualitative comparisons of AltNeRF and
\begin{table}
\begin{tabular}{c||c c|c c c} \hline Method & Rot (\({}^{\circ}\)) \(\downarrow\) & Trans (\(10^{-2}\)) \(\downarrow\) & PSNR \(\uparrow\) & SSIM \(\uparrow\) & LPIPS \(\downarrow\) \\ \hline NeRF & - & - & **34.363** & **0.928** & **0.132** \\ BARF & 106.58 & 140.38 & 12.697 & 0.549 & 0.762 \\ NoPe-NeRF & 102.99 & 116.33 & 14.885 & 0.595 & 0.667 \\ \hline Our & **2.29** & **0.89** & **34.951** & **0.930** & **0.135** \\ \hline \end{tabular}
\end{table}
Table 2: Quantitative results of camera pose estimation (middle) and novel view synthesis (right) tasks. The reported results are average over three scenes of CO3D Couch.
Figure 5: Qualitative comparisons of novel view synthesis and depth estimation on Fortress, Vasedeck, Scene_03 and Scene_04.
Figure 6: Qualitative comparison of pose estimation on CO3D Couch. The estimated poses are in green and the COLMAP poses are in gray.
four methods on novel view synthesis and depth estimation tasks. It uses four scenes: Fortress and Vasedeck from LLFF, and Scene_03 and Scene_04 from Captures. AltNeRF can synthesize realistic novel views and more accurate depth maps than the competitors. For example, it estimates the depth of the distant chairs in the Fortress scene more accurately, while the other methods underestimate their depth or fail to capture their details. BARF completely fails on the last three scenes, which contain complex camera motions. In contrast, our method can still synthesize realistic novel views and estimate reasonable depth maps with the help of the depth-pose priors and our alternating strategy.
Evaluation on CO3D. We evaluate AltNeRF and three existing methods, namely NeRF, BARF, and NoPe-NeRF, on the CO3D dataset. We report the mean quantitative results in Tab. 2, and the qualitative results of pose estimation in Fig. 6, respectively. We use the pseudo ground truth poses from COLMAP as the reference to measure the pose error. AltNeRF significantly outperforms BARF and NoPe-NeRF on the camera pose estimation task. Our predictions are very close to those of COLMAP, while BARF and NoPe-NeRF fail to produce meaningful pose outputs. AltNeRF also surpasses NeRF on the novel view synthesis task. It improves the PSNR metric of NeRF by 1.71%. This demonstrates the effectiveness of our method on challenging scenes.
Evaluation on ScanNet. We evaluate AltNeRF and three existing methods, namely NeRF, DS-NeRF, and NerfingMVS [14], on the depth estimation task. We use the ScanNet dataset for this comparison. Tab. 3 shows the quantitative results for each method. AltNeRF outperforms the existing methods by a large margin on the depth estimation task. It reduces the Sq Rel and RMSE metrics by 68.0% and 35.37%, respectively, compared to the second best method, NerfingMVS. It also achieves a performance very close to 1.0 on the \(\sigma_{3}\) metric, which indicates a high accuracy of depth estimation. This demonstrates the superior performance of AltNeRF on the depth estimation task, and also shows that it can learn a more reasonable scene representation than the existing methods.
### Ablation Study
Here, we demonstrate the effectiveness of each component through ablation study. Tab. 4 shows the quantitative results on Flower and Scene_02. We use BARF with identity matrices as initial camera poses as baseline. First, we introduce the pose prior by initializing the camera poses of BARF with SPM estimates. This improves the performance on all metrics, which indicates the importance of the pose prior. Second, we introduce the depth prior and regularize NeRF with the proposed error-tolerant loss \(L_{e}\). This improves the PSNR metric of Scene_02 by \(8.89\%\), which indicates the importance of the depth prior. Third, we alternate between SPM and SRM for one, two and four times. The first alternating step significantly improves the PSNR, SSIM and LPIPS metrics of Flower by \(3.87\%\), \(3.83\%\) and \(17.02\%\), respectively, which indicates the effectiveness of our alternating algorithm. The gain of more alternating steps is not as significant as the first one, but can still improve performance. We think this is because the model learning converges fast at the beginning of the training and slows down afterwards, thus most of the useful information is already exchanged in the first alternation step.
We evaluate the performance of each component on pose estimation and show the results in Fig. 7. We use BARF as baseline and gradually enable the pose priors, the error-tolerant depth loss \(L_{e}\) and the warmup learning strategy to test their effects. The results show that the camera pose errors decrease as more components are enabled. In particular, enabling the pose priors significantly reduces the pose error, which indicates the importance of pose priors. The error-tolerant loss \(L_{e}\) also improves the performance over + _pose prior_, which verifies its effectiveness. With the warmup learning strategy, the errors are further reduced, leading to the most accurate pose estimation. This justifies the necessity of the warmup learning strategy for pose estimation.
## 6 Conclusion
Robust high-quality NeRFs require accurate camera pose and scene depth, which are hard and expensive to obtain, especially for non-technical users. In this paper, we propose a more practical approach that uses inaccurate depth-pose priors from self-supervised depth estimation to address this problem. By combining our proposed contributions, we can generate robust and high-quality NeRF models and estimate accurate camera poses at low cost.
\begin{table}
\begin{tabular}{l||c c c||c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{3}{c||}{Flower} & \multicolumn{3}{c}{Scene\_02} \\ \cline{2-7} & PSNR \(\uparrow\) & SSIM \(\uparrow\) & LPIPS \(\downarrow\) & PSNR \(\uparrow\) & SSIM \(\uparrow\) & LPIPS \(\downarrow\) \\ \hline BARF & 23.988 & 0.744 & 0.183 & 29.682 & 0.932 & 0.053 \\ + pose prior & 25.213 & 0.752 & 0.154 & 31.311 & 0.962 & 0.038 \\ + depth prior & 24.989 & 0.757 & 0.141 & 34.093 & 0.974 & 0.030 \\ + one alternating step & 25.955 & 0.786 & 0.117 & 35.049 & 0.976 & 0.030 \\ + two alternating steps & 26.073 & 0.794 & 0.114 & 35.049 & 0.978 & 0.029 \\ + four alternating steps & 26.127 & 0.793 & 0.112 & 35.051 & 0.978 & 0.028 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Quantitative results of ablation study. BARF is the baseline method and we gradually enable each component to demonstrate their effectiveness.
\begin{table}
\begin{tabular}{c||c c c|c c c} \hline \hline Method & Abs Rel \(\downarrow\) & Sq Rel \(\downarrow\) & RMSE \(\downarrow\) & \(\sigma_{1}\uparrow\) & \(\sigma_{2}\uparrow\) & \(\sigma_{3}\uparrow\) \\ \hline NeRF & 0.143 & 0.072 & 0.312 & 0.805 & 0.958 & 0.967 \\ DS-NeRF & 0.075 & 0.025 & 0.169 & 0.904 & 0.956 & 0.995 \\ \cline{2-7} NerfingMVS & 0.075 & 0.025 & 0.164 & 0.938 & 0.980 & 0.998 \\ \hline Our & 0.051 & 0.008 & 0.106 & 0.987 & 0.998 & 0.999 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Quantitative results of depth estimation. The reported results are average over three scenes of ScanNet.
Figure 7: Ablation study on pose estimation of Scene_04. BARF is the baseline method, and we gradually enable the pose prior, the depth supervision \(L_{e}\) and the warmup learning to evaluate their effectiveness. |
2310.05120 | Linear Loop Synthesis for Quadratic Invariants | Invariants are key to formal loop verification as they capture loop
properties that are valid before and after each loop iteration. Yet, generating
invariants is a notorious task already for syntactically restricted classes of
loops. Rather than generating invariants for given loops, in this paper we
synthesise loops that exhibit a predefined behaviour given by an invariant.
From the perspective of formal loop verification, the synthesised loops are
thus correct by design and no longer need to be verified.
To overcome the hardness of reasoning with arbitrarily strong invariants, in
this paper we construct simple (non-nested) while loops with linear updates
that exhibit polynomial equality invariants. Rather than solving arbitrary
polynomial equations, we consider loop properties defined by a single quadratic
invariant in any number of variables. We present a procedure that, given a
quadratic equation, decides whether a loop with affine updates satisfying this
equation exists. Furthermore, if the answer is positive, the procedure
synthesises a loop and ensures its variables achieve infinitely many different
values. | S. Hitarth, George Kenison, Laura Kovács, Anton Varonka | 2023-10-08T11:23:39Z | http://arxiv.org/abs/2310.05120v2 | # Linear Loop Synthesis for Quadratic Invariants
###### Abstract
Invariants are key to formal loop verification as they capture loop properties that are valid before and after each loop iteration. Yet, generating invariants is a notorious task already for syntactically restricted classes of loops. Rather than generating invariants for given loops, in this paper we synthesise loops that exhibit a predefined behaviour given by an invariant. From the perspective of formal loop verification, the synthesised loops are thus correct by design and no longer need to be verified.
To overcome the hardness of reasoning with arbitrarily strong invariants, in this paper we construct simple (non-nested) while loops with linear updates that exhibit polynomial equality invariants. Rather than solving arbitrary polynomial equations, we consider loop properties defined by a single quadratic invariant in any number of variables. We present a procedure that, given a quadratic equation, decides whether a loop with affine updates satisfying this equation exists. Furthermore, if the answer is positive, the procedure synthesises a loop and ensures its variables achieve infinitely many different values.
program synthesis, loop invariants, verification, Diophantine equations
accurately, they admit fewer false positives. That is, a program verifier using polynomial loop invariants infers less frequently that a true assertion can be violated [7].
Loop Synthesis. Generating invariants, in particular polynomial invariants, is a notorious task, shown to be undecidable for loops with arbitrary polynomial arithmetic [15]. Rather than generating invariants for loops, _in this paper we work in the reverse direction: generating loops from invariants_. We thus ensure that the constructed loops exhibit the intended invariant properties and are correct by design. Loop synthesis therefore provides an alternative approach for proving program correctness. If intermediate assertions of an involved program are written in terms of polynomial equalities, automated loop synthesis can provide a code fragment satisfying that assertion, while being correct by construction with respect to the specification.
To overcome hardness of polynomial reasoning and solving arbitrary polynomial equations, we restrict our attention to linear loops, and provide a decision procedure for computing linear loops from (quadratic) polynomial invariants (Algorithm 1).
Linear loop synthesis showcases how a simple model (linear loop) can express complicated behaviours (quadratic invariants), as also witnessed in sampling algorithms of real algebraic geometry [2, 11]. A non-trivial linear loop for a polynomial invariant allows to sample infinitely many points from the algebraic variety defined by the polynomial. Moreover, the computational cost to generate a new sample point only involves a matrix-vector multiplication. From a similar perspective, the result of a loop synthesis process for a polynomial equation (invariant) is an infinite family of solutions defined by recurrence relations. This family is parameterised by \(n\), the number of loop iterations: \(n\)-th terms of the synthesised recursive sequences yield a solution of the polynomial equation. Whether the solution set of an equation admits a parameterisation of a certain kind is, in general, an open problem [33, 35].
Our Contributions. The main contributions of this work are as follows:
1. We present a procedure that, given a quadratic equation \(P(x_{1},\dots,x_{d})=0\) with an arbitrary number of variables and rational coefficients, generates an affine loop such that \(P=0\) is invariant under its execution; i.e., the equality holds after any number of loop iterations. If such a loop does not exist, the procedure returns a negative answer. The values of the loop variables are rational. Moreover, the state spaces of the loops synthesised by this procedure are infinite and, notably, the same valuation of loop variables is never reached twice. The correctness of this procedure is established in Theorem 5. A summary of the procedure is given in Algorithm 3 (Appendix B).
2. If the equation \(Q(x_{1},\dots,x_{d})=c\) under consideration is such that \(Q\) is a quadratic form, we present a stronger result: a procedure (Algorithm 1) that generates a _linear_ loop with \(d\) variables satisfying the invariant equation.
Paper Outline. Section 2 introduces relevant preliminary material. We defer the discussion of _polynomial equation solving_, a key element of loop synthesis, to Section 3. Then, in Section 4, we provide a method to synthesise _linear loops_ for invariants, where the invariants are restricted to equations with _quadratic forms_. We extend these results in Section 5 and present a procedure that synthesises _affine loops_, and hence also linear loops, for invariants that are _arbitrary quadratic_ equations. We discuss remarkable aspects of our approach and propose further directions in Section 6, in relation to known results.
## 2 Preliminaries
### Linear and quadratic forms
[Quadratic form] A \(d\)-ary quadratic form over the field \(\mathbb{K}\) is a homogeneous polynomial of degree 2 with \(d\) variables:
\[Q(x_{1},\dots,x_{d})=\sum_{i\leq j}c_{ij}x_{i}x_{j},\]
where \(c_{ij}\in\mathbb{K}\). It is convenient to associate a quadratic form \(Q\) with the symmetric matrix:
\[A_{Q}:=\begin{pmatrix}c_{11}&\frac{1}{2}c_{12}&\dots&\frac{1}{2}c_{1d}\\ \frac{1}{2}c_{12}&c_{22}&\dots&\frac{1}{2}c_{2d}\\ \vdots&\vdots&\ddots&\vdots\\ \frac{1}{2}c_{1d}&\frac{1}{2}c_{2d}&\dots&c_{dd}\end{pmatrix}.\]
We note that since \(A_{Q}\) is symmetric, its eigenvalues are all real-valued. Further, \(Q(\mathbf{x})=\mathbf{x}^{\mathsf{T}}A_{Q}\mathbf{x}\) for a vector \(\mathbf{x}=(x_{1},\dots,x_{d})\) of variables.
We consider quadratic forms over the field \(\mathbb{Q}\) of rational numbers by default. Therefore, a quadratic form has a rational quadratic matrix associated with it.
A quadratic form \(Q\) is _non-degenerate_ if its matrix \(A_{Q}\) is not singular; that is, \(\det A_{Q}\neq 0\). A quadratic form \(Q\) over \(\mathbb{Q}\)_represents_ the value \(a\in\mathbb{Q}\) if there exists a vector \(\mathbf{x}\in\mathbb{Q}^{d}\) such that \(Q(\mathbf{x})=a\). A quadratic form \(Q\) over \(\mathbb{Q}\) is called _isotropic_ if it represents \(0\) non-trivially, i.e., there exists a non-zero vector \(\mathbf{x}\in\mathbb{Q}^{d}\) with \(Q(\mathbf{x})=0\). The vector itself is then called _isotropic_. If no isotropic vector exists, the form is _anisotropic_. A quadratic form \(Q\) is called _positive_ (resp. _negative_) _definite_ if \(Q(\mathbf{x})>0\) (resp. \(Q(\mathbf{x})<0\)) for all \(\mathbf{x}\neq\mathbf{0}\). Note that definite forms are necessarily anisotropic.
Let \(Q_{1}\) and \(Q_{2}\) be \(d\)-ary quadratic forms. One says \(Q_{1}\) and \(Q_{2}\) are equivalent, denoted by \(Q_{1}\sim Q_{2}\), if there exists \(\sigma\in\operatorname{GL}_{d}(\mathbb{Q})\) such that \(Q_{2}(\mathbf{x})=Q_{1}(\sigma\cdot\mathbf{x})\).
The definition above means that there exists an (invertible) linear change of variables over \(\mathbb{Q}\) under which representations by \(Q_{2}\) are mapped to the representations by \(Q_{1}\). It is clear that two equivalent quadratic forms represent the same values. In terms of matrices, we have \((\sigma\mathbf{x})^{\mathsf{T}}A_{Q_{1}}\sigma\mathbf{x}=\mathbf{x}^{\mathsf{T}}A_{Q_{2}} \mathbf{x}\), and hence
\[A_{Q_{2}}=\sigma^{\mathsf{T}}A_{Q_{1}}\sigma.\]
[Linear form] A linear form in \(d\) variables over the field \(\mathbb{Q}\) is a homogeneous polynomial \(L(x_{1},\dots,x_{d})=\sum_{i=1}^{d}b_{i}x_{i}\) of degree 1, where \(b_{1},\dots,b_{d}\in\mathbb{Q}\).
One can write a linear form using vector notation \(L(\mathbf{x})=\mathbf{b}^{\mathsf{T}}\mathbf{x}\), where \(\mathbf{b}=(b_{1},\dots,b_{d})^{\mathsf{T}}\in\mathbb{Q}^{d}\) is a non-zero vector of the linear form.
### Loops and their Synthesis
Linear loops are a class of single-path loops whose update assignments are determined by a homogeneous system of linear equations in the program variables.
[Linear loop] A linear loop \(\langle M,\mathbf{s}\rangle\) is a loop program of the form
\[\mathbf{x}\leftarrow\mathbf{s};\text{ {while} }\star\text{ {do} }\mathbf{x}\gets M\mathbf{x},\]
where \(\mathbf{x}\) is a \(d\)-dimensional column vector of program variables, \(\mathbf{s}\) is an _initial_\(d\)-dimensional _vector_, and \(M\) is a \(d\times d\)_update matrix_. For the procedures, which we introduce here, to be effective, we assume that the entries of \(M\) and \(\mathbf{s}\) are rational.
We employ the notation \(\star\), instead of using true as loop guard, as our focus is on loop synthesis rather than proving loop termination.
[Affine loop] An _affine loop_\(\langle M,\mathbf{s},\mathbf{t}\rangle\) is a loop program of the form
\[\mathbf{x}\leftarrow\mathbf{s};\;\text{while}\;\star\;\text{do}\;\mathbf{x}\gets M \mathbf{x}+\mathbf{t},\]
where, in addition to the previous definition, \(\mathbf{t}\in\mathbb{Q}^{d}\) is a translation vector.
[Linear and Affine Loops] A standard observation allows simulating affine loops by linear ones at a cost of one additional variable constantly set to \(1\). An _augmented matrix_ of an affine loop with \(d\) variables is a matrix \(M^{\prime}\in\mathbb{Q}^{(d+1)\times(d+1)}\) of the form
\[M^{\prime}:=\left(\begin{array}{c|c}1&0_{1,d}\\ \hline\mathbf{t}&M\end{array}\right).\]
It follows that a linear loop \(\langle M^{\prime},(1,\mathbf{s})^{\mathsf{T}}\rangle\) simulates the affine loop in its last \(d\) variables.
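A small NumPy check of this simulation; the update matrix, translation, and initial vector below are arbitrary examples.

```python
import numpy as np

def augment(M, t):
    # Augmented matrix: one affine step x <- M x + t becomes a linear step on (1, x).
    d = M.shape[0]
    M_aug = np.zeros((d + 1, d + 1))
    M_aug[0, 0] = 1.0
    M_aug[1:, 0] = t
    M_aug[1:, 1:] = M
    return M_aug

M = np.array([[2.0, 1.0], [0.0, 1.0]])
t = np.array([1.0, -1.0])
s = np.array([3.0, 5.0])
assert np.allclose(M @ s + t, (augment(M, t) @ np.concatenate(([1.0], s)))[1:])
```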
A linear (or affine) loop with variables \(\mathbf{x}=(x_{1},\ldots,x_{d})\) generates \(d\) sequences of numbers. For each loop variable \(x_{j}\), let \(\langle x_{j}(n)\rangle_{n=0}^{\infty}\) denote the sequence whose \(n\)-th term is given by the value of \(x_{j}\) after the \(n\)-th loop iteration. Similarly, define the sequence of vectors \(\langle\mathbf{x}(n)\rangle_{n}\). One can also talk of a subset of \(\mathbb{Q}^{d}\) that consists of vectors that occur in \(\langle\mathbf{x}(n)\rangle_{n}\). In this case, we refer to it as the _orbit_ of a loop. A loop with variables \(x_{1},\ldots,x_{d}\) is called _non-trivial_ if the orbit
\[\mathcal{O}_{\mathbf{x}}:=\{(x_{1}(n),\ldots,x_{d}(n)):n\geq 0\}\subseteq\mathbb{Q}^{d}\]
is infinite. A _polynomial invariant_ of a loop is a polynomial \(P\in\mathbb{Q}[\mathbf{x}]\) such that
\[P(x_{1}(n),\ldots,x_{d}(n))=0\]
holds for all \(n\geq 0\).
[Loop Synthesis] Given a polynomial invariant \(P\in\mathbb{Q}[x_{1},\ldots,x_{d}]\), find a non-trivial linear (affine) loop with vector sequence \(\langle\mathbf{x}(n)\rangle_{n}\) such that
\[P(x_{1}(n),\ldots,x_{d}(n))=0\]
holds for any \(n\geq 0\).
We emphasise that, unless stated differently, the objective of the loop synthesis process from Problem 2.7 is to find a loop with the same number of variables \(d\) as in the input invariant. That is, \(\langle\mathbf{x}(n)\rangle_{n}=(\langle x_{1}(n)\rangle_{n},\ldots,\langle x_{d}(n)\rangle_{n})\).
Note that \(P=0\) in Problem 2.7 does not need to be an _inductive invariant_ for the synthesised loop. We do not require the matrix \(M\) to preserve the equality for all vectors \(\mathbf{x}\). There might still exist a vector \(\mathbf{s^{\prime}}\) such that \(P(\mathbf{s^{\prime}})=0\) but \(P(M\cdot\mathbf{s^{\prime}})\neq 0\). By contraposition, proving that no matrix \(M\) preserves the equality \(P=0\) does not imply that a non-trivial loop does not exist. In summary, the search for an update matrix \(M\) (or the augmented matrix \(M^{\prime}\) in the affine loop version of Problem 2.7) is integrally linked to the search for \(\mathbf{s}\), a solution of the polynomial \(P=0\).
**Remark 2.8** (Loop Synthesis and Polynomial Equation Solving).: We note that solving loop synthesis from Problem 2.7 relies on, but it is not equivalent to, solving polynomial equations. This is so as we focus on _non-trivial_ loops in Problem 2.7. Allowing loops with finite orbits would mean that a loop with an identity matrix update \(I_{d}\) is accepted as a solution:
\[\mathbf{x}:=\mathbf{s};\ \ \text{while\ true\ do}\ \mathbf{x}:=I_{d}\cdot\mathbf{x};\]
Then, the loop synthesis problem would be equivalent to the problem of finding a rational solution of a polynomial equation \(P=0\) (see Problem 3.1). The problem, as we define it in Problem 2.7, neglects loops that satisfy a desired invariant but reach the same valuation of variables twice. Due to this, the Problem 2.7 of loop synthesis is different from the Problem 3.1 of solving polynomial equations.
## 3 Solving Quadratic Equations
As showcased in Problem 2.7 and discussed in Remark 2.8, loop synthesis for a polynomial invariant \(P=0\) is closely related to the problem of solving a polynomial equation \(P=0\).
**Problem 3.1** (Solving Polynomial Equations).: _Given a polynomial \(P\in\mathbb{Q}[x_{1},\ldots,x_{d}]\), decide whether there exists a rational solution \((s_{1},\ldots,s_{d})\in\mathbb{Q}^{d}\) to the equation \(P(x_{1},\ldots,x_{d})=0\)._
We emphasise that determining whether a given polynomial equation has a rational solution, is a fundamental open problem in number theory [28], see also Section 6.1.
Clearly, this poses challenges to our investigations of loops satisfying arbitrary polynomial invariants. In light of this, it is natural to restrict Problem 2.7 to loop invariants given by _quadratic_ equations. Given a single equation \(P(\mathbf{x})=0\) of degree \(2\), the challenge from now on is to find a rational solution \(\mathbf{s}\) and an update matrix \(M\) such that iterative application of \(M\) to \(\mathbf{s}\) of the equation does not violate the invariant: \(P(M^{n}\mathbf{s})=0\) for all \(n\geq 0\).
In this section, we recall well-known methods for solving quadratic equations. In the sequel, we will employ said methods in the novel setting of loop synthesis for quadratic polynomial invariants (Sections 4-5).
**Problem 3.2** (Solving Quadratic Equations).: _Given a quadratic equation in \(d\) variables with rational coefficients, decide whether it has rational solutions. If it does, generate one of the solutions._
### Solutions of Quadratic Equations in Two Variables
We first prove two lemmas when using binary equations in Problem 3.2, needed for Section 4.
**Lemma 3.3**.: _For all \(a,b\in\mathbb{Q}\setminus\{0\}\), Pell's equation \(x^{2}+\frac{b}{a}y^{2}=1\) has a rational solution \((\alpha,\beta)\) with \(\alpha\not\in\{\pm 1,\pm\frac{1}{2},0\}\) and \(\beta\neq 0\)._
Proof.: So long as \(a\neq-b\), it is easy to see that \(\left(\frac{b-a}{a+b},\frac{2a}{a+b}\right)\) is a rational solution to Pell's equation. Recall that \(a,b\neq 0\), hence \(\beta\neq 0\) and \(\alpha\neq\pm 1\). However, the generic solution might have \(\alpha=0\) or \(|\alpha|=\frac{1}{2}\). We thus explicitly pick alternative solutions for the cases when it occurs: (i) \(x^{2}+y^{2}=1\) has another rational point, e.g., \((\frac{3}{5},\frac{4}{5})\); (ii) \(x^{2}+3y^{2}=1\) has a rational point \((-\frac{11}{13},\frac{4}{13})\); (iii) \(x^{2}+\frac{1}{3}y^{2}=1\) has a rational point \((\frac{1}{7},\frac{12}{7})\).
Finally, if \(a=-b\), we can take a rational point \((\frac{5}{3},\frac{4}{3})\) on the hyperbola \(x^{2}-y^{2}=1\)
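The generic solution used in the proof is easy to check with exact rational arithmetic; a minimal sketch (the coefficients are arbitrary):

```python
from fractions import Fraction

def pell_point(a, b):
    # Generic rational point on x^2 + (b/a) y^2 = 1, valid whenever a != -b.
    a, b = Fraction(a), Fraction(b)
    return (b - a) / (a + b), 2 * a / (a + b)

x, y = pell_point(2, 5)
assert x ** 2 + Fraction(5, 2) * y ** 2 == 1
```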
**Lemma 3.4**.: _An equation \(ax^{2}+by^{2}=c\) with \(a,b\in\mathbb{Q}\setminus\{0\}\) has either no rational solutions different from \((0,0)\), or infinitely many rational solutions different from \((0,0)\)._
Proof.: Define \(R:=\left(\begin{smallmatrix}\alpha&-\frac{b}{a}\beta\\ \beta&\alpha\end{smallmatrix}\right)\) where \((\alpha,\beta)\in\mathbb{Q}^{2}\setminus\mathbf{0}\) satisfies \(\alpha^{2}+\frac{b}{a}\beta^{2}=1\) (is a solution to Pell's equation) for which \(\alpha\notin\{\pm 1,\pm\frac{1}{2},0\}\) (as in Lemma 3.3). What follows can be viewed as an application of the multiplication principle for the generalised Pell's equation [1]. Observe that if \(\boldsymbol{v}=(x,y)^{\mathsf{T}}\) is a solution to \(ax^{2}+by^{2}=c\), then so is \(R\boldsymbol{v}\).
We now show how to generate infinitely many rational solutions to \(ax^{2}+by^{2}=c\) from a single rational solution. Assume, towards a contradiction, that \(R^{n+k}\boldsymbol{v}=R^{n}\boldsymbol{v}\) holds for some \(n\geq 0\), \(k\geq 1\). Therefore, there exists an integer \(k\) such that \(1\) is an eigenvalue of \(R^{k}\). Equivalently, there exists a root of unity \(\omega\) which is an eigenvalue of \(R\). We proceed under this assumption.
Let \(\varphi\) be the argument of a complex number \(\omega\). By construction, the eigenvalues of \(R\) are \(\omega\) and \(\omega^{-1}\). Moreover, the real part \(\cos(\varphi)\) of \(\omega\) is equal to \(\alpha\) (and thus rational). Since \(\omega\) is a root of unity, \(\varphi\) is a rational multiple of \(2\pi\). By Niven's theorem [26], the only rational values for \(\cos(\varphi)\) are \(0\), \(\pm\frac{1}{2}\) and \(\pm 1\). We arrive at a contradiction, as \(\alpha\) was carefully picked to avoid these values.
In summary, we have shown that \(R\) has no eigenvalues that are roots of unity, from which we deduce the desired result.
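To illustrate the construction, a short exact-arithmetic sketch that applies \(R\) repeatedly to one solution of \(ax^{2}+by^{2}=c\); the particular equation, starting solution, and Pell point are chosen for the example only. Note that the iteration \(\boldsymbol{v}\mapsto R\boldsymbol{v}\) is precisely a linear loop in the sense of Section 2 with the invariant \(x^{2}+y^{2}=25\).

```python
from fractions import Fraction

def iterate_solutions(a, b, x0, y0, alpha, beta, n=5):
    # R = [[alpha, -(b/a)*beta], [beta, alpha]] with alpha^2 + (b/a)*beta^2 = 1
    # maps solutions of a x^2 + b y^2 = c to solutions of the same equation.
    a, b = Fraction(a), Fraction(b)
    x, y = Fraction(x0), Fraction(y0)
    points = [(x, y)]
    for _ in range(n - 1):
        x, y = alpha * x - (b / a) * beta * y, beta * x + alpha * y
        points.append((x, y))
    return points

# Example: x^2 + y^2 = 25, starting solution (3, 4), Pell point (3/5, 4/5).
pts = iterate_solutions(1, 1, 3, 4, Fraction(3, 5), Fraction(4, 5))
assert all(p ** 2 + q ** 2 == 25 for p, q in pts)
assert len(set(pts)) == len(pts)  # the generated solutions are pairwise distinct
```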
### Solving Isotropic Quadratic Forms
We next present an approach to solving Problem3.2 that uses the theory of representations of quadratic forms. First, we prove a lemma concerning non-zero representations of \(0\).
**Lemma 3.5**.: _Let \(Q(x_{1},\ldots,x_{n})=a_{1}x_{1}^{2}+\cdots+a_{n}x_{n}^{2}\) be an isotropic quadratic form with \(a_{1},\ldots,a_{n}\neq 0\). There exists a representation \((\alpha_{1},\ldots,\alpha_{n})\) of \(0\); i.e., \(a_{1}\alpha_{1}^{2}+\cdots+a_{n}\alpha_{n}^{2}=0\) such that \(\alpha_{1},\ldots,\alpha_{n}\neq 0\)._
Proof.: Let \((\beta_{1},\ldots,\beta_{n})\in\mathbb{Q}^{n}\) be a representation of \(0\) by \(Q\). We further assume that \(\beta_{1},\ldots,\beta_{r}\neq 0\) while \(\beta_{r+1}=\cdots=\beta_{n}=0\), and \(r<n\). Moreover, let \(\lambda:=a_{r}\beta_{r}^{2}+a_{r+1}\beta_{r+1}^{2}\).
Consider the equation \(x^{2}+\frac{a_{r+1}}{a_{r}}y^{2}=1\). From Lemma 3.3, it has a rational solution \((\alpha,\beta)\) such that \(\alpha,\beta\neq 0\). This implies \(a_{r}\alpha^{2}+a_{r+1}\beta^{2}=a_{r}\). A pair \((\beta_{r},0)\) is one solution to \(a_{r}x_{r}^{2}+a_{r+1}x_{r+1}^{2}=\lambda\). Following the steps in the proof of Lemma 3.4, we can construct a matrix \(R\) for which \(R\cdot(\beta_{r},0)^{\mathsf{T}}=(\alpha\beta_{r},\beta\beta_{r})^{\mathsf{T}}\) where \((\alpha\beta_{r},\beta\beta_{r})\) is a solution of \(a_{r}x_{r}^{2}+a_{r+1}x_{r+1}^{2}=\lambda\) with both components being non-zero. Therefore, \((\beta_{1},\ldots,\beta_{r-1},\alpha\beta_{r},\beta\beta_{r},\beta_{r+2},\ldots,\beta_{n})\) is an isotropic vector of \(Q\) with fewer zero entries. By repeating the process, we obtain an isotropic vector \((\alpha_{1},\ldots,\alpha_{n})\) as desired.
We emphasise that the process of eliminating zeros from the isotropic vector is effective. A similar proof is given in [3, p.294, Theorem 8].
In this discussion, we focus on solving equations of the form \(Q(x_{1},\ldots,x_{d})=c\), where \(Q\) is a quadratic form. As it will be shown later in Section4, it is always possible to find an equivalent diagonal quadratic form \(D\sim Q\). Therefore, we restrict our attention to equations of the form \(a_{1}x_{1}^{2}+\cdots+a_{d}x_{d}^{2}=c\). Assuming \(c\neq 0\), we start by homogenising the equation, and so consider the solutions of
\[a_{1}x_{1}^{2}+\cdots+a_{d}x_{d}^{2}-cx_{d+1}^{2}=0. \tag{1}\]
In other words, we are searching for a rational isotropic vector of a quadratic form.
**Proposition 3.6**.: _An equation_
\[a_{1}x_{1}^{2}+\cdots+a_{d}x_{d}^{2}=c \tag{2}\]
_has a rational solution different from \((0,\ldots,0)\) if and only if the quadratic form \(Q=a_{1}x_{1}^{2}+\cdots+a_{d}x_{d}^{2}-cx_{d+1}^{2}\) has an isotropic vector._
Proof.: For \(c=0\), the statement is a recitation of a definition. We continue under the assumption \(c\neq 0\). Recall from Lemma 3.5 that if the form \(Q\) is isotropic, then there is an isotropic vector \((\alpha_{1},\ldots,\alpha_{d+1})\) with \(\alpha_{i}\neq 0\) for all \(i\in\{1,\ldots,d+1\}\). Therefore, we can find a non-zero solution \((\alpha_{1}/\alpha_{d+1},\ldots,\alpha_{d}/\alpha_{d+1})\) to Eq. (2). Conversely, if (2) has a non-trivial solution \((\beta_{1},\ldots,\beta_{d})\), it follows that \((\beta_{1},\ldots,\beta_{d},1)\) is an isotropic vector for \(Q\).
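A small exact-arithmetic sketch of the de-homogenisation step used in this proof; the example form is chosen for illustration.

```python
from fractions import Fraction

def dehomogenise(isotropic_vector):
    # (alpha_1, ..., alpha_d, alpha_{d+1}) with alpha_{d+1} != 0 yields a solution of Eq. (2).
    *head, last = (Fraction(v) for v in isotropic_vector)
    return [h / last for h in head]

# Example: x1^2 + x2^2 - 25 x3^2 has the isotropic vector (3, 4, 1),
# which de-homogenises to the solution (3, 4) of x1^2 + x2^2 = 25.
assert dehomogenise([3, 4, 1]) == [Fraction(3), Fraction(4)]
```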
### Finding Isotropic Vectors
Proposition 3.6 implies that solving Problem 3.1, and hence also loop synthesis in Problem 2.7, requires detecting whether a certain quadratic form is isotropic. Effective isotropy tests are known for quadratic forms \(Q(x_{1},\ldots,x_{d+1})\) as in Eq. (1). A more difficult task is the problem of finding an isotropic vector for such a form.
The abstract arithmetic techniques employed in finding an isotropic vector are beyond the scope of this paper; however, we give a brief overview of the computational task and a number of references to literature in Appendix A. Our takeaways from the theory, summarised in Appendix A, are the following functions:
* isIsotropic: a function that, given an indefinite quadratic form over the rationals as an input, determines whether the input is isotropic and duly returns the answers yes and no (as appropriate).
* findIsotropic: a function that accepts isotropic quadratic forms over the rationals as inputs and returns an isotropic vector for each such form.
* solve: a function that takes Eq. (2) as an input and will return a non-zero solution if the form \(a_{1}x_{1}^{2}+\cdots+a_{d}x_{d}^{2}-cx_{d+1}^{2}\) is isotropic; otherwise solve returns "no solutions". The function solve calls both isIsotropic and findIsotropic as shown in Algorithm 2.
We use the solve subroutine in the sequel: the function linLoop defined in Algorithm 1 calls solve; in turn, linLoop is called by the function affLoop defined in Algorithm 3.
## 4 Quadratic Forms: Linear Loops
The core of this section addresses equations, and hence loop invariants, that involve quadratic forms. The equations (invariants) of this section do not have a linear part; they are quadratic forms equated to constants; that is, equations of the form
\[Q(x_{1},\ldots,x_{d})=c, \tag{3}\]
where \(Q\) is an arbitrary \(d\)-ary quadratic form with rational coefficients, \(c\) is a rational number.
The main result of this section is the following theorem, proving the existence of linear loops that exhibit quadratic invariants given by quadratic forms.
**Theorem 4.1** (Linear Loops for Quadratic Forms).: _There exists a procedure that, given an equation \(Q(x_{1},\ldots,x_{d})=c\) of the form (3), decides whether a non-trivial linear loop satisfying \(Q(x_{1},\ldots,x_{d})=c\) exists and, if so, synthesises a loop._
We prove Theorem 4.1 in several steps. The first of them is to diagonalise the quadratic form \(Q\) and thus reduce to Eq. (3) without mixed terms on the left-hand side.
### Rational Diagonalisation
A rational quadratic form can be diagonalised by an invertible change of variables with only rational coefficients.
**Proposition 4.2**.: Let \(Q\) be a (possibly degenerate) \(d\)-ary quadratic form. There exists an equivalent quadratic form \(D\) with a diagonal matrix \(A_{D}\in\mathbb{Q}^{d\times d}\), i.e., \(Q\sim D\). Furthermore,
\[A_{D}=\sigma^{\mathsf{T}}A_{Q}\sigma\]
holds with \(\sigma\in\mathrm{GL}_{d}(\mathbb{Q})\).
A diagonalisation algorithm is described in [22, Algorithm 12.1], see also "diagonalisation using row/column operations" in [34, Chapter 7, 2.2]. The idea, as presented in [34], is to perform row operations on the matrix \(Q\). Different from the usual Gauss-Jordan elimination, the analogous column operations are performed after each row operation. We emphasise that the change-of-basis matrix \(\sigma\) is invertible as a product of elementary matrices.
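A minimal sketch of this row/column-operation scheme, using exact rational arithmetic; the pivoting strategy below is one simple choice, and the diagonal form it produces need not coincide with the particular matrices used in the worked examples later in the paper.

```python
from fractions import Fraction

def diagonalise(A):
    """Symmetric congruence diagonalisation: returns (D, S) with S invertible
    and S^T A S = D diagonal.  A is a symmetric list-of-lists of Fractions."""
    d = len(A)
    A = [row[:] for row in A]
    S = [[Fraction(1 if i == j else 0) for j in range(d)] for i in range(d)]

    def sym_add(i, j, f):
        # column_i += f*column_j followed by row_i += f*row_j; the column
        # operation is recorded in S so that S^T A_original S stays equal to A.
        for r in range(d):
            A[r][i] += f * A[r][j]
            S[r][i] += f * S[r][j]
        for c in range(d):
            A[i][c] += f * A[j][c]

    def sym_swap(i, j):
        for r in range(d):
            A[r][i], A[r][j] = A[r][j], A[r][i]
            S[r][i], S[r][j] = S[r][j], S[r][i]
        A[i], A[j] = A[j], A[i]

    for k in range(d):
        if A[k][k] == 0:
            j = next((j for j in range(k + 1, d) if A[j][j] != 0), None)
            if j is not None:
                sym_swap(k, j)               # bring a non-zero diagonal entry to k
            else:
                j = next((j for j in range(k + 1, d) if A[k][j] != 0), None)
                if j is None:
                    continue                 # row/column k already vanishes
                sym_add(k, j, Fraction(1))   # new pivot is 2*A[k][j] since A[j][j] = 0
        for j in range(k + 1, d):
            if A[k][j] != 0:
                sym_add(j, k, -A[k][j] / A[k][k])
    return A, S
```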
[Degeneracy] Let \(A_{D}:=\mathrm{diag}(a_{1},\ldots,a_{d})\) be the diagonal matrix of the quadratic form \(D\) as in Proposition 4. The product \(a_{1}\cdots a_{d}\) is zero if and only if the initial quadratic form \(Q\) is degenerate.
**Proposition 4.4**.: Let \(Q_{1}\) and \(Q_{2}\) be two equivalent \(d\)-ary quadratic forms. If there exists a linear loop \(\mathcal{L}=\langle M,\mathbf{s}\rangle\) with invariant \(Q_{2}=c\) for a constant \(c\in\mathbb{Q}\), then \(Q_{1}=c\) is an invariant of the linear loop \(\mathcal{L}^{\prime}=\langle\sigma M\sigma^{-1},\sigma\mathbf{s}\rangle\). Here, \(\sigma\in\mathrm{GL}_{d}(\mathbb{Q})\) is a change-of-basis matrix such that \(Q_{2}(\mathbf{x})=Q_{1}(\sigma\cdot\mathbf{x})\).
Proof.: If \((M^{n}\mathbf{s})^{\mathsf{T}}A_{Q_{2}}(M^{n}\mathbf{s})=c\) for all \(n\geq 0\), then
\[\left((\sigma M\sigma^{-1})^{n}\sigma\mathbf{s}\right)^{\mathsf{T}}A_ {Q_{1}}\left((\sigma M\sigma^{-1})^{n}\sigma\mathbf{s}\right)=\left(\sigma M^{n} \mathbf{s}\right)^{\mathsf{T}}A_{Q_{1}}\left(\sigma M^{n}\mathbf{s}\right)\] \[=\mathbf{s}^{\mathsf{T}}(M^{n})^{\mathsf{T}}\sigma^{\mathsf{T}}A_{Q_{1 }}\sigma M^{n}\mathbf{s}=\mathbf{s}^{\mathsf{T}}(M^{n})^{\mathsf{T}}A_{Q_{2}}M^{n}\mathbf{ s}=(M^{n}\mathbf{s})^{\mathsf{T}}A_{Q_{2}}M^{n}\mathbf{s}=c\]
for all \(n\geq 0\) as well. We emphasise that \(\sigma\) is a bijection from \(\mathbb{Q}^{d}\) to itself, so the reduction described here preserves the infiniteness of loop orbits.
We conclude from Propositions 4.2 and 4.4 that for a general quadratic form \(Q\), a linear loop with an invariant \(Q(\mathbf{x})=c\) exists if and only if a linear loop exists for an invariant \(D(\mathbf{x})=c\), where \(D\) is an equivalent diagonal form.
### Diagonal Quadratic Forms
We only need to consider Eq. (2). Recall that it has the form \(a_{1}x_{1}^{2}+\cdots+a_{d}x_{d}^{2}=c\), where \(a_{1},\ldots,a_{d},c\in\mathbb{Q}\). If the equation is homogeneous, that is, \(c=0\), loop synthesis is essentially just searching for a non-zero rational solution \(\mathbf{\alpha}=(\alpha_{1},\ldots,\alpha_{d})\). Indeed, a loop with a matrix \(\lambda\cdot I_{d}\) (scaling each variable by \(\lambda\in\mathbb{Q}\setminus\{-1,0,1\}\)) and the initial vector \(\mathbf{\alpha}\) is a non-trivial linear loop satisfying the invariant \(Q(\mathbf{x})=0\).
From Section 3, we know that one can generate a solution (or prove it does not exist) to Eq. (2) in its general form, also with \(c\neq 0\). The bottleneck of loop synthesis in Problem 2.7 is thus finding an update matrix \(M\) for the linear loop. En route to this goal, we state a straightforward corollary of Lemma 3.4.
**Corollary 4.5**.: If an equation \(ax^{2}+by^{2}=c\) with \(a,b\in\mathbb{Q}\setminus\{0\}\) has infinitely many rational solutions different from \((0,0)\), then there exists a non-trivial linear loop with polynomial invariant \(ax^{2}+by^{2}=c\).
Proof.: We use the construction in the proof of Lemma 3.4, which demonstrates that the orbit of the linear loop \(\langle R,\mathbf{v}\rangle\) is infinite with polynomial invariant \(ax^{2}+by^{2}=c\).
Proof of Theorem 4.1.: Due to Proposition 4.4, we can consider an equation of the form (2):
\[a_{1}x_{1}^{2}+\dots+a_{d}x_{d}^{2}=c.\]
We describe the loop synthesis procedure in this case. If \(d=1\), the equation only has finitely many solutions, hence any loop for Eq. (2) is trivial. We assume \(d\geq 2\) in what follows.
In order to generate an initial vector of the loop for Eq. (2), we exploit the results of Section 3. Either Eq. (2) has no rational solutions and hence no loop exists, or we effectively construct a solution \(\mathbf{\alpha}=(\alpha_{1},\dots,\alpha_{d})\in\mathbb{Q}^{d}\) using procedure solve. Recall that we can guarantee \(\alpha_{i}\neq 0\) for all \(i\in\{1,\dots,d\}\) due to Lemma 3.5.
Note that some of the coefficients \(a_{i}\), \(i\in\{1,\dots,d\}\), may be zero if the original quadratic form \(Q\) is degenerate. We have to consider separately the case when all coefficients but one are \(0\). That is, \(a_{1}x_{1}^{2}+0x_{2}^{2}+\dots+0x_{d}^{2}=c\). For this form, a solution exists if and only if \(c/a_{1}\) is a square of a rational number. Subsequently, if a solution \(\mathbf{\alpha}\) is found, set \(M:=\operatorname{diag}(1,2,\dots,2)\) to be a diagonal update matrix. Since \(d\geq 2\), we guarantee that the orbit of the linear loop \(\langle M,\mathbf{\alpha}\rangle\) is infinite.
Without loss of generality, we now assume \(a_{1}\neq 0\) and \(a_{2}\neq 0\). Define \(\gamma:=a_{1}\alpha_{1}^{2}+a_{2}\alpha_{2}^{2}\), then the equation \(a_{1}x_{1}^{2}+a_{2}x_{2}^{2}=\gamma\) has a non-trivial solution \((\alpha_{1},\alpha_{2})\).
From Corollary 4.5, there exists a matrix \(R\in\mathbb{Q}^{2\times 2}\) that preserves the value of the quadratic form \(a_{1}x_{1}^{2}+a_{2}x_{2}^{2}\). This matrix can be constructed as in the proof of Corollary 4.5 by considering the equation \(x_{1}^{2}+\frac{a_{2}}{a_{1}}x_{2}^{2}=1\). Let \(M\) be the matrix given by the direct sum
\[R\oplus I_{d-2}=\left(\begin{array}{c|c}R&0\\ \hline 0&I_{d-2}\end{array}\right)\]
where \(I_{n}\) is an identity matrix of size \(n\).
A desired loop is \(\langle M,\mathbf{\alpha}\rangle\), as for each \(n\geq 0\), \(M^{n}\mathbf{\alpha}\) satisfies Eq. (2). The loop is non-trivial because its orbit, restricted to \(x_{1},x_{2}\), is infinite.
The process of synthesising a loop for the quadratic invariant \(Q(x_{1},\dots,x_{n})=c\) is summarised in Algorithm 1. It starts with a diagonalisation step, proceeds with finding a loop for an equation of the form (2), and applies the inverse transformation to obtain a linear loop for the initial invariant. Whenever Algorithm 1 returns a loop, this loop is linear.
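A small illustrative sketch of the update-matrix construction used in the proof above (and in the corresponding step of Algorithm 1). The particular shape of \(R\) below is one standard choice; it can be checked directly that it preserves \(a_{1}x_{1}^{2}+a_{2}x_{2}^{2}\) whenever \(\alpha^{2}+\frac{a_{2}}{a_{1}}\beta^{2}=1\), but it is not claimed to be literally the matrix of Lemma 3.4.

```python
from fractions import Fraction

def update_matrix(a1, a2, alpha, beta, d):
    """d x d matrix R ⊕ I_{d-2}, where R preserves a1*x1^2 + a2*x2^2.
    Requires alpha^2 + (a2/a1)*beta^2 == 1, i.e. a1*alpha^2 + a2*beta^2 == a1."""
    assert a1 * alpha**2 + a2 * beta**2 == a1
    R = [[alpha, -(a2 / a1) * beta],
         [beta, alpha]]
    M = [[Fraction(1 if i == j else 0) for j in range(d)] for i in range(d)]
    for i in range(2):
        for j in range(2):
            M[i][j] = R[i][j]
    return M

# For a1 = a2 = 1 and (alpha, beta) = (3/5, 4/5) this recovers the rotation
# matrix reused in the planar example of Section 5.  The value of the form
# stays constant along the orbit:
M = update_matrix(Fraction(1), Fraction(1), Fraction(3, 5), Fraction(4, 5), 3)
x = [Fraction(1), Fraction(-3), Fraction(7)]
for _ in range(4):
    x = [sum(M[i][j] * x[j] for j in range(3)) for i in range(3)]
    print(x[0]**2 + x[1]**2)     # prints 10 every time
```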
## 5 Arbitrary Quadratic Equations: Affine Loops
In this section, we leave the realm of quadratic forms and consider general quadratic invariants that may have a linear part. Any quadratic equation can be written in terms of a quadratic form \(Q\), a linear form \(L\), and a constant term \(c\):
\[Q(x_{1},\dots,x_{d})+L(x_{1},\dots,x_{d})=c. \tag{4}\]
On our way to a complete solution of Problem 2.7 for arbitrary quadratic equations, we carefully analyse Eq. (4). A standard technique (see e.g. [14, Proposition 1]) allows one to reduce Eq. (4) with a non-degenerate quadratic form \(Q\) to Eq. (3) considered in Section 4. We now give the details of this reduction and describe how to synthesise an affine loop for an invariant (4) in the non-degenerate case. Subsequently, we close the gap by discussing the case when \(Q\) is degenerate. Using Remark 2.6, our results on affine loop synthesis then imply linear loop synthesis.
### Non-Degenerate Quadratic Forms
For convenience, we rewrite the equation in the matrix-vector form: \(\mathbf{x}^{\mathsf{T}}A_{Q}\mathbf{x}+\mathbf{b}^{\mathsf{T}}\mathbf{x}-c=0\). Here, \(A_{Q}\) is the non-singular matrix of the quadratic form \(Q\), and \(\mathbf{b}\) is the vector of the linear form. Let \(\delta:=\det A_{Q}\neq 0\) and \(C\) be the cofactor matrix of \(A_{Q}\), i.e., \(A_{Q}\cdot C=C\cdot A_{Q}=\delta\cdot I_{d}\). We further define \(\mathbf{h}:=C\cdot\mathbf{b}\) and \(\tilde{c}=4\delta^{2}c+Q(\mathbf{h})\). It can be checked directly that
\[Q(2\delta\cdot\mathbf{x}+\mathbf{h})=\tilde{c}\Leftrightarrow Q(\mathbf{x})+L(\mathbf{x})=c \tag{5}\]
In words, every equation of the form (4) can be reduced to an equation of the form \(Q(\mathbf{y})=\tilde{c}\) _by an affine transformation_ \(f\) that maps each \(\mathbf{x}\in\mathbb{Q}^{d}\) to \(2\delta\cdot\mathbf{x}+\mathbf{h}\in\mathbb{Q}^{d}\). As such, this means that solutions of Eq. (4) under the non-degeneracy assumption are in a one-to-one correspondence with representations of \(\tilde{c}\) for \(Q\).
**Proposition 5.1**.: Let \(Q\) be a non-degenerate quadratic form and \(L\) a linear form, both in \(d\geq 2\) variables. Define \(\delta:=\det(A_{Q})\), \(\mathbf{h}\) and \(\tilde{c}\), as in the discussion above. The following are equivalent:
1. There exists a linear loop \(\langle M,\mathbf{s}\rangle\) satisfying the invariant \(Q(\mathbf{x})=\tilde{c}\).
2. There exists an affine loop \[\langle M,\frac{1}{2\delta}\left(\mathbf{s}-\mathbf{h}\right),\frac{1}{2\delta}\left( M-I_{d}\right)\mathbf{h}\rangle,\] satisfying the invariant \(Q(\mathbf{x})+L(\mathbf{x})=c\).
Proof.: Start with the first assumption. For all \(n\geq 0\), it holds \(Q(M^{n}\mathbf{s})=\tilde{c}\). Equivalently,
\[Q(f^{-1}\left(M^{n}\mathbf{s}\right))+L(f^{-1}\left(M^{n}\mathbf{s}\right))=c,\text{ or }Q\left(\frac{1}{2\delta}(M^{n}\mathbf{s}-\mathbf{h})\right)+L\left(\frac{1}{2\delta}(M^{n} \mathbf{s}-\mathbf{h})\right)=c\]
for all \(n\geq 0\).
On the other hand, let \(\mathbf{x}(n)\) be the variable vector after the \(n\)-th iteration of an affine loop from the statement. We prove by induction that \(\mathbf{x}(n)=\frac{1}{2\delta}(M^{n}\mathbf{s}-\mathbf{h})\). The base case is true since the initial vector of the affine loop is \(\frac{1}{2\delta}(\mathbf{s}-\mathbf{h})=\frac{1}{2\delta}(M^{0}\mathbf{s}-\mathbf{h})\). Now, assume that \(\mathbf{x}(k)=\frac{1}{2\delta}(M^{k}\mathbf{s}-\mathbf{h})\) for an arbitrary \(k\geq 0\). Then, by applying the loop update once, we have
\[\mathbf{x}(k+1)=M\cdot\left(\frac{1}{2\delta}(M^{k}\mathbf{s}-\mathbf{h}) \right)+\frac{1}{2\delta}\left(M-I_{d}\right)\mathbf{h}\\ =\frac{1}{2\delta}\left(M^{k+1}\mathbf{s}-M\mathbf{h}+M\mathbf{h}-\mathbf{h} \right)=\frac{1}{2\delta}\left(M^{k+1}\mathbf{s}-\mathbf{h}\right),\]
and the inductive step has been shown. By the above work, we conclude that \(Q(\mathbf{x}(n))+L(\mathbf{x}(n))=c\) holds for all \(n\geq 0\).
Consider an invariant \(p(x,y):=x^{2}+y^{2}-3x-y=0\). After an affine change of coordinates \(f(x,y)=(2x-3,2y-1)\), it becomes \(x^{2}+y^{2}=10\) (that corresponds to \(\delta=1\), \(\mathbf{h}=(-3,-1)^{\mathsf{T}}\), \(\tilde{c}=10\)). There exists a linear loop for this equation:
\[M=\begin{pmatrix}\frac{3}{5}&-\frac{4}{5}\\ \frac{4}{5}&\frac{3}{5}\end{pmatrix}\text{ and }\mathbf{s}=\begin{pmatrix}1\\ -3\end{pmatrix}.\]
Next, compute the components of an affine loop. The update matrix is \(M\), whereas initial and translation vectors are respectively
\[\frac{1}{2}\left[\begin{pmatrix}1\\ -3\end{pmatrix}-\begin{pmatrix}-3\\ -1\end{pmatrix}\right]=\begin{pmatrix}2\\ -1\end{pmatrix}\qquad\text{ and }\qquad\frac{1}{2}\left[\begin{pmatrix}\frac{3}{5}&- \frac{4}{5}\\ \frac{4}{5}&\frac{3}{5}\end{pmatrix}-\begin{pmatrix}1&0\\ 0&1\end{pmatrix}\right]\begin{pmatrix}-3\\ -1\end{pmatrix}=\begin{pmatrix}1\\ -1\end{pmatrix}.\]
The resulting affine loop is non-trivial with invariant \(p(x,y)=0\) due to Proposition 5.1:
\[\begin{pmatrix}x\\ y\end{pmatrix}\leftarrow\begin{pmatrix}2\\ -1\end{pmatrix};\text{ while }\star\text{ do }\begin{pmatrix}x\\ y\end{pmatrix}\leftarrow\begin{pmatrix}\frac{3}{5}x-\frac{4}{5}y+1\\ \frac{4}{5}x+\frac{3}{5}y-1\end{pmatrix}.\]
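The passage from the linear loop for \(Q(\mathbf{x})=\tilde{c}\) to the affine loop for \(Q(\mathbf{x})+L(\mathbf{x})=c\) is entirely mechanical. A short sketch, with \(\delta\) and \(\mathbf{h}\) supplied as inputs rather than recomputed, that reproduces the numbers of this example:

```python
from fractions import Fraction as F

def affine_from_linear(M, s, delta, h):
    """Given a linear loop <M, s> for Q(x) = c~ and the data (delta, h) of the
    substitution x -> 2*delta*x + h, return the affine loop <M, s', t>."""
    d, two_delta = len(s), 2 * delta
    s_prime = [(s[i] - h[i]) / two_delta for i in range(d)]
    t = [(sum(M[i][j] * h[j] for j in range(d)) - h[i]) / two_delta for i in range(d)]
    return M, s_prime, t

M = [[F(3, 5), F(-4, 5)], [F(4, 5), F(3, 5)]]
_, s0, t = affine_from_linear(M, [F(1), F(-3)], F(1), [F(-3), F(-1)])
print(s0, t)                     # the vectors (2, -1) and (1, -1) computed above

x = s0[:]                        # iterate and check the invariant p(x, y) = 0
for _ in range(5):
    print(x[0]**2 + x[1]**2 - 3*x[0] - x[1])      # always 0
    x = [sum(M[i][j] * x[j] for j in range(2)) + t[i] for i in range(2)]
```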
### Degenerate Quadratic Forms
Let \(r<d\) be the rank of \(A_{Q}\). There exist \(k:=d-r\) linearly independent vectors \(\mathbf{v}_{1},\dots,\mathbf{v}_{k}\in\mathbb{Q}^{d}\) such that \(A_{Q}\cdot\mathbf{v}_{i}=\mathbf{0}\). Construct a matrix \(\tau\in\operatorname{GL}_{d}(\mathbb{Q})\) such that \(\mathbf{v}_{1},\dots,\mathbf{v}_{k}\) constitute its first columns. It follows that every non-zero entry \((M)_{ij}\) of a matrix \(M:=\tau^{\mathsf{T}}A_{Q}\tau\) is located in the bottom right corner, that is, \(i>k\) and \(j>k\). We rewrite \(Q(\tau\mathbf{x})=\tilde{Q}(x_{k+1},\dots,x_{d})\) and \(L(\tau\mathbf{x})=\tilde{L}(x_{k+1},\dots,x_{d})+\lambda_{1}x_{1}+\dots+\lambda_{k} x_{k}\) in Eq. (4). Now we have:
\[\tilde{Q}(x_{k+1},\dots,x_{d})+\tilde{L}(x_{k+1},\dots,x_{d})=c-\lambda_{1}x_ {1}-\dots-\lambda_{k}x_{k}, \tag{6}\]
where \(\tilde{Q}\) is a non-degenerate quadratic form of \(r\) variables.
In the rest of this subsection, we are concerned with finding an affine loop satisfying Eq. (6). We emphasise that such a loop \(\langle M,\mathbf{s},\mathbf{t}\rangle\) exists if and only if \(\langle\tau M\tau^{-1},\tau\mathbf{s},\tau\mathbf{t}\rangle\) satisfies Eq. (4). The proof is due to \(\tau\) inducing an automorphism of \(\mathbb{Q}^{d}\), cf. Proposition 4.4.
If \(\lambda_{1}=\dots=\lambda_{k}=0\), we have arrived at an instance of Eq. (4) with a non-degenerate quadratic form and fewer variables. Let \(\delta\) be the determinant of \(\tilde{Q}(x_{k+1},\dots,x_{d})\) and define the affine transformation \(f\) of the non-degenerate case on the subset of variables \(\{x_{k+1},\dots,x_{d}\}\). The constant \(\tilde{c}\) and the vector \(\mathbf{h}\in\mathbb{Q}^{r}\) are defined accordingly.
After the change of coordinates that corresponds to \(f\), we have
\[0x_{1}^{2}+\dots+0x_{k}^{2}+\tilde{Q}(x_{k+1},\dots,x_{d})=\tilde{c}. \tag{7}\]
Recall (e.g. from the proof of Theorem 4.1) that once Eq. (7) with \(k\geq 1\) has a solution, there is a non-trivial linear loop satisfying the polynomial invariant defined by the equation. Now, let \(\langle M,\mathbf{s}\rangle\) be a linear loop for Eq. (7), where \(\mathbf{s}=(s_{1},\ldots,s_{d})^{\mathsf{T}}\). In fact, one can assume \(M:=\operatorname{diag}(2,\ldots,2,1,\ldots,1)\) with \(k\) twos and \(r\) ones. Define \(\mathbf{s}^{\prime}:=\frac{1}{2\delta}\left(\mathbf{s}-\left(\begin{smallmatrix}\mathbf{ 0}\\ \mathbf{h}\end{smallmatrix}\right)\right)\). It is not hard to see that a non-trivial _linear_ loop \(\langle M,\mathbf{s}^{\prime}\rangle\) satisfies \(\tilde{Q}(x_{k+1},\ldots,x_{d})+\tilde{L}(x_{k+1},\ldots,x_{d})=c\) if and only if \(\tilde{Q}(x_{k+1},\ldots,x_{d})=\tilde{c}\) has a solution \((s_{k+1},\ldots,s_{d})\).
From now on, we assume that \(k\geq 1\) is the number of non-zero \(\lambda_{i}\)'s on the right-hand side of Eq. (6). We show next that the loop synthesis question has a positive answer.
**Proposition 5.3** (Affine Loops for Quadratic Forms).: Given a quadratic equation of the form in Eq. (6), there exists a non-trivial affine loop in variables \(x_{1},\ldots,x_{d}\) for which said equation is a polynomial invariant.
Proof.: Since \(k\geq 1\) and \(\lambda_{1}\neq 0\), the right-hand side \(c-\sum_{i=1}^{k}\lambda_{i}x_{i}\) represents every rational number. Set the values of \(x_{k+1},\ldots,x_{d}\) to some fixed values \(\mathbf{\alpha}=(\alpha_{1},\ldots,\alpha_{d-k})\) such that \(\mathbf{\alpha}\neq\mathbf{0}\) and solve the equation for \(x_{1},\ldots,x_{k}\) attaining a vector of values \(\mathbf{\beta}=(\beta_{1},\ldots,\beta_{k})\). We have \(\tilde{Q}(\mathbf{\alpha})+\tilde{L}(\mathbf{\alpha})=A(\mathbf{\beta})\), where \(A(x_{1},\ldots,x_{k}):=c-\lambda_{1}x_{1}-\cdots-\lambda_{k}x_{k}\).
We introduce the following case distinction.
satisfies \(\tilde{Q}(\mathbf{x})+\tilde{L}(\mathbf{x})=A(\beta_{1})\) for all \(n\geq 0\). Then,
\[\binom{y(n)}{\mathbf{x}(n)}=\begin{pmatrix}1&0_{1,r}\\ \hline\frac{1}{\beta_{1}}\mathbf{t}&M\end{pmatrix}^{\!\!n}\binom{\beta_{1}}{\mathbf{s}^ {\prime}}\]
satisfies \(\tilde{Q}(\mathbf{x}(n))+\tilde{L}(\mathbf{x}(n))=A(y(n))\) as in Eq. (6) for all \(n\geq 0\). We denote by \(M_{\beta}\) the \(d\)-dimensional square matrix in the preceding displayed equation. Observe that \(\langle M_{\beta},(\beta_{1},\mathbf{s}^{\prime})^{\mathsf{T}}\rangle\) is a _linear_ loop satisfying the invariant of Eq. (4).
Finally, we come to the special case, Case 3, that considers quadratic equations of the form \(ax^{2}+bx=c-dy\) where \(d\neq 0\). It suffices to observe that an affine transformation of \(\mathbb{Q}^{2}\) defined by \((x,y)\mapsto(2x,2\frac{b}{d}x+4y-3\frac{c}{d})\) preserves the equation \(ax^{2}+bx=c-dy\). We conclude that \(ax^{2}+bx=c-dy\) is a polynomial invariant of the affine loop with initial vector \((1,\frac{c-a-b}{d})^{\mathsf{T}}\), translation vector \((0,-3\frac{c}{d})^{\mathsf{T}}\), and update matrix \(\left(\begin{smallmatrix}2&0\\ 2\frac{b}{d}&4\end{smallmatrix}\right)\).
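The invariance used in Case 3 can be verified mechanically; a throwaway SymPy check (not part of the synthesis procedure itself):

```python
import sympy as sp

a, b, c, d, x, y = sp.symbols('a b c d x y')
residual = lambda u, v: a*u**2 + b*u - c + d*v          # = 0 encodes a*x^2 + b*x = c - d*y
X, Y = 2*x, 2*(b/d)*x + 4*y - 3*(c/d)                   # the affine map of Case 3
print(sp.simplify(residual(X, Y) - 4*residual(x, y)))   # 0: the map preserves the equation
```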
### The Procedure: Affine Loop Synthesis for Quadratic Invariants
**Theorem 5.4** (Affine Loops for Quadratic Equations).: There exists an effective procedure that, given a quadratic equation (i.e. invariant)
\[Q(x_{1},\ldots,x_{d})+L(x_{1},\ldots,x_{d})=c,\]
decides whether a non-trivial affine loop satisfying it exists and, if so, synthesises a loop.
The theorem is essentially proved in Propositions 5.1 and 5.3. If the quadratic form is non-degenerate, Proposition 5.1 reduces the search for an affine loop to the search for a linear loop satisfying \(Q(x_{1},\ldots,x_{d})=\tilde{c}\). The solution of this problem was given in Theorem 4.1. If the quadratic form is degenerate, we consider Eq. (6). If at least one of the \(\lambda_{i}\)'s is non-zero, a loop exists, as shown by the ad hoc constructions of Proposition 5.3. In two of the three cases there, the loop is not just affine, but linear. Otherwise, if all of the \(\lambda_{i}\)'s are zero, we obtain a linear loop by essentially testing whether a solution to an equation \(\tilde{Q}(x_{1},\ldots,x_{d})=\tilde{c}\) exists. Finally, in order to obtain an affine loop satisfying the original equation, we apply transformation \(\tau\) to the loop synthesised for Eq. (6).
The procedure in Algorithm 3 (see Appendix B) summarises the discussion of this section. By analysing the procedure, one can argue that a negative output implies that Eq. (4) has no solutions. The problem of _deciding_ whether a loop exists for a given invariant, as in Problem 2.7 and as opposed to the synthesis of numerical values, is thus solved as follows.
**Corollary 5.5**.: Let \(Q\) be a quadratic form, \(L\) a linear form over variables \(\mathbf{x}=(x_{1},\ldots,x_{d})\).
1. A non-trivial affine loop satisfying the quadratic equation \(Q(\mathbf{x})+L(\mathbf{x})=c\) exists if and only if the equation has a rational solution different from \(\mathbf{x}=\mathbf{0}\).
2. A non-trivial linear loop satisfying the equation \(Q(\mathbf{x})=c\) exists if and only if the equation has a rational solution different from \(\mathbf{x}=\mathbf{0}\).
Let \(-11x^{2}+y^{2}-3z^{2}+2xy-12xz+x+z=-1\) be a quadratic invariant in 3 variables. The quadratic form \(Q(x,y,z)=-11x^{2}+y^{2}-3z^{2}+2xy-12xz\) is degenerate with rank \(r=2\) and so we can compute \(\tau=\left(\begin{smallmatrix}-1&0&0\\ 1&3&0\\ 2&0&3\end{smallmatrix}\right)\) such that \(\tau^{\mathsf{T}}A_{Q}\tau=\operatorname{diag}(0,9,-27)\) is the matrix of an equivalent form. We have \(Q(\tau\mathbf{x})=\tilde{Q}(y,z)=9y^{2}-27z^{2}\). For the linear part, \(L(x,y,z)=x+z\), the change of coordinates results in \(L(\tau\mathbf{x})=\tilde{L}(y,z)+x=3z+x\). Continue with the equation of the form Eq. (6): \(9y^{2}-27z^{2}+3z=-1-x\). Here, \(\lambda_{1}=1\), and so we set \((y,z)\) to \((\alpha_{1},\alpha_{2})=(\frac{1}{3},0)\) and find a solution for \(x\): \(\beta_{1}=-2\). Next, find an affine
transformation \(f\) associated with \(9y^{2}-27z^{2}+3z=1\). We have \(\delta=-243\), \(\boldsymbol{h}=(0,27)^{\mathsf{T}}\) and \(\tilde{c}=216513\). The solutions of \(9y^{2}-27z^{2}+3z=1\) are exactly the solutions of \(9y^{2}-27z^{2}=216513\) under the action of \(f\).
Using the linLoop procedure, we find a linear loop \(\langle M,\boldsymbol{s}\rangle\) for the invariant \(9y^{2}-27z^{2}=216513\) with \(M=\left(\begin{smallmatrix}2&3\\ 1&2\end{smallmatrix}\right)\) and \(\boldsymbol{s}=(-162,27)^{\mathsf{T}}\). Therefore, an affine loop
\[\mathcal{A}:=\langle M,\frac{1}{2\delta}\left(\boldsymbol{s}-\boldsymbol{h} \right),\frac{1}{2\delta}\left(M-I_{2}\right)\boldsymbol{h}\rangle;\]
that is, an affine loop with augmented matrix \(M^{\prime}\) and initial vector \(\boldsymbol{s}^{\prime}\) given by
\[M^{\prime}=\left(\begin{array}{c|cc}1&0_{1,r}\\ \hline\frac{1}{2\delta}\left(M-I_{2}\right)\boldsymbol{h}&M\end{array}\right) =\begin{pmatrix}1&0&0\\ -1/6&2&3\\ -1/18&1&2\end{pmatrix}\quad\text{and}\quad\boldsymbol{s}^{\prime}=\frac{1}{2 \delta}\left(\boldsymbol{s}-\boldsymbol{h}\right)=\begin{pmatrix}\frac{1}{3} \\ 0\end{pmatrix}\!,\]
satisfies the invariant \(9y^{2}-27z^{2}+3z=1\). Consequently, a _linear_ loop with update matrix
\[M_{\beta}:=\begin{pmatrix}1&0&0\\ 1/12&2&3\\ 1/36&1&2\end{pmatrix}\]
and initial vector \((-2,1/3,0)^{\mathsf{T}}\) satisfies the invariant \(9y^{2}-27z^{2}+3z=-1-x\). We conclude by applying transformation \(\tau\): a linear loop with matrix
\[\tau M_{\beta}\tau^{-1}=\begin{pmatrix}1&0&0\\ 27/4&2&3\\ 35/12&1&2\end{pmatrix}\quad\text{and initial vector}\quad\tau\begin{pmatrix}-2 \\ 1/3\\ 0\end{pmatrix}=\begin{pmatrix}2\\ -1\\ -4\end{pmatrix}\]
satisfies the original invariant \(-11x^{2}+y^{2}-3z^{2}+2xy-12xz+x+z=-1\).
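As a sanity check, iterating the synthesised loop in exact arithmetic confirms that every orbit point satisfies the original equation:

```python
from fractions import Fraction as F

M = [[F(1), F(0), F(0)],
     [F(27, 4), F(2), F(3)],
     [F(35, 12), F(1), F(2)]]
v = [F(2), F(-1), F(-4)]

p = lambda x, y, z: -11*x**2 + y**2 - 3*z**2 + 2*x*y - 12*x*z + x + z

for _ in range(6):
    print(p(*v))                                          # prints -1 at every step
    v = [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]
```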
## 6 Conclusion
### Related Work
**Loop Synthesis.** Work by Humenberger et al. on loop synthesis employs an approach based on algebraic reasoning about linear recurrences and translating loop synthesis into an SMT solving task in non-linear arithmetic. Their approach is relatively complete as it generates all linear loops (with an a priori fixed upper bound on the number of variables) satisfying a given invariant. Another SMT-based algorithm for template-based synthesis of general polynomial programs is given in work by Goharshady et al. [13]. However, loops generated for an invariant \(P=0\) using the latter approach necessarily have \(P=0\) as an inductive invariant and, more importantly, are not guaranteed to have infinite orbits. Recent work by Kenison et al. addresses the loop synthesis problem for multiple polynomial invariants, where each of the polynomials is a binomial of a certain type [20]. In our work, we restrict not the number of monomials in an invariant, but its degree, and thus achieve a complete solution for a single quadratic invariant.
**Solving Polynomial Equations.** As noted in Remark 2.8, one of the fundamental challenges towards loop synthesis arises from the study of integer and rational solutions to polynomial equations. A _Diophantine equation_ \(F(x_{1},x_{2},\ldots,x_{d})=0\) is a polynomial equation with rational coefficients in at least two variables. A general decision procedure for the existence of rational solutions to a Diophantine equation (Problem 3.1) is not known. Over the ring of
integers, this is Hilbert's 10th Problem, proven undecidable by Matiyasevich in 1970 [24]. Furthermore, there does not exist an algorithm that, for an arbitrary Diophantine equation, decides whether it has infinitely many integer solutions [10].
In contrast to the algorithmic unsolvability of Hilbert's 10th Problem and the open status of Problem 3.1, algorithms exist that allow finding rational solutions for special classes of equations. For instance, there exist procedures [14, 29, 23] completely solving the specialisation of the problem to quadratic equations. Masser introduced an approach based on the effective search bound for rational solutions [23]. A further improvement of this approach for \(d\geq 5\) is provided in [5]. An alternative procedure to decide whether an arbitrary quadratic equation has a rational solution is described in [14] (see Corollary, pg. 2 therein). Determining the existence of integer solutions to a _system_ of quadratic equations is, however, undecidable [4].
### Discussion
We conclude by sketching some observations and pointing out the directions for future work.
**Multiple loops.** The approach of Algorithm 1 can be adapted to generate multiple linear loops satisfying a given invariant. One degree of freedom comes from the call to the procedure solve. Different solutions of the quadratic equation can be found in this step and subsequently used as an initial vector. Moreover, in line 11 (Algorithm 1), it is possible to pick two variables \((x_{1},x_{2})\) in different ways, thus obtaining different matrices \(M\) in line 12. Each of the matrices synthesised in this way is an element of the orthogonal group \(\Gamma(Q^{\prime})\)1 of the quadratic form \(Q^{\prime}\). Therefore, all possible products of these matrices also preserve the value of \(Q^{\prime}\) and can be eventually used to generate different update matrices for a loop with invariant \(Q=c\).
Footnote 1: The orthogonal group \(\Gamma(Q)\) of a quadratic form \(Q\) is the group of all linear automorphisms \(M\in\operatorname{GL}_{d}(\mathbb{K})\) such that \(Q(\mathbf{x})=Q(M\mathbf{x})\) for all \(\mathbf{x}\in\mathbb{K}^{d}\).
**Number of loop variables.** Let \(P(x_{1},\ldots,x_{d})=0\) be a quadratic invariant in \(d\) variables. Note that Theorem 5.4 can be interpreted in terms of linear loops with variables \(x_{0},x_{1},\ldots,x_{d}\). Specifically, we can redefine the loop synthesis problem (Problem 2.7) by searching for linear loops with \(s=d+1\) variables. To this end, update Algorithm 3 as follows: if the output of the original procedure is positive, that is, an affine loop \(\left\langle M,\mathbf{s},\mathbf{t}\right\rangle\), then output the linear loop
\[\left\langle\left(\begin{array}{c|c}1&0_{1,d}\\ \hline\mathbf{t}&M\end{array}\right),\begin{pmatrix}1\\ \mathbf{s}\end{pmatrix}\right\rangle\]
in the variables \(x_{0},x_{1},\ldots,x_{d}\), where the additional variable \(x_{0}\) is initialised to \(1\) and kept constant by the update.
Due to Corollary 5.5, the updated procedure solves the problem of loop synthesis with one additional variable. What follows is a reinterpretation of Theorem 5.4:
**Corollary 6.1**.: _There exists an effective procedure for the following problem: given a quadratic equation_
\[Q(x_{1},\ldots,x_{d})+L(x_{1},\ldots,x_{d})=c,\]
_decide whether there exists a non-trivial linear loop in \(d+1\) variables \(\{x_{0},x_{1},\ldots,x_{d}\}\) that satisfies it. Furthermore, the procedure synthesises a loop, if one exists._
The potential of changing the number of variables in the loop template leads to the question:
Let \(P\) be an arbitrary polynomial in \(d\) variables. Does there exist an upper bound \(N\) such that if a non-trivial linear loop satisfying \(P=0\) exists, then there exists a non-trivial linear loop with at most \(N\) variables satisfying the same invariant?
If an upper bound exists and can be computed, a template-based loop synthesis approach by Humenberger et al. [16] can be used to synthesise all loops satisfying an invariant, provided such loops exist.
Theorem 5.4 together with Corollary 5.5 shows that, _for quadratic polynomials, \(N\) is at most \(d+1\)_. Moreover, we show in Section 4 that in the class of polynomial equations \(Q(\mathbf{x})=c\), where \(Q\) is a quadratic form, the bound \(N=d\) is tight. A full characterisation of quadratic equations for which linear loops with \(d\) variables exist would also be of interest.
**When loops exist for sure.** The results of Sections 4 and 5 witness another class of polynomial invariants for which non-trivial linear (or affine) loops always exist. Similar to the setting of equations with pure difference binomials in [20], we can claim this for invariants \(Q(x_{1},\ldots,x_{d})=c\) with isotropic quadratic forms \(Q\). In particular, for every equation of the form \(a_{1}x_{1}^{2}+\cdots+a_{d}x_{d}^{2}+c=0\) with \(d\geq 4\) and \(a_{1},\ldots,a_{d},c\) not all of the same sign, there exists a non-trivial linear loop with \(d\) variables due to Meyer's Theorem and Corollary 5.5(2).
**Beyond quadratic.** One future work direction concerns loop synthesis from invariants that are polynomial equalities of higher degrees, and, in particular, algebraic forms. However, we are limited by the hardness of Problem 3.1, as before. For Diophantine equations defined with homogeneous polynomials of degree \(3\), the questions of loop synthesis are related to the investigations of rational points on elliptic curves, a central topic in computational number theory [30, 8].
2304.09685 | Stochastic theory of ferroelectric domain structure formation dominated
by quenched disorder | A self-consistent stochastic model of domain structure formation in a
uniaxial ferroelectric, quenched from a high-temperature paraelectric phase to
a low-temperature ferroelectric phase, is developed with an account of the
applied electric field and the feedback effect via local depolarization fields.
Both polarization and field components are considered as Gauss random
variables. A system of integro-differential equations for correlation functions
of all involved variables is derived and solved analytically and numerically.
Phase diagram in terms of the average value and dispersion of polarization
reveals different possible equilibrium states and available final single-domain
and multi-domain states. The time-dependent evolution of the average
polarization and dispersion discloses a bifurcation behavior and the
temperature-dependent value of the electric field, deciding between the
single-domain and multi-domain final states, which can be interpreted as the
coercive field. Analytical and numerical results for the time-dependent
correlation length and correlation functions exhibit plausible agreement with
available experimental data. | Olga Y. Mazur, Leonid I. Stefanovich, Yuri A. Genenko | 2023-04-19T14:23:28Z | http://arxiv.org/abs/2304.09685v1 | # Stochastic theory of ferroelectric domain structure formation dominated by quenched disorder
###### Abstract
A self-consistent stochastic model of domain structure formation in a uniaxial ferroelectric, quenched from a high-temperature paraelectric phase to a low-temperature ferroelectric phase, is developed with an account of the applied electric field and the feedback effect via local depolarization fields. Both polarization and field components are considered as Gauss random variables. A system of integro-differential equations for correlation functions of all involved variables is derived and solved analytically and numerically. Phase diagram in terms of the average value and dispersion of polarization reveals different possible equilibrium states and available final single-domain and multi-domain states. The time-dependent evolution of the average polarization and dispersion discloses a bifurcation behavior and the temperature-dependent value of the electric field, deciding between the single-domain and multi-domain final states, which can be interpreted as the coercive field. Analytical and numerical results for the time-dependent correlation length and correlation functions exhibit plausible agreement with available experimental data.
## 1 Introduction
Domain structures in ferroelectrics have a decisive effect on their fundamental physical properties and functionality, so that the domain engineering is a prerequisite for many ferroelectric implementations like piezoelectric sensor and actuator technology [1], nonlinear optics and photonics [2]. A possible way of formation of domain structures with desired characteristics is the controlled quenching from a high-temperature paraelectric to a low-temperature ferroelectric phase. This intrinsically stochastic process depending on material properties, sample shape, quenching temperature, and applied field regimes was a subject of experimental studies over many decades. One of the most intensively investigated systems was thereby single-crystalline triglycine sulfate (TGS) because this model ferroelectric material possesses the only polar axis and exhibits only 180\({}^{\circ}\)-domains [3-17]. A thorough review of experimental methods used for visualization of static and dynamic domain structures in TGS including the etching and decoration methods, the powder deposition technique, the nematic liquid crystal coating, the scanning electron microscopy, the X-ray diffraction microscopy, and others was given by Nakatani [4]. Later an in-situ 3D
observation of domain formation dynamics near a ferroelectric-paraelectric phase transition was performed by Wehmeier et al. [14] using the second harmonic generation microscopy.
Temperature dependence of domain patterns in TGS, represented by isolated lenticular domains and extended lamellar domains, was investigated in Refs. [9, 10-16] in different annealing and cooling regimes. The evolution of domain structures with time after quenching at different temperatures was studied in Refs. [3, 5-9, 14, 16]. Main features of time-dependent characteristics, represented essentially by the correlation length and the two-site correlation function, were revealed already in the pioneering works by Nakatani [3] and the Ishibashi group [5, 6]. A dominating characteristic length \(L(t)\) was found to grow with time \(t\) as the power function \(L(t)\)\(\sim\)\((t-t_{0})^{\nu}\), where the exponent \(\nu\) was reported to adopt values about 0.2-0.3 [5, 6], 1/3 [7, 8], 0.086-0.155 [9], 0.3 [10] while the offset time \(t_{0}\) was either positive [8, 9] or negative [5-7, 9], revealing then a finite initial value \(L(0)\). Golitsyna et al. reported that \(\nu\) takes values about 0.3 for temperatures sufficiently far from the transition temperature, \(T_{c}-T>1\ K\), but rapidly approaches unity when \(T_{c}-T<0.3\ K\) [16], confirming the observation by Nakatani [3]. At longer times the asymptotic dependence \(L(t)\)\(\sim\)\([\ln(t/t_{0})]^{4}\) was observed [7-9]. Tolstikhina et al. applied the Fourier analysis to the 2D domain pattern allowing a consistent quantification of the correlation length [13].
A time-dependent two-site correlation function was introduced as \(\langle\pi(\mathbf{r}_{1},t)\pi(\mathbf{r}_{2},t)\rangle\), where \(\mathbf{s}=\mathbf{r}_{1}-\mathbf{r}_{2}\) and \(\pi(\mathbf{r},t)=P_{\rm z}(\mathbf{r},t)/P_{\rm S}\) is the normalized polarization component with the spontaneous polarization \(P_{\rm S}\)[5, 6]. The statistical average \(\langle...\rangle\) was determined experimentally as the spatial average over the sample volume. The correlation function turned out to be anisotropic in the plane perpendicular to the polarization direction \(z\) and demonstrated oscillations in the direction perpendicular to the typical lamellar domain alignment, while no oscillations were observed along the lamellas [5, 16]. Scaling features of the correlation function represented as a function of \(s/L(t)\) were established at least at short distances \(s\)[6, 7-9, 16]. For the system with applied total [5] or local [17] electric fields the dynamical scaling features were observed too.
Initial theoretical considerations of the stochastic domain formation were mostly based on either the time-dependent Landau-Ginzburg-Devonshire (LGD) model or the Kolmogorov-Avrami-Ishibashi (KAI) statistical approach. Thus, a stochastic formation of domain structures in a ferroelectric subject to a periodic electric field was studied in a KAI-based model neglecting statistical correlations between nucleating domains [18]. Together with the empirical knowledge on the electric field dependence of the domain wall velocity and relaxation time [19] the KAI approach allowed an explanation of the time dependence of the field-driven switching of the average polarization [20]; however, the correlation radius and correlation function behavior remained elusive.
The first attempt of the LGD-based stochastic analysis of the domain formation was made by Rao and Chakrabarti [21]. In this seminal work, the frozen-in disorder (quenched random field) as well as the thermodynamic fluctuations represented by the white noise were accounted. Numerical integration of the time-dependent LGD equation for a one-component order parameter revealed an initial diffusive regime of the correlation length with \(L(t){\sim}t^{1/2}\) followed by an asymptotic logarithmic dependence due to the frozen-in randomness. Correlation properties of the domain growth were not studied.
Darinskii et al. considered the LGD model with a two-component polarization order parameter accounting self-consistently for the electric depolarization fields emerging from the polarization inhomogeneities [22]. Deterministic solutions of the coupled LGD and Poisson equations revealed single-domain and periodic structures appearing in different regimes. The spatial period was derived from the condition of the maximum growth rate of the order parameter and could be related to the characteristic length \(L(t)\), however, the transient development of the inhomogeneous phase was not studied.
The phase-field approach to ferroelectrics generalizing the LGD theory considers typically much more complicated problem statements including multi-component polarization and elastic variables, their electro-elastic [23] and flexoelectric [24, 25] coupling, together with other degrees of freedom and features, such as long-range dipole-dipole interactions [26], semiconducting effects and mobile charged defects [27, 28]. Analytical treatment of such problems is usually not possible, but a great advantage of phase-field simulations is that they allow simultaneous accounting of multiple physical effects resulting in structures and features which could hardly be conceived theoretically [23, 24, 25, 26, 27, 28, 29, 30]. New perspectives were opened by molecular dynamics simulations of lead titanate, based on interatomic potentials parameterized from first-principles, supported by the phase-field simulations with properly adjusted parameters [31, 32]. Particularly, molecular dynamics gave insight in microscopic mechanisms of the field-driven domain switching in perovskites [31, 32, 33].
An LGD-based stochastic treatment of the domain formation in the presence of an external weak electric field in uniaxial ferroelectrics dominated by the quenched disorder was elaborated by Stefanovich in terms of the time-dependent two-site correlation function and the mean polarization [34]. It was analytically shown that the correlation length grows with time as \(L(t)=\sqrt{L^{2}(0)+2t/3}\) starting with an initial value \(L(0)\). A phase diagram in coordinates of the polarization dispersion and the average polarization was constructed which demonstrated a tendency to the formation of single-domain states at lower temperatures and multi-domain states at higher temperatures closer to the transition point. The further numerical study of this model disclosed generally non-monotonic time-dependences of both the mean polarization and the
polarization dispersion revealing a characteristic applied field deciding between the single-domain and multi-domain asymptotic state [35]. The approach proved to be fruitful when applied to the problem of domain formation under an applied ac field [36] as well as by the construction of pressure-temperature diagram of phase states of a barium titanate single-crystal confirmed experimentally [37]. The drawback of the approach [34, 35, 36, 37] is an assumption of a uniform electric field in the ferroelectric hardly compatible with the formation of stochastic nonuniform polarization domain structures.
In the current study, we extend the LGD-based stochastic approach suggested in Ref. [34] by introducing stochastic electric field variables, self-consistently related to the polarization via the Poisson equation. We derive a system of integro-differential equations for self- and cross-correlation functions for all involved stochastic variables considering them as Gauss random variables and solve these equations analytically and numerically. The results for the time-dependent correlation length and correlation functions are compared with available experiments [3, 5-9, 15, 16]. In spite of the superficial similarity with the previous considerations [34, 35, 36, 37] the actual approach is physically quite different. Random spatial variations of polarization in the studied system create local charge density, which in turn generates substantial stochastic depolarization fields. These fields have a great impact on the domain formation and development with time. In the previous studies [34, 35, 36, 37], the electric field in the system was assumed to be equal to a uniform external field and the appearance of the depolarization fields was neglected, therefore the consideration was, in fact, limited to infinitesimal external fields. In the current work, the electric field includes both the external one and the emerging depolarization fields described self-consistently via the Poisson equation with proper boundary conditions. This allows consideration of a finite sample subject to arbitrary applied electric fields and a construction of the parametric phase diagram. The final state of the evolution of the system, which can be a single- or multi-domain one, is determined by the external applied electric field, and the quenching temperature. The characteristics of this state given by the mean polarization and the polarization dispersion (variance) determine the functional properties of the ferroelectric. Among others we show a crucial impact of cross-correlations between the electric field and polarization on the choice between a single- and a multi-domain final state.
## 2 LGD-based stochastic model
### LGD model of a uniaxial ferroelectric/nonferroelastic
We consider a uniaxial single-crystalline ferroelectric exhibiting only 180\({}^{\circ}\) polarization domains as in the case of TGS, LiNbO\({}_{3}\), and LiTaO\({}_{3}\) materials. We study the evolution of the system from a disordered initial state obtained by quenching from the high-temperature
paraelectric phase to the ferroelectric phase at some fixed temperature \(T<T_{c}\). In this model, we assume the stochastic formation of domain structures dominated by initial quenched polarization disorder. The conditions at which thermodynamic fluctuations can be neglected in comparison with the quenched disorder are derived in Appendix A and apply in the temperature region \(T_{c}-T>0.02\;K\) if the characteristic length of the initial disorder \(L(0)\) exceeds the length scale of thermodynamic fluctuations.
We choose a typical experimental geometry [3-6,14] of a single-crystalline ferroelectric plate of thickness \(h_{f}\), attached to a bottom metallic electrode and a dielectric layer of thickness \(h_{d}\) at the top side (see Fig. 1) covered with a top metallic electrode. Both electrodes are kept at fixed electric potentials providing a desired applied electric field regime. The Gibbs free energy of the system can be presented in the form [23,38]
\[\Phi=\Phi_{0}+\int_{V_{f}}\left[\tfrac{1}{2}AP_{z}^{2}+\tfrac{1}{4}BP_{z}^{4}+ \tfrac{1}{2}\,G\left(\nabla P_{z}\right)^{2}-P_{z}E_{z}-\tfrac{\varepsilon_{0}\varepsilon_{b}}{2}\,\mathbf{E}^{2}\right]dV-\int_{V_{d}}\tfrac{\varepsilon_{0} \varepsilon_{d}}{2}\,\mathbf{E}^{2}dV \tag{1}\]
where \(A=\alpha_{0}(T-T_{c})\) with \(\alpha_{0}>0\) and \(T<T_{c}\), the temperature of the paraelectric-ferroelectric phase transition. The other coefficients of the LGD expansion are \(B>0\) and \(G>0\). This form accounts for the only polarization component \(P_{z}\) in the Cartesian frame \((x,y,z)\) perpendicular to the ferroelectric plate surface and allows description of the second-order phase transition which is the case for TGS. Polarization is the primary order parameter of the phase transition; elastic
Figure 1: Problem layout: A ferroelectric slab of thickness \(h_{f}\), attached to a bottom electrode and separated from a top electrode by a dielectric layer of thickness \(h_{d}\), is infinite in a plane parallel to the slab surface. Polarization direction is along the vertical \(z\)-axis of the Cartesian \((x,y,z)\)-frame. The scheme of \(180^{\circ}\)-domains roughly reproduces the domain pattern observed in Ref. [15].
variables are the secondary order parameters and can be neglected in TGS since it is nonferroelastic. \(\mathbf{E}\) is the local electric field, \(\varepsilon_{0}\), \(\varepsilon_{d}\) and \(\varepsilon_{b}\) are the permittivity of vacuum, of the dielectric layer and the background permittivity of the ferroelectric, respectively. \(E_{z}\) denotes the \(z\)-component of the local electric field which may originate from external sources and/or from spatial variation of polarization. \(V_{f}\) and \(V_{d}\) denote the volumes of the ferroelectric plate and the dielectric layer, respectively.
The corresponding Landau-Khalatnikov equation governing the evolution of polarization reads
\[\Gamma\frac{\partial P_{z}}{\partial t}=-\frac{\delta\Phi}{\delta P_{z}}=-AP_{z}-BP_{z}^{3}+G\Delta P_{z}+E_{z}, \tag{2}\]
with the Khalatnikov constant \(\Gamma\) and the Laplace operator \(\Delta\). Variation of the electric field \(\mathbf{E}=-\nabla\varphi\), where \(\varphi\) is the electric potential, is described by the Poisson equation in the ferroelectric,
\[\varepsilon_{0}\varepsilon_{b}\Delta\varphi=\frac{\partial P_{\mathbf{z}}}{ \partial z} \tag{3}\]
and by the Laplace equation in the dielectric,
\[\Delta\varphi=0, \tag{4}\]
which can be derived by minimization of the functional (1) with respect to \(\varphi\). The boundary conditions for the above equations include the continuity of the electric potential at all interfaces and of the normal electric displacement at the ferroelectric/dielectric interface [38],
\[\varphi|_{\mathbf{z}=0}=0,\ \varphi|_{\mathbf{z}=h_{f}-0}=\varphi|_{\mathbf{z}=h_{f}+0}, \quad\varphi|_{\mathbf{z}=h_{f}+h_{d}}=-V,\ \ D_{\mathbf{z}}|_{\mathbf{z}=h_{f}-0}=D_{\mathbf{z}}|_{\mathbf{z}=h_{f}+0}, \tag{5}\]
where the electric displacement equals \(D_{\mathbf{z}}=\varepsilon_{0}\varepsilon_{b}E_{\mathbf{z}}+P_{\mathbf{z}}\) in the ferroelectric and \(D_{\mathbf{z}}=\varepsilon_{0}\varepsilon_{d}E_{\mathbf{z}}\) in the dielectric.
### Stochastic variables and their mean values
The evolution of the system from the initial disordered quenched state makes all involved physical variables time-dependent random variables. Statistical averaging of random variables is indicated with a symbol \(\langle...\rangle\) and is assumed to coincide with the average over the material volume. Thus, we introduce a local polarization \(P_{\mathbf{z}}(\mathbf{r},t)=\langle P_{\mathbf{z}}\rangle+\delta P_{\mathbf{z}}(\mathbf{r},t)\), where the mean polarization may be a function of time while \(\langle\delta P_{\mathbf{z}}\rangle=0\). The mean values of the electric field in the ferroelectric and the dielectric regions are denoted \(E_{f}\) and \(E_{d}\), respectively. If the mean polarization value \(\langle P_{\mathbf{z}}\rangle\) is not zero, this creates a mean electric depolarization field in the system. The electric potential can be split in a regular and a stochastic part as \(\varphi=\bar{\varphi}+\delta\varphi\), where \(\langle\delta\varphi\rangle=0\) and the regular part \(\bar{\varphi}\) satisfies the continuity conditions (5). Thus,
\[\bar{\varphi}(z)=\begin{cases}-E_{f}z,&0<z<h_{f}\\ -E_{f}h_{f}-E_{d}\big{(}z-h_{f}\big{)},h_{f}<z<h_{f}+h_{d}\end{cases}. \tag{6}\]
By satisfying the boundary condition at \(z=h_{f}+h_{d}\) and the continuity of the electric displacement (5) at \(z=h_{f}\) one finds the expressions for the mean electric fields in both media, depending on the mean polarization,
\[E_{d}=\frac{\varepsilon_{b}}{\varepsilon_{d}h_{f}+\varepsilon_{b}h_{d}}V+\frac{h_{f}}{\varepsilon_{d}h_{f}+\varepsilon_{b}h_{d}}\frac{\langle P_{z}\rangle}{\varepsilon_{0}},\ \ \ E_{f}=E_{a}-\rho_{z}\,\frac{\langle P_{z}\rangle}{\varepsilon_{0}} \tag{7}\]
with
\[E_{a}=\frac{\varepsilon_{d}}{\varepsilon_{d}h_{f}+\varepsilon_{b}h_{d}}V,\ \ \rho_{z}=\frac{h_{d}}{\varepsilon_{d}h_{f}+\varepsilon_{b}h_{d}}. \tag{8}\]
Note that the field is present in the whole structure even if the electrodes are short-circuited, \(V=0\). The local electric field in the ferroelectric is given by
\[E(\mathbf{r},t)=E_{a}-\frac{\rho_{z}}{\varepsilon_{0}}\langle P\rangle\ -\mathbf{\nabla}\delta\varphi(\mathbf{r},t) \tag{9}\]
with \(E_{a}=(0,0,E_{a})\) and \(P=(0,0,P_{z})\).
### Equations in dimensionless units
It is convenient to normalize physical variables to their natural characteristic magnitudes in the phase transition problem. This leads to a dimensionless polarization \(\pi=P_{z}/P_{s}\) normalized to the spontaneous equilibrium polarization \(P_{s}=\sqrt{|A|/B}\), and a dimensionless electric field \(\mathbf{\epsilon}=\)\(E/E_{0}\) with the value of \(E_{0}=P_{s}|A|\) close to the thermodynamic coercive field, \(E_{cr}=2P_{s}|A|/3\sqrt{3}\)[39]. Spatial coordinates are normalized to a characteristic length \(\lambda=\sqrt{G/|A|}\) (the characteristic domain wall thickness) and time \(t\) to a characteristic time \(t_{0}=\Gamma/|A|\), \(\tau=t/t_{0}\). Thus, we introduce a dimensionless local electric field as
\[\mathbf{\epsilon}(\mathbf{r},\tau)=\mathbf{\epsilon}_{a}-\alpha_{z}\bar{\pi}(\tau) \hat{\mathbf{z}}\ -\mathbf{\nabla}\phi(\mathbf{r},\tau) \tag{10}\]
where the first two terms represent the mean electric field in the ferroelectric along the \(z\)-axis, as in Eq. (9), with the dimensionless mean polarization magnitude \(\bar{\pi}(\tau)=\)\(\langle P_{z}\rangle/P_{s}\), dimensionless field \(\epsilon_{a}=E_{a}/\,E_{0}\), dimensionless stochastic potential \(\phi=\delta\varphi/(E_{0}\lambda)\) and the coefficient \(\alpha_{z}=\rho_{z}/(\varepsilon_{0}|A|)\), while the mean fluctuation depolarization field due to spatial polarization variations vanishes, \(\langle-\mathbf{\nabla}\phi(\mathbf{r},\tau)\rangle=0\). The local dimensionless polarization along the \(z\)-axis can be represented as \(\pi(\mathbf{r},\tau)=\bar{\pi}(\tau)+\xi(\mathbf{r},\tau)\) where \(\xi=\delta P_{z}/P_{s}\) and \(\langle\xi(\mathbf{r},\tau)\rangle=0\).
Introducing the dimensionless variables reduces the governing equation (2) to
\[\frac{\partial\pi}{\partial\tau}=\Delta\pi+\pi-\pi^{3}+\epsilon_{z}. \tag{11}\]
and the Poisson equation (3) to
\[\Delta\phi=\eta\frac{\partial\pi}{\partial Z} \tag{12}\]
with \(Z=z/\lambda\) and a dimensionless parameter \(\eta=1/(\varepsilon_{0}\varepsilon_{b}|A|)\). Considering the ferroelectric susceptibility below \(T_{c}\) given by \(\chi_{f}=1/(2\varepsilon_{0}|A|)\)[39] reveals that the parameter \(\eta=2\chi_{f}/\varepsilon_{b}\) is essentially the ferroelectric susceptibility normalized to the background permittivity.
## 3 Correlation functions and their governing equations
Assuming that all physical variables are Gaussian random fields [40], their correlation properties can be completely characterized by two-site autocorrelation functions for polarization, \(K(\mathbf{s},\tau)=\langle\xi(\mathbf{r_{1}},\tau)\xi(\mathbf{r_{2}},\tau)\rangle\), and electric potential, \(g(\mathbf{s},\tau)=\langle\phi(\mathbf{r_{1}},\tau)\phi(\mathbf{r_{2}},\tau)\rangle\), with \(\mathbf{s}=\mathbf{r_{1}}-\mathbf{r_{2}}\), together with cross-correlation functions \(\Psi_{xz}(\mathbf{s},\tau)=\langle\epsilon_{x}(\mathbf{r_{1}},\tau)\xi(\mathbf{r_{2}},\tau)\rangle\), \(\Psi_{yz}(\mathbf{s},\tau)=\left\langle\epsilon_{y}(\mathbf{r_{1}},\tau)\xi(\mathbf{r_{2}},\tau)\right\rangle\), \(\Psi_{zz}(\mathbf{s},\tau)=\langle\epsilon_{z}(\mathbf{r_{1}},\tau)\xi(\mathbf{r_{2}},\tau)\rangle\). The two-site correlation functions depend only on the position difference \(\mathbf{s}\) in infinite uniform media, which is assumed to apply also in macroscopic finite samples. Correlation functions are a useful tool for describing the domain structure in ferroelectrics. For example, the function \(K(\mathbf{s},\tau)\) determines the degree of similarity of different fragments of a static domain picture separated by a position vector \(\mathbf{s}\) at a given time. The function \(K(\mathbf{s},\tau)\) is at maximum when the fragments are completely identical, as expected within one domain, and 0 if there are no correlations between the fragments. A change of sign of the function \(K(\mathbf{s},\tau)\) indicates that the separation vector has entered a domain region with the opposite direction of polarization. Thus, it contains information on the characteristic size of the domains, the regularity of domain structure, and its development with time. The cross-correlation function \(\Psi_{\alpha\beta}(\mathbf{s},\tau)\) reveals how sensitive the local polarization is to the electric fields generated by polarization variations at other locations. Correlation functions were experimentally studied for over thirty years [3, 5-9,16] but their theoretical description is still missing. A couple of examples of correlation functions evaluated for regular domain structures are presented in Appendix B.
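As an illustration of how such a correlation function is obtained as a spatial average in practice, the following sketch estimates \(K(\mathbf{s},\tau)\) for a synthetic one-dimensional 180°-domain pattern; the pattern and all numbers are artificial and serve only to show the averaging procedure.

```python
import numpy as np

# Synthetic quasi-lamellar pattern: normalized polarization +1/-1 with stripe
# widths of roughly 30-70 grid points (purely illustrative).
rng = np.random.default_rng(0)
n = 4096
widths = rng.integers(30, 70, size=200)
signs = np.tile([1.0, -1.0], 100)
pattern = np.concatenate([np.full(w, s) for w, s in zip(widths, signs)])[:n]

xi = pattern - pattern.mean()                   # fluctuation xi = pi - <pi>
# Two-site correlation K(s) = <xi(r) xi(r+s)> estimated by spatial averaging
K = np.array([np.mean(xi[:n - s] * xi[s:]) for s in range(400)])
K /= K[0]                                       # normalise by the dispersion K(0)

# The first sign change of K(s) reflects the typical domain width.
first_zero = int(np.argmax(K < 0))
print("K(0..5) =", np.round(K[:6], 3), "; first zero crossing near s =", first_zero)
```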
Considering the potential nature of electric field (10) the correlation function for the electric field components can be expressed via \(g(\mathbf{s},\tau)\) as
\[\left\langle\epsilon_{\alpha}(\mathbf{r_{1}},\tau)\epsilon_{\beta}(\mathbf{r_{2}},\tau)\right\rangle=[\epsilon_{a}-\alpha_{z}\overline{\pi}(\tau)]^{2}\delta_{\alpha z}\delta_{\beta z}-\partial^{2}g(\mathbf{s},\tau)/\partial s_{\alpha}\partial s_{\beta} \tag{13}\]
with indices \(\alpha\) and \(\beta\) adopting values \(x\), \(y\) and \(z\). Cross-correlation functions \(\Psi_{\alpha\beta}(\mathbf{s},\tau)\) and the autocorrelation function for the potential \(g(\mathbf{s},\tau)\) can be related to each other when considering the Gauss equation according to Eq. (12),
\[\frac{\partial\epsilon_{x}(\mathbf{r_{2}},\tau)}{\partial X_{2}}+\frac{\partial\epsilon_{y}(\mathbf{r_{2}},\tau)}{\partial Y_{2}}+\frac{\partial\epsilon_{z}(\mathbf{r_{2}},\tau)}{\partial Z_{2}}=-\eta\frac{\partial\xi(\mathbf{r_{2}},\tau)}{\partial Z_{2}}\,. \tag{14}\]
By multiplying it with \(\epsilon_{x}(\mathbf{r_{1}},\tau)\) and consequent averaging and using Eq. (13) one obtains a relation
\[\frac{\partial}{\partial s_{x}}\Delta g(\mathbf{s},\tau){=}\eta\frac{\partial}{\partial s_{z}}\Psi_{xz}(\mathbf{s},\tau). \tag{15}\]
Similarly, by multiplying Eq. (14) with \(\epsilon_{y}(\mathbf{r_{1}},\tau)\) and consequent averaging and using Eq. (13) one gets a relation
\[\frac{\partial}{\partial s_{y}}\Delta g(\mathbf{s},\tau){=}\eta\,\frac{\partial}{ \partial s_{z}}\Psi_{yz}(\mathbf{s},\tau). \tag{16}\]
In the same manner, by multiplying Eq. (14) with \(\epsilon_{z}(\mathbf{r_{1}},\tau)\) and consequent averaging and using Eq. (13) one finds a relation
\[\frac{\partial}{\partial s_{z}}\Delta g(\mathbf{s},\tau){=}\eta\,\frac{ \partial}{\partial s_{z}}\Psi_{zz}(\mathbf{s},\tau). \tag{17}\]
The last equation means that \(\Delta g(\mathbf{s},\tau)\) and \(\eta\Psi_{zz}(\mathbf{s},\tau)\) can differ only by a constant. Since asymptotically at s\(\rightarrow\infty\) the correlations must vanish, this constant equals zero and thus
\[\Delta g(\mathbf{s},\tau){=}\eta\Psi_{zz}(\mathbf{s},\tau). \tag{18}\]
If in the above derivation \(\mathbf{r_{1}}\) and \(\mathbf{r_{2}}\) were interchanged the same formulas would result with \(\Psi_{\alpha\beta}(\mathbf{-s},\tau)\) meaning that \(\Psi_{\alpha\beta}(\mathbf{-s},\tau)=\Psi_{\alpha\beta}(\mathbf{s},\tau)\).
Similarly, a relation between \(g(\mathbf{s},\tau)\) and \(K(\mathbf{s},\tau)\) can be derived. To this end Eq. (14) taken at \((\mathbf{r_{1}},\tau)\) is multiplied with \(\xi(\mathbf{r_{2}},\tau)\) and averaged. This results in equation
\[\frac{\partial\Psi_{xz}(\mathbf{s},\tau)}{\partial s_{x}}+\frac{\partial\Psi_{yz}(\mathbf{s},\tau)}{\partial s_{y}}+\frac{\partial\Psi_{zz}(\mathbf{s},\tau)}{\partial s_{z}}=-\eta\,\frac{\partial K(\mathbf{s},\tau)}{\partial s_{z}}\,. \tag{19}\]
By differentiating this relation with respect to \(s_{z}\) and utilizing relations (15-17) one finds finally
\[\Delta^{2}g(\mathbf{s},\tau){=}-\eta^{2}\,\frac{\partial^{2}}{\partial s_{z}^ {2}}K(\mathbf{s},\tau), \tag{20}\]
or, using Eq. (18),
\[\Delta\Psi_{zz}(\mathbf{s},\tau){=}-\eta\,\frac{\partial^{2}}{\partial s_{z}^ {2}}K(\mathbf{s},\tau). \tag{21}\]
We proceed now with the derivation of equations of evolution for the mean polarization \(\overline{\pi}(\tau)\) and the correlation function \(K(\mathbf{s},\tau)\). The first one can be derived by statistical averaging of Eq. (11). By doing this, we substitute \(\pi(\mathbf{r},\tau)=\overline{\pi}(\tau)+\xi(\mathbf{r},\tau)\) into Eq. (11) and average the latter, taking into account that \(\langle\Delta\pi\rangle=0\) and that the averages of odd number of Gaussian variables vanish, if they have zero central moments [40], which is the case for \(\xi(\mathbf{r},\tau)\), so that \(\langle\xi^{3}(\mathbf{r},\tau)\rangle=0\). This results in equation
\[\frac{d\overline{\pi}}{d\tau}=\overline{\pi}\big{(}1-\alpha_{z}-3K(0,\tau)\big{)}-\overline{\pi}^{3}+\epsilon_{a} \tag{22}\]
similar to Refs. [34, 35].
To derive an equation for \(K(\mathbf{s},\tau)\) we consider the time derivative of the average product \(\langle\pi(\mathbf{r_{1}},\tau)\pi(\mathbf{r_{2}},\tau)\rangle\):
\[\frac{\partial}{\partial\tau}\langle\pi(\mathbf{r_{1}},\tau)\pi(\mathbf{r_{2} },\tau)\rangle= 2\overline{\pi}(\tau)\,\frac{d\overline{\pi}(\tau)}{d\tau}+ \frac{\partial K(\mathbf{s},\tau)}{\partial\tau}. \tag{23}\]
On the other hand, it equals
\[\langle\tfrac{d\pi(\mathbf{r}_{1},\tau)}{d\tau}\pi(\mathbf{r}_{2},\tau)+\pi( \mathbf{r}_{1},\tau)\tfrac{d\pi(\mathbf{r}_{2},\tau)}{d\tau}\rangle. \tag{24}\]
When substituting Eq. (11) into Eq. (24) and averaging, one takes into account that, for Gaussian random variables, \(\langle\xi^{3}(\mathbf{r_{1}},\tau)\xi(\mathbf{r_{2}},\tau)\rangle=3K(0,\tau)K(\mathbf{s},\tau)\) as well as \(\langle\xi(\mathbf{r_{1}},\tau)\xi^{3}(\mathbf{r_{2}},\tau)\rangle=3K(0,\tau)K(\mathbf{s},\tau)\), while \(\langle\xi(\mathbf{r_{1}},\tau)\xi^{2}(\mathbf{r_{2}},\tau)\rangle=0\). Then, by substituting Eq. (22) into Eq. (23) one obtains finally
\[\tfrac{dK(\mathbf{s},\tau)}{d\tau}=2\Delta K(\mathbf{s},\tau)+2K(\mathbf{s}, \tau)[1-3\bar{\pi}^{2}(\tau)-3K(0,\tau)]+2\Psi_{zz}(\mathbf{s},\tau). \tag{25}\]
Equations (21), (22) and (25) present together a closed system of equations which allow determination of functions \(\bar{\pi}(\tau)\), \(K(\mathbf{s},\tau)\) and \(\Psi_{zz}(\mathbf{s},\tau)\). The other correlation functions can be consequently derived from them.
The number of equations and unknown functions can be further reduced by introducing Fourier transforms:
\[K(\mathbf{s},\tau)=\tfrac{1}{(2\pi)^{3}}\int d^{3}q\,\exp(\,i\mathbf{q} \mathbf{s})\widetilde{K}(\mathbf{q},\tau), \tag{26}\]
\[\widetilde{K}(\mathbf{q},\tau)=\int d^{3}s\,\exp(-i\mathbf{q}\mathbf{s})K( \mathbf{s},\tau)\;\;. \tag{27}\]
In terms of these, Eq. (21) is converted to an algebraic relation
\[q^{2}\widetilde{\Psi}_{zz}(\mathbf{q},\tau){=}{-}\eta q_{z}^{2}\widetilde{K} (\mathbf{q},\tau). \tag{28}\]
By applying the Fourier transforms to Eq. (25) and implementing Eq. (28) the cross-correlation function \(\widetilde{\Psi}_{zz}\) can be excluded leading to the equation
\[\tfrac{d\widetilde{K}(\mathbf{q},\tau)}{d\tau}=2\left[1-3\bar{\pi}^{2}(\tau) -3K(0,\tau)-\left(q^{2}+\eta\tfrac{q_{z}^{2}}{q^{2}}\right)\right]\widetilde{ K}(\mathbf{q},\tau). \tag{29}\]
Eqs. (22) and (29) form together a closed system of integro-differential equations, since \(K(0,\tau)\) is defined by the integral (26). This system of equations will be solved analytically for particular cases and studied numerically for various field and temperature regimes in the next sections.
## 4 Correlation length
Assuming Gaussian properties of the involved random fields [40] allows analytic calculation of the correlation length, a characteristic, well studied experimentally [5-9,13,15,16] and numerically [21] over three decades. To this end we assume an initial isotropic Gaussian form of the correlation function immediately after quenching to the ferroelectric state,
\[K(\mathbf{s},0)=K_{0}\exp\left(-\frac{s^{2}}{2r_{c}^{2}}\right) \tag{30}\]
with an initial polarization dispersion \(D(\tau=0)=K(0,0)=K_{0}\) and the Gauss parameter \(r_{c}\) which will be later related to the initial value of the correlation length. Using Eq. (30) the Fourier transform of the correlation function at \(\tau=0\) can be evaluated from its definition, Eq. (27), as
\[\widetilde{K}({\bf q},0)=(2\pi)^{\frac{3}{2}}K_{0}r_{c}^{3}\exp\left(-\frac{r_{c}^{ 2}q^{2}}{2}\right). \tag{31}\]
To evaluate the time-dependent development of the Fourier transform from its initial value (31) we note that the first order differential equation (29) can be explicitly solved as
\[\widetilde{K}({\bf q},\tau)=\widetilde{K}({\bf q},0)\mu(\tau)\exp\left[-2\left(q^{2}+\eta\frac{q_{z}^{2}}{q^{2}}\right)\tau\right] \tag{32}\]
with an auxiliary function of time
\[\mu(\tau)=\exp\{2\tau-6\int_{0}^{\tau}d\tau^{\prime}\ \left[\overline{\pi}^{2}( \tau^{\prime})+K(0,\tau^{\prime})\right]\} \tag{33}\]
which will be evaluated later.
In terms of the Fourier transform \(\widetilde{K}({\bf q},\tau)\) the correlation length \(L(\tau)\) can be defined by a relation [34]
\[L^{-2}(\tau)=\int d^{3}q\ q^{2}\widetilde{K}({\bf q},\tau)/\!\int d^{3}q \widetilde{K}({\bf q},\tau). \tag{34}\]
Using the representation (32), the correlation length can be expressed as
\[L^{-2}(\tau)=\zeta_{2}(\tau)/\zeta_{0}(\tau) \tag{35}\]
with auxiliary functions
\[\zeta_{2}(\tau)=(2\pi)^{-3}\!\int d^{3}q\ q^{2}\widetilde{K}({\bf q},0)\exp\left[-2\left(q^{2}+\eta\frac{q_{z}^{2}}{q^{2}}\right)\tau\right] \tag{36}\]
and
\[\zeta_{0}(\tau)=(2\pi)^{-3}\!\int d^{3}q\ \widetilde{K}({\bf q},0)\exp\left[-2\left(q^{2}+\eta\frac{q_{z}^{2}}{q^{2}}\right)\tau\right]\!. \tag{37}\]
The integrals in Eqs. (36,37) can be evaluated in a spherical coordinate system in \({\bf q}\)-space as
\[\zeta_{0}(\tau)=\tfrac{1}{2}K_{0}r_{c}^{3}\sqrt{\pi/(2\eta\tau)}\ {\rm erf}\,(\sqrt{2\eta\tau})\ (r_{c}^{2}+4\tau)^{-3/2} \tag{38}\]
and
\[\zeta_{2}(\tau)=\tfrac{3}{2}K_{0}r_{c}^{3}\sqrt{\pi/(2\eta\tau)}\ {\rm erf}\,(\sqrt{2\eta\tau})\ (r_{c}^{2}+4\tau)^{-5/2}. \tag{39}\]
By substituting Eqs. (38,39) in Eq. (35) one obtains
\[L(\tau)=\sqrt{L^{2}(0)+4\tau/3} \tag{40}\]
with \(r_{c}^{2}=3L^{2}(0)\). This analytical form can be compared with experimental results [5-9,13,15,16] as is exemplarily shown in Fig. 2. Experimental data are better fitted by the dependence \(L(t){\sim}(t-t_{0})^{\nu}\) with \(\nu\cong 1/3\) that is discussed in detail below in section 8.
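The diffusion-like law (40) can be cross-checked against the defining relation (34) by direct numerical quadrature. The following minimal Python sketch (an added illustration, not part of the original work; it assumes NumPy and SciPy are available and uses the trial values \(L(0)=1\) and \(\eta=1\)) evaluates the integrals (36)-(37) in spherical coordinates and compares the result with Eq. (40); the \(\eta\)-dependent angular factor cancels in the ratio, so \(L(\tau)\) is independent of \(\eta\).

```python
import numpy as np
from scipy import integrate

def corr_length_numeric(tau, L0=1.0, eta=1.0):
    """L(tau) from Eqs. (34)-(37) by quadrature over q and t = cos(theta)."""
    rc2 = 3.0 * L0**2                          # r_c^2 = 3 L^2(0)
    def w(t, q):                               # integrand weight; q_z^2/q^2 = t^2
        return np.exp(-q**2 * (rc2 / 2.0 + 2.0 * tau) - 2.0 * eta * tau * t**2)
    zeta0, _ = integrate.dblquad(lambda t, q: q**2 * w(t, q), 0, np.inf,
                                 lambda _q: -1.0, lambda _q: 1.0)
    zeta2, _ = integrate.dblquad(lambda t, q: q**4 * w(t, q), 0, np.inf,
                                 lambda _q: -1.0, lambda _q: 1.0)
    return np.sqrt(zeta0 / zeta2)

for tau in (0.5, 2.0, 10.0):
    print(tau, corr_length_numeric(tau), np.sqrt(1.0 + 4.0 * tau / 3.0))  # Eq. (40)
```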
## 5 Phase diagram of asymptotic behavior of the quenched ferroelectric sample
Having established the time-dependent correlation length, a closed system of differential equations for the polarization dispersion (or variance) \(D(\tau)=K(0,\tau)\) and the mean polarization \(\bar{\pi}(\tau)\) can be derived. To this end we consider Eq. (25) at \({\bf s}=0\). First, we note that
\[\Delta K({\bf s}=0,\tau)=-\frac{K(0,\tau)}{L^{2}(\tau)}. \tag{41}\]
Considering further the term \(2\Psi_{zz}({\bf s}=0,\tau)\) we use Eq. (28) to obtain the relation
\[2\Psi_{zz}({\bf s}=0,\tau)=-2\eta K(0,\tau)\zeta_{1}(\tau)/\zeta_{0}(\tau) \tag{42}\]
with an auxiliary function
\[\zeta_{1}(\tau)=(2\pi)^{-3}\!\int d^{3}q\frac{q_{z}^{2}}{q^{2}}\ \widetilde{K}({\bf q},0)\exp\left[-2\left(q^{2}+\eta\frac{q_{z}^{2}}{q^{2}}\right)\tau\right] \tag{43}\]
which can be evaluated as
\[\zeta_{1}(\tau)=\tfrac{1}{2}K_{0}r_{c}^{3}\tfrac{1}{2\eta\tau}\left[\tfrac{ \sqrt{\pi}}{2\sqrt{2\eta\tau}}\operatorname{erf}\left(\sqrt{2\eta\tau}\right) -\exp(-2\eta\tau)\right](r_{c}^{2}+4\tau)^{-3/2}. \tag{44}\]
Then Eq. (25) at \({\bf s}=0\) can be transformed to
\[\tfrac{dD(\tau)}{d\tau}=[2-6\bar{\pi}^{2}(\tau)-6D(\tau)-\nu(\tau)]D(\tau) \tag{45}\]
Figure 2: (a) Correlation length data from Orihara et al. [6] for different in-plane directions perpendicular to polarization in TGS are shown by symbols. The solid blue line presents fitting with Eq. (40) and the black dashed line empirical fitting with the exponent \(1/3\). (b) Correlation length data for TGS from Golitsyna et al. [16] shown by symbols are fitted with Eq. (40) (solid line). In both cases the trial parameter \(L(0)=1\) was used.
with an auxiliary function
\[\nu(\tau)=\tfrac{2}{L^{2}(\tau)}+\tfrac{1}{2\tau}\left[1-\tfrac{2\sqrt{2\eta\tau}\,\exp(-2\eta\tau)}{\sqrt{\pi}\,\mathrm{erf}(\sqrt{2\eta\tau})}\right]. \tag{46}\]
A comparison with previous studies reveals that the last term in Eq. (46) uniquely represents the effect of depolarization fields, which was neglected before [34, 35, 36, 37]. Note that this term vanishes identically at \(\eta=0\) and remains finite in the limit \(\tau\to 0\).
By dividing Eq. (45) with \(D(\tau)\) and integrating, a useful relation for the auxiliary function \(\mu(\tau)\), Eq. (33), can be established:
\[\mu(\tau)=\tfrac{D(\tau)}{D(0)}\exp\Big{(}\int_{0}^{\tau}d\tau^{\prime}\,\nu( \tau^{\prime})\Big{)}. \tag{47}\]
This can be further evaluated by substituting \(\nu(\tau)\) from Eq. (46) in Eq. (47) to obtain
\[\mu(\tau)=\tfrac{D(\tau)}{D(0)}\Big{(}1+\tfrac{4\tau}{r_{c}^{2}}\Big{)}^{3/2} \tfrac{2}{\sqrt{\pi}}\tfrac{\sqrt{2\eta\tau}}{\mathrm{erf}(\sqrt{2\eta\tau})}. \tag{48}\]
Thus, the problem of finding the correlation function \(\widetilde{K}(\mathbf{q},\tau)\), Eq. (32), is reduced to the finding of the dispersion \(D(\tau)\).
The latter can be determined by solving Eq. (45) together with equation (22), rewritten as
\[\tfrac{d\overline{\pi}(\tau)}{d\tau}=\bar{\pi}(\tau)\big{(}1-\alpha_{\mathrm{ z}}-3D(\tau)\big{)}-\bar{\pi}^{3}(\tau)+\epsilon_{a}. \tag{49}\]
A closed system of differential equations (45) and (49) for \(D(\tau)\) and \(\bar{\pi}(\tau)\) will be studied in the following numerically. First, however, we will investigate equilibrium points of these equations under the asymptotic condition \(\tau\to\infty\) that allows the formulation of a phase diagram in terms of \(\overline{\pi}\) and \(D\).
The solution of the system of evolution equations (45, 49) for average polarization \(\bar{\pi}\) and its dispersion \(D\) with given initial conditions (\(\bar{\pi}(0)=\pi_{0}\), \(D(0)=D_{0}\)) provides information about the development of domain structures and the final stages of the ordering process. We note that the parameter \(\alpha_{\mathrm{z}}=2\chi_{f}\rho_{\mathrm{z}}=2\rho_{\mathrm{z}}C/(T_{c}-T)\), where \(C\) is the Curie constant of a material, is both temperature and geometry dependent via Eq. (8) and can vary in a wide range. On the one hand, it can be strongly decreased to \(\alpha_{\mathrm{z}}\ll 1\) by reducing the dielectric layer thickness \(h_{d}\); on the other hand, for any fixed geometry, it can be strongly enhanced to \(\alpha_{\mathrm{z}}\gg 1\) by approaching the transition temperature \(T_{c}\), limited, however, by the criterion of the applicability of the LGD theory (see Appendix A). The other variable parameter in equations (45,49), \(\epsilon_{a}\), is the externally induced mean field in the ferroelectric normalized to the very large field \(E_{0}\), which is comparable with the thermodynamic coercive field. Thus, the interplay between the single-domain and multi-domain states is expected at fields \(\epsilon_{a}{<}1\).
Consideration of ordering processes at large times suggests that the right-hand side of equations of the system (45,49) vanishes according to \(\partial\bar{\pi}/\partial\tau\to 0,\partial D/\partial\tau\to 0\). This reduces the system of evolution equations to an algebraic system which can be qualitatively analyzed using the phase diagram concept [41]:
\[\begin{cases}(1-\alpha_{z}-3D)\bar{\pi}-\bar{\pi}^{3}+\epsilon_{a}=0\\ (1-3\bar{\pi}^{2}-3D)D=0.\end{cases} \tag{50}\]
There are six equilibrium points in the ferroelectric phase (\(\alpha_{z}>0,T<T_{c}\)) in the phase diagram on the \((\bar{\pi},D)\)-plane (Fig. 3) with coordinates:
\[\begin{cases}&\bar{\pi}_{1}=\frac{2}{\sqrt{3}}\sqrt{1-\alpha_{z}}\cdot\cos\left(\frac{1}{3}\arccos\frac{3\sqrt{3}\,\epsilon_{a}}{2(1-\alpha_{z})^{3/2}}+\frac{2\pi n}{3}\right)\\ \text{(I):}&\bar{\pi}=\bar{\pi}_{1},\ \ n=n_{1}=2+3(k-1),k\in Z;\quad D=0\\ \text{(II):}&\bar{\pi}=\bar{\pi}_{1},\ \ n=n_{2}=3(k-1),k\in Z;\quad D=0\\ \text{(III):}&\bar{\pi}=\bar{\pi}_{1},\ \ n=n_{3}=1+3(k-1),k\in Z;\quad D=0\end{cases} \tag{51a}\]
\[\begin{cases}&\bar{\pi}_{2}=\frac{2}{\sqrt{6}}\sqrt{\alpha_{z}}\cdot\cos\left(\frac{1}{3}\arccos\left(-\frac{3\sqrt{6}\,\epsilon_{a}}{2\alpha_{z}^{3/2}}\right)+\frac{2\pi n}{3}\right)\\ \text{(IV):}&\bar{\pi}=\bar{\pi}_{2}\text{ with }n=n_{1};\quad D=1/3-\bar{\pi}_{2}^{2}\\ \text{(V):}&\bar{\pi}=\bar{\pi}_{2}\text{ with }n=n_{2};\quad D=1/3-\bar{\pi}_{2}^{2}\\ \text{(VI):}&\bar{\pi}=\bar{\pi}_{2}\text{ with }n=n_{3};\quad D=1/3-\bar{\pi}_{2}^{2}\end{cases} \tag{51b}\]
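For a concrete picture, the equilibrium coordinates can also be obtained directly from the algebraic system (50); the short Python sketch below is an added illustration (not from the original text), evaluated for the same trial parameters \(\alpha_{z}=1/2\) and \(\epsilon_{a}=0.002\) as in Fig. 3.

```python
import numpy as np

alpha_z, eps_a = 0.5, 0.002            # parameters of Fig. 3

# Single-domain branch (D = 0):  pi^3 - (1 - alpha_z) pi - eps_a = 0
single = np.sort(np.roots([1.0, 0.0, -(1.0 - alpha_z), -eps_a]).real)
print("D = 0 branch:", single)          # three roots: the two single-domain states and the unpoled point near 0

# Multi-domain branch (D = 1/3 - pi^2):  2 pi^3 - alpha_z pi + eps_a = 0
multi = np.sort(np.roots([2.0, 0.0, -alpha_z, eps_a]).real)
for p in multi:                          # root near 0 -> multi-domain point IV; the other two -> saddles V, VI
    print("pi =", round(p, 4), " D =", round(1.0 / 3.0 - p**2, 4))
```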
The equilibrium point I is an unstable node corresponding to the unpoled state. The system falls into the vicinity of this point after the quenching from the paraelectric phase into the ferroelectric one. This is the starting point of domain growth and evolution to one of the thermodynamically (meta-)stable states - the nodes II, III, and IV. Points II and III with \(D=0\) correspond to the formation of single-domain states with polarization vectors directed along and opposite to the applied field, respectively. Point IV has a sizable dispersion value and corresponds to the formation of a thermodynamically (meta-)stable multi-domain state of quasi-periodic type [13] with 180\({}^{\circ}\) domain walls.
The evolution of ferroelectric domain structures in time can proceed non-monotonically with a formation of short-lived intermediate states corresponding to the saddle points V and VI. These points are distinguished by a pronounced asymmetry of the volume fractions of domains of the opposite direction of polarization (depending on the applied field). Phase trajectory of the domain structure evolution can pass very close to the saddle point, but never reach it, and continue its evolution after a short-lived slowing down to the (meta-)stable states (II, III, or IV). Points V and VI are intersection nodes of the separatrices dividing the phase diagram into three sectors characterizing the "attraction areas" of (meta-)stable single-domain (_1_ and _2_) and multi-domain (_3_) states (Fig. 3).
It is seen from Eqs. (51) that the positions of equilibrium points in the phase diagram significantly depend on the values of \(\alpha_{z}\) (controlled by the geometry and temperature) and the field \(\epsilon_{a}\). In the absence of the external electric field the phase diagram is symmetrical (Fig. 4). Point I is located exactly in the origin, point IV does not have a displacement on the polarization axis, and points II and III as well as V and VI are located symmetrically to each other. However, the presence of even a very weak external electric field remarkably shifts the position of equilibrium points in the phase diagram. With an increase in electric field, equilibrium points continue to move and can disappear at some critical field values. These values can be obtained from the analysis of coordinates of equilibrium points (51) or directly from the system (50).
For a single-domain state with \(D=0\), the electric field can be expressed from the first equation (50) as \(\epsilon_{a}=\bar{\pi}^{3}-(1-\alpha_{z})\bar{\pi}\), where two extremes in symmetric points exist, \(\bar{\pi}_{c}^{(1,2)}=\pm\left(\frac{1-\alpha_{z}}{3}\right)^{1/2}\). As a result, a minimal \(\epsilon_{s}^{min}\) and a maximal \(\epsilon_{s}^{max}\) critical electric fields can be obtained for positive and negative order parameters \(\bar{\pi}\), limiting their existence to a field region around \(\epsilon_{a}=0\),
\[\epsilon_{s}^{min}=-\frac{2}{3\sqrt{3}}\left(1-\alpha_{z}\right)^{\frac{3}{2} }<\epsilon_{a}<\frac{2}{3\sqrt{3}}(1-\alpha_{z})^{\frac{3}{2}}=\epsilon_{s}^{ max}. \tag{52}\]
Figure 3: Schematic view of the phase diagram of the system in variables \((\bar{\pi},D)\) at exemplary values of \(\alpha_{z}=1/2\) and \(\epsilon_{a}=0.002\) with indication of equilibrium points (Roman numerals), separatrices (dashed lines), and sectors of attraction of thermodynamically (meta-)stable states: single-domain [1 and 2] and multi-domain [3]. Arrows show the direction of entry and exit of separatrices in saddle points V and VI. Solid lines show the variety of possible phase trajectories and ways of the domain structure evolution to the states of thermodynamic equilibrium or metastability.
The existence of stable and metastable states II and III is possible in the said region only if the parameter \(\alpha_{z}<1\), as is apparent from Eqs. (51a) and (52). Particularly, in the case \(\alpha_{z}=0\) critical fields coincide with the thermodynamical coercive fields [39]. In contrast, for \(\alpha_{z}>1\), only the point II persists as a possible single-domain state for all positive fields, while the point III persists as the only possible single-domain state for all negative fields.
For a multi-domain state with \(D\neq 0\), the dispersion can be expressed from the second equation (50) as \(D=1/3-\bar{\pi}^{2}\) resulting in the expression for the electric field from the first equation (50), \(\epsilon_{a}=\alpha_{z}\bar{\pi}-2\bar{\pi}^{3}\). The analysis of the latter cubic equation reveals an existence of the multi-domain state IV, as well as of the two saddle points V and VI, in the field region
\[\epsilon_{m}^{min}=-\tfrac{2}{3\sqrt{6}}\alpha_{z}^{3/2}\ <\epsilon_{a}<\tfrac{2}{3 \sqrt{6}}\alpha_{z}^{3/2}=\epsilon_{m}^{max}, \tag{53}\]
which is valid for any positive \(\alpha_{z}\). Beyond this field region only single-domain states may exist.
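As a quick numerical illustration (an added sketch, not part of the original text), the two critical fields (52) and (53) can be evaluated for the trial value \(\alpha_{z}=1/2\) used in Figs. 3 and 4; since \(\epsilon_{m}^{max}<\epsilon_{s}^{max}\) in this case, the multi-domain point IV and the saddle V merge before the points I and III do as the field grows.

```python
import numpy as np

def eps_s_max(alpha_z):                # Eq. (52): single-domain states II and III exist for |eps_a| below this
    return 2.0 / (3.0 * np.sqrt(3.0)) * (1.0 - alpha_z) ** 1.5

def eps_m_max(alpha_z):                # Eq. (53): multi-domain state IV and saddles V, VI exist below this
    return 2.0 / (3.0 * np.sqrt(6.0)) * alpha_z ** 1.5

a_z = 0.5
print("eps_s_max =", eps_s_max(a_z))   # ~0.136
print("eps_m_max =", eps_m_max(a_z))   # ~0.096
```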
Considering the smooth increase of the electric field \(\epsilon_{a}\) the motion of equilibrium points in the phase diagram can be observed (arrows in Fig. 4). When the value of the electric field becomes critical, \(\epsilon_{a}=\epsilon_{s}^{max}\), points I and III converge at the point A, similarly to previous studies [42]. In the same way, points IV and V converge at a point B at the field value \(\epsilon_{a}=\epsilon_{m}^{max}\). The succession of these events depends on the value of \(\alpha_{z}\). When an electric field exceeds both values \(\epsilon_{s}^{max}\) and \(\epsilon_{m}^{max}\), the sectors \(2\) and 3 in the phase diagram disappear (Fig. 3). Then only the point II continues to move to the right with the increasing field (Fig. 4). Thus, a high value of the electric field inhibits
Figure 4: Transformation of the phase diagram of the system in variables \((\bar{\pi},D)\) (Fig. 3) upon the smooth increase of the external electric field \(\epsilon_{a}\) at an exemplary value \(\alpha_{z}=1/2\) with an indication of equilibrium points (Roman numerals). Red dashed lines indicate the separatrices dividing the symmetrical phase diagram in the absence of electric field, \(\epsilon_{a}=0\). Arrows show the displacement of equilibrium points with increasing electric field. Point A indicates the place of convergence of the equilibrium points I and III which occurs at the field value \(\epsilon_{a}=\epsilon_{s}^{max}\), whereas point B corresponds to the convergence of the equilibrium points IV and V which occurs at \(\epsilon_{a}=\epsilon_{m}^{max}\).
the intermediate stages (points V and VI), and the phase diagram consists of one sector \(1\) (Fig. 3), so that only one single-domain state, at point C in Fig. 4, directed along the electric field exists.
A phase diagram in terms of the electric field and the parameter \(\alpha_{z}\), depending in turn on the material parameters, structure geometry, and temperature, summarizes all possible states in Fig. 5. The region of the possible existence of (meta-)stable single-domain states II and III as well as of the unstable point I is delineated by red solid lines. In the region delineated by green solid lines the single-domain state II or the multi-domain state IV can be realized for positive electric fields, while the single-domain state III or the multi-domain state IV can be realized for negative electric fields. In the area of overlapping of the two mentioned regions, the existence of both (meta-)stable single-domain states II and III, and the multi-domain state IV is possible, while the unstable equilibrium point I and the saddle points V and VI are available too. Outside the said regions only the single-domain state II can exist for positive and the single-domain state III for negative electric fields. We note, however, that introducing even a slight inhomogeneity of the ferroelectric is known to lead to the stabilization of the multi-domain state [43].
It is known that the physical reason of the multi-domain state formation is the reduction of the energy accumulated in depolarization fields [44, 45]. The stabilizing effect from the spatial variation of the polarization can be observed in the fluctuation contribution to the energy (1) which amounts to \(\Delta\Phi\sim-|A|P_{s}^{2}D^{2}(\tau\rightarrow\infty)V_{f}\) as is shown in more detail in Appendix C.
## 6 Temporal behavior of the ordering process
The nonlinearity of the obtained evolution equations (45,49) does not allow to describe all stages of the ordering process analytically. Therefore, to trace in detail the evolution of the domain structure towards the thermodynamic equilibrium, a numerical analysis of the system of differential equations (45,49) with certain initial conditions \((\bar{\pi}_{0},D_{0})\) was carried out in MATLAB
Figure 5: Phase diagram of possible states on the parameter plane \((\epsilon,\alpha_{z})\).
package. The ferroelectric sample in this study is considered to be quenched without any external electric field. Thus, the average polarization of the crystal is zero in the initial stage \(\bar{\pi}_{0}=0\) after quenching and changes in time during relaxation when the external electric field is turned on. However, small polarization inhomogeneities may emerge in the nonequilibrium system during the quench and become a certain size by the time relaxation begins, characterized by some initial values of dispersion \(D_{0}\neq 0\) and the correlation length \(L(0)\). The process of domain nucleation in a quenched sample is a random one and can hardly be controlled experimentally. But the values of quenching temperature and external electric field still can be used to manage the domain growth and direct it to the formation of a certain type of domain structure. The other parameters used in the calculation were \(\eta=1\) and \(\alpha_{z}=1/2\) chosen from the region I-VI in the phase diagram of Fig. 5 which exhibits the richest variety of evolution scenarios.
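For reference, the numerical integration described above can be reproduced with any standard ODE solver; the following Python sketch is an added illustration using SciPy rather than the MATLAB package actually employed, with the initial values \(D_{0}\) and \(L(0)\) taken as assumed trial numbers. It integrates Eqs. (45), (46) and (49) for a single value of the applied field.

```python
import numpy as np
from scipy.special import erf
from scipy.integrate import solve_ivp

eta, alpha_z, eps_a = 1.0, 0.5, 0.03      # eps_a: one of the field values used in Fig. 6
L0, D0, pi0 = 1.0, 0.05, 0.0              # assumed initial disorder; pi(0) = 0 after quenching

def nu(tau):                               # Eq. (46); the second term describes depolarization fields
    L2 = L0**2 + 4.0 * tau / 3.0           # Eq. (40)
    x = np.sqrt(2.0 * eta * tau)
    if x < 1e-4:                           # small-argument limit of the bracket is (4/3)*eta*tau
        depol = 2.0 * eta / 3.0
    else:
        depol = (1.0 - 2.0 * x * np.exp(-x * x) / (np.sqrt(np.pi) * erf(x))) / (2.0 * tau)
    return 2.0 / L2 + depol

def rhs(tau, y):
    pi_bar, D = y
    dpi = pi_bar * (1.0 - alpha_z - 3.0 * D) - pi_bar**3 + eps_a        # Eq. (49)
    dD = (2.0 - 6.0 * pi_bar**2 - 6.0 * D - nu(tau)) * D                # Eq. (45)
    return [dpi, dD]

sol = solve_ivp(rhs, (1e-8, 50.0), [pi0, D0], rtol=1e-8, atol=1e-10)
print("asymptotic (pi_bar, D):", sol.y[0, -1], sol.y[1, -1])            # approaches one of the points II, III or IV
```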
### Influence of external electric field on the domain structure formation
The influence of an external electric field on the ordering process in the ferroelectric was analyzed for a constant finite isothermal exposure of the sample. It was shown that varying the magnitude of the electric field can change not only the phase trajectory of the evolution of the system but also its final result. In the absence of the external electric field, the domains of both polarization directions gradually arise and monotonously grow (curve 1 in Fig. 6) with the formation of a stable multi-domain structure corresponding to the point IV in the phase diagram of Fig. 3. In the case of a TGS crystal which has \(180^{\circ}\) domain walls, this state is a one-dimensional quasi-periodic domain structure of irregular stripes which clearly exhibits a principal mode by the Fourier analysis of experimental data [13]. When a positive electric field is imposed on the sample some deviation of the average polarization in the direction of this field is observed (curve 2 in Fig. 6) so that, in the end, a stable multi-domain structure is formed in the system. The curves 3 and 4 in Fig. 6 are of particular interest because they run in the area of separation of single-domain (\(\it{1}\) in Fig. 3) and multi-domain (\(\it{3}\) in Fig. 3) ordering regions. Compared to the curves 1, 2, 5, and 6 in Fig. 6 which pass far from the saddle point V (Fig. 3), the phase trajectories 3 and 4 pass along the separatrix on opposite sides of it and fall into the vicinity of the saddle point. This is a region where the electric field can drastically change both the further evolution of the system and the type of the final domain structure. This can be interpreted as hitting a bifurcation point where the slightest variation of the magnitude of the electric field can change the trend of the further ordering of the system from a multi-domain state (curve 3 in Fig. 6) to a single-domain one (curve 4 in Fig. 6).
The vicinity of the saddle point V (Fig. 3) can be described as a region of kinetic slowing down where the relaxation process is retarded for a while (Fig. 7). The dynamics of domain growth and rearrangement stall here, and the further scenario of the system evolution has a probabilistic character. It is important to note that the domain structure, which had already formed by the time the system enters the intermediate phase, is characterized by a pronounced asymmetry in the volume fractions of domains of different signs. This means that the width of domains directed along the external electric field is much larger than the width of the reversely directed domains. However, the further evolution of the system to the state of thermodynamic equilibrium may still proceed either as the further growth of the prevailing domains (curve 4 in Fig. 7) or their decrease with a tendency to establish a balance between the domains of opposite signs (curve 3 in Fig. 7).
The duration of kinetic slowing down of the system near the saddle point V (Fig. 3) can be estimated from the length of the plateau or step (curves 3,4) on the time evolution curves for the average polarization \(\bar{\pi}\) (Fig. 7\((a)\)) and its dispersion \(D\) (Fig. 7\((b)\)). The appearance of an intermediate stage during the evolution of the system significantly slows down the overall process of ordering and delays the onset of thermodynamic equilibrium. Evolution plots (Fig. 7) show that phase trajectories passing far from the saddle point come to the stable multi-domain (curves 1, 2) and single-domain (curves 5, 6) states much faster than curves 3 and 4, respectively.
### Influence of polarization-induced electric fields on the domain structure formation
The effect of depolarization fields, induced by the stochastic nonuniform polarization development, on the ordering kinetics can be studied by formally setting the parameter \(\eta=0\) in Eq. (45). Then the second term in the auxiliary function \(\nu(\tau)\), Eq. (46), describing depolarization fields, disappears and the results can be compared with those in Figs. 6 and 7 obtained with \(\eta=1\). It turns out that depolarization fields crucially affect the time development of domain structures, though the final states in Figs. 3 and 4 are independent of \(\eta\). A comparison of phase trajectories calculated at the same parameters and values of the applied field, with and without the account of depolarization fields, demonstrates the tendency to the multi-domain states at low applied fields in the first case (curves 2 and 3 in Figs. 6-7) and the tendency to the single-domain state for the same field
Figure 7: (a) Evolution curves for average polarization \(\bar{\pi}\) and (b) its dispersion \(D\) calculated for the same parameter values as in Fig. 6 against dimensionless time \(\tau\).
values in the second case (curves 1 and 2 in Fig. 8(a)). In the absence of the stochastic depolarization fields, the system develops towards the multi-domain state only at a much lower field of \(\epsilon_{a}=0.015\) (curve 5 in Fig. 8).
Thus, the depolarization fields significantly contribute to the system response to an external electric field. Trying to compensate for it, they prevent a rapid orientation of domains along the external electric field, retaining the tendency to form a multi-domain structure. Considering the phase diagram in Fig. 5, the switching to the multi-domain state occurs well below the boundaries of the region I-VI if the stochastic fields are neglected. With an account of them, the switching to the multi-domain state occurs virtually immediately when crossing the boundaries towards the region I-VI.
Figure 8: (a) Phase trajectories of the system without the effect of depolarization fields (\(\eta=0\)). Curves 1–4 correspond to the values of the external electric field in the sample \(\epsilon_{a}=\{0.03;\) 0.05; 0.0501; 0.07\(\}\), respectively (cf. Fig. 7), while the curve 5 is for the field \(\epsilon_{a}=0.015\). The insert (b) shows the upscaled curves 1-4. (c) Evolution curves for average polarization \(\bar{\pi}\) and (d) polarization dispersion \(D\) in the absence of depolarization fields were calculated for the same parameters vs. dimensionless time \(\tau\). The inset (e) shows the upscaled curves 1–4.
## 7 Longitudinal and transverse polarization correlations
Having established numerically the behavior of the dispersion \(D(\tau)\) the correlation function \(K(\mathbf{s},\tau)\) can be derived from Eqs. (26), (32) and (48) as
\[K(\mathbf{s},\tau)=\mu(\tau)\,\frac{1}{(2\pi)^{3}}\int d^{3}q\;\widetilde{K}(\mathbf{q},0)\exp\Big{(}i\mathbf{q}\mathbf{s}-2\Big{(}q^{2}+\eta\frac{q_{z}^{2}}{q^{2}}\Big{)}\tau\Big{)}. \tag{54}\]
Now, by substituting \(\widetilde{K}(\mathbf{q},0)\) from Eq. (31) in Eq. (54) one finds a general expression
\[K(\mathbf{s},\tau)=\frac{\mu(\tau)K_{0}r_{c}^{3}}{(2\pi)^{3/2}}\int d^{3}q\,\exp\Big{(}i\mathbf{q}\mathbf{s}-q^{2}\left(\frac{r_{c}^{2}}{2}+2\tau\right)-\frac{q_{z}^{2}}{q^{2}}2\,\eta\tau\Big{)}. \tag{55}\]
We note that, in experiments [5-9,16], not the correlation function but the normalized correlation coefficient is actually measured, which can be expressed as \(C(\mathbf{s},\tau)=K(\mathbf{s},\tau)/D(\tau)\). Below this quantity is evaluated separately for the cases of longitudinal correlations along the polarization direction, \(\mathbf{s}=(0,0,s_{z})\), and transverse correlations in the plane perpendicular to the polarization, \(\mathbf{s}=(\mathbf{s}_{\perp},0)\), with \(\mathbf{s}_{\perp}=(s_{x},s_{y})\), typically studied experimentally.
We start with the longitudinal correlations reducing the correlation coefficient to
\[C_{\parallel}(s_{z},\tau)=\frac{\mu(\tau)K_{0}r_{c}^{3}}{D(\tau)(2\pi)^{1/2}} \int_{0}^{\infty}dq\,q^{2}\,\exp\Biggl{(}-q^{2}\left(\frac{r_{c}^{2}}{2}+2\tau \right)\Biggr{)}\]
\[\times\int_{0}^{\pi}d\theta\,\sin\theta\,\exp(\mathrm{i}qs_{z}\cos\,\theta\, -2\,\eta\tau\,\cos^{2}\,\theta). \tag{56}\]
By integration and using the explicit formula for \(\mu(\tau)\), Eq. (48), this function can be obtained in a closed form
\[C_{\parallel}(s_{z},\tau)=\frac{\sqrt{2\eta\tau}}{\mathrm{erf}(\sqrt{2\eta\tau})}\left\{\frac{2\eta\tau\;\mathrm{erf}\!\left(\sqrt{2\eta\tau+s_{z}^{2}/(6L^{2}(\tau))}\right)}{\left[2\eta\tau+s_{z}^{2}/(6L^{2}(\tau))\right]^{3/2}}+\frac{2}{\sqrt{\pi}}\,\frac{s_{z}^{2}}{6L^{2}(\tau)}\,\frac{\exp\!\left[-2\eta\tau-s_{z}^{2}/(6L^{2}(\tau))\right]}{2\eta\tau+s_{z}^{2}/(6L^{2}(\tau))}\right\} \tag{57}\]
with the correlation length \(L(\tau)\) defined by Eq. (40).
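A small Python sketch (an added illustration, not part of the original text; it assumes NumPy/SciPy and a trial value \(L(0)=1\)) evaluating the closed form (57) confirms the normalization \(C_{\parallel}(0,\tau)=1\) and the reduction to the Gaussian (59) for \(\eta\to 0\).

```python
import numpy as np
from scipy.special import erf

def C_parallel(s_z, tau, L0=1.0, eta=1.0):
    """Longitudinal correlation coefficient, Eq. (57)."""
    L2 = L0**2 + 4.0 * tau / 3.0                      # L^2(tau), Eq. (40)
    w = 2.0 * eta * tau + s_z**2 / (6.0 * L2)
    pref = np.sqrt(2.0 * eta * tau) / erf(np.sqrt(2.0 * eta * tau))
    term1 = 2.0 * eta * tau * erf(np.sqrt(w)) / w**1.5
    term2 = (2.0 / np.sqrt(np.pi)) * (s_z**2 / (6.0 * L2)) * np.exp(-w) / w
    return pref * (term1 + term2)

tau = 1.0
print(C_parallel(0.0, tau))                           # 1.0 at zero separation
print(C_parallel(2.0, tau, eta=1e-8),                 # eta -> 0 ...
      np.exp(-4.0 / (6.0 * (1.0 + 4.0 * tau / 3.0)))) # ... matches the Gaussian limit, Eq. (59)
```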
We continue now with the transverse correlations by reducing Eq. (55) to
\[C_{\perp}(\mathbf{s}_{\perp},\tau)=\frac{1}{\sqrt{\pi}}\frac{\sqrt{2\eta\tau}\,\exp(-2\,\eta\tau)}{\mathrm{erf}(\sqrt{2\eta\tau})}\int_{0}^{1}\!dz\,\frac{1}{\sqrt{1-z}}\exp[-z(u-2\,\eta\tau)]\]
\[\times\left[\left(1-2uz\right)I_{0}(uz)+2uz\,I_{1}(uz)\right] \tag{58}\]
with the modified Bessel functions \(I_{0}(uz)\) and \(I_{1}(uz)\), where a combined variable \(u={s_{\perp}}^{2}/(12L^{2}(\tau))\) was introduced for convenience. Interestingly, in the limit of the absence of field-mediated correlations, which is formally realized at \(\eta\to 0\), the correlation coefficient can be calculated in the closed form for arbitrary \(\mathbf{s}\) and equals simply
\[C(\mathbf{s},\tau)=\exp\left(\frac{-s^{2}}{6L^{2}(\tau)}\right) \tag{59}\]
retaining thus the initial Gaussian form, Eq. (30), with the time-dependent width.
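The transverse coefficient (58) is a one-dimensional integral and is easily evaluated numerically; the sketch below is an added illustration (not from the original text) that uses the substitution \(z=1-v^{2}\) to remove the integrable endpoint singularity. The normalization \(C_{\perp}(0,\tau)=1\) and the \(\eta\to 0\) limit (59) serve as consistency checks.

```python
import numpy as np
from scipy.special import erf, i0, i1
from scipy.integrate import quad

def C_perp(s_perp, tau, L0=1.0, eta=1.0):
    """Transverse correlation coefficient, Eq. (58), by quadrature with z = 1 - v^2."""
    L2 = L0**2 + 4.0 * tau / 3.0
    u = s_perp**2 / (12.0 * L2)
    def f(v):
        z = 1.0 - v * v
        a = u * z
        return 2.0 * np.exp(-z * (u - 2.0 * eta * tau)) * ((1.0 - 2.0 * a) * i0(a) + 2.0 * a * i1(a))
    val, _ = quad(f, 0.0, 1.0)
    pref = np.sqrt(2.0 * eta * tau) * np.exp(-2.0 * eta * tau) / (np.sqrt(np.pi) * erf(np.sqrt(2.0 * eta * tau)))
    return pref * val

for s in (0.0, 2.0, 5.0, 10.0):
    print(s, C_perp(s, tau=1.0))           # cf. Fig. 9(b): a sign change signals non-Gaussian behavior
```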
Spatial dependences of correlation coefficients according to the analytical formulas (57) and (58) are presented for different times \(\tau\) in Fig. 9. Note the Gaussian spatial dependence of both correlation coefficients for \(\eta\ll 1\). Furthermore, for the longitudinal correlation coefficient, Eq. (57), an approximate Gaussian behavior is observed for all times at arbitrary \(\eta\) in Fig. 9(a). In
Figure 10: Spatial dependences of the “transverse” correlation coefficient of polarization in TGS for different times from the work by Tomita et al. [5](© (1989) The Physical Society of Japan) are shown by black dots and lines. Colored lines exhibit the transverse correlation coefficient \(C_{\perp}(\mathbf{s}_{\perp},\tau)\), Eq. (58) evaluated for different dimensionless times \(\tau=1,2\) and \(5\) as indicated on the plot.
Figure 9: Spatial dependence of the longitudinal (a) and the transverse (b) correlation coefficients of polarization for a substantial (\(\eta=1\), black lines) or infinitesimal (\(\eta=0.01\), red lines) normalized ferroelectric susceptibility, corresponding to the neglection or the account of the depolarization field effect, evaluated exemplarily for two dimensionless times, \(\tau=1\) and \(\tau=10\).
contrast, the transverse coefficient, Eq. (58), changes its sign, as is seen in Fig. 9(b), and thus exhibits clearly non-Gaussian behavior.
A comparison of the spatial dependence of the transverse correlation coefficient \(C_{\perp}(\mathbf{s}_{\perp},\tau)\) with experimental data by Tomita et al. [5] is shown in Fig. 10 for different times. Note the distinct definitions of the "transverse" and "longitudinal" correlation coefficients in the experimental works [5-9], which refer to the orientations with respect to the domain boundaries at the surface of TGS crystals.
## 8 Discussion and conclusions
In this work we advanced a self-consistent theory of stochastic development of polarization domain structures in a uniaxial ferroelectric/nonferroelastic starting from an unpoled state obtained by quenching from a high-temperature paraelectric phase. The model is based on the LGD approach and accounts for the effect of an applied electric field on the system evolution as well as for the feedback via depolarization electric fields induced by the emerging domain structure. The polarization and electric field components are treated as Gaussian random variables since the structure emerges from an initial unpoled disordered state. The effect of thermodynamic fluctuations is assumed to be negligible which is justified as soon as the spatial scale of initial disorder exceeds that of thermodynamic fluctuations and the Levanyuk criterion of the applicability of the LGD theory is satisfied, as is shown in Appendix A. In the case of TGS this means that the model applies at temperatures \(|T-T_{c}|>0.02\;K\).
A closed system of integro-differential equations (22,29) was formulated for the time-dependent two-site correlation function \(K(\mathbf{s},\tau)\) for polarization and the time-dependent mean polarization \(\bar{\pi}(\tau)\). The other correlation functions can be derived from the said functions. Considering the polarization dispersion \(D(\tau)=K(\mathbf{s}\mathbf{=}0,\tau)\) reduces the problem to a closed system of differential equations (45,49) for \(D(\tau)\) and \(\bar{\pi}(\tau)\). The study of the field-dependent equilibrium points of the latter system allows a construction of a phase diagram (Fig. 3) of possible stable and metastable states of the system in coordinates \((\bar{\pi},D)\). Considering the effect of the applied field \(\epsilon_{a}\) on the above phase diagram (Fig. 4) reveals areas of possible stable and metastable states on the plane \((\alpha_{\mathrm{z}},\epsilon_{a})\) (Fig. 5) where the combined characteristic \(\alpha_{\mathrm{z}}\) depends on the system geometry, material parameters, and temperature. Particularly, the field region of the existence of multi-domain states is reduced, when \(\alpha_{\mathrm{z}}\ll 1\) together with the dielectric layer thickness \(h_{d}\ll h_{f}\), and enhanced, when \(\alpha_{\mathrm{z}}\gg 1\) due to approaching the phase transition temperature \(T_{c}\).
A numerical solution of the equations (45,49) exhibits the time development of the dispersion \(D(\tau)\) and the mean polarization \(\bar{\pi}(\tau)\) and reveals a sharp bifurcation behavior from multi-domain towards single-domain states when increasing the applied field (Fig. 6). This is accompanied by
non-monotonic time dependences of both \(D(\tau)\) and \(\bar{\pi}(\tau)\) as well as the slowing down development in the bifurcation regime (Fig. 7) corresponding to the passing by saddle points in the phase diagram of Fig. 3. Comparing the time trajectories for \(D(\tau)\) and \(\bar{\pi}(\tau)\) in Fig. 8 in the presence and in the absence of the stochastic depolarization fields reveals the crucial role of the latter for the domain kinetics. Depolarization fields cannot change the equilibrium points in the phase diagram (and thus the available final states of the evolution) but affect the domain formation kinetics and the choice of a final state at a certain applied field value. Thus, the account of the stochastic depolarization fields supports the existence of multi-domain states in a substantially wider applied field region.
Assuming an initial Gaussian shape of the correlation function \(K(\mathbf{s},\tau=0)\), Eq. (30), allows an exact calculation of the correlation length \(L(\tau)\) defined by Eq. (34). It exhibits a simple diffusion-like behavior, Eq. (40). Though it roughly captures the time development of the correlation length \(L(t)\sim(t-t_{0})^{\nu}\) observed in experiments, as is shown in Fig. 2, the experimental data are better described by an exponent \(\nu\) of about 0.2-0.3 [5-10,16]. Furthermore, the observed linear time behavior for temperatures close to \(T_{c}\) [3,16] is missing in the theory, as is the asymptotic dependence \(L(t)\sim[\ln(t/t_{0})]^{4}\) observed at longer times [7-9]. The latter behavior was explained theoretically [21] as an effect of the frozen random fields due to the presence of defects, which are not accounted for in our model.
Though the correlation function \(K(\mathbf{s},\tau)\) can only be evaluated numerically from the integro-differential equations (22,29), the correlation coefficient \(C(\mathbf{s},\tau)=K(\mathbf{s},\tau)/D(\tau)\), which is actually measured in experiments [5-9,16], can be found analytically. For the longitudinal correlations along the polarization direction a closed formula (57) was obtained, while for the correlations in the plane perpendicular to the polarization direction (coinciding with the surface of the crystal) an integral expression (58) was derived. The weakness of the current theory consists in the assumption of isotropic correlations in the said plane, while in the available experiments on TGS [5-9,16] the correlations are strongly anisotropic, like the crystal itself. For that reason, the theory is still unable to explain the observed oscillations in the correlation coefficient perpendicular to the domain boundaries in the plane surface [5-9]. However, the correlations along the domain boundaries in the plane are roughly captured by Eq. (58) as is shown in Fig. 10. Nevertheless, the initial linear spatial dependence \(C(\mathbf{s},\tau)\cong 1-s/L(\tau)\) typically observed in experiment [7-9,16] is not explained by the theory, which assumes the Gaussian dependence. Also, the hypothesis of scaling dependence of \(C(\mathbf{s},\tau)\), supported at \(\eta=0\) by Eq. (59), is not generally confirmed by the theory for arbitrary \(\eta\), since the correlation coefficients in both directions, Eqs. (57) and (58), contain both the scaling form of the type \(s/L(\tau)\) and an explicit time dependence. Furthermore, we note the decoupling of the monotonic time dependence of the correlation length
\(L(t)\) and a generally non-monotonic time dependence of the dispersion \(D(\tau)\) which, however, does not contain a spatial scale.
In conclusion, for a better description of the available experiments an anisotropic LGD thermodynamic potential (1) is required, as well as a non-Gaussian initial shape of the correlation function and probably a consideration of non-Gaussian random variables.
## Appendix A Limitations on the application of the model
In the model presented in Section 2, it is assumed that quenched disorder prevails over thermodynamic fluctuations. Here we identify the parameter range where this assumption is valid. Spatial variations of polarization due to the quenched disorder and due to the thermodynamic fluctuations are both short-ranged, with different characteristic lengths. The quenched disorder is characterized initially by the correlation function (30) and can be represented in physical units as
\[\langle\Delta P(\mathbf{r}+\mathbf{s})\Delta P(\mathbf{s})\rangle=D(0)P_{s}^{ 2}\exp\Big{(}-\frac{s^{2}}{6L^{2}(0)}\Big{)}, \tag{10}\]
where the spatial scale of polarization fluctuations is, in principle, time-dependent, Eq. (40), and temperature independent. The thermodynamic fluctuations can be characterized by the correlation function, derived within the Ornstein-Zernike approach [46],
\[\langle\Delta P(\mathbf{r}+\mathbf{s})\Delta P(\mathbf{s})\rangle=\frac{k_{B}T}{4\pi Gs}\exp\Big{(}-\frac{s}{\lambda(T)}\Big{)}, \tag{11}\]
with the Boltzmann constant \(k_{B}\) and the above introduced characteristic length \(\lambda(T)=\sqrt{G/|A|}\), which is, in contrast to \(L(\tau)\), independent of time but temperature dependent. Its temperature dependence can be explicitly expressed as \(\lambda(T)=\ \lambda_{0}\sqrt{T_{c}/(T_{c}-T)}\) with the minimum correlation length reached at \(T=0\), \(\lambda_{0}=\sqrt{G/\alpha_{0}T_{c}}\).
If the correlation length of the initial quenched disorder, \(L(0)\), is much shorter than \(\lambda_{0}\), then the quenched fluctuations, Eq. (10), at the relevant distance \(s\simeq\lambda(T)\) will be exponentially small and thus negligible. Therefore, we will consider an opposite limit of large-scale quenched disorder with \(L(0)\gg\lambda_{0}\) which seems to be realized experimentally [3-9,14,15]. Prevailing of the quenched disorder over the thermodynamic fluctuations is then given by comparison of Eq. (10) and (11) taken at \(s\simeq\lambda(T)\) resulting in inequality
\[D(0)P_{s}^{2}\exp\Big{(}-\frac{\lambda_{0}^{2}}{6L^{2}(0)}\frac{T_{c}}{T_{c}- T}\Big{)}\gg\frac{k_{B}T}{4\pi G\lambda_{0}}\sqrt{\frac{T_{c}-T}{T_{c}}}. \tag{12}\]
This condition can hardly be satisfied at temperatures very close to \(T_{c}\); therefore the validity of the model is in any case limited to the temperature range
\[\frac{T_{c}-T}{T_{c}}>\frac{\lambda_{0}^{2}}{6L^{2}(0)}. \tag{13}\]
Considering this constraint, the inequality can be further reduced to
\[\frac{T_{c}-T}{T_{c}}\gg\left(\frac{k_{B}}{8\pi\Delta C_{\nu}\lambda_{0}^{3}D(0)}\right)^{2} \tag{10}\]
with the jump of the isochoric heat capacity at the second order phase transition \(\Delta C_{\nu}=\alpha_{0}^{2}T_{c}/(2B)\). With the maximum conceivable dimensionless dispersion of \(D(0)\simeq 1\) the condition (10) is nothing else but the Levanyuk criterion of applicability of the Landau theory [46], which fails in the close vicinity of \(T_{c}\). The critical region near \(T_{c}\) where the LGD approach fails in TGS was estimated in Refs. [47-49] as \(|T_{c}-T|<\)0.02 K. Combining Eqs. (11) and (10) the criterion for applicability of the model can be stated as
\[\frac{T_{c}-T}{T_{c}}\gg\max\left\{\frac{\lambda_{0}^{2}}{6L^{2}(0)},\left(\frac{k_{B}}{8\pi\Delta C_{\nu}\lambda_{0}^{3}D(0)}\right)^{2}\right\}. \tag{11}\]
## Appendix B Examples of correlation functions for stripe domain structures
Correlation functions in section 3 are defined by statistical averaging of random variables, indicated with the angular brackets \(\langle\ldots\rangle\). In experiment, however, they are typically obtained by averaging over the observed sample area [4-10,16]. In the following, we will use this empirical definition of the correlations which can be, in particular, applied to regular domain structures. We consider a couple of examples to learn what spatial dependences of the correlation functions and what values of polarization dispersion may be expected.
We begin with a one-dimensional stripe domain structure described by a harmonic function,
\[\pi(x)=\cos(\pi x/2h) \tag{12}\]
The period of this structure along the \(x\)-axis is \(4h\), exhibiting positive and negative domains of the width \(2h\). Let us assume a finite sample of the length \(L=4hN\), \(N\gg 1\), with periodic boundary conditions. For this structure, obviously, \(\bar{\pi}=0\) and \(\pi(x)=\xi(x)\). The correlation function can be introduced as
\[K(s)=\langle\xi(x+s)\,\xi(x)\rangle=\frac{1}{L}\int_{-L/2}^{L/2}\mathrm{dx}\,\cos\left(\frac{\pi(x+s)}{2h}\right)\cos\left(\frac{\pi x}{2h}\right)=\frac{1}{2}\cos\left(\frac{\pi s}{2h}\right),\]
where the contribution of the rapidly oscillating term \(\cos[\pi(2x+s)/2h]\) vanishes upon integration over the integer number of periods. For the considered structure \(D=K(0)=1/2\).
Let us consider now a stripe structure with alternative properties, a nonvanishing mean polarization \(\overline{\pi}\) and domain walls of zero thickness. It is defined as
\[\pi(x)=\begin{cases}\phantom{-}1,\ -a/2<x<a/2\\ -1,-(a+b)/2<x<-a/2\\ -1,a/2<x<(a+b)/2\end{cases}, \tag{10}\]
where \(a\) and \(b\) indicate the width of the positive and negative domains, respectively. The average value of polarization is given by averaging over one period
\[\overline{\pi}=\tfrac{1}{a+b}\int_{-(a+b)/2}^{(a+b)/2}\mathrm{dx}\ \pi(x)=\tfrac{a-b}{a+b}\,. \tag{11}\]
The variance, or dispersion, of \(\pi(x)\) is easy to calculate without integration, noticing that in this structure \(\langle\pi^{2}\rangle=1\) and, thus,
\[D=1-\overline{\pi}^{2}=\tfrac{4ab}{(a+b)^{2}}. \tag{12}\]
It can be seen that it turns to zero in a single-domain state, when \(a=0\) or \(b=0\), and reaches a maximum of \(D=1\) when \(a=b\).
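These expressions for the mean and the dispersion are easy to verify numerically; the short Python sketch below is an added illustration (not from the original text) that samples one period of the stripe structure on a fine grid, with arbitrary trial widths \(a=3\) and \(b=1\).

```python
import numpy as np

a, b = 3.0, 1.0                                      # widths of the positive and negative domains
x = np.linspace(-(a + b) / 2, (a + b) / 2, 400001)   # one period
pi = np.where(np.abs(x) < a / 2, 1.0, -1.0)

print(pi.mean(), (a - b) / (a + b))                  # mean polarization
print(pi.var(), 4 * a * b / (a + b) ** 2)            # dispersion D
```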
This structure can be infinitely continued periodically with a period (a+b) as follows:
\[\pi(x)=\sum\nolimits_{n=-\infty}^{\infty}\left\{-\vartheta\left[x+\tfrac{a+b}{2}+n(a+b)\right]+2\vartheta\left[x+\tfrac{a}{2}+n(a+b)\right]-2\vartheta\left[x-\tfrac{a}{2}+n(a+b)\right]+\vartheta\left[x-\tfrac{a+b}{2}+n(a+b)\right]\right\}. \tag{13}\]
Direct calculation of the correlation function for such a piecewise-defined function turns out to be cumbersome, but can be conveniently performed using the Fourier series representation,
\[\pi(x)=\sum_{n=-\infty}^{\infty}c_{n}\,e^{ik_{n}x}\quad\text{with}\quad k_{n} =2\pi n/(a+b), \tag{14}\]
where
\[c_{n}=\tfrac{1}{a+b}\int_{-(a+b)/2}^{(a+b)/2}\mathrm{dx}\ \pi(x)\cos(k_{n}x). \tag{15}\]
because the polarization is an even function. Substituting the expression (10) into (15), we get, after some calculations, a formula
\[c_{n}=-\delta_{n,0}+\tfrac{2}{\pi n}\sin\left(\pi n\tfrac{a}{a+b}\right). \tag{16}\]
From this we can see, particularly, two cases of single-domain states, \(\pi(x)=1\) at \(b=0\) with \(c_{n}=\delta_{n,0}\) and \(\pi(x)=-1\) at \(a=0\) with \(c_{n}=-\delta_{n,0}\). In general, \(\ \overline{\pi}=c_{0}=(a-b)/(a+b)\).
Now, the correlation function reads
\[K(s)=\tfrac{1}{a+b}\int_{-(a+b)/2}^{(a+b)/2}\mathrm{dx}\ \xi(x+s)\,\xi(x) =\tfrac{1}{a+b}\int_{-(a+b)/2}^{(a+b)/2}\mathrm{dx}\ \pi(x+s)\pi(x)-\overline{\pi}^{2}. \tag{17}\]
Using the Fourier series (14), Eq. (17) is reduced to
\[K(s)=\sum\nolimits_{n=-\infty}^{\infty}|c_{n}|^{2}\,e^{ik_{n}s}\ -\ \overline{\pi}^{2}. \tag{18}\]
which is transformed to the series
\[K(s)=\frac{4}{\pi^{2}}\sum\nolimits_{n=1}^{\infty}\frac{1}{n^{2}}\Big{[}\cos\Big{(} 2\pi n\frac{s}{a+b}\Big{)}-\frac{1}{2}\cos\Big{(}2\pi n\frac{a-s}{a+b}\Big{)}- \frac{1}{2}\cos\Big{(}2\pi n\frac{a+s}{a+b}\Big{)}\Big{]}.\] (B13)
Using the formula 5.4.2.7 from the tables [50] the summation in Eq. (B13) can be performed to give
\[K(s)=4\,\Big{[}B_{2}\,\Big{(}\frac{s}{a+b}\Big{)}-\frac{1}{2}\,B_{2}\,\Big{(} \frac{a-s}{a+b}\Big{)}-\frac{1}{2}\,B_{2}\,\Big{(}\frac{a+s}{a+b}\Big{)}\Big{]},\] (B14)
where \(B_{2}(x)=(1/6)\,-\,x+x^{2}\) is the Bernoulli polynomial. When using it, one should take into account that the argument in Eq. (B14) may vary only within the region \(0\leq x\leq 1\). To satisfy this requirement for \(0\leq s\leq a+b\), the argument in the last term of (B13) can be shifted by \(a+b\), resulting finally in the expression
\[K(s)=4\,\Big{[}B_{2}\,\Big{(}\frac{s}{a+b}\Big{)}-\frac{1}{2}\,B_{2}\,\Big{(} \frac{|a-s|}{a+b}\Big{)}-\frac{1}{2}\,B_{2}\,\Big{(}\frac{|b-s|}{a+b}\Big{)} \Big{]}.\] (B15)
Taking for example \(b\leq a\) the correlation function results as
\[K(s)=\frac{4}{(a+b)^{2}}\begin{cases}\quad ab-s(a+b),0\leq s<b<a\\ \quad\quad\quad\quad\quad\quad\quad\quad\quad-b^{2},b\leq s<a\\ \quad s(a+b)-ab-a^{2}-b^{2},b<a\leq s\end{cases}.\] (B16)
Thus, the correlation function for an unbalanced infinite stripe domain structure is also periodic with the period \((a+b)\) and varies between \(K(0)=4ab/(a+b)^{2}\) and \(K(b)=-4b^{2}/(a+b)^{2}\), changing sign at \(s=ab/(a+b)\) and \(s=(a+b)-ab/(a+b)\). In realistic random conditions, the correlation function may oscillate but decays over a finite correlation length.
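The closed form (B16) can be checked against a direct spatial average over the periodic structure; the Python sketch below is an added illustration with the same trial widths \(a=3\), \(b=1\) as above.

```python
import numpy as np

def K_direct(s, a, b, n=200000):
    """<xi(x+s) xi(x)> for the periodic stripe structure, by direct averaging on a grid."""
    T = a + b
    x = (np.arange(n) + 0.5) * T / n
    pi = np.where(np.minimum(x % T, T - x % T) < a / 2, 1.0, -1.0)   # a-domains centred at multiples of T
    xi = pi - (a - b) / T
    return np.mean(xi * np.roll(xi, int(round(s / T * n))))

def K_closed(s, a, b):
    """Eq. (B16), assuming b <= a and 0 <= s <= a + b."""
    T = a + b
    if s < b:
        return 4 * (a * b - s * T) / T**2
    if s < a:
        return -4 * b**2 / T**2
    return 4 * (s * T - a * b - a**2 - b**2) / T**2

a, b = 3.0, 1.0
for s in (0.0, 0.5, 1.5, 3.2):
    print(s, K_direct(s, a, b), K_closed(s, a, b))
```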
## Appendix C Energy contribution due to polarization fluctuations
Let us evaluate the contribution of spatial polarization fluctuations to the energy using the functional (1) in the dimensionless form. By averaging over the ferroelectric volume, one finds
\[\Delta\Phi=P_{s}^{2}|A|\int_{V_{f}}\Big{[}-\frac{1}{2}D(\tau)+\frac{1}{4}\big{(}6\bar{\pi}^{2}(\tau)+3D(\tau)\big{)}D(\tau)+\frac{1}{2L^{2}(\tau)}D(\tau)-\Psi_{\rm zz}(\tau)\Big{]}\,dV.\] (C1)
Considering that asymptotically \(L(\tau\to\infty\,)\to\infty\) and \(\Psi_{\rm zz}(\tau\to\infty\,)\to 0\) and using Eqs. (50), one obtains the fluctuation contribution to the final state energy
\[\Delta\Phi=-\frac{3}{4}\,P_{s}^{2}|A|D^{2}(\tau\to\infty\,)V_{f}\] (C2)
exhibiting the tendency in favor of the multi-domain state in comparison with the single-domain state with \(D=0\).
## Acknowledgements
We thank Prof. Dragan Damjanovic for useful discussion of the results. O.Y.M. is grateful for the financial support by DAAD during the visit at the TU Darmstadt. This work was supported by the Deutsche Forschungsgemeinschaft (German Research Society, DFG) via the grant No. 405631895 (GE-1171/8-1). |
2307.01951 | A Neural Collapse Perspective on Feature Evolution in Graph Neural
Networks | Graph neural networks (GNNs) have become increasingly popular for
classification tasks on graph-structured data. Yet, the interplay between graph
topology and feature evolution in GNNs is not well understood. In this paper,
we focus on node-wise classification, illustrated with community detection on
stochastic block model graphs, and explore the feature evolution through the
lens of the "Neural Collapse" (NC) phenomenon. When training instance-wise deep
classifiers (e.g. for image classification) beyond the zero training error
point, NC demonstrates a reduction in the deepest features' within-class
variability and an increased alignment of their class means to certain
symmetric structures. We start with an empirical study that shows that a
decrease in within-class variability is also prevalent in the node-wise
classification setting, however, not to the extent observed in the
instance-wise case. Then, we theoretically study this distinction.
Specifically, we show that even an "optimistic" mathematical model requires
that the graphs obey a strict structural condition in order to possess a
minimizer with exact collapse. Interestingly, this condition is viable also for
heterophilic graphs and relates to recent empirical studies on settings with
improved GNNs' generalization. Furthermore, by studying the gradient dynamics
of the theoretical model, we provide reasoning for the partial collapse
observed empirically. Finally, we present a study on the evolution of within-
and between-class feature variability across layers of a well-trained GNN and
contrast the behavior with spectral methods. | Vignesh Kothapalli, Tom Tirer, Joan Bruna | 2023-07-04T23:03:21Z | http://arxiv.org/abs/2307.01951v2 | # A Neural Collapse Perspective on Feature Evolution in Graph Neural Networks
###### Abstract
Graph neural networks (GNNs) have become increasingly popular for classification tasks on graph-structured data. Yet, the interplay between graph topology and feature evolution in GNNs is not well understood. In this paper, we focus on node-wise classification, illustrated with community detection on stochastic block model graphs, and explore the feature evolution through the lens of the "Neural Collapse" (NC) phenomenon. When training instance-wise deep classifiers (e.g. for image classification) beyond the zero training error point, NC demonstrates a reduction in the deepest features' within-class variability and an increased alignment of their class means to certain symmetric structures. We start with an empirical study that shows that a decrease in within-class variability is also prevalent in the node-wise classification setting, however, not to the extent observed in the instance-wise case. Then, we theoretically study this distinction. Specifically, we show that even an "optimistic" mathematical model requires that the graphs obey a strict structural condition in order to possess a minimizer with exact collapse. Interestingly, this condition is viable also for heterophilic graphs and relates to recent empirical studies on settings with improved GNNs' generalization. Furthermore, by studying the gradient dynamics of the theoretical model, we provide reasoning for the partial collapse observed empirically. Finally, we present a study on the evolution of within- and between-class feature variability across layers of a well-trained GNN and contrast the behavior with spectral methods.
## 1 Introduction
Graph neural networks [39] employ message-passing mechanisms to capture intricate topological relationships in data and have become de-facto standard architectures to handle data with non-Euclidean geometric structure [7; 8; 13; 14; 23; 46; 49; 52; 56]. However, the influence of topological information on feature learning in GNNs is yet to be fully understood [28; 50; 55; 58].
In this paper, we study the feature evolution in GNNs in a node-wise supervised classification setting. In order to gain insights into the role of topology, we focus on the controlled environment of the prominent stochastic block model (SBM) [1; 2; 3; 19; 33]. The SBM provides an effective framework to control the level of sparsity, homophily, and heterophily in the random graphs and facilitates analysis of GNNs which rely solely on structural information [5; 10; 22; 28; 30; 37]. While inductive supervised learning on graphs is a relatively more difficult problem than transductive learning, it aligns with practical scenarios where nodes need to be classified in unseen graphs [14], and is also amenable to training GNNs that are deeper than conventional shallow Graph Convolution Network (GCN) models [9; 10; 22; 25; 35; 48; 56].
The empirical and theoretical study of GNNs' feature evolution in this paper employs a "Neural Collapse" perspective [36]. When training Deep Neural Networks (DNNs) for classification, it
is common to continue optimizing the networks' parameters beyond the zero training error point [6; 18; 27], a stage that was referred to in [36] as the "terminal phase of training" (TPT). Papyan, Han, and Donoho [15; 36] have empirically shown that a phenomenon, dubbed Neural Collapse (NC), occurs during the TPT of plain DNNs2 on standard instance-wise classification datasets. NC encompasses several simultaneous properties: (NC1) The within-class variability of the deepest features decreases (i.e., outputs of the penultimate layer for training samples from the same class tend to their mean); (NC2) After subtracting their global mean, the mean features of different classes become closer to a geometrical structure known as a simplex equiangular tight frame; (NC3) The last layer's weights exhibit alignment with the classes' mean features. A consequence of NC1-3 is that the classifier's decision rule becomes similar to the nearest class center in the feature space. We refer to [24] for a review on this topic.
Footnote 2: Throughout the paper, by (plain) DNNs we mean networks that output an instance-wise prediction (e.g., image class rather than pixel class), while by GNNs we mean networks that output node-wise predictions.
The common approach to theoretically study the NC phenomenon is the "Unconstrained Features Model" (UFM) [21; 31]. The core idea behind this "optimistic" mathematical model is that the deepest features are considered to be freely optimizable. This idea has facilitated a recent surge of theoretical works in an effort to understand the global optimality conditions and gradient dynamics of these features and the last layer's weights in DNNs [11; 15; 26; 41; 42; 43; 47; 51; 57; 59]. In our work, we extend NC analysis to settings where relational information in data is paramount, and creates a tension with the 'freeness' associated with the UFM model. In essence, we highlight the key differences when analyzing NC in GNNs by identifying structural conditions on the graphs under which the global minimizers of the training objective exhibit full NC1. Interestingly, the structural conditions that we rigorously establish in this paper are aligned with the neighborhood conditions on heterophilic graphs that have been empirically hypothesized to facilitate learning by Ma et al. [28].
Our main contributions can be summarized as follows:
* We conduct an extensive empirical study that shows that a decrease in within-class variability is prevalent also in the deepest features of GNNs trained for node classification on SBMs, though not to the extent observed in the instance-wise setting.
* We propose and analyze a graph-based UFM to understand the role of node neighborhood patterns and their community labels on NC dynamics. We prove that even this optimistic model requires a strict structural condition on the graphs in order to possess a minimizer with exact variability collapse. Then, we show that satisfying this condition is a rare event, which theoretically justifies the distinction between observations for GNNs and plain DNNs.
* Nevertheless, by studying the gradient dynamics of the graph-based UFM, we provide theoretical reasoning for the partial collapse during GNNs training.
* Finally, we study the evolution of features across the layers of well-trained GNNs and contrast the decrease in NC1 metrics along depth with a NC1 decrease along power iterations in spectral clustering methods.
## 2 Preliminaries and Problem Setup
We focus on supervised learning on graphs for _inductive_ community detection. Formally, we consider a collection of \(K\) undirected graphs \(\{\mathcal{G}_{k}=(\mathcal{V}_{k},\mathcal{E}_{k})\}_{k=1}^{K}\), each with \(N\) nodes, \(C\) non-overlapping balanced communities and a node labelling ground truth function \(y_{k}:\mathcal{V}_{k}\rightarrow\{\mathbf{e}_{1},\ldots,\mathbf{e}_{C}\}\). Here, \(\forall c\in[C],\mathbf{e}_{c}\in\mathbb{R}^{C}\) indicates the standard basis vector, where we use the notation \([C]=\{1,\cdots,C\}\). The goal is to learn a parameterized GNN model \(\psi_{\Theta}(.)\) which minimizes the empirical risk given by:
\[\min_{\Theta}\frac{1}{K}\sum_{k=1}^{K}\mathcal{L}(\psi_{\Theta}(\mathcal{G}_{ k}),y_{k})+\frac{\lambda}{2}\left\|\Theta\right\|_{F}^{2}, \tag{1}\]
where \(\left\|\cdot\right\|_{F}\) represents the Frobenius norm, \(\mathcal{L}\) is the loss function that is invariant to label permutations [10], and \(\lambda>0\) is the penalty parameter. We choose \(\mathcal{L}\) based on the mean squared error (MSE) as:
\[\mathcal{L}(\psi_{\Theta}(\mathcal{G}_{k}),y_{k})=\min_{\pi\in S_{C}}\frac{1}{ 2N}\left\|\psi_{\Theta}\left(\mathcal{G}_{k}\right)-\pi\left(y_{k}\left( \mathcal{V}_{k}\right)\right)\right\|_{2}^{2}, \tag{2}\]
where \(\pi\) belongs to the permutation group over \(C\) elements. Using the MSE loss for training DNN classifiers has become increasingly popular recently. For example, Hui and Belkin [20] have
performed an extensive empirical study that shows that training with MSE loss yields performance that is similar to (and sometimes even better than) training with CE loss. This choice also facilitates theoretical analyses [15; 42; 57].
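To make the training objective concrete, the following is a minimal NumPy sketch of the permutation-invariant MSE loss in (2); the array layout (outputs as a \(C\times N\) matrix, integer node labels) and the brute-force search over \(S_{C}\) are illustrative assumptions of this sketch rather than the implementation used in our experiments.

```python
import itertools
import numpy as np

def permutation_invariant_mse(pred, labels, C):
    """MSE loss of Eq. (2), minimized over all permutations of the C labels.

    pred:   (C, N) array of network outputs (one column per node).
    labels: (N,) array of integer community labels in {0, ..., C-1}.
    """
    N = pred.shape[1]
    best = np.inf
    for perm in itertools.permutations(range(C)):
        Y = np.zeros((C, N))
        Y[np.array(perm)[labels], np.arange(N)] = 1.0   # permuted one-hot targets
        best = min(best, 0.5 / N * np.sum((pred - Y) ** 2))
    return best
```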
### Data model
We employ the Symmetric Stochastic Block Model (SSBM) to generate graphs \(\{\mathcal{G}_{k}=(\mathcal{V}_{k},\mathcal{E}_{k})\}_{k=1}^{K}\). Stochastic block models (originated in [19]) are classical random graph models that have been extensively studied in statistics, physics, and computer science. In the SSBM model that is considered in this paper, each graph \(\mathcal{G}_{k}\) is associated with an adjacency matrix \(\mathbf{A}_{k}\in\mathbb{R}^{N\times N}\), degree matrix \(\mathbf{D}_{k}=\mathrm{diag}(\mathbf{A}_{k}\mathbf{1})\in\mathbb{R}^{N\times N}\), and a random node features matrix \(\mathbf{X}_{k}\in\mathbb{R}^{d\times N}\), with entries sampled from a normal distribution. Formally, if \(\mathbf{P}\in\mathbb{R}^{C\times C}\) represents a symmetric matrix with diagonal entries \(p\) and off-diagonal entries \(q\), a random graph \(\mathcal{G}_{k}\) is considered to be drawn from the distribution SSBM\((N,C,p,q)\) if an edge between vertices \(v_{i},v_{j}\) is formed with probability \((\mathbf{P})_{y_{k}(v_{i}),y_{k}(v_{j})}\). We choose the regime of exact recovery [1; 2; 33; 3] in sparse graphs where \(p=\frac{a\ln(N)}{N},q=\frac{b\ln(N)}{N}\) for parameters \(a,b\geq 0\) such that \(|\sqrt{a}-\sqrt{b}|>\sqrt{C}\). The need for exact recovery (information-theoretically) stems from the requirement that \(\psi_{\Theta}\) should be able to reach TPT.
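For illustration, a minimal sketch of sampling a graph from SSBM\((N,C,p,q)\) with \(p=a\ln(N)/N\) and \(q=b\ln(N)/N\) is given below; the helper name and the balanced, class-by-class labelling are assumptions made only for this sketch.

```python
import numpy as np

def sample_ssbm(N, C, a, b, rng=None):
    """Sample SSBM(N, C, p, q) with p = a*ln(N)/N and q = b*ln(N)/N.

    Returns the adjacency matrix A (N x N) and the balanced node labels y (N,).
    """
    rng = np.random.default_rng() if rng is None else rng
    p, q = a * np.log(N) / N, b * np.log(N) / N
    y = np.repeat(np.arange(C), N // C)                  # balanced, class-by-class
    prob = np.where(y[:, None] == y[None, :], p, q)      # intra- vs inter-class
    upper = np.triu(rng.random((N, N)) < prob, k=1)      # undirected, no self-loops
    A = (upper | upper.T).astype(float)
    return A, y
```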
### Graph neural networks
Inspired by the widely studied model of higher-order GNNs by Morris et al. [32], we design \(\psi_{\Theta}\) based on a family of graph operators \(\mathcal{F}=\{\mathbf{I},\widehat{\mathbf{A}}_{k}\},\forall k\in[K]\), and denote it as \(\psi_{\Theta}^{\mathcal{F}}\). Formally, for a GNN \(\psi_{\Theta}^{\mathcal{F}}\) with \(L\) layers, the node features \(\mathbf{H}_{k}^{(l)}\in\mathbb{R}^{d_{l}\times N}\) at layer \(l\in[L]\) is given by:
\[\mathbf{X}_{k}^{(l)} =\mathbf{W}_{1}^{(l)}\mathbf{H}_{k}^{(l-1)}+\mathbf{W}_{2}^{(l)} \mathbf{H}_{k}^{(l-1)}\widehat{\mathbf{A}}_{k}, \tag{3}\] \[\mathbf{H}_{k}^{(l)} =\sigma(\mathbf{X}_{k}^{(l)}),\]
where \(\mathbf{H}_{k}^{(0)}=\mathbf{X}_{k}\), and \(\sigma(\cdot)\) represents a point-wise activation function such as ReLU. \(\mathbf{W}_{1}^{(l)},\mathbf{W}_{2}^{(l)}\in\mathbb{R}^{d_{l}\times d_{l-1}}\) are the weight matrices and \(\widehat{\mathbf{A}}_{k}=\mathbf{A}_{k}\mathbf{D}_{k}^{-1}\) is the normalized adjacency matrix, also known as the random-walk matrix. We also consider a simpler family without the identity operator \(\mathcal{F}^{\prime}=\{\widehat{\mathbf{A}}_{k}\},\forall k\in[K]\) and analyze the GNN \(\psi_{\Theta}^{\mathcal{F}^{\prime}}\) with only graph convolution functionality. Formally, the node features \(\mathbf{H}_{k}^{(l)}\in\mathbb{R}^{d_{l}\times N}\) for \(\psi_{\Theta}^{\mathcal{F}^{\prime}}\) is given by:
\[\mathbf{X}_{k}^{(l)} =\mathbf{W}_{2}^{(l)}\mathbf{H}_{k}^{(l-1)}\widehat{\mathbf{A}}_ {k}, \tag{4}\] \[\mathbf{H}_{k}^{(l)} =\sigma(\mathbf{X}_{k}^{(l)}).\]
Here, the subscript for the weight matrix \(\mathbf{W}_{2}^{(l)}\) is retained to highlight that it acts on \(\mathbf{H}_{k}^{(l-1)}\widehat{\mathbf{A}}_{k}\). Finally, we employ the training strategy of Chen et al. [10] and apply instance-normalization [44] on \(\sigma(\mathbf{X}_{k}^{(l)}),\forall l\in\{1,\cdots,L-1\}\) to prevent training instability.
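The layer updates (3) and (4) can be summarized by the following schematic NumPy forward pass; normalizing each feature dimension over the node axis is one possible reading of the instance normalization in [44] and is stated here only as an illustrative assumption.

```python
import numpy as np

def gnn_layer(H, A_hat, W1, W2, eps=1e-5):
    """One layer of psi^F as in Eq. (3): X = W1 H + W2 H A_hat, then ReLU and
    instance normalization over the node axis.  Setting W1 = 0 recovers the
    graph-convolution-only update of psi^F' in Eq. (4)."""
    X = W1 @ H + W2 @ (H @ A_hat)
    H_new = np.maximum(X, 0.0)                      # ReLU
    mu = H_new.mean(axis=1, keepdims=True)          # per-dimension statistics
    sd = H_new.std(axis=1, keepdims=True)
    return (H_new - mu) / (sd + eps)                # instance normalization
```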
### Tracking neural collapse in GNNs
In our setup, reaching zero training error (TPT) implies that the network perfectly classifies all the nodes (up to label permutations) in all the training graphs. To this end, we leverage the NC metrics introduced in [36; 42; 43; 59] and extend them to GNNs in an inductive setting. To begin with, let us consider a single graph \(\mathcal{G}_{k}=(\mathcal{V}_{k},\mathcal{E}_{k}),k\in[K]\) with a normalized adjacency matrix \(\widehat{\mathbf{A}}_{k}\). Additionally, we denote \(\mathbf{H}_{k}^{(l)}\in\mathbb{R}^{d_{l}\times N}\) as the output of layer \(l\in[L-1]\), irrespective of the GNN design. Now, by dropping the subscript and superscript for notational convenience, we define the class means and the global mean of \(\mathbf{H}\) as follows:
\[\overline{\mathbf{h}}_{c}:=\frac{1}{n}\sum_{i=1}^{n}\mathbf{h}_{c,i}\ \, \forall c\in[C],\qquad\quad\overline{\mathbf{h}}_{G}:=\frac{1}{Cn}\sum_{c=1}^{C} \sum_{i=1}^{n}\mathbf{h}_{c,i}, \tag{5}\]
where \(n=N/C\) represents the number of nodes in each of the \(C\) balanced communities, and \(\mathbf{h}_{c,i}\) is the feature vector (a column in \(\mathbf{H}\)) associated with \(v_{c,i}\in\mathcal{V}\), i.e., the \(i^{th}\) node belonging to class
\(c\in[C]\). Next, let \(\mathcal{N}(v_{c,i})\) denote all the neighbors of \(v_{c,i}\) and let \(\mathcal{N}_{c^{\prime}}(v_{c,i})\) denote only the neighbors of \(v_{c,i}\) that belong to class \(c^{\prime}\in[C]\). We define the class means and global mean of \(\mathbf{H}\widehat{\mathbf{A}}\), which is unique to the GNN setting as follows:
\[\overline{\mathbf{h}}_{c}^{\mathcal{N}}:=\frac{1}{n}\sum_{i=1}^{n}\mathbf{h}_ {c,i}^{\mathcal{N}}\,\forall c\in[C],\qquad\quad\overline{\mathbf{h}}_{G}^{ \mathcal{N}}:=\frac{1}{Cn}\sum_{c=1}^{C}\sum_{i=1}^{n}\mathbf{h}_{c,i}^{ \mathcal{N}}, \tag{6}\]
where \(\mathbf{h}_{c,i}^{\mathcal{N}}=\left(\sum_{\mathcal{N}_{c}(v_{c,i})}\mathbf{h }_{c,j}+\sum_{\mathcal{N}_{c^{\prime}\neq c}(v_{c,i})}\mathbf{h}_{c^{\prime}, j}\right)/|\mathcal{N}(v_{c,i})|\).
\(\bullet\)**Variability collapse in features**\(\mathbf{H}\): For a given features matrix \(\mathbf{H}\), let us define the within- and between-class covariance matrices, \(\mathbf{\Sigma}_{W}(\mathbf{H})\) and \(\mathbf{\Sigma}_{B}(\mathbf{H})\), as:
\[\mathbf{\Sigma}_{W}(\mathbf{H}) :=\frac{1}{Cn}\sum_{c=1}^{C}\sum_{i=1}^{n}\left(\mathbf{h}_{c,i} -\overline{\mathbf{h}}_{c}\right)\left(\mathbf{h}_{c,i}-\overline{\mathbf{h}} _{c}\right)^{\top}, \tag{7}\] \[\mathbf{\Sigma}_{B}(\mathbf{H}) :=\frac{1}{C}\sum_{c=1}^{C}\left(\overline{\mathbf{h}}_{c}- \overline{\mathbf{h}}_{G}\right)\left(\overline{\mathbf{h}}_{c}-\overline{ \mathbf{h}}_{G}\right)^{\top}. \tag{8}\]
To empirically track the within-class variability collapse with respect to the between-class variability, we define two NC1 metrics:
\[\mathcal{N}\mathcal{C}_{1}(\mathbf{H})=\frac{1}{C}\text{Tr}\left(\mathbf{ \Sigma}_{W}(\mathbf{H})\mathbf{\Sigma}_{B}^{\dagger}(\mathbf{H})\right), \qquad\quad\widetilde{\mathcal{N}\mathcal{C}}_{1}(\mathbf{H})=\frac{\text{Tr }\left(\mathbf{\Sigma}_{W}(\mathbf{H})\right)}{\text{Tr}\left(\mathbf{\Sigma} _{B}(\mathbf{H})\right)}, \tag{9}\]
where \({}^{\dagger}\) denotes the Moore-Penrose pseudo-inverse and \(\mathrm{Tr}(\cdot)\) denotes the trace of a matrix. Although \(\mathcal{N}\mathcal{C}_{1}\) is the original NC1 metric used by Papyan et al. [36], we consider also \(\widetilde{\mathcal{N}\mathcal{C}}_{1}\), which has been proposed by Tirer et al. [43] as an alternative metric that is more amenable to theoretical analysis.
\(\bullet\)**Variability collapse in neighborhood-aggregated features**\(\mathbf{H}\widehat{\mathbf{A}}\): Similarly to the above, we track the within- and between-class variability of the "neighborhood-aggregated" features matrix \(\mathbf{H}\widehat{\mathbf{A}}\) by \(\mathbf{\Sigma}_{W}(\mathbf{H}\widehat{\mathbf{A}})\) and \(\mathbf{\Sigma}_{B}(\mathbf{H}\widehat{\mathbf{A}})\) (computed using \(\overline{\mathbf{h}}_{c}^{\mathcal{N}}\) and \(\overline{\mathbf{h}}_{G}^{\mathcal{N}}\)), as well as \(\mathcal{N}\mathcal{C}_{1}(\mathbf{H}\widehat{\mathbf{A}})\) and \(\widetilde{\mathcal{N}\mathcal{C}}_{1}(\mathbf{H}\widehat{\mathbf{A}})\). (See Appendix A for formal definitions.) Finally, we follow a simple approach and track the mean and variance of \(\mathcal{N}\mathcal{C}_{1}(\mathbf{H}),\widetilde{\mathcal{N}\mathcal{C}}_{1 }(\mathbf{H}),\mathcal{N}\mathcal{C}_{1}(\mathbf{H}\widehat{\mathbf{A}}), \widetilde{\mathcal{N}\mathcal{C}}_{1}(\mathbf{H}\widehat{\mathbf{A}})\) across all \(K\) graphs in our experiments.
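As a reference point, the NC1 metrics of (7)-(9) can be computed from a feature matrix and the node labels as in the following illustrative sketch, which assumes balanced classes; applying the same function to \(\mathbf{H}\widehat{\mathbf{A}}\) yields the neighborhood-aggregated variants.

```python
import numpy as np

def nc1_metrics(H, labels, C):
    """NC1(H) = Tr(Sigma_W Sigma_B^+)/C and tilde-NC1(H) = Tr(Sigma_W)/Tr(Sigma_B),
    following Eqs. (7)-(9); assumes balanced classes.  For the neighborhood-
    aggregated variants, call this function with H @ A_hat instead of H."""
    d, N = H.shape
    class_means = np.stack([H[:, labels == c].mean(axis=1) for c in range(C)], axis=1)
    global_mean = class_means.mean(axis=1, keepdims=True)
    Sigma_W = np.zeros((d, d))
    for c in range(C):
        Hc = H[:, labels == c] - class_means[:, [c]]
        Sigma_W += Hc @ Hc.T
    Sigma_W /= N
    Mc = class_means - global_mean
    Sigma_B = (Mc @ Mc.T) / C
    nc1 = np.trace(Sigma_W @ np.linalg.pinv(Sigma_B)) / C
    nc1_tilde = np.trace(Sigma_W) / np.trace(Sigma_B)
    return nc1, nc1_tilde
```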
As the primary focus of our paper is the analysis of feature variability during training and inference, we defer the definition and examination of metrics based on NC2 and NC3 to Appendices A and F.
## 3 Evolution of penultimate layer features during training
In this section, we explore the evolution of the deepest features of GNNs during training. In Section 3.1, we present empirical results of GNNs in the setup that is detailed in Section 2, showing that a decrease in within-class feature variability is present in GNNs that reach zero training error, but not to the extent observed with plain DNNs. Then, in Section 3.2, we theoretically study a mathematical model that provides reasoning for the empirical observations.
### Experiments
**Setup.** We focus on the training performance of GNNs \(\psi_{\Theta}^{\mathcal{F}},\psi_{\Theta}^{\mathcal{F}^{\prime}}\) on sparse graphs and generate a dataset of \(K=1000\) random SSBM graphs with \(C=2,N=1000,p=0.025,q=0.0017\). The networks \(\psi_{\Theta}^{\mathcal{F}},\psi_{\Theta}^{\mathcal{F}^{\prime}}\) are composed of \(L=32\) layers with graph operator, ReLU activation, and instance-normalization functionality. The hidden feature dimension is set to \(8\) across layers. They are trained for \(8\) epochs using stochastic gradient descent (SGD) with a learning rate \(0.004\), momentum \(0.9\), and a weight decay of \(5\times 10^{-4}\). During training, we track the NC1 metrics for the penultimate layer features \(\mathbf{H}_{k}^{(L-1)}\), by computing their mean and standard deviation across \(k\in[K]\) graphs after every epoch. To measure the performance of the GNN, we compute the 'overlap' [10] between predicted communities and ground truth communities (up to permutations):
\[\mathrm{overlap}(\hat{y},y):=\max_{\pi\in S_{C}}\left(\frac{1}{N}\sum_{i=1}^{N} \delta_{\hat{y}(v_{i}),\pi(y(v_{i}))}-\frac{1}{C}\right)/\left(1-\frac{1}{C}\right) \tag{10}\]
where \(\hat{y}\) is the node labelling function based on GNN design and \(\frac{1}{N}\sum_{i=1}^{N}\delta_{\hat{y}(v_{i}),\pi(y(v_{i}))}\) is the training accuracy (\(\delta\) denotes the Kronecker delta). The overlap allows us to measure the improvements in performance over random guessing while retaining the indication that the GNN has reached TPT. Formally, when \(\frac{1}{N}\sum_{i=1}^{N}\delta_{\hat{y}(v_{i}),\pi(y(v_{i}))}=1\) (zero training error), then \(\mathrm{overlap}(\hat{y},y)=1\). We illustrate the empirical results in Figures 1 and 2, and present extensive experiments (showing similar behavior) along with infrastructure details in Appendix F3.
Footnote 3: Code is available at: [https://github.com/kvignesh1420/gnn_collapse](https://github.com/kvignesh1420/gnn_collapse)
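For completeness, a small sketch of the overlap metric in (10) is shown below; the brute-force maximization over label permutations is an illustrative choice that is adequate for the small values of \(C\) considered here.

```python
import itertools
import numpy as np

def overlap(pred_labels, true_labels, C):
    """Overlap of Eq. (10): accuracy maximized over label permutations,
    rescaled so that random guessing gives 0 and zero training error gives 1."""
    pred = np.asarray(pred_labels)
    true = np.asarray(true_labels)
    best_acc = max(np.mean(np.array(perm)[pred] == true)
                   for perm in itertools.permutations(range(C)))
    return (best_acc - 1.0 / C) / (1.0 - 1.0 / C)
```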
**Observation:** The key takeaway is that \(\mathcal{NC}_{1}(\mathbf{H}_{k}^{(L-1)})\), \(\widetilde{\mathcal{NC}}_{1}(\mathbf{H}_{k}^{(L-1)})\) tend to reduce and plateau during TPT in \(\psi_{\Theta}^{\mathcal{F}}\) and \(\psi_{\Theta}^{\mathcal{F}^{\prime}}\). Notice that even though we consider a controlled SSBM-based setting, the \(\mathcal{NC}_{1}\) values observed here are higher than the values observed in the case of plain DNNs on real-world instance-wise datasets [36; 59]. Additionally, we can observe that trends for \(\mathcal{NC}_{1}(\mathbf{H}_{k}^{(L-1)}\widehat{\mathbf{A}}_{k})\), \(\widetilde{\mathcal{NC}}_{1}(\mathbf{H}_{k}^{(L-1)}\widehat{\mathbf{A}}_{k})\) are similar to those of \(\mathcal{NC}_{1}(\mathbf{H}_{k}^{(L-1)})\), \(\widetilde{\mathcal{NC}}_{1}(\mathbf{H}_{k}^{(L-1)})\).
### Theoretical analysis
In this section, we provide a theory for this empirical behavior. Most, if not all, of the theoretical papers on NC adopt the UFM approach, which treats the features as free optimization variables that are disconnected from the data [11; 15; 31; 42; 43; 59]. Here, we consider a graph-based adaptation of this approach, which we dub gUFM. We consider GNNs of the form \(\psi_{\Theta}^{\mathcal{F}^{\prime}}\), which is more tractable for mathematical analysis. Formally, by considering \(\mathcal{L}\) to be the MSE loss, treating \(\{\mathbf{H}_{k}^{(L-1)}\}_{k=1}^{K}\) as freely optimizable variables, and representing \(\mathbf{W}_{2}^{(L)}\in\mathbb{R}^{C\times d_{L-1}},\mathbf{H}_{k}^{(L-1)}\in\mathbb{R}^{d_{L-1}\times N}\) as \(\mathbf{W}_{2},\mathbf{H}_{k}\) (for notational convenience), the empirical risk based on the gUFM can be formulated as follows:
\[\widehat{\mathcal{R}}^{\mathcal{F}^{\prime}}(\mathbf{W}_{2},\{ \mathbf{H}_{k}\}_{k=1}^{K}):=\frac{1}{K}\sum_{k=1}^{K}\left(\frac{1}{2N}\left\| \mathbf{W}_{2}\mathbf{H}_{k}\widehat{\mathbf{A}}_{k}-\mathbf{Y}\right\|_{F}^{ 2}+\frac{\lambda_{H_{k}}}{2}\left\|\mathbf{H}_{k}\right\|_{F}^{2}\right)+\frac {\lambda_{W_{2}}}{2}\left\|\mathbf{W}_{2}\right\|_{F}^{2} \tag{11}\]
where \(\mathbf{Y}\in\mathbb{R}^{C\times N}\) is the target matrix, which is composed of one-hot vectors associated with the different classes, and \(\lambda_{W_{2}},\lambda_{H_{k}}>0\) are regularization hyperparameters. To simplify the analysis, let us assume that \(\mathbf{Y}=\mathbf{I}_{C}\otimes\mathbf{1}_{n}^{\top}\), where \(\otimes\) denotes the Kronecker product. Namely, the training data is balanced (a common assumption in UFM-based analyses in literature) with \(n=N/C\) nodes per
class in each graph and (without loss of generality) organized class-by-class. Note that for \(K=1\) (which allows omitting the graph index \(k\)) and no graphical structure, i.e., \(\widehat{\mathbf{A}}=\mathbf{I}\) (since \(\mathbf{A}=\mathbf{I}\)), (11) reduces to the plain UFM that has been studied in [15, 42, 57]. In this case, it has been shown that any minimizer \((\mathbf{W}_{2}^{*},\mathbf{H}^{*})\) is _collapsed_, i.e., its features have _exactly zero_ within-class variability:
\[\mathbf{h}_{c,1}^{*}=\cdots=\mathbf{h}_{c,n}^{*}=\overline{\mathbf{h}}_{c}^{* },\quad\forall c\in[C], \tag{12}\]
which implies \(\mathbf{\Sigma}_{W}(\mathbf{H}^{*})=\mathbf{0}\). We will show now that the situation in gUFM is significantly different.
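For reference, the gUFM objective (11) can be evaluated as in the following sketch, in which the deepest features \(\mathbf{H}_{k}\) are explicit optimization variables and the targets are organized class-by-class; the function signature and array shapes are illustrative assumptions, and running plain gradient descent on \((\mathbf{W}_{2},\{\mathbf{H}_{k}\})\) with such an objective mirrors, in spirit, the gUFM experiments reported later in this section.

```python
import numpy as np

def gufm_risk(W2, H_list, A_hat_list, Y, lam_W2, lam_H):
    """Empirical risk of the gUFM in Eq. (11).  W2: (C, d) shared classifier,
    H_list: list of K free feature matrices of shape (d, N), A_hat_list: list
    of normalized adjacency matrices, Y: (C, N) class-by-class one-hot targets."""
    K, N = len(H_list), Y.shape[1]
    risk = 0.0
    for H, A_hat in zip(H_list, A_hat_list):
        risk += 0.5 / N * np.linalg.norm(W2 @ H @ A_hat - Y, 'fro') ** 2
        risk += 0.5 * lam_H * np.linalg.norm(H, 'fro') ** 2
    return risk / K + 0.5 * lam_W2 * np.linalg.norm(W2, 'fro') ** 2
```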
Considering the \(K=1\) case, we start by showing that, to have minimizers of (11) that possess the property in (12), the graph must obey a strict structural condition. For \(K>1\), having a minimizer \((\mathbf{W}_{2}^{*},\{\mathbf{H}_{k}^{*}\})\) where, for some \(j\in[K]\), \(\mathbf{H}_{j}^{*}\) is collapsed directly follows from having the structural condition satisfied by the \(j\)-th graph (as shown in our proof, the sufficiency of the condition does not depend on the shared weights \(\mathbf{W}_{2}\)). On the other hand, generalizing the necessity of the structural condition to the case of \(K>1\) is technically challenging (see the appendix for details). For that reason, we state the condition in the following theorem only for \(K=1\). Note also that, showing that the condition is unlikely to be satisfied per graph is enough for explaining the plateaus above zero of NC metrics (computed over multiple graphs), which are demonstrated in Section 3.1.
**Theorem 3.1**.: _Consider the gUFM in (11) with \(K=1\) and denote the fraction of neighbors of node \(v_{c,i}\) that belong to class \(c^{\prime}\) as \(s_{cc^{\prime},i}=\frac{|\mathcal{N}_{c^{\prime}}(v_{c,i})|}{|\mathcal{N}(v_{ c,i})|}\). Let the condition \(\mathbf{C}\) based on \(s_{cc^{\prime},i}\) be given by:_
\[(s_{c1,1},\cdots,s_{cC,1})=\cdots=(s_{c1,n},\cdots,s_{cC,n}),\quad\forall c\in [C].\] ( \[\mathbf{C}\] )
_If a graph \(\mathcal{G}\) satisfies condition \(\mathbf{C}\), then there exist minimizers of the gUFM that are collapsed (satisfying (12)). Conversely, when either \(\sqrt{\lambda_{H}\lambda_{W_{2}}}=0\), or \(\sqrt{\lambda_{H}\lambda_{W_{2}}}>0\) and \(\mathcal{G}\) is regular (so that \(\widehat{\mathbf{A}}=\widehat{\mathbf{A}}^{\top}\)), if there exists a collapsed non-degenerate minimizer of the gUFM, then condition \(\mathbf{C}\) necessarily holds._
**Remark:** The proof is presented in Appendix B. The symmetry assumption on \(\widehat{\mathbf{A}}\) (which implies that \(\mathcal{G}\) is a regular graph) in the second part of the theorem has been made to pass technical obstacles in the proof rather than due to a true limitation. Thus, together with the results of our experiments (where no symmetry is enforced), we believe that this assumption can be dropped. Accordingly, we state the following conjecture.
**Conjecture 3.1**.: _Consider the gUFM in (11) with \(K=1\) and condition \(\mathbf{C}\) as stated in theorem 3.1. The minimizers of the gUFM are collapsed (satisfying (12)) iff the graph \(\mathcal{G}\) satisfies condition \(\mathbf{C}\)._
Let us dwell on the implication of Theorem 3.1. The stated condition \(\mathbf{C}\) essentially holds when any node \(i\in[n]\) of a certain class \(c\) obeys \((s_{c1,i},\cdots,s_{cC,i})=(s_{c1},\cdots,s_{cC})\) for some \((s_{c1},\cdots,s_{cC})\), a tuple of the ratio of neighbors (\(\sum_{c^{\prime}=1}^{C}s_{cc^{\prime}}=1\)) independent of \(i\). That is, \((s_{c1},\cdots,s_{cC})\) must be the same for nodes within the same class but can be different for nodes belonging to different classes. For example, for a plain UFM this condition trivially holds, as \(\widehat{\mathbf{A}}=\mathbf{I}\). Under the SSBM distribution, it is also easy to see that \(\mathbb{E}\widehat{\mathbf{A}}\) satisfies this condition. However, for more practical graphs, such as those _drawn_ from SSBM, the probability of having a graph that obeys condition \(\mathbf{C}\) is negligible. This is shown in the following theorem.
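Condition \(\mathbf{C}\) is straightforward to verify numerically: for each node one computes the tuple of per-class neighbor fractions \(s_{cc^{\prime},i}\) and checks that it is constant within every class. A minimal sketch (which assumes every node has at least one neighbor) follows.

```python
import numpy as np

def satisfies_condition_C(A, labels, C, tol=1e-12):
    """Check condition C: within every class c, all nodes share the same tuple
    (s_{c1,i}, ..., s_{cC,i}) of per-class neighbor fractions.
    Assumes no isolated nodes."""
    deg = A.sum(axis=1)
    counts = np.stack([A[:, labels == cp].sum(axis=1) for cp in range(C)], axis=1)
    s = counts / deg[:, None]        # s[i, c'] = |N_{c'}(v_i)| / |N(v_i)|
    return all(np.allclose(s[labels == c], s[labels == c][0], atol=tol)
               for c in range(C))
```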
**Theorem 3.2**.: _Let \(\mathcal{G}=(\mathcal{V},\mathcal{E})\) be drawn from SSBM\((N,C,p,q)\). For \(N>>C\), we have_
\[\mathbb{P}\left(\mathcal{G}\ \text{obeys}\ \mathbf{C}\right)<\left(\sum_{t=0}^{n} \left[\binom{n}{t}q^{t}(1-q)^{n-t}\right]^{n}\right)^{\frac{C(C-1)}{2}} \left(\sum_{t=0}^{n}\left[\binom{n}{t}p^{t}(1-p)^{n-t}\right]^{n}\right)^{C}. \tag{13}\]
The proof is presented in Appendix C. It is not hard to see that as the number of per-class nodes \(n\) increases, the probability of satisfying condition \(\mathbf{C}\) decreases, as numerically exemplified below.
**Numerical example.** Let's consider a setting with \(C=2,N=1000,a=3.75,b=0.25\). This gives us \(n=N/C=500,p=0.025,q=0.0017\), for which \(\mathbb{P}(\mathcal{G}\ \text{obeys}\ \mathbf{C})<1.7\times 10^{-1140}\).
In Appendix C we further show by exhaustive computation of \(\mathbb{P}(\mathcal{G}\ \text{obeys}\ \mathbf{C})\) that its value is negligible even for smaller scale graphs. Thus, the probability of sampling a graph structure for which the gUFM minimizers exhibit exact collapse is practically 0.
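The upper bound in (13) involves extremely small numbers, so it is convenient to evaluate its logarithm. The SciPy sketch below is an illustrative computation of the right-hand side of (13) in log space; it evaluates the bound, not the exact probability.

```python
import numpy as np
from scipy.special import gammaln, logsumexp

def log_prob_bound(N, C, p, q):
    """Natural log of the right-hand side of Eq. (13), computed in log space."""
    n = N // C
    t = np.arange(n + 1)
    log_binom = gammaln(n + 1) - gammaln(t + 1) - gammaln(n - t + 1)

    def log_inner(r):
        # log of sum_t [ binom(n, t) r^t (1 - r)^(n - t) ]^n
        log_pmf = log_binom + t * np.log(r) + (n - t) * np.log1p(-r)
        return logsumexp(n * log_pmf)

    return C * (C - 1) / 2 * log_inner(q) + C * log_inner(p)

# Example: log_prob_bound(1000, 2, 0.025, 0.0017) is a very large negative number,
# consistent with the claim that graphs obeying condition C are practically never
# sampled at this scale.
```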
**gUFM experiments.** For a better understanding of these results, we present small-scale experiments using the gUFM model on graphs that satisfy and do not satisfy condition **C**. By training the gUFM (based on \(\psi_{\mathbf{\Theta}}^{\mathcal{F}^{\prime}}\)) on \(K=10\) graphs that satisfy condition **C**, we can observe from Figure 3 that the NC1 metrics on \(\mathbf{H},\mathbf{H}\widehat{\mathbf{A}}\) reduce significantly. On the other hand, these metrics plateau after a sufficient reduction when the graphs fail to satisfy condition **C**, as shown in Figure 4. In both cases, the SSBM parameters are \(C=2,N=1000,p=0.025,q=0.0017\), and the gUFM is trained using plain gradient descent for \(50000\) epochs with a learning rate of \(0.1\) and L2 regularization parameters \(\lambda_{W_{1}}=\lambda_{W_{2}}=\lambda_{H}=5\times 10^{-3}\). Extensive experiments with varying choices of \(N,C,p,q\), feature transformations based on \(\psi_{\mathbf{\Theta}}^{\mathcal{F}^{\prime}}\), and additional NC metrics are provided in Appendix F.
**Remark.** Note that previous papers consider UFM configurations for which the minimizers possess exact NC, typically without any condition on the number of samples or on the hyperparameters of the settings. As the UFMs are "optimistic" models, in the sense that they ignore all the limitations on modifying the features that exist in the training of practical DNNs, such results can be understood as "zero-order" reasoning for practical NC behavior. On the other hand, here we show that even the optimistic gUFM will not yield perfectly collapsed minimizers for typical graph structures, i.e., for all but a rare set of graphs. This provides a pure understanding of the gaps of GNNs' features from exact collapse and of why these gaps are larger than for plain DNNs. We also highlight the observation that _condition **C** applies to homophilic as well as heterophilic graphs_, as the constraint on neighborhood ratios is independent of label similarity. This provides insights into the effectiveness of GNNs on highly heterophilic graphs, as empirically observed by Ma et al. [28].
**Gradient flow:** By now, we have provided a theory for the distinction between the deepest features of GNNs and plain DNNs. Next, to provide reasoning for the partial collapse in GNNs, which is observed empirically, we turn to study the gradient dynamics of our gUFM.
Figure 3: gUFM for \(\psi_{\mathbf{\Theta}}^{\mathcal{F}^{\prime}}\): Illustration of loss, overlap, and \(\mathcal{NC}_{1}\) plots for \(\mathbf{H},\mathbf{H}\widehat{\mathbf{A}}\) during training on \(10\) SSBM graphs satisfying condition **C**.

Figure 4: gUFM for \(\psi_{\mathbf{\Theta}}^{\mathcal{F}^{\prime}}\): Illustration of loss, overlap, and \(\mathcal{NC}_{1}\) plots for \(\mathbf{H},\mathbf{H}\widehat{\mathbf{A}}\) during training on \(10\) SSBM graphs which do not satisfy condition **C**.

We consider the \(K=1\) case and, following the common practice [15, 43], analyze the gradient flow along the "central path" -- i.e., when \(\mathbf{W}_{2}=\mathbf{W}_{2}^{*}(\mathbf{H})\) is the optimal minimizer of \(\widehat{\mathcal{R}}^{\mathcal{F}^{\prime}}(\mathbf{W}_{2},\mathbf{H})\) w.r.t. \(\mathbf{W}_{2}\), which has a closed-form expression as a function of \(\mathbf{H}\). The resulting gradient flow is:
\[\frac{d\mathbf{H}_{t}}{dt}=-\nabla\widehat{\mathcal{R}}^{\mathcal{F}^{\prime}}( \mathbf{W}_{2}^{*}(\mathbf{H}_{t}),\mathbf{H}_{t}). \tag{14}\]
Similarly to [15; 43], we aim to gain insights on the evolution of \(\mathbf{\Sigma}_{W}(\mathbf{H}_{t})\) and \(\mathbf{\Sigma}_{B}(\mathbf{H}_{t})\) (in particular, their traces) along this flow. Yet, the presence of the structure matrix \(\widehat{\mathbf{A}}\) significantly complicates the analysis compared to existing works (which are essentially restricted to \(\widehat{\mathbf{A}}=\mathbf{I}\)). Accordingly, we focus on the case of two classes, \(C=2\), and adopt a perturbation approach, analyzing the flow for a graph \(\widehat{\mathbf{A}}=\mathbb{E}\widehat{\mathbf{A}}+\mathbf{E}\), where the expectation is taken with respect to the SSBM distribution and \(\mathbf{E}\) is a sufficiently small perturbation matrix. Our results are stated in the following theorem.
**Theorem 3.3**.: _Let \(K=1\), \(C=2\) and \(\lambda_{W_{2}}>0\). There exist \(\alpha>0\) and \(E>0\), such that for \(0<\lambda_{H}<\alpha\) and \(0<\|\mathbf{E}\|<E\), along the gradient flow stated in (14) associated with the graph \(\widehat{\mathbf{A}}=\mathbb{E}\widehat{\mathbf{A}}+\mathbf{E}\), we have that: (1) \(\operatorname{Tr}(\mathbf{\Sigma}_{W}(\mathbf{H}_{t}))\) decreases, and (2) \(\operatorname{Tr}(\mathbf{\Sigma}_{B}(\mathbf{H}_{t}))\) increases. Accordingly, \(\widetilde{\mathcal{NC}}_{1}(\mathbf{H}_{t})\) decreases._
The proof is presented in Appendix D. The importance of the theorem comes from showing that even graphs that do not satisfy condition \(\mathbf{C}\) (in the context of the analysis: perturbations around \(\mathbb{E}\widehat{\mathbf{A}}\)) exhibit reduction in the within-class covariance and increase in the between-class covariance of the features. This implies a reduction of NC1 metrics (to some extent), which is aligned with the empirical results in Section 3.1.
## 4 Feature separation across layers during inference
Until now, we have analyzed the feature evolution of the deepest GNN layer during training. In this section, we use these well-trained GNNs to classify nodes in unseen SSBM graphs and explore the depthwise evolution of features. In essence, we take an NC perspective on characterizing the weights of these well-trained networks that facilitate good generalization. To this end, we present empirical results demonstrating a gradual decrease of NC1 metrics along the network's depth. The observations resemble the case of plain DNNs (shown empirically in [12; 42] and more recently in [16], and theoretically in [43]). To gain insights into this depthwise behavior, we also compare it with the behavior of spectral clustering methods along their projected power iterations.
### Experiments
**Setup.** We consider the \(32-\)layered networks \(\psi_{\Theta}^{\mathcal{F}},\psi_{\Theta}^{\mathcal{F}^{\prime}}\) which have been designed and trained as per the setup in section 3.1 and have reached TPT. These networks are now tested on a dataset of \(K=100\) unseen random SSBM graphs with \(C=2,N=1000,p=0.025,q=0.0017\). Additionally, we perform spectral clustering using projected power iterations on the Normalized Laplacian (NL) and Bethe-Hessian (BH) matrices [38] for each of the test graphs. The motivation behind this approach is to obtain an approximation of the Fiedler vector of NL/BH that sheds light on the hidden community structure [1; 4; 34; 53]. Formally, for a test graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\), the NL and BH matrices are given by:
\[\text{NL}(\mathcal{G})=\mathbf{I}-\mathbf{D}^{-1/2}\mathbf{A} \mathbf{D}^{-1/2}, \tag{15}\] \[\text{BH}(\mathcal{G},r)=(r^{2}-1)\mathbf{I}-r\mathbf{A}+\mathbf{ D}, \tag{16}\]
where \(r\in\mathbb{R}\) is the BH scaling factor. Now, taking \(\mathbf{B}\) to be either the NL or the BH matrix, a projected power iteration that estimates the second-largest eigenvector of \(\widetilde{\mathbf{B}}=\|\mathbf{B}\|\,\mathbf{I}-\mathbf{B}\) is given by:
\[\mathbf{x}^{(l)}=\widetilde{\mathbf{B}}\mathbf{w}^{(l-1)},\quad\text{ where}\quad\mathbf{w}^{(l-1)}=\frac{\mathbf{x}^{(l-1)}-\langle\mathbf{x}^{(l-1)}, \mathbf{v}\rangle\mathbf{v}}{\left\|\mathbf{x}^{(l-1)}-\langle\mathbf{x}^{(l -1)},\mathbf{v}\rangle\mathbf{v}\right\|_{2}}\,, \tag{17}\]
with the vector \(\mathbf{v}\in\mathbb{R}^{N}\) denoting the largest eigenvector of \(\widetilde{\mathbf{B}}\). Thus, we start with a random normal vector \(\mathbf{w}^{0}\in\mathbb{R}^{N}\) and iteratively compute the feature vector \(\mathbf{x}^{(l)}\in\mathbb{R}^{N}\), which represents the 1-D feature for each node after \(l\) iterations.
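A compact sketch of this baseline is given below: it builds the NL or BH matrix from (15)-(16) and runs the projected power iteration (17), returning the 1-D node feature after every iteration. The dense eigendecomposition used to obtain \(\mathbf{v}\), and the use of the spectral norm for \(\|\mathbf{B}\|\), are implementation conveniences assumed only for this sketch.

```python
import numpy as np

def normalized_laplacian(A):
    """NL(G) of Eq. (15)."""
    d = A.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return np.eye(A.shape[0]) - D_inv_sqrt @ A @ D_inv_sqrt

def bethe_hessian(A, r):
    """BH(G, r) of Eq. (16)."""
    return (r**2 - 1) * np.eye(A.shape[0]) - r * A + np.diag(A.sum(axis=1))

def projected_power_iteration(B, num_iters=32, rng=None):
    """Projected power iteration of Eq. (17) on B_tilde = ||B|| I - B; returns
    the 1-D node feature x^(l) after every iteration."""
    rng = np.random.default_rng() if rng is None else rng
    B_tilde = np.linalg.norm(B, 2) * np.eye(B.shape[0]) - B
    v = np.linalg.eigh(B_tilde)[1][:, -1]          # top eigenvector of B_tilde
    w = rng.standard_normal(B.shape[0])            # w^(0)
    feats = []
    for _ in range(num_iters):
        x = B_tilde @ w                            # x^(l) = B_tilde w^(l-1)
        feats.append(x)
        w = x - (x @ v) * v                        # project out the top direction
        w = w / np.linalg.norm(w)
    return feats
```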
### Towards understanding depthwise behavior
From Figure 5, we can observe that the rate of decrease in NC1 metrics is much higher in \(\psi^{\mathcal{F}}_{\Theta}\) and \(\psi^{\mathcal{F}^{\prime}}_{\Theta}\) (avg test overlap \(=1\)) when compared to the baseline spectral approaches (avg test overlap NL\(=0.04\), BH\(=0.15\)) with random normal feature initialization. For \(\psi^{\mathcal{F}}_{\Theta}\) and \(\psi^{\mathcal{F}^{\prime}}_{\Theta}\), the NC1 metrics and traces of covariance matrices are tracked after each of the components of a layer: graph operator, ReLU and instance normalization. For spectral methods, the components are: the operator \(\widehat{\mathbf{B}}\) and the normalization. Interestingly, this rate seems to be relatively higher in \(\psi^{\mathcal{F}^{\prime}}_{\Theta}\) than in \(\psi^{\mathcal{F}}_{\Theta}\), and the variance of metrics tends to reduce significantly across all the test graphs after a certain depth in \(\psi^{\mathcal{F}^{\prime}}_{\Theta}\) and \(\psi^{\mathcal{F}}_{\Theta}\). Intuitively, the presence of \(\mathbf{W}_{1}\) in \(\psi^{\mathcal{F}}_{\Theta}\) seems to delay this reduction across layers. On the other hand, owing to the non-parametric nature of the spectral approaches, observe that the ratios \(\text{Tr}(\mathbf{\Sigma}_{B}(\mathbf{x}^{(l)}))/\text{Tr}(\mathbf{\Sigma}_{B }(\mathbf{w}^{(l-1)})),\text{Tr}(\mathbf{\Sigma}_{W}(\mathbf{x}^{(l)}))/\text {Tr}(\mathbf{\Sigma}_{W}(\mathbf{w}^{(l-1)}))\) tend to be constant throughout all iterations. However, the GNNs behave differently as \(\text{Tr}(\mathbf{\Sigma}_{B}(\mathbf{X}^{(l)}))/\text{Tr}(\mathbf{\Sigma}_{B }(\mathbf{H}^{(l-1)}))\), \(\text{Tr}(\mathbf{\Sigma}_{W}(\mathbf{X}^{(l)}))/\text{Tr}(\mathbf{\Sigma}_{ W}(\mathbf{H}^{(l-1)}))\) tend to decrease across depth (Figure 6).
For a better understanding of this phenomenon, we consider the case of \(C=2\) (without loss of generality) and assume that the \((l-1)^{th}\)-layer features \(\mathbf{H}^{(l-1)}\) of nodes belonging to class \(c=1,2\) are drawn from distributions \(\mathcal{D}_{1},\mathcal{D}_{2}\) respectively. We do not make any assumptions on the nature of the distributions and simply consider \(\boldsymbol{\mu}_{1}^{(l-1)},\boldsymbol{\mu}_{2}^{(l-1)}\in\mathbb{R}^{d_{l-1}}\) and \(\mathbf{\Sigma}_{1}^{(l-1)},\mathbf{\Sigma}_{2}^{(l-1)}\in\mathbb{R}^{d_{l-1} \times d_{l-1}}\) as their mean vectors and covariance matrices, respectively. In the following theorem, we present bounds on the ratio of traces of feature covariance matrices after the graph operator is applied.
**Theorem 4.1**.: _Let \(C=2,\lambda_{i}(\cdot),\lambda_{-i}(\cdot)\) indicate the \(i^{th}\) largest and smallest eigenvalue of a matrix, \(\beta_{1}=\frac{p-q}{p+q},\beta_{2}=\frac{p}{n(p+q)},\beta_{3}=\frac{p^{2}+q^{2} }{n(p+q)^{2}}\), and denote_
\[\mathbf{T}_{W}={\mathbf{W}_{1}^{*}}^{(l)\top}\mathbf{W}_{1}^{*(l) }+\beta_{2}\left[\mathbf{W}_{2}^{*(l)\top}\mathbf{W}_{1}^{*(l)}+{\mathbf{W}_{ 1}^{*}}^{(l)\top}\mathbf{W}_{2}^{*(l)}\right]+\beta_{3}{\mathbf{W}_{2}^{*}}^{( l)\top}\mathbf{W}_{2}^{*(l)},\] \[\mathbf{T}_{B}=\left(\mathbf{W}_{1}^{*(l)}+\beta_{1}\mathbf{W}_{2 }^{*(l)}\right)^{\top}\left(\mathbf{W}_{1}^{*(l)}+\beta_{1}\mathbf{W}_{2}^{*( l)}\right).\]
_Then, the ratios of traces \(\frac{\operatorname{Tr}(\mathbf{\Sigma}_{B}(\mathbf{X}^{(l)}))}{\operatorname{Tr }(\mathbf{\Sigma}_{B}(\mathbf{H}^{(l-1)}))},\frac{\operatorname{Tr}(\mathbf{ \Sigma}_{W}(\mathbf{X}^{(l)}))}{\operatorname{Tr}(\mathbf{\Sigma}_{W}(\mathbf{ H}^{(l-1)}))}\) for layer \(l\in\{2,\cdots,L\}\) of a network \(\psi_{\Theta}^{\mathcal{F}}\) are bounded as follows:_
\[\frac{\sum_{i=1}^{d_{l-1}}\lambda_{-i}\left(\mathbf{\Sigma}_{B}( \mathbf{H}^{(l-1)})\right)\lambda_{i}\left(\mathbf{T}_{B}\right)}{\sum_{i=1}^{ d_{l-1}}\lambda_{i}\left(\mathbf{\Sigma}_{B}(\mathbf{H}^{(l-1)})\right)}\leq \frac{\operatorname{Tr}(\mathbf{\Sigma}_{B}(\mathbf{X}^{(l)}))}{\operatorname {Tr}(\mathbf{\Sigma}_{B}(\mathbf{H}^{(l-1)}))}\leq\frac{\sum_{i=1}^{d_{l-1}} \lambda_{i}\left(\mathbf{\Sigma}_{B}(\mathbf{H}^{(l-1)})\right)\lambda_{i} \left(\mathbf{T}_{B}\right)}{\sum_{i=1}^{d_{l-1}}\lambda_{i}\left(\mathbf{ \Sigma}_{B}(\mathbf{H}^{(l-1)})\right)},\] \[\frac{\sum_{i=1}^{d_{l-1}}\lambda_{-i}\left(\mathbf{\Sigma}_{W}( \mathbf{H}^{(l-1)})\right)\lambda_{i}\left(\mathbf{T}_{W}\right)}{\sum_{i=1}^{ d_{l-1}}\lambda_{i}\left(\mathbf{\Sigma}_{W}(\mathbf{H}^{(l-1)})\right)}\leq\frac{ \operatorname{Tr}(\mathbf{\Sigma}_{W}(\mathbf{H}^{(l)}))}{\operatorname{Tr}( \mathbf{\Sigma}_{W}(\mathbf{H}^{(l-1)}))}\leq\frac{\sum_{i=1}^{d_{l-1}} \lambda_{i}\left(\mathbf{\Sigma}_{W}(\mathbf{H}^{(l-1)})\right)\lambda_{i} \left(\mathbf{T}_{W}\right)}{\sum_{i=1}^{d_{l-1}}\lambda_{i}\left(\mathbf{ \Sigma}_{W}(\mathbf{H}^{(l-1)})\right)}.\]
The proof is presented in Appendix E. To understand the implications of this result, first observe that by setting \(\mathbf{W}_{1}^{*}=\mathbf{0}\) and modifying \(\mathbf{T}_{W}=\beta_{3}{\mathbf{W}_{2}^{*}}^{(l)\top}\mathbf{W}_{2}^{*(l)}, \mathbf{T}_{B}=\beta_{1}^{2}{\mathbf{W}_{2}^{*}}^{(l)\top}\mathbf{W}_{2}^{*(l)}\), we can obtain a similar bound formulation for \(\psi_{\Theta}^{\mathcal{F}^{\prime}}\). To this end, as \(\mathbf{T}_{W},\mathbf{T}_{B}\) depend on the spectrum of \(\mathbf{W}_{2}^{*(l)}\), the ratios \(\frac{\operatorname{Tr}(\mathbf{\Sigma}_{B}(\mathbf{X}^{(l)}))}{\operatorname{Tr}(\mathbf{\Sigma}_{B}(\mathbf{H}^{(l-1)}))},\frac{\operatorname{Tr}(\mathbf{\Sigma}_{W}(\mathbf{X}^{(l)}))}{\operatorname{Tr}(\mathbf{\Sigma}_{W}(\mathbf{H}^{(l-1)}))}\) are highly dependent on \(\beta_{1},\beta_{3}\). Notice that since \({\mathbf{W}_{1}^{*(l)\top}\mathbf{W}_{1}^{*(l)}}\) in \(\mathbf{T}_{W}\) is not scaled by any factor that is inversely dependent on \(n\), it tends to act as a spectrum-controlling mechanism, and the reduction in within-class variability of features in \(\psi_{\Theta}^{\mathcal{F}}\) is relatively slow when compared to \(\psi_{\Theta}^{\mathcal{F}^{\prime}}\). This justifies the empirical behavior that we observed in subplots (c) and (d) of Figure 6.
## 5 Conclusion
In this work, we studied the feature evolution in GNNs for inductive node classification tasks. Adopting a Neural Collapse (NC) perspective, we analyzed both empirically and theoretically the within- and between-class variability of features along the training epochs and along the layers during inference. We showed that a partial decrease in within-class variability (and NC1 metrics) is present in the GNNs' deepest features and provided theory that indicates that greater collapse is not expected when training GNNs on practical graphs (as it requires strict structural conditions). We also showed a depthwise decrease in variability metrics, which resembles the case with plain DNNs. In particular, by leveraging the analogy between feature transformation across GNN layers and spectral clustering along projected power iterations, we provided insights into this behavior and into the distinctions between the two GNN architectures. Interestingly, the structural conditions on graphs for exact collapse, which we rigorously established in this paper, are aligned with those that have been empirically hypothesized to facilitate GNN learning in [28] (outside the context of NC). As a direction for future research, one may try to use this connection to link NC behavior with the generalization performance of GNNs.
## Acknowledgments and Disclosure of Funding
The authors would like to thank Jonathan Niles-Weed, Soledad Villar, Teresa Huang, Zhengdao Chen, and Lei Chen for informative discussions and feedback. The authors acknowledge the NYU High Performance Computing services for providing the computing resources to run the experiments reported in this manuscript. This work is partially supported by NSF DMS 2134216, NSF CAREER CIF 1845360, NSF IIS 1901091, and the Alfred P Sloan Foundation.
|
2308.00212 | Optimizing dual-energy CT technique for iodine-based contrast-to-noise
ratio | Purpose: This study proposes a systematic method for determining the optimal
x-ray tube settings/energy windows and fluence for minimal noise and maximum
CNR in material density images obtained from DECT scans by fixing the subject
size and the total radiation dose. Methods: The noise propagation in the
process of sinogram and image reconstruction from DECT measurements is
analyzed. Analytic estimates for the sinogram and monochromatic image pixel
variances and the CNR as functions of tube potentials, fluence, and virtual
monochromatic image (VMI) energy are derived, and then used in a phantom
experiment as an objective function for optimizing the tube settings to
minimize the image noise and maximize the CNR. Results: A non-trivial example
that shows the existence of singular solutions to the inversion of
sinograms-to-DECT measurements map was presented. Additionally, the optimal VMI
energy for maximal CNR was determined. The optimal energy VMI was found to be
the least noisy monochromatic image synthesized from the iodine and water
density images, and it was shown that using more general weights in combining
the two images linearly does not improve image quality. When the x-ray beam
filter material was fixed at 2mm of Aluminum and the photon fluence for low and
high kV scans were considered equal, the tube potential pair of 60/120 kV led
to the maximal CNR in the VMI formed at energy 55 KeV. Conclusions: Optimizing
DECT scan parameters to maximize the CNR can be done in a systematic way. Also
choosing the parameters that maximize the Jacobian determinant over the
sinogram domain would lead to more stable reconstructions due to the reduced
amplification of the measurement noise. Since the values of the Jacobian
determinant depend strongly on the imaging task, careful consideration of all
of the relevant factors is needed when implementing the proposed framework. | Fatma Terzioglu, Emil Y. Sidky, Jp Phillips, Ingrid Reiser, Guillaume Bal, Xiaochuan Pan | 2023-07-28T20:09:57Z | http://arxiv.org/abs/2308.00212v1 | # Optimizing dual-energy CT technique for iodine-based contrast-to-noise ratio
###### Abstract
**Purpose:** The goal of this study is to propose a systematic method for determining the optimal x-ray tube settings/energy windows and fluence for minimal noise and maximum CNR in material density images obtained from dual-energy CT (DECT) scans by fixing the subject size and the total radiation dose.
**Methods:** The noise propagation in the process of sinogram and image reconstruction from DECT measurements is analyzed. The main objects of the study are the pixel variances for the sinogram and monochromatic image and the contrast-to-noise ratio (CNR), which were shown to depend on the Jacobian matrix of the sinograms-to-DECT measurements map.
Analytic estimates for the sinogram and monochromatic image pixel variances and the CNR as functions of tube potentials, fluence, and virtual monochromatic image (VMI) energy are derived, and then used in a phantom experiment as an objective function for optimizing the tube settings to minimize the image noise and maximize the CNR.
**Results:** A non-trivial example that shows the existence of singular solutions to the inversion of sinograms-to-DECT measurements map was presented. Additionally, the optimal VMI energy for maximal CNR was determined. The optimal energy VMI was found to be the least noisy monochromatic image synthesized from the iodine and water density images, and it was shown that using more general weights in combining the two images linearly does not improve image quality.
When the x-ray beam filter material was fixed at 2mm of Aluminum and the photon fluence for low and high kV scans were considered equal, the tube potential pair of 60/120 kV led to the maximal CNR in the VMI formed at energy 55 KeV.
**Conclusions:** Optimizing DECT scan parameters to maximize the CNR can be done in a systematic way. Also choosing the parameters that maximize the Jacobian determinant over the sinogram domain would lead to more stable reconstructions due to the reduced amplification of the measurement noise. Since the values of the Jacobian
determinant depend strongly on the imaging task, careful consideration of all of the relevant factors is needed when implementing the proposed framework.
###### Contents
* I Introduction
* II Methods
  * II.1 Scan configuration, DECT technique, and noise simulation
  * II.2 Jacobian of the sinogram-to-photon-counts transformation and uniqueness of reconstructions
  * II.3 Noise propagation based on linearization of the inverse sinogram-to-photon-counts transformation
  * II.4 Computation of pixel variances and the CNR of a monochromatic image
  * II.5 Minimization of mean pixel variance
* III Results
  * III.A Experimental setup
  * III.B Uniqueness of the reconstructed sinograms
  * III.C Optimization of the iodine CNR in virtual monochromatic images
* IV Discussion and conclusion
## 1 Introduction
Dual-energy CT (DECT) systems enable the simultaneous acquisition of two spectral measurements to identify different materials within the scanned object. DECT has been demonstrated to outperform single-energy CT in terms of image quality and contrast-to-noise ratio (CNR), allowing for reduced radiation exposure and contrast agent concentration while maintaining image quality [1, 2, 3, 4, 5].
The concept of DECT was introduced by Hounsfield in the 1970s [6], and the mathematical framework for pre-reconstruction processing of DECT data was developed by Alvarez and Macovski in a seminal paper in 1976 [7]. Their approach to material decomposition is based on the assumption that energy-dependent attenuation coefficients of chemical compounds can be approximated by a linear combination of elemental mass attenuation maps weighted by the partial density of each element. This reduces sinogram reconstruction to solving a nonlinear system of equations for each sinogram value.
Due to the nonlinearity of the DECT measurement model, the uniqueness of reconstructed sinograms from DECT measurements is not guaranteed. Levine first provided an example of non-unique solutions in DECT for a material basis of water and bone using spectra with three discrete photon energies [8]. Alvarez analyzed the non-invertibility of the sinogram to DECT measurements mapping by studying the Jacobian determinant [9]. In general, the nonvanishing of the Jacobian determinant only guarantees local uniqueness (a.k.a. injectivity), requiring additional constraints for global uniqueness. Bal and Terzioglu [10] presented sufficient analytic criteria for the global injectivity of multi-energy CT (MECT) measurement map for a given number of materials and equal number of energy measurements. In the case of DECT, they showed that the nonvanishing of the Jacobian determinant of the sinogram to DECT measurements map throughout its domain is sufficient to ensure global uniqueness. They also demonstrated how the choice of basis materials and x-ray spectra influences the Jacobian determinant values and, consequently, the invertibility. In this paper, we showcase the occurrence of nonuniqueness when the Jacobian determinant vanishes within the rectangular region encompassing all possible sinogram values, providing a clear example of two isolated solutions corresponding to the same DECT measurement pair.
It was shown by Bal et al. [11] that the stability of the inversion of the mapping of sino
grams to MECT measurements is improved by choosing the x-ray spectra that maximize Jacobian determinant values of the sinogram to DECT measurements map. In this paper, we propose a systematic method for determining optimal tube settings and fluence that minimize noise and maximize CNR in material density images obtained from DECT scans while keeping subject size and total radiation dose fixed. To achieve this, we consider a noise model based on a compound Poisson process for the DECT measurements and analyze the noise propagation from DECT measurements to material density images by linearizing the inverse sinogram-to-photon-counts transformation. We derive analytic expressions for sinogram and monochromatic image pixel variances as functions of tube potentials, fluence, and virtual monochromatic image (VMI) energy. These expressions, along with CNR, are used as an objective function to optimize the tube settings and minimize image noise in a phantom experiment. We determine the optimal VMI energy for maximal CNR and also prove that the least noisy monochromatic image synthesized from iodine and water density images corresponds to the optimal energy VMI. Consequently, using more general weights to linearly combine the two images does not enhance image quality.
The problem of improving iodine CNR in VMI has been previously addressed, but our approach differs from existing works in terms of the analysis methods employed. Specifically, the works by Leng et al. [2] and Tao et al. [12] focus on applying denoising techniques during the reconstruction process. Yu et al. [13, 14] examine the effect of subject size and photon fluence on the image quality of linearly mixed images generated from DECT scans using a dual-source CT scanner, assuming a fixed x-ray source kilovoltage-peak (kV) setting and total radiation dose. Michalak et al. [15] conduct a phantom study to empirically determine the optimal photon energies for virtual mono-energetic imaging across various phantom sizes. Dabli et al. [16] empirically determine optimal tube potential settings that yield high image quality and accuracy for low iodine concentration quantification. Ren et al. [17] analyze the conditioning of spectral weights by employing singular value decomposition of the matrix formed by sampling the intensity profile of each spectral weight over the energy range.
## 2 Methods
In this section, we detail the approaches and procedures employed to optimize Dual-Energy Computed Tomography (DECT) scan parameters for improved iodine-based Contrast-to-Noise Ratio (CNR). We first present the physical model for the DECT measurements, considering the well-established assumption of the basis material decomposition [7, 18] and a noise model based on the compound Poisson process [19]. Next, we explore the role of the Jacobian matrix in ensuring reconstruction uniqueness and examine noise propagation by linearizing the inverse sinogram-to-photon-counts transformation. We then derive analytical expressions for CNR, mean sinogram variance, and mean pixel variance. We finally show that the smallest eigenvalue of the mean pixel covariance matrix gives the minimum mean variance for VMI obtained by using optimal VMI energy, identifying optimal energy levels for maximum image quality.
### Scan configuration, DECT technique, and noise simulation
The physics modeling for the dual-energy CT system includes available models for Tungsten X-ray source spectra, low and high kV fluence, response of energy-integrating detectors, and compound Poisson noise [19].
For iodine-based contrast imaging with a DECT system, we assume that the scanned object is composed of iodine and water only. Consequently, the linear attenuation coefficient is approximated by [7]
\[\mu(E,y)\approx M_{1}(E)\rho_{1}(y)+M_{2}(E)\rho_{2}(y), \tag{1}\]
where \(M_{1}(E)\) and \(M_{2}(E)\) denote material attenuation at energy \(E\) for iodine and water, respectively, and \(\rho_{1}(y)\) and \(\rho_{2}(y)\) are their mass density at a spatial location \(y\). While the mass densities are unknown and need to be reconstructed, the energy dependent attenuation maps are known _a priori_, which are available at the NIST database [20]. To simplify the notation, we write
\[M(E)=\begin{bmatrix}M_{1}(E)\\ M_{2}(E)\end{bmatrix}.\]
For a given x-ray beam \(l\), the x-ray transform (or sinogram) of mass density \(\rho_{j}\) of the \(j\)-th material is given by \(x_{j}(l)=\int_{l}\rho_{j}(y)dy\). We let
\[x=x_{l}=\begin{bmatrix}x_{1}(l)\\ x_{2}(l)\end{bmatrix}.\]
Let \(S_{1}(E)\) and \(S_{2}(E)\) be the x-ray energy spectra corresponding to low and high energy x-ray tube potentials \(tp_{1}\) and \(tp_{2}\), respectively. The x-ray spectra used in the experiments
were known _a priori_, which were modeled, for given tube potentials, by using the publicly available Python software toolkit SpekPy v2.0[21].
For an x-ray beam \(l\), the number of photons incident on the detector with energy \(E\) per unit time corresponding to the \(i\)-th measurement is given by
\[I_{i}(x_{l};E)=S_{i}(E)e^{-\int_{l}\mu(E,y)dy}\approx S_{i}(E)e^{-M(E)\cdot x_{l }},\quad 1\leq i\leq 2, \tag{2}\]
where \(M(E)\cdot x_{l}=M_{1}(E)x_{1}(l)+M_{2}(E)x_{2}(l)\).
The total number of detected photons associated to line \(l\) is then given by
\[I_{i}(x_{l})=\int_{0}^{\infty}S_{i}(E)e^{-M(E)\cdot x_{l}}D(E)dE,\quad 1\leq i \leq 2, \tag{3}\]
where \(D(E)\) is the energy dependence of the detector sensitivity. For energy integrating detectors, \(D(E)=\alpha E\) for some \(\alpha>0\)[19].
In this study, we consider the negative logarithm of the intensity measurements
\[g_{i}(x)=-\ln I_{i}(x),\quad i=1,2. \tag{4}\]
We define
\[I(x)=\begin{bmatrix}I_{1}(x)\\ I_{2}(x)\end{bmatrix},\qquad g(x)=\begin{bmatrix}g_{1}(x)\\ g_{2}(x)\end{bmatrix}.\]
We assume that the sinograms \(x=x_{l}\in\mathcal{R}=[0,a_{1}]\times[0,a_{2}]\), a rectangle in \(\mathbb{R}^{2}\) with \(a_{j}\) being the maximal attenuation over all possible lines expected for the \(j\)-th material in a given imaging task.
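As an illustration of the measurement model (2)-(4), the following sketch evaluates \(I(x)\) and \(g(x)\) for a single ray on a discrete energy grid; the spectra \(S_{i}(E)\) (e.g., generated with SpekPy), the NIST attenuation values \(M(E)\), and the grid itself are assumed to be available as arrays, and the Riemann-sum quadrature is an illustrative choice.

```python
import numpy as np

def detected_counts(x, S, M, E, D=None):
    """Total detected photons I_i(x) of Eq. (3) for one ray, approximated by a
    Riemann sum on the energy grid E (keV).  S: (2, nE) low/high-kV spectra,
    M: (2, nE) iodine/water mass attenuation, x: (2,) sinogram value."""
    D = E if D is None else D                       # energy-integrating: D(E) ~ E
    attn = np.exp(-(M[0] * x[0] + M[1] * x[1]))     # e^{-M(E) . x}
    return (S * attn * D * np.gradient(E)).sum(axis=1)

def neg_log_measurements(x, S, M, E, D=None):
    """g(x) = -ln I(x), Eq. (4)."""
    return -np.log(detected_counts(x, S, M, E, D))
```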
In the reconstruction of mass density maps from DECT measurements, we consider the two-step method given in the following diagram. For each line \(l\),
\[\begin{bmatrix}g_{1}\\ g_{2}\end{bmatrix}\quad\xrightarrow[\text{method}]{\text{Newton's}}\quad \begin{bmatrix}x_{1}\\ x_{2}\end{bmatrix}\quad\xrightarrow[\text{back-projection}]{\text{Filtered}}\quad \begin{bmatrix}\rho_{1}\\ \rho_{2}\end{bmatrix}. \tag{5}\]
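A minimal sketch of the first step of (5), i.e., inverting the per-ray map \(x\mapsto g(x)\) with Newton's method, is shown below; `g_fun` and `jac_fun` stand for the forward model sketched above and the Jacobian of Eq. (8) given below, and the initial guess and stopping rule are illustrative assumptions. The filtered back-projection step is standard and is not shown.

```python
import numpy as np

def newton_invert(g_meas, g_fun, jac_fun, x0=None, max_iter=50, tol=1e-10):
    """Step 1 of diagram (5) for a single ray: solve g(x) = g_meas for the
    iodine/water sinogram pair x.  jac_fun(x) returns the 2x2 Jacobian of g."""
    x = np.zeros(2) if x0 is None else np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = g_fun(x) - g_meas
        if np.linalg.norm(r) < tol:
            break
        x = x - np.linalg.solve(jac_fun(x), r)      # Newton update
    return x
```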
For \(i=1,2\), the spectral measurements \(I_{i}\) are assumed to be independent random variables that follow a compound Poisson process[19]. The covariance matrix for the log-intensity measurements \(g\) is then given by
\[\Sigma_{g}(x)=\begin{bmatrix}\sigma_{g}(x)_{1}^{2}&0\\ 0&\sigma_{g}(x)_{2}^{2}\end{bmatrix}, \tag{6}\]
where
\[\sigma_{g}(x)_{i}^{2}=\frac{\int_{0}^{\infty}D^{2}(E)S_{i}(E)e^{-M(E) \cdot x}dE}{\left(\int_{0}^{\infty}D(E)S_{i}(E)e^{-M(E)\cdot x}dE\right)^{2}}, \tag{7}\]
is the variance of the \(i\)-th log-intensity measurement \(g_{i}\).
We note that \(\Sigma_{g}\) depends on the detector sensitivity \(D(E)\) but is independent of the factor \(\alpha\) if \(D(E)\) is replaced by \(\alpha D(E)\).
### Jacobian of the sinogram-to-photon-counts transformation and uniqueness of reconstructions
Unlike the case of single energy CT, the reconstructions obtained from DECT measurements may not always be unique, which is mainly due to the non-linearity of the DECT measurement model with respect to sinogram values. For nonlinear maps defined on convex domains, local constraints on the Jacobian of the forward measurements provide sufficient criteria for the uniqueness of reconstructions[22, 23, 24].
In our case, the map \(x\to g(x)\) is smooth and its Jacobian matrix at point \(x\in\mathcal{R}\) is given by the matrix \(J(x)\) with entries
\[J_{ij}(x)=\frac{\partial g_{i}}{\partial x_{j}}(x)=\frac{\int_{ 0}^{\infty}D(E)S_{i}(E)M_{j}(E)e^{-M(E)\cdot x}dE}{\int_{0}^{\infty}D(E)S_{i} (E)e^{-M(E)\cdot x}dE},\qquad i,j=1,2. \tag{8}\]
Based on the work of Gale and Nikaido[22], Bal and Terzioglu[10] proved that if the Jacobian determinant
\[\det J(x)=J_{11}(x)J_{22}(x)-J_{12}(x)J_{21}(x)\neq 0, \tag{9}\]
for all \(x\in\mathcal{R}\), the map \(x\to g(x)\) is globally injective. This means that the reconstruction of \(x\) from the knowledge of \(g(x)\) (or \(I(x)\)) is unique.
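The entries of (8) and the determinant in (9) can be evaluated on the same energy grid as the forward model; the sketch below is illustrative, and scanning \(\det J(x)\) over a grid of \(x\in\mathcal{R}\) gives a direct numerical check of the injectivity criterion.

```python
import numpy as np

def jacobian(x, S, M, E, D=None):
    """2x2 Jacobian of Eq. (8), J_ij = d g_i / d x_j, on a discrete energy grid."""
    D = E if D is None else D                       # energy-integrating detector
    attn = np.exp(-(M[0] * x[0] + M[1] * x[1]))
    w = S * attn * D * np.gradient(E)               # (2, nE) integrand weights
    denom = w.sum(axis=1)                           # integral of D S_i e^{-M.x}
    return np.array([[(w[i] * M[j]).sum() / denom[i] for j in range(2)]
                     for i in range(2)])

# Checking that det(jacobian(x, S, M, E)) stays bounded away from zero over a
# grid of x in the rectangle R = [0, a1] x [0, a2] is a direct numerical test
# of the injectivity condition (9).
```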
In DECT, the values of the Jacobian determinant depend on the material basis and the x-ray spectra [10]. For the iodine-water material pair, we demonstrate the dependence of the Jacobian determinant on the x-ray spectra in Figures 2 and 3 of Section III.B.
We also present in Fig. 4 an example scan protocol where the uniqueness does not hold. That is, the Jacobian determinant vanishes inside the rectangle and there exist two distinct sinogram values that are mapped by \(g\) to the same measurement pair.
### Noise propagation based on linearization of the inverse sinogram-to-photon-counts transformation
We now present an analysis of the noise propagation from DECT measurements to the reconstructed sinograms by considering a first-order Taylor approximation to the inverse sinogram-to-photon-counts transformation.
For a given sinogram value \(x_{l}\in\mathcal{R}\) that corresponds to a line \(l\), we let \(g^{\eta}(x_{l})\) denote the noisy DECT measurement:
\[g^{\eta}(x_{l})=g(x_{l})+\eta, \tag{10}\]
where \(\eta\) is the noise vector. Let \(x_{l}^{\eta}\) be the sinogram value that is reconstructed from \(g^{\eta}(x_{l})\). Assuming that the noise is small and considering a first order Taylor expansion, we have
\[x_{l}^{\eta}\approx x_{l}+J^{-1}(x_{l})\eta. \tag{11}\]
Under the linearization regime, the \(2\times 2\) covariance matrix of the reconstructed sinograms is given by
\[\Sigma_{l}=\Sigma_{x_{l}}=J^{-1}(x_{l})\Sigma_{g^{\eta}}(x_{l})J^{-t}(x_{l}), \tag{12}\]
where \(-t\) denotes the transpose of the inverse matrix (see, e.g., Cowan [25], p. 21). Here, the diagonal entries of (12) are the variances of the iodine and water sinograms, and the off-diagonal entries are the covariance between them (see, for example, Roessl et al. [26] for their explicit formulas).
We define the diagonal matrices \(C^{(ij)}\), \(i,j=1,2\), whose entries collect the iodine and water sinogram variances (\(i=j\)) and their covariance (\(i\neq j\)) over all lines; that is,
\[(C^{(ij)})_{ll}=(\Sigma_{l})_{ij},\quad i,j=1,2. \tag{13}\]
We also define a matrix \(Q=(q_{ij})_{i,j=1,2}\) with diagonal entries being the mean pixel variance of each material density map whereas the off-diagonal entries are the mean covariance between them. Let \(B=(b_{pl})_{1\leq p\leq P,\;1\leq l\leq L}\) denote the filtered back-projection matrix. Then,
\[q_{ij}=\frac{1}{P}\text{tr}(BC^{(ij)}B^{t}),\quad i,j=1,2. \tag{14}\]
It is well known that the acquisition time for each spectral measurement may be shortened or lengthened to minimize reconstruction errors. Let \(T\) be the total time of acquisition and \(0<\tau<T\) be the acquisition time of the low energy measurement (so that \(T-\tau\) is the time of acquisition in the high kV setting). Since the variance of the measurement noise is proportional to photon count, we observe that the covariance matrix of the measurement error is divided by \(\tau\) when \(i=1\) and by \(T-\tau\) when \(i=2\). We thus have a modified covariance matrix for the reconstructed sinograms
\[\Sigma_{l}=J^{-1}(x_{l})F\Sigma_{g^{\eta}}(x_{l})J^{-t}(x_{l}), \tag{15}\]
where
\[F=\begin{bmatrix}\tau&0\\ 0&T-\tau\end{bmatrix}.\]
It is important to note that the matrix \(\Sigma_{l}\) is always symmetric, and is also positive definite provided that \(\det J(x_{l})\neq 0\). We also observe from eqs. (12) and (15) that the sinogram (co-)variances are inversely proportional to the square of the Jacobian determinant. Therefore, optimizing DECT scan parameters to maximize the minimum value of the Jacobian determinant over the sinogram domain leads to more stable sinogram reconstructions due to the reduced amplification of the measurement noise (also see Bal et al. [11]).
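Putting (7) and (12) together, the per-ray sinogram covariance can be assembled as in the following sketch, which reuses the `jacobian` helper above; equal acquisition times are assumed for simplicity, and the time-weighting matrix \(F\) of Eq. (15) can be inserted as indicated in the comment.

```python
import numpy as np

def log_measurement_covariance(x, S, M, E, D=None):
    """Diagonal covariance Sigma_g of Eqs. (6)-(7) for one ray, on an energy grid."""
    D = E if D is None else D
    attn = np.exp(-(M[0] * x[0] + M[1] * x[1]))
    dE = np.gradient(E)
    num = (D**2 * S * attn * dE).sum(axis=1)
    den = (D * S * attn * dE).sum(axis=1)
    return np.diag(num / den**2)

def sinogram_covariance(x, S, M, E, D=None):
    """Sigma_l of Eq. (12): covariance of the reconstructed iodine/water sinogram
    pair under the linearized inverse of g.  To split the acquisition time as in
    Eq. (15), insert F between J_inv and the measurement covariance below."""
    J_inv = np.linalg.inv(jacobian(x, S, M, E, D))    # from the sketch above
    return J_inv @ log_measurement_covariance(x, S, M, E, D) @ J_inv.T
```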
### Computation of pixel variances and the CNR of a monochromatic image
Once the sinograms \(x_{1}\) and \(x_{2}\) (for iodine and water, respectively) are reconstructed from the DECT measurements using a nonlinear iterative algorithm, for which we use Newton's method in this study, they can be combined linearly to obtain a monochromatic sinogram, that is
\[x_{mono}=w_{1}x_{1}+w_{2}x_{2}=w^{t}x, \tag{16}\]
where \(w=[w_{1},w_{2}]^{t}\) is a unit vector in \(\mathbb{R}^{2}\), i.e, \(\|w\|_{2}=1\).
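The nonlinear inversion that produces \(x_{1}\) and \(x_{2}\) is, per line, a two-dimensional root-finding problem \(g(x)=\) measurement, which we solve with Newton's method. A minimal sketch is given below; the forward map `g` and its Jacobian `jac` are generic placeholders for the spectral transmission model, and the tolerance, iteration cap, and toy values are illustrative.

```python
import numpy as np

def newton_sinogram(g, jac, y, x0, tol=1e-10, max_iter=50):
    """Solve g(x) = y for the iodine/water sinogram pair x = (x1, x2)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = g(x) - y                         # residual of the two measurement equations
        if np.linalg.norm(r) < tol:
            break
        x = x - np.linalg.solve(jac(x), r)   # Newton update using the 2x2 Jacobian
    return x

# Toy linear forward map standing in for the spectral transmission model
g = lambda x: np.array([5.00 * x[0] + 0.20 * x[1], 3.00 * x[0] + 0.19 * x[1]])
jac = lambda x: np.array([[5.00, 0.20], [3.00, 0.19]])
x_hat = newton_sinogram(g, jac, y=np.array([0.9, 0.6]), x0=np.zeros(2))
```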
We note that for virtual monochromatic images (VMI), one considers the attenuation weights \(w=\frac{M(E)}{\|M(E)\|_{2}}\) where the VMI energy \(E\) is chosen to maximize a given metric, e.g. the contrast-to-noise ratio (CNR) of a region of interest (ROI) in a given imaging task.
We define the contrast-to-noise ratio of a signal as the difference between the mean CT numbers of the signal and the background divided by the mean standard deviation of the signal:
\[{\rm CNR}(w,tp,\tau)=\frac{mean(CT\#_{signal})-mean(CT\#_{background})}{s(w, tp,\tau)_{signal}}. \tag{17}\]
Here, we consider as the background the part of the image containing only water.
In the following, we derive an analytic expression for the pixel variance \(s^{2}(w,tp,\tau)\).
Let \(L\) denote the total number of lines (bins \(\times\) views). If the monochromatic sinograms (16) corresponding to different lines are independent, then their covariance, which is denoted by \(\Sigma_{mono}\) and is of size \(L\times L\), is a diagonal matrix of monochromatic variances
\[\sigma_{l}^{2}(w,tp,\tau)=w^{t}\Sigma_{l}w. \tag{18}\]
The pixel covariance matrix is then given by
\[\Sigma_{y}=B\Sigma_{mono}B^{t}, \tag{19}\]
where \(B=(b_{pl})_{1\leq p\leq P,\;1\leq l\leq L}\) is the filtered back-projection matrix.
By direct calculation, we obtain that
\[(\Sigma_{y})_{pp}=\sum_{l=1}^{L}\sigma_{l}^{2}(w,tp,\tau)b_{pl}^{2},\quad p=1,\ldots,P. \tag{20}\]
Therefore, the mean pixel variance of the monochromatic signal is given by
\[s^{2}(w,tp,\tau) = \frac{1}{P}\sum_{p=1}^{P}(\Sigma_{y})_{pp}=\frac{1}{P}\sum_{p=1}^{P}\sum_{l=1}^{L}\sigma_{l}^{2}(w,tp,\tau)b_{pl}^{2} \tag{21}\] \[= \frac{1}{P}\sum_{l=1}^{L}\Big{(}\sum_{p=1}^{P}b_{pl}^{2}\Big{)}\sigma_{l}^{2}(w,tp,\tau)=\frac{1}{P}\sum_{l=1}^{L}(b_{l}^{t}b_{l})\sigma_{l}^{2}(w,tp,\tau),\]
where \(b_{l}\) denote the \(l\)-th column of \(B\). Now using eq. (18), we obtain that
\[s^{2}(w,tp,\tau)=\frac{1}{P}\sum_{l=1}^{L}(b_{l}^{t}b_{l})w^{t}\Sigma_{l}w=w^ {t}\left(\frac{1}{P}\sum_{l=1}^{L}b_{l}^{t}b_{l}\Sigma_{l}\right)w=w^{t}Qw, \tag{22}\]
since
\[q_{ij}=\frac{1}{P}\mathrm{tr}(BC^{(ij)}B^{t})=\frac{1}{P}\sum_{l=1}^{L}\sum_{p=1}^{P}b_{pl}^{2}(\Sigma_{l})_{ij}=\frac{1}{P}\sum_{l=1}^{L}b_{l}^{t}b_{l}(\Sigma_{l})_{ij},\quad i,j=1,2. \tag{23}\]
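Numerically, eqs. (22)-(23) amount to a weighted sum of the per-line covariances, with weights given by the squared column norms of the back-projection matrix. The sketch below assumes random placeholder values for `B` and for the per-line covariances `Sigma`; the sizes are toy choices.

```python
import numpy as np

rng = np.random.default_rng(0)
P, L = 64, 128                          # pixels and lines (toy sizes)
B = rng.normal(size=(P, L))             # placeholder filtered back-projection matrix
A = rng.normal(size=(L, 2, 2))
Sigma = A @ np.transpose(A, (0, 2, 1)) + 1e-3 * np.eye(2)  # placeholder Sigma_l, SPD

# Eq. (23): Q = (1/P) * sum_l (b_l^t b_l) * Sigma_l
col_norms_sq = np.sum(B**2, axis=0)                 # b_l^t b_l for each line l
Q = np.tensordot(col_norms_sq, Sigma, axes=(0, 0)) / P

# Eq. (22): mean pixel variance of the monochromatic image for unit weights w
w = np.array([0.8, 0.6])
w = w / np.linalg.norm(w)
s2 = w @ Q @ w
```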
### Minimization of mean pixel variance
In the following, we show that the smallest eigenvalue of the mean pixel covariance matrix \(Q\) equals the minimum mean pixel variance attainable over unit weight vectors, which in turn identifies the optimal VMI energy for maximum image quality.
We first observe that if \(\det J(x_{l})\neq 0\) for all \(l\), then \(\Sigma_{l}\) is positive definite for all \(l\), and thus the matrix \(Q=\frac{1}{P}\sum_{l=1}^{L}b_{l}^{t}b_{l}\Sigma_{l}\) is positive definite. This implies that \(s^{2}(w,tp,\tau)=w^{t}Qw\) is a positive definite quadratic form.
When the matrix \(Q\) is positive definite, by the spectral theorem, it is orthogonally diagonalizable, that is the eigenvectors of \(Q\) form an orthogonal basis for \(\mathbb{R}^{2}\). For \(i=1,2\), we let \(Qu_{i}=\lambda_{i}u_{i}\) where \(\|u_{i}\|_{2}=1\), and \(\lambda_{1}\geq\lambda_{2}>0\), i.e., \(u_{i}\) is the unit eigenvector associated to eigenvalue \(\lambda_{i}\) (see eg. Horn and Johnson [27]). Then, for each tube potential pair \(tp\) and fluence \(\tau\),
\[\min_{\|w\|_{2}=1}s^{2}(w,tp,\tau)=\min_{\|w\|_{2}=1}w^{t}Q(tp,\tau)w=\lambda_ {2}(tp,\tau), \tag{24}\]
where the minimizer is \(u_{2}(tp,\tau)\). One can then minimize \(\lambda_{2}\) over \(tp\) and \(\tau\) to find the optimal tube potential pair and the photon fluence. Hence,
\[(w^{*},tp^{*},\tau^{*})=\operatorname*{arg\,min}_{w,tp,\tau}s^{2}(w,tp,\tau), \tag{25}\]
where
\[(tp^{*},\tau^{*})=\operatorname*{arg\,min}_{tp,\tau}\lambda_{2}(tp,\tau), \tag{26}\]
and \(w^{*}\) is the unit eigenvector of \(Q(tp^{*},\tau^{*})\) corresponding to \(\lambda_{2}(tp^{*},\tau^{*})\).
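In practice, both \(\lambda_{2}(tp,\tau)\) and the optimal weight vector follow from a single symmetric eigen-decomposition of \(Q\), which can then be repeated over a grid of tube potential pairs and fluence splits; the matrix below is a made-up positive definite example.

```python
import numpy as np

Q = np.array([[3.2, 1.1],
              [1.1, 2.4]])              # example mean pixel covariance matrix

eigvals, eigvecs = np.linalg.eigh(Q)    # ascending eigenvalues of a symmetric matrix
lam2 = eigvals[0]                       # smallest eigenvalue: minimum mean pixel variance
w_star = eigvecs[:, 0]                  # associated unit eigenvector: optimal weights
```

Minimizing `lam2` over the candidate scan settings then yields \((tp^{*},\tau^{*})\) as in eq. (26).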
We observe that if the Jacobian determinant values increase, then the diagonal entries of \(\Sigma_{l}\), and hence of \(Q\), decrease. Since the smallest eigenvalue of a \(2\times 2\) symmetric matrix is always less than or equal to its smallest diagonal entry (see, e.g., Horn and Johnson [27]), scan parameters that maximize the Jacobian determinant values reduce the smallest eigenvalue of \(Q\) and hence the mean pixel variance.
In general, when the mean pixel variance is minimized only over the set of attenuation weights, one expects to obtain a larger value than when minimizing over general unit weights. However, for the iodine-water material pair, we numerically observed that there is \(E^{*}\) such
that \(\frac{M(E^{*})}{\|M(E^{*})\|_{2}}=w^{*}\). This is mainly because \(M_{1}(E)>M_{2}(E)\) for all \(E\) in the diagnostic energy range and the components of \(w^{*}\) have the same sign. We thus have
\[\min_{0<E<E_{max}}s^{2}(E,tp,\tau)=\min_{\|w\|_{2}=1}s^{2}(w,tp,\tau)=\lambda_{2 }(tp,\tau). \tag{27}\]
We remark that this result is specific to iodine-water material pair and may not hold for others.
## III. Results
In this section, we present the results of our numerical experiments conducted for the iodine-water material pair. We first explain our experimental setup. We then demonstrate the dependence of uniqueness to the Jacobian determinant values and showcase a non-unique reconstruction. We finally provide our numerical results on the optimization of the VMI energy and iodine CNR in virtual monochromatic images.
### Experimental setup
In the numerical experiments, we used a \(4\times 4\) cm\(^{2}\) phantom consisting of a large water disk with inserted iodine-solution and calcium disk signals (see Fig. 1). The centers and radii of each disk, and the concentrations of the iodine solutions, are given in Table 1.
Figure 1: The plot of the circular water phantom with inserted iodine-solution and calcium disk signals. The red circle indicates the disk with the lowest iodine concentration, which is our region of interest.
\begin{tabular}{|c|c|c|} \hline Material & Center (cm) & Radius (cm) \\ \hline
0.1\% Iodine & (-1,1) & 0.3 \\ \hline
0.2\% Iodine & (1,1) & 0.3 \\ \hline
0.5\% Iodine & (1,-1) & 0.3 \\ \hline
1\% Iodine & (-1,-1) & 0.3 \\ \hline
Calcium & (-2,0) & 0.1 \\ \hline
Calcium & (0,-2) & 0.2 \\ \hline
Calcium & (2,0) & 0.3 \\ \hline
Calcium & (0,2) & 0.4 \\ \hline
Water & (0,0) & 3.6 \\ \hline \end{tabular}
Table 1: Phantom configuration.
For the phantom shown in Fig. 1, the DECT measurements were numerically simulated for \(512\times 512\) lines (bins \(\times\) views) in fan beam geometry. The distance from source to detector was 100 cm. For each line, 100 realizations were obtained by considering a compound Poisson process [19]. The range of x-ray tube voltages was considered to be 30-150 kV. The x-ray beam filter material was fixed at 2mm of Aluminum. The results were not significantly affected by variations in the total number of photons and fluence between low and high kV scans. As a result, these factors were considered equal in the analysis.
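For intuition about the noise model, a single line's noisy log-intensity measurements can be simulated roughly as below; this uses a plain Poisson model and placeholder fluence and transmission values, whereas the study uses a compound Poisson process [19] that also accounts for the polychromatic spectrum.

```python
import numpy as np

rng = np.random.default_rng(1)
N0 = 50_000                    # incident photons per detector pixel (placeholder)
true_transmission = 0.12       # placeholder transmission for one line and one spectrum

counts = rng.poisson(N0 * true_transmission, size=100)  # 100 noise realizations
g_noisy = -np.log(counts / N0)                          # noisy log-intensity values
```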
### Uniqueness of the reconstructed sinograms
As mentioned in section II.B., the uniqueness of reconstructions is ensured if the Jacobian determinant is nonzero everywhere in the rectangle containing all possible sinogram values (or pathlengths). Jacobian determinant values vary according to the chosen tube potentials and the filters of the x-ray energy spectra, where we fix the latter and focus on examining the effect of the former.
According to the eqs. (12) and (15), the sinogram (co-)variances are inversely proportional to the square of the Jacobian determinant. In Figure 2, we present the plot of the quantity
\[\min_{x\in\mathcal{R}}|\det J(x)|, \tag{28}\]
which is the minimum of the absolute value of the Jacobian determinant over the rectangular region \(\mathcal{R}=[0,0.01]\times[0,7.2]\), corresponding to all possible pathlengths for iodine and water obtained from our phantom (depicted in Figure 1). We caution that the Jacobian determinant also depends on the tube settings, although we omit this dependence from the notation for simplicity.
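The quantity in eq. (28) can be approximated by evaluating the determinant on a grid covering \(\mathcal{R}\); the sketch below assumes a generic callable `detJ` standing in for the tube-potential-dependent Jacobian determinant, and the grid resolution is an arbitrary choice.

```python
import numpy as np

def min_abs_detJ(detJ, x1_max=0.01, x2_max=7.2, n=200):
    """Grid approximation of min over R = [0, x1_max] x [0, x2_max] of |det J(x)|."""
    x1 = np.linspace(0.0, x1_max, n)
    x2 = np.linspace(0.0, x2_max, n)
    vals = np.array([[abs(detJ((a, b))) for b in x2] for a in x1])
    return vals.min()

# Toy determinant standing in for a particular tube potential pair
toy_detJ = lambda x: 0.5 + 2.0 * x[0] - 0.04 * x[1]
min_det = min_abs_detJ(toy_detJ)
```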
This plot reveals that lighter regions, which correspond to higher absolute values of the Jacobian determinant, indicate tube potentials that lead to a more stable transmission-to-sinogram transformation. Therefore, the use of tube potentials in the region \(50\leq tp_{1}\leq 80\) and \(120\leq tp_{2}\leq 150\) yields more stable sinogram reconstructions.
In Fig. 3, for a fixed high tube setting of \(tp_{2}=120\) kV, we plot in black the extremal Jacobian determinant values attained in the rectangle \(\mathcal{R}=[0,0.01]\times[0,7.2]\) cm\({}^{2}\) for low tube setting varying from 30 to 90 kV. The red curve represents the Jacobian determinant of the linearized map, which also corresponds to the Jacobian determinant values at zero pathlength.
Figure 2: The minimum of the absolute value of the Jacobian determinant over pathlengths up to 0.01 and 7.2 for iodine and water, respectively, as a function of tube settings varying from 30 to 150 kV.
We observe in Fig. 3 that if the low tube potential is less than 45 kV, the Jacobian determinant vanishes somewhere inside the rectangle, and hence the uniqueness of sinogram reconstructions is not guaranteed. In fact, for the tube potential pair \((tp_{1},tp_{2})=(35,120)\), for which the Jacobian determinant becomes zero near the sinogram values \(x=(x_{1},x_{2})=(4,6)\), we encountered singular solutions to the inverse sinograms-to-DECT measurements map. In Fig. 4, we present a plot of the level curves of the corresponding log-intensity measurements as a function of the iodine and water sinogram values. The red dot is where the Jacobian determinant is zero. The level curve of \(g_{2}=2.4\) (shown in dashed black) intersects the level curve of \(g_{1}=4.92\) (shown in blue) at two points (shown with blue dots). This means that two different sinogram value pairs (shown with blue dots) lead to the same measurement value of \((g_{1},g_{2})=(4.92,2.4)\).
Figure 3: Plot of the Jacobian determinant for the map that transforms iodine/water sinograms to dual-energy transmission at low and high kV. The black curves show the range of determinant values over the expected tissue path lengths and the red curve shows the determinant for the linearized function. For the shown results, the high kV setting is 120, and the low kV setting is on the x-axis. Maximizing determinant values leads to stable transmission-to-sinogram transformation.
### Optimization of the iodine CNR in virtual monochromatic images
In this section, we present the results of the analysis conducted on the Contrast-to-Noise Ratio (CNR) values of the 0.1% Iodine disk in the virtual monochromatic image (VMI) which is enclosed by the red circle in Fig. 1. To compare the analytical and empirical approaches, the CNR values were computed using equation (17) analytically and through image reconstruction for 100 realizations. In the calculations, the 0.1% Iodine disk and the water only parts of the phantom were considered as the signal and the background, respectively.
Fig. 5 displays these CNR values for the low and high tube settings of 60/120 kV, with the blue curve representing the analytically computed values and the red dots indicating the empirically obtained results. Notably, there is a remarkable agreement between the empirical and analytical findings. Furthermore, the maximum CNR for the 0.1% Iodine disk in the
Figure 4: Plot of the level curves of the log-intensity measurements as a function of the water and iodine sinogram values demonstrating the existence of singular solutions to the inverse sinograms-to-DECT measurements map. The red dot is a point where the Jacobian vanishes. The dashed black curve (\(g_{2}=2.4\)) intersects the blue one (\(g_{1}=4.92\)) at two points (shown with blue dots), so both sinogram value pairs lead to the same log-intensity measurement value of \((g_{1},g_{2})=(4.92,2.4)\).
Virtual Monochromatic Images (VMI) was observed at an energy level of 55 keV. Similar results were obtained for the 60/140 kV setting. These findings provide valuable insights into the optimal energy range for maximizing CNR in VMI.
To further explore the influence of tube potentials on the CNR, we plot in Fig. 6 the CNR values for various tube settings at this optimal VMI energy of 55 keV. Comparing the high tube potential settings of 120 kV and 140 kV, it is observed that the former yields slightly higher CNR values. Consequently, the tube potential pair of 60/120 kV emerges as the optimal choice for maximizing the CNR of the 0.1% Iodine disk in VMI within the scope of this study. We remark that, in view of Fig. 2, this pair of tube potentials leads to a stable transmission-to-sinogram transformation. This result highlights the significance of selecting the appropriate kV settings to optimize the visualization and diagnostic quality of the Iodine disk in VMI imaging.
Figure 5: For low/high tube settings of 60/120 kV, the CNR of the 0.1% Iodine disk is shown for a noise level of 50,000 photons incident on each detector pixel. Shown are the analytically computed CNR values with blue curve and the empirically calculated values using noise realizations, with the red dots. The agreement between theory and simulation validates the computational method. A clear peak in the 0.1% Iodine disk CNR is seen in the VMI for an energy of 55 keV.
## IV Discussion and conclusion
In this study, we have introduced a novel approach for optimizing the DECT scan protocol specifically for iodine-based CNR. Our methodology centers around analyzing the propagation of noise from DECT measurements to material density images. By linearizing the inverse sinogram-to-transmission measurements map, we were able to derive analytic expressions for the mean sinogram and pixel variances. These expressions serve as key components of our framework, enabling us to systematically optimize the DECT scan parameters for improved image quality in terms of iodine-based CNR. We also identified the ideal VMI energy that maximizes CNR. We have shown that the VMI synthesized from iodine and water density images with the optimal energy exhibits the least amount of noise. As a result, our findings indicate that employing more general weights to linearly combine these images does not provide any substantial improvement in image quality.
Our DECT simulations were based on dual source DECT scanners. However, our theoretical framework is applicable to dual-layer detectors and photon counting detectors as well. In those cases, one needs to optimize the energy window thresholds instead of the tube potentials. The photon fluence was taken into account in our framework. However, we considered equal times for both energy measurements in the numerical simulations as
Figure 6: Using the VMI energy that maximizes the 0.1% Iodine disk CNR, the low and high kV settings are varied. Shown in the plot are only the high kV settings of 120 and 140; the low kV setting is indicated on the x-axis. Of the values computed, the 60/120 kV setting allows for maximum 0.1% Iodine CNR.
its influence on the results was not significant. We remark that the results of this study are specific to the phantom used as it directly affects the size of the sinogram domain and consequently the range of Jacobian values observed. Our primary objective in this research was to propose a framework for optimizing DECT scan parameters to achieve maximum stability of reconstructions and contrast-to-noise ratio (CNR). It is important to consider the specific imaging task at hand and carefully assess all relevant factors when implementing this framework.
In a future study, we plan to test our theoretical results using real DECT data. We also plan to extend our analysis to the case of three or more materials and energy measurements.
## Acknowledgment
F. Terzioglu's work was supported in part by the NSF DMS grant 2206279. G. Bal's work was supported in part by the NSF DMS grants 2306411 and 1908736. This work is also supported in part by NIH Grant Nos. R01-EB023968 and R21-CA263660. The contents of this article are solely the responsibility of the authors and do not necessarily represent the official views of the National Institutes of Health.
|
2310.15333 | Safe and Interpretable Estimation of Optimal Treatment Regimes | Recent statistical and reinforcement learning methods have significantly
advanced patient care strategies. However, these approaches face substantial
challenges in high-stakes contexts, including missing data, inherent
stochasticity, and the critical requirements for interpretability and patient
safety. Our work operationalizes a safe and interpretable framework to identify
optimal treatment regimes. This approach involves matching patients with
similar medical and pharmacological characteristics, allowing us to construct
an optimal policy via interpolation. We perform a comprehensive simulation
study to demonstrate the framework's ability to identify optimal policies even
in complex settings. Ultimately, we operationalize our approach to study
regimes for treating seizures in critically ill patients. Our findings strongly
support personalized treatment strategies based on a patient's medical history
and pharmacological features. Notably, we identify that reducing medication
doses for patients with mild and brief seizure episodes while adopting
aggressive treatment for patients in intensive care unit experiencing intense
seizures leads to more favorable outcomes. | Harsh Parikh, Quinn Lanners, Zade Akras, Sahar F. Zafar, M. Brandon Westover, Cynthia Rudin, Alexander Volfovsky | 2023-10-23T19:59:10Z | http://arxiv.org/abs/2310.15333v2 | # Estimating Trustworthy and Safe Optimal Treatment Regimes
###### Abstract
Recent statistical and reinforcement learning methods have significantly advanced patient care strategies. However, these approaches face substantial challenges in high-stakes contexts, including missing data, inherent stochasticity, and the critical requirements for interpretability and patient safety. Our work operationalizes a safe and interpretable framework to identify optimal treatment regimes. This approach involves matching patients with similar medical and pharmacological characteristics, allowing us to construct an optimal policy via interpolation. We perform a comprehensive simulation study to demonstrate the framework's ability to identify optimal policies even in complex settings. Ultimately, we operationalize our approach to study regimes for treating seizures in critically ill patients. Our findings strongly support personalized treatment strategies based on a patient's medical history and pharmacological features. Notably, we identify that reducing medication doses for patients with mild and brief seizure episodes while adopting aggressive treatment for patients in intensive care unit experiencing intense seizures leads to more favorable outcomes.
## 1 Introduction
Our study investigates optimal treatment strategies for critically ill patients suffering from seizures or epileptiform activity (EA). These conditions are associated with elevated in-hospital mortality rates and long-term disabilities (Parikh et al., 2023; Ganesan and Hahn,
2019; Kim et al., 2018). EA is commonly observed in patients with various medical conditions such as brain injuries (Lucke-Wold et al., 2015), cancer (Lee et al., 2013), organ failure (Boggs, 2002), and infections like COVID-19. Healthcare professionals in intensive care units (ICUs) frequently use anti-seizure medications (ASMs) to manage EA. However, there are concerns regarding the utilization of highly potent ASMs due to their potential adverse health effects (Farrokh et al., 2018; De Wit et al., 2016). Additionally, the relative risks and benefits of ASMs vary among patients. This variation necessitates personalized treatment strategies to achieve optimal outcomes for each individual patient, as there is no one solution that fits all.1
Footnote 1: Strategies regarding when and how to treat patients based on their recent history are referred to as treatment regimes (denoted by \(\pi_{i}\) for each patient \(i\)).
We analyze data from a large hospital to identify optimal treatment regimes and generate clinically relevant hypotheses for future investigations in critical care. However, our data faces many challenges such as (i) a relatively small dataset of 995 patients, (ii) limited observation windows resulting in unobserved or missing ASM and EA data, and (iii) highly variable brain-drug interactions. No existing optimal treatment regime estimation methods are well-suited to handle these challenges (see Table 1, Section 2, and Appendix A). While our study focuses on treating EA in critically ill patients, the underlying framework is applicable across various medical and healthcare contexts, such as addressing substance use disorder in intravenous drug users (Volkow, 2020), managing coronary heart disease in ICU patients (Guo et al., 2022) or treating chronic psychiatric disorders (Murphy et al., 2007).
**Contributions.** We offer a general and flexible approach that allows for consistent estimation of optimal treatment regimes in the face of these challenges. Our approach is divided into three main steps:
1. **Pharmacological Feature Estimation:** We estimate patient-specific pharmacological features using a mechanistic model that captures EA-ASM interaction and is motivated by the underlying biochemistry.
2. **Distance Metric Learning:** We employ distance metric learning to identify clinical and pharmacological features affecting the outcome and use it to perform nearest-neighbors estimation to account for confounding factors.
3. **Optimal Regime Estimation:** We estimate the optimal treatment regime for each patient using their matched group. The matched group is comprised of nearby points according to the learned distance metric. The optimal regime is estimated using linear interpolation over the regimes of the nearby patients with favorable outcomes.
Estimation via our approach results in personalized optimal treatment regimes that are:
* _Interpretable_, allowing caregivers to easily understand, validate, and implement the regimes;
* _Safe_, ensuring that patients are neither over-prescribed nor under-prescribed ASMs; and
* _Accurate_, outperforming or performing on par with state-of-the-art black-box methods.
The simplicity and transparency of our approach coupled with its flexibility and interpretability makes it suited for high-stakes scenarios, such as the design of treatment strategies for patients experiencing epileptiform activity (EA) in the ICU. We discuss the identification of optimal treatment regimes in Section 4 and delineate our methodology to estimate them in Section 5. We validate and compare our approach with existing methods via simulation studies in Section 6 and Appendix F.
**Clinical Findings.** We show in Section 7 that our estimated treatment regimes _would have improved the outcomes for patients in the ICU_. The results indicate that a one-size-fits-all approach to escalating ASM usage in response to EA may not be universally beneficial. Instead, it is crucial to tailor treatment plans for each individual. For instance, patients exhibiting cognitive impairment or dementia are at a heightened risk of experiencing adverse effects from ASMs. A more cautious and lower-intensity approach to treatment may be warranted in such cases. This analysis not only characterizes beneficial approaches for treating EA in critically ill patients but also generates relevant hypotheses for future inquiry.
## 2 Related Literature
Our literature survey encompasses various techniques for estimating optimal treatment regimes. We classify these techniques into five categories: (i) Finite Timestep Backward Induction (Murphy, 2003; Robins, 2004; Murphy, 2005; Moodie and Richardson, 2010; Chakraborty et al., 2010; Zhang et al., 2012; Zhao et al., 2015; Murray et al., 2018; Blumlein et al., 2022; Qian and Murphy, 2011; Moodie et al., 2014; Zhang et al., 2018), (ii) Infinite Time Horizon (Ernst et al., 2005; Ertefaie and Strawderman, 2018; Clifton and Laber, 2020), (iii) Censored Data (Goldberg and Kosorok, 2012; Lyu et al., 2023; Zhao et al., 2020), (iv) Deep Reinforcement Learning (Mnih et al., 2013; Lillicrap et al., 2015; Haarnoja et al., 2018; Fujimoto et al., 2018; Kumar et al., 2020; Fujimoto et al., 2018; Wang et al., 2020), and (v) Causal Nearest Neighbors (Zhou and Kosorok, 2017).
Each category of techniques has its strengths and limitations. Finite timestep backward induction methods offer interpretability and ease of implementation. However, they struggle with missing states, samples with variable timesteps, and large action spaces. Infinite time horizon and censored data methods can handle more nuanced temporal data but require a predefined reward function. Deep reinforcement learning (RL) can handle more complex regimes but lacks interpretability and requires a large sample size. There is a need for a method that can handle continuous state and action spaces, variable and missing timesteps, does not require the specification of an arbitrary reward function, and can work with a small sample size while maintaining accuracy and interpretability. We provide a summary of each category of techniques with regard to these attributes in Table 1 and include an in-depth literature survey in Appendix A.
## 3 Preliminaries
We now introduce our setup and notation. While our study focuses on treating EA in critically ill patients, the underlying framework is applicable across various medical and healthcare contexts, as discussed earlier.
For each patient \(i\) in a cohort of \(n\) patients, we observe (i) pre-treatment covariates \(\mathbf{X}_{i}\), (ii) time-series of states \(\{E_{i,t}\}_{t=1}^{T_{i}}\) (in this case the EA burden), where \(T_{i}\) is the duration for which the patient is under observation, (iii) sequence of actions \(\{\mathbf{Z}_{i,t}\}_{t=1}^{T_{i}}\) (a vector of ASM
\begin{table}
\begin{tabular}{|l|c|c|c|c|c|} \hline
**Methods** & **CA** & **VT** & **MT** & **LO** & **DE** & **IN** \\ \hline
**Our Method** & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) \\ \hline
**Finite BI** & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & \(\Delta\) \\ \hline
**Infinite HZ** & \(\checkmark\) & \(\Delta\) & \(\checkmark\) & \(\checkmark\) & \(\Delta\) \\
**Censored DTR** & \(\checkmark\) & \(\checkmark\) & \(\Delta\) & \(\checkmark\) & \(\Delta\) \\ \hline
**Deep RL** & \(\checkmark\) & \(\checkmark\) & \(\Delta\) & \(\checkmark\) & \(\checkmark\) \\ \hline
**Causal NN** & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) \\ \hline \end{tabular}
\end{table}
Table 1: Characteristics of optimal regime estimation approaches. _Finite BI_: finite timestep backward induction methods, _Infinite HZ_: infinite horizon methods, _Censored_: censored data methods, _Deep RL_: deep reinforcement learning methods, _Causal NN_: causal nearest neighbors. Columns represent _CA_: continuous action space, _VT_: variable timesteps, _MT_ missing timesteps, _LO_: targets long-term outcomes without requiring a designed reward function, _DE_: data efficiency, and _IN_: interpretability. Green cells denote desired properties and red cells indicate undesired properties in the context of our problem. \(\Delta\) indicates the attribute depends on underlying modeling choices.
drug doses given to the patient), and (iv) discharge outcome \(Y_{i}\). Here, \(Y_{i}\) is a binary indicator for patient well-being with 1 indicating an adverse outcome based on the modified Rankin Score (mRS). The mRS was retrospectively abstracted from hospital records, specifically physician and physical therapy notes, at the time of patient discharge. The mRS assessments underwent rigorous independent review by evaluators, who were intentionally blinded to the patients' EEG measurements and antiseizure medication status to avoid bias.
The sequence of actions, \(\{\mathbf{Z}_{i,t}\}\), are determined based on the administered policy \(\pi_{i}\) such that \(\mathbf{Z}_{i,t}=\pi_{i}(\{E_{i,t^{\prime}}\}_{t^{\prime}=1}^{t},\{\mathbf{Z}_ {i,t^{\prime}}\}_{t^{\prime}=1}^{t-1})+\mathcal{E}_{i,t}\) where \(\mathcal{E}_{i,t}\) is the unobserved time-and-patient specific factor affecting the action at time \(t\). \(Y_{i}(\{\mathbf{z}_{i,t}\})\) denotes the potential outcome, under the action sequence \(\{\mathbf{z}_{i,t}\}\). However, since \(\mathbf{z}_{i,t}\)'s are determined by the policy \(\pi_{a}\), we redefine the potential outcomes as a function of the policy itself, denoted as \(Y_{i}(\pi_{a})\). We assume that the observed outcome \(Y_{i}\) is equal to the potential outcome under the administered treatment regime, denoted \(Y_{i}(\pi_{i})\). Note that while we observe \(\mathbf{Z}_{i,t}\)'s, we do not observe the underlying treatment regime \(\pi_{i}\).
Our goal is to identify an optimal regime \(\pi^{*}\) for each patient \(i\) that minimizes their potential outcome:
\[\pi_{i}^{*}\in\arg\min_{\pi_{a}}\mathbb{E}[Y_{i}(\pi_{a})|\mathbf{X}_{i}].\]
What makes this challenging is that we only observe the potential outcome corresponding to the treatment regime administered by the doctors. Thus, \(Y_{i}=Y_{i}(\pi_{i})\) for each patient while all the other potential outcomes are missing (or unknown). Importantly, the outcome is observed at a timepoint \(\tau_{i}\) which may be substantially further down the road than the length of observation for each patient, denoted by \(T_{i}\).
To address this missingness, we note that the state-action interaction and state transition are determined by underlying pharmacology that can be decoupled into two parts: (i) pharmacokinetics and (ii) pharmacodynamics. Pharmacokinetics describes the changes in drug concentration at time \(t\) as a function of the drug concentration at the previous time points along with the current drug dose at time \(t\). Pharmacodynamics describes the changes to the EA burden at time \(t\) as a function of the current drug concentration and the past EA burden. The pharmacokinetic-pharmacodynamic (PK/PD) system is formalized as a pair of partial differential equations (described in detail in Appendix C). Since this structural system fully governs the drug-EA interaction, conditioning on it allows us to avoid complex outcome simulators while also providing context for the observed heterogeneity in outcomes.
## 4 Identification
We now discuss the underlying assumptions that allow identification of \(\pi_{i}^{*}\in\arg\min_{\pi_{a}}\mathbb{E}[Y_{i}(\pi_{a})\mid\mathbf{X}_{i}]\) for each patient \(i\). We start by assuming conditional ignorability (Rubin, 1974; Robins, 2000), \(Y_{i}(\pi_{a})\perp\pi_{i}\mid\mathbf{X}_{i}\), an assumption standard in observational causal studies. This assumption is reasonable in our setting as we know that caregivers decide the drug regimes primarily based on the pre-treatment features \(\mathbf{X}\). By the law of iterated expectations, we know that
\[\mathbb{E}[Y_{i}(\pi_{a})\mid\mathbf{X}_{i}]=\sum_{\pi}\mathbb{E}[Y_{i}(\pi_{a })\mid\mathbf{X}_{i},\pi_{i}=\pi]P(\pi_{i}=\pi\mid\mathbf{X}_{i}).\]
And by conditional ignorability,
\[\mathbb{E}[Y_{i}(\pi_{a})\mid\mathbf{X}_{i},\pi_{i}=\pi]=\mathbb{E}[Y_{i}(\pi _{a})\mid\mathbf{X}_{i},\pi_{i}=\pi_{a}]\]
for all \(\pi\). Thus, if we have positivity, i.e. \(P(\pi_{i}=\pi_{a}\mid\mathbf{X}_{i})>0\), then \(\mathbb{E}[Y_{i}(\pi_{a})\mid\mathbf{X}_{i}]\) is identifiable as \(\mathbb{E}[Y_{i}\mid\mathbf{X}_{i},\pi_{i}=\pi_{a}]\).
There are many scenarios similar to our setting where the dimensionality of \(\pi\) is high and experts' treatment choices are based on patients' characteristics. In these scenarios, it is _highly unlikely_ that \(P(\pi_{i}=\pi\mid\mathbf{X}_{i})>0\) for all \(\pi\) and \(\mathbf{X}_{i}\). However, recall that we are particularly interested in identifying the optimal treatment regime \(\pi_{i}^{*}\) for each patient \(i\) and not identifying \(\mathbb{E}[Y_{i}(\pi_{a})\mid\mathbf{X}_{i}]\) for any arbitrary policy \(\pi_{a}\). Thus, for our context, it is reasonable to assume that even if the clinicians' policies are suboptimal, they are sampled from the neighborhood of the optimal policy such that \(P(\pi_{i}=\pi_{i}^{*}\mid\mathbf{X}_{i})=\mathbb{E}_{\pi\mid\mathbf{X}_{i}}[P( \pi_{i}=\pi\mid\mathbf{X}_{i})]\). We refer to this assumption as _local_ positivity. This assumption is weaker than the standard positivity assumption in causal inference. The major implication of this assumption is that \(P(\pi_{i}=\pi_{i}^{*}\mid\mathbf{X}_{i})>0\), allowing us to identify \(\mathbb{E}[Y_{i}(\pi_{i}^{*})\mid\mathbf{X}_{i}]\) and subsequently \(\pi_{i}^{*}=\arg\min_{\pi\text{ s.t. }P(\pi\mid\mathbf{X}_{i})>0}\mathbb{E}[Y_{i} \mid\mathbf{X}_{i},\pi_{i}=\pi]\).
Under the assumption of local positivity, if \(\pi_{i}\) were observed for each patient \(i\), \(\pi_{i}^{*}\) is always identifiable. However, as noted in Section 3, we only observe \(\{E_{i,t}\}_{t=1}^{T_{i}}\) and \(\{\mathbf{Z}_{i,t}\}_{t=1}^{T_{i}}\) while the underlying \(\pi_{i}\) is unobserved. Recall that \(\mathbf{Z}_{i,t}=\pi_{i}(\{E_{i,t^{\prime}}\}_{t^{\prime}=1}^{t},\{\mathbf{Z}_ {i,t^{\prime}}\}_{t^{\prime}=1}^{t-1})+\mathcal{E}_{i,t}\), where \(\mathcal{E}_{i,t}\) is an unobserved patient-and-time specific factor. We make a Markovian assumption, \(\pi_{i}(\{E_{i,t^{\prime}}\}_{t^{\prime}=1}^{t},\{\mathbf{Z}_{i,t^{\prime}}\}_ {t^{\prime}=1}^{t-1})=\pi_{i}(\{E_{i,t^{\prime}}\}_{t^{\prime}=t-12h}^{t},\{ \mathbf{Z}_{i,t^{\prime}}\}_{t^{\prime}=t-12h}^{t-1})\) and a sequential ignorability assumption such that \(\mathcal{E}_{i,t}\perp(\{E_{i,t^{\prime}}\}_{t^{\prime}=1}^{t},\{\mathcal{E}_ {i,t^{\prime}}\}_{t^{\prime}=1}^{t-1})\mid\mathbf{X}_{i}.\) Under these assumptions, \(\pi_{i}\) is non-parametrically identifiable for each patient \(i\)(Matzkin, 2007). This, in turn, implies that the optimal treatment regime \(\pi_{i}^{*}\) is identifiable.
_Remark 1_.: Recall that the outcome \(Y_{i}\) is a function of a high-dimensional vector of EA burdens \(\{E_{i,t}\}_{t=1}^{\tau_{i}}\) and drug doses \(\{\mathbf{Z}_{i,t}\}_{t=1}^{\tau_{i}}\), some of which are unobserved. Defining the treatment as a regime \(\pi_{i}\) is akin to exposure mapping such that even though \((\{E_{i,t}\}_{t=1}^{\tau_{i}},\{\mathbf{Z}_{i,t}\}_{t=1}^{\tau_{i}})\neq(\{E_{ j,t}\}_{t=1}^{\tau_{j}},\{\mathbf{Z}_{j,t}\}_{t=1}^{\tau_{j}})\) we have \(\mathbb{E}[Y_{i}(\pi_{i})|\mathbf{X}_{i}=\mathbf{x}]=\mathbb{E}[Y_{j}(\pi_{j}) |\mathbf{X}_{j}=\mathbf{x}]\) if \(\pi_{i}=\pi_{j}\). This helps us address the problem with missing \(E_{i,t}\)'s and \(\mathbf{Z}_{i,t}\)'s and ensures that the local positivity assumption is more reasonable.
## 5 Methodology
We now outline our three-stage methodology for estimating the optimal treatment regime. The first stage involves estimating an individualized mechanistic model from observed state-action data to approximate state transition dynamics. Mechanistic modeling offers interpretability and needs much less data for fine-tuning. We also estimate the administered regimes (\(\pi_{i}\)'s) if they are unobserved (as in our setup). In the second stage, we create a distance metric to match patients based on pre-treatment covariates and estimated mechanistic model parameters. Subsequently, we use the estimated distance metric to tightly match patients. Finally, in the third stage, we leverage these matched groups to estimate the optimal treatment regimes.
Our interpretable matching approach allows validation through case-based reasoning, which enhances confidence in the estimation procedure and underlying assumptions. We provide details for each stage in the following subsections, with a focus on our real-world application. However, the framework is adaptable to other applications with similar data structures.
**Mechanistic State Transition Modeling.** We approximate PK using a one-compartment model (Shargel et al., 1999), with half-life as the parameter, and Hill's PD model (Hill, 1910; Weiss, 1997; Nelson et al., 2008), with receptor-ligand affinity and drug dose for 50% efficacy as parameters, to model the short-term effectiveness of the ASMs in reducing EA burden. We delineate the models formally in Appendix C. For each patient \(i\) in the cohort, we estimate these individualized PK/PD parameters by minimizing the mean squared error between the predicted EA time series under the observed ASM regime using the mechanistic model and the actual observed EA time series. This step is akin to estimating a multi-dimensional propensity score.
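A minimal sketch of this fitting step is given below, assuming a simple discrete-time rollout of the one-compartment PK model and the Hill PD response; the update rule, parameter names, initial guesses, and bounds are illustrative stand-ins for the exact model in Appendix C, and `doses` and `ea_obs` are assumed to be aligned hourly series of equal length.

```python
import numpy as np
from scipy.optimize import minimize

def simulate_ea(params, doses, e0, dt=1.0):
    """Toy one-compartment PK + Hill PD rollout of the EA burden for one patient."""
    half_life, ed50, hill_n = params
    k_elim = np.log(2) / half_life
    conc, ea, ea_traj = 0.0, e0, []
    for dose in doses:
        conc = conc * np.exp(-k_elim * dt) + dose              # PK: decay plus new dose
        effect = conc**hill_n / (ed50**hill_n + conc**hill_n)  # PD: Hill suppression in [0, 1)
        ea = ea * (1.0 - effect)                               # drug reduces the EA burden
        ea_traj.append(ea)
    return np.array(ea_traj)

def fit_pkpd(doses, ea_obs):
    """Least-squares fit of individualized PK/PD parameters (hypothetical bounds)."""
    loss = lambda p: np.mean((simulate_ea(p, doses, ea_obs[0]) - ea_obs) ** 2)
    res = minimize(loss, x0=[2.0, 5.0, 1.0],
                   bounds=[(0.1, 24.0), (0.1, 50.0), (0.5, 4.0)])
    return res.x  # estimated (half-life, ED50, Hill coefficient)
```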
_Remark 2_.: We approximate state-transition dynamics via deterministic mechanistic models,
but we do not use them for counterfactual simulations. Mechanistic modeling isolates clinically relevant pharmacological features from stochastic dynamics. While state-transition dynamics adjustment is not necessary for consistent estimation, accounting for PK/PD parameters aids in estimating heterogeneous effects, akin to using propensity scores with Bayesian regression trees (Hahn et al., 2020).
**Characterizing Administered Policies.** In our study, we focus on treatment regimes for two commonly used anti-seizure medications (ASMs): propofol and levetiracetam. For our application, we employ the policy template that is defined by the drug administration protocols used in hospitals, to ensure interpretability, although our framework can accommodate non-parametric policy functions such as trees or forests. Propofol, a sedating ASM, is administered as a continuous infusion based on the past 1hr, 6hrs, and 12hrs of seizure levels using policy \(\pi^{prop}\). In contrast, non-sedative ASM levetiracetam is given as a bolus every 12 hours, with dosages varying according to recent EA burden and drug history through policy \(\pi^{lev}\). The regime for patient \(i\) is denoted by
\[\pi_{i}=\begin{Bmatrix}\pi^{prop}_{i}\left(\{E_{i,t^{\prime}}\}_{t^{\prime}=1} ^{t},\{\mathbf{Z}_{i,t^{\prime}}\}_{t^{\prime}=1}^{t-1};\mathbf{a}^{p}_{i} \right)\\ \pi^{lev}_{i}\left(\{E_{i,t^{\prime}}\}_{t^{\prime}=1}^{t},\{\mathbf{Z}_{i,t^ {\prime}}\}_{t^{\prime}=1}^{t-1};\mathbf{a}^{l}_{i}\right)\}\end{Bmatrix}.\]
We provide the functional forms of the policies in Appendix H. We use the observed EA burdens (\(\{E_{i,t}\}\)) and ASM doses (\(\{\mathbf{Z}_{i,t}\}\)) to deduce the administered policy \(\pi_{i}\) for each patient \(i\) by minimizing the mean squared error loss between the predicted and observed drug doses at each time \(t\). We discuss the goodness of fit of the estimation procedure in Appendix H.3.
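Operationally, recovering each patient's administered policy reduces to a least-squares regression of the observed doses on recent-history features. The sketch below uses a generic linear score on mean EA burden over the last 1, 6, and 12 hours as a stand-in for the exact templates of Appendix H; the lag choices and intercept are illustrative.

```python
import numpy as np

def fit_policy_params(ea, doses, lags=(1, 6, 12)):
    """Estimate linear policy parameters a_i from one patient's EA burden and doses."""
    ea = np.asarray(ea, dtype=float)
    doses = np.asarray(doses, dtype=float)
    rows = []
    for t in range(max(lags), len(doses)):
        rows.append([1.0] + [ea[t - lag:t].mean() for lag in lags])  # recent-burden features
    X = np.array(rows)
    y = doses[max(lags):]
    a_hat, *_ = np.linalg.lstsq(X, y, rcond=None)  # minimizes dose prediction MSE
    return a_hat
```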
**Distance Metric Learning and Matching.** To adjust for confounding, we need to account for pre-treatment covariates and pharmacological features. We do this by grouping patients who are similar in these features but are treated differently. This procedure is called matching, a commonly used approach to nonparametrically estimate potential outcomes (Ho et al., 2007; Stuart, 2010; Parikh et al., 2022). For the sake of simplicity, let \(\mathbf{V}_{i}\) denote a vector of pre-treatment and pharmacological features for each patient \(i\). Then, the estimate for \(\mathbb{E}[Y(\pi_{a})|\mathbf{V}=\mathbf{v}]\) is given by \(\widehat{Y}_{\mathbf{v}}(\pi_{a})=m(MG_{d}(\mathcal{D},r,\mathbf{v}),\pi_{a})\) where \(MG_{d}(\mathcal{D},r,\mathbf{v})\) is the matched group of units from dataset \(\mathcal{D}\) that are within distance \(r\) of \(\mathbf{v}\) under distance metric \(d\), and \(m\) is a regression on the units in the matched group evaluated at \(\pi_{a}\).
In high-dimensional scenarios with limited data, it is not possible to precisely match all
covariates. Thus, we want to match tightly on important covariates that affect patients' prognoses. Recent matching approaches have explored distance metric learning before matching for more accurate and interpretable causal effect estimation (Parikh et al., 2022; Diamond and Sekhon, 2013; Lanners et al., 2023, see Appendix B for further details). We extend the Variable Importance Matching (VIM) framework (Lanners et al., 2023) to our problem setting. Our distance metric \(d\) is parameterized by a positive semi-definite matrix \(\mathcal{M}\) such that \(d_{\mathcal{M}}(\mathbf{v}_{i},\mathbf{v}_{k})=(\mathbf{v}_{i}-\mathbf{v}_{ k})^{T}\mathcal{M}(\mathbf{v}_{i}-\mathbf{v}_{k})\). We constrain \(\mathcal{M}\) to a diagonal matrix, enabling domain experts to interpret these entries as feature importance values. Consequently, we set \(\mathcal{M}_{j,j}\) equal to the gini impurity importance of the \(j\)-th feature in the model for \(\mathbb{E}[Y|\mathbf{V}]\) (as defined in Nembrini et al. (2018) and Ishwaran (2015)). To ensure the "honesty" of our approach, we split the dataset \(\mathcal{D}\) into two parts: the training set \(\mathcal{D}_{tr}\) and the estimation set \(\mathcal{D}_{est}\)(Ratkovic, 2019). We fit gradient-boosting trees with 100 estimators on \(\mathcal{D}_{tr}\), each with a maximum depth of 2. Henceforth, we denote the learned distance metric as \(d^{\dagger}\).
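A compact sketch of the metric-learning and matching step is shown below. It fits gradient-boosting trees on the training split, uses their impurity-based importances as the diagonal of \(\mathcal{M}\), and forms matched groups on the estimation split; for simplicity it uses a fixed number of nearest neighbors (an illustrative choice) in place of the radius-\(r\) groups, and `V_tr`, `Y_tr`, `V_est` denote the feature and outcome arrays of the two splits.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.neighbors import NearestNeighbors

def learn_metric_and_match(V_tr, Y_tr, V_est, n_neighbors=15):
    """Diagonal distance metric from impurity importances, then nearest-neighbor matching."""
    model = GradientBoostingClassifier(n_estimators=100, max_depth=2)
    model.fit(V_tr, Y_tr)                    # outcome model fit on the training split only
    M_diag = model.feature_importances_      # diagonal entries of the learned metric M

    # d_M(v, v') = (v - v')^T M (v - v')  <=>  Euclidean distance on sqrt(M)-scaled features
    V_scaled = V_est * np.sqrt(M_diag)
    nn = NearestNeighbors(n_neighbors=n_neighbors).fit(V_scaled)
    _, mg_idx = nn.kneighbors(V_scaled)      # matched-group indices (each unit includes itself)
    return M_diag, mg_idx
```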
**Estimating Optimal Regimes.** For each matched group centered around patient \(i\in\mathcal{D}_{est}\), we consider the administered regimes \(\pi_{k}\) and outcomes \(Y_{k}\) for all \(k\in MG_{d^{\dagger}}(\mathcal{D}_{est},r,\mathbf{V}_{i})\), where \(d^{\dagger}\) is the learned distance metric. For the sake of simplicity, we will denote \(MG_{d^{\dagger}}(\mathcal{D}_{est},r,\mathbf{V}_{i})\) as \(MG_{i}\). We estimate the conditional expected outcome \(\nu_{i}(\pi):=\mathbb{E}[Y_{i}\mid\pi,\mathbf{V}_{i}]\) using only the units in \(MG_{i}\). The estimate is denoted as \(\widehat{\nu}_{i}(\pi)\). Further, consider a _new_ operator \(\bigoplus\) such that if \(\pi_{1}\in\text{Dom}(\pi)\) (a function that maps states to a vector of ASM doses) and \(\pi_{2}\in\text{Dom}(\pi)\) (another function that maps states to a vector of ASM doses) then \(\pi_{3}=\pi_{1}\bigoplus\pi_{2}\in\text{Dom}(\pi)\). This operation is defined so that if \(\pi_{3}=\pi_{1}\bigoplus\pi_{2}\) then \(\pi_{3}(s):=\pi_{1}(s)+\pi_{2}(s)\) for all \(s\) in the domain of states. Then, our estimate of the optimal treatment regime for unit \(i\) is \(\widehat{\pi}_{i}^{*}\in\arg\min_{\pi_{c,i}}\widehat{\nu}_{i}(\pi_{c,i})\) where \(\pi_{c,i}=\underset{k\in MG_{i}}{\bigoplus}c_{k}\pi_{k}\), \(\sum_{k\in MG_{i}}c_{k}=1\) and \(0\leq c_{k}\leq 1\).
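The optimization over \(\pi_{c,i}\) is a small constrained problem over the simplex of mixing weights. Because the regimes are linear score functions (Remark 3), mixing regimes corresponds to mixing their parameter vectors, as in the sketch below; the local outcome model `nu_hat` stands in for the matched-group regression \(\widehat{\nu}_{i}\), and the toy example values are purely illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def optimal_regime(policy_params, nu_hat):
    """Convex combination of matched-group regimes minimizing the predicted outcome.

    policy_params: (k, p) array of the k matched patients' policy parameters a_k.
    nu_hat: callable estimating E[Y | policy parameters] within the matched group.
    """
    k = policy_params.shape[0]
    objective = lambda c: nu_hat(c @ policy_params)        # parameters of sum_k c_k * pi_k
    constraints = [{"type": "eq", "fun": lambda c: np.sum(c) - 1.0}]
    res = minimize(objective, x0=np.full(k, 1.0 / k), method="SLSQP",
                   bounds=[(0.0, 1.0)] * k, constraints=constraints)
    return res.x @ policy_params                            # parameters of the estimated pi*

# Toy risk model penalizing both aggressive dosing and undertreatment
toy_nu = lambda a: (a[0] - 2.0) ** 2 + 0.1 * np.sum(a ** 2)
a_star = optimal_regime(np.array([[1.0, 0.5], [3.0, 0.2], [2.0, 0.9]]), toy_nu)
```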
**Consistency.** We now discuss a smoothness of outcomes assumption under which our estimated optimal regime is consistent. Let us first define an \((\mathcal{S},p)\)-norm on the space of policies such that \(\|\pi_{1}-\pi_{2}\|_{\mathcal{S},p}=\left(\int_{s\in\mathcal{S}}|\pi_{1}(s)-\pi_{2}(s)|^{p}\,ds\right)^{1/p}\) where \(\mathcal{S}\) is the state space for the policies and \(p\) is some positive integer. The smoothness of outcomes assumption is given as follows: for any constants \(\lambda_{\pi}\geq 0\) and \(\lambda_{\mathbf{V}}\geq 0\) and any two units 1 and 2, if \(\|\pi_{1}-\pi_{2}\|_{\mathcal{S},\infty}\leq\lambda_{\pi}\) and \(\|\mathbf{V}_{1}-\mathbf{V}_{2}\|_{2}\leq\lambda_{\mathbf{V}}\) then \(\|\mathbb{E}[Y(\pi_{1})\mid\mathbf{V}_{1}]-\mathbb{E}[Y(\pi_{2})\mid\mathbf{V}_{2}]\|\leq\delta(\lambda_{\pi},\lambda_{\mathbf{V}})\), where \(\delta\) is a monotonically increasing function of both arguments with \(\delta(0,0)=0\).
This assumption essentially implies that if \(\mathbf{V}_{1}\) and \(\mathbf{V}_{2}\) are close and if \(\pi_{1}\) and \(\pi_{2}\) are also close then the expected potential outcomes are also close.
**Theorem 1**.: _Given the conditional ignorability, local positivity, and smoothness of outcomes assumptions, \(\widehat{\pi}_{i}^{*}\) is a consistent estimate of \(\pi_{i}^{*}\), such that_
\[\lim_{n\rightarrow\infty}\mathbb{E}[Y(\widehat{\pi}_{i}^{*})\mid\mathbf{V}_{i}] \rightarrow\mathbb{E}[Y(\pi_{i}^{*})\mid\mathbf{V}_{i}].\]
We provide the proof of this theorem in Appendix I.
_Remark 3_.: As our regimes \(\pi\) are linear score functions with parameter vector \(\mathbf{a}\) (see Appendix H), \(\pi_{k_{3}}=\pi_{k_{1}}\bigoplus\pi_{k_{2}}\) corresponds to defining new policy \(\pi_{k_{3}}\) with parameters \(\mathbf{a}_{k_{3}}=\mathbf{a}_{k_{1}}+\mathbf{a}_{k_{2}}\). This property comes in handy when comparing the administered policy's parameters with the estimated optimal policy.
## 6 Synthetic Data Experiments
**Comparison Baselines.** We compare our approach to 49 approaches based on 10 different state-of-the-art finite timestep backward induction, infinite horizon, and deep reinforcement
Figure 1: Percent of patients with poor outcomes under each method’s proposed policy (_lower is better_). Boxplots show the distribution of the average outcomes over 20 iterations. _Observed_ shows average observed outcomes. _Inaction_ and _Max Dosing_ administer no drugs and the max amount of drugs to each patient at each timestep, respectively. _RF Q-learning_ is a finite timestep backward induction method using random forests. _Infinite (Inf) Horizon_ methods use fitted Q-iteration (see Clifton and Laber, 2020) with either linear models or random forests. _Q-learning_ and _Inf Horizon_ discretize the treatment into five bins. _BCQ, CQL, CRR, GGPQ, SAC_, and _TD3_ are Deep RL methods. _Inf Horizon_ and Deep RL methods use an insightful reward function, see Appendix E.
learning frameworks. The vast majority of methods cannot be run on our data setup out of the box and often require major modifications. The various approaches we compare use different underlying models, ways to discretize continuous outcomes, and predefined reward functions. We outline the methods we compare to and the implementation details in Appendix E.
**Data Generation Procedure.** Our data-generative procedure is designed to emulate the real-world scenario where critically ill patients undergo drug treatment that affects their state. We design the data generation process to be customizable in five important aspects to discern how various methods perform with the challenges present in our real-world data: (i) number of covariates; (ii) number of total timesteps \(\tau_{i}\) for each patient; (iii) number of unobserved timesteps, \(\tau_{i}-T_{i}\), for each patient; (iv) cardinality of the action space; and (v) observed policies. We construct a total of 32 different experimental setups by varying these aspects. We provide the full details of our data generation process and experimental setups in Appendix D.
**Results.** For a real-world data simulation, we use 1000 simulated "patients" with (i) 100 pre-treatment covariates, (ii) varying lengths of stay (10-15 timesteps), and (iii) unobserved timesteps (2-5 steps), where (iv) drug doses at each timestep are between 0 and 100 and (v) determined using an educated policy akin to one doctors use in the ICU. We display the percent of patients with poor outcomes under the proposed policies of our method, representative approaches from each of our comparison baseline categories, and predetermined approaches like inaction, random assignment, and max-dose in the left plot of Figure 1. The right plot of Figure 1 shows results for the same setting except with (iii) no missing timesteps. In each of these complex setups, our matching-based method consistently yields optimal treatment policies, surpassing all comparison methods. Notably, among the 16 setups with 10-15 total timesteps, our method is the top performer in the majority (9 of 16) and ranks within the top 4 methods across all 16 setups, with a maximum performance difference of 7 percentage points compared to the best method. Even when methods with access to the oracle reward function are considered, our method remains within a 15 percentage point difference, the smallest gap among all approaches.
**Analysis.** Existing methods falter on our simulated data for various reasons. The suboptimal performance of Q-learning is likely caused by its inability to handle missing states as well as continuous action spaces (Huang et al., 2022). Infinite horizon methods like fitted
Q-iteration mainly rely on a predefined reward function, often focusing on short-term objectives, and cannot handle continuous action spaces (Clifton and Laber, 2020). Deep RL methods like DDPG are also likely struggling with having to rely on a predefined reward function and the relatively small dataset size (Riachi et al., 2021; Kondrup et al., 2023; Kang et al., 2023; Kalweit and Boedecker, 2017). More modern Deep RL methods like CQL, CRR, and BCQ mediate the deficiencies of DDPG. However, unlike our approach, these methods are inherently uninterpretable and, therefore, are unsuitable for high-stakes problems.
In Appendix F, we thoroughly compare our method to the 49 baselines using 32 simulation setups. These results underscore the suboptimal performance of existing methods in scenarios with missing data, continuous action space, and highly stochastic state dynamics. Our method can handle these various challenges, allowing it to accurately estimate interpretable optimal regimes that are safe for high-stakes settings.
## 7 Treating Seizures in ICU Patients
We now present the analysis and insights derived from our optimal treatment estimation approach when applied to a cohort of 995 critically ill patients. This cohort is comprised of individuals aged 18 and older with confirmed electrographic EA as diagnosed by clinical neurophysiologists or epileptologists.
We evaluate our approach by comparing the estimated optimal treatment policy
Figure 2: (a) Estimated density of the outcome probabilities under optimal and clinician’s administered policies. (b) Tree characterizing the subpopulations that would have benefited the most by switching to the optimal policy. The value at each node in the tree shows the percentage point _improvement_ in the outcome. Here, HEI/ABI refers to hypoxic-ischemic encephalopathy (HIE) and anoxic brain injury (ABI).
\(P(Y_{i}(\widehat{\pi}_{i}^{*})=1|\mathbf{V}_{i})\) with the clinician's administered policy \(P(Y_{i}(\pi_{i})=1|\mathbf{V}_{i})\) for each patient. Our analysis indicates a significant improvement in patient outcomes, with a 23.6 \(\pm\) 1.9 percentage point reduction in the probability of adverse events under the optimal regimen. Few patients under the optimal policy had over a 50% chance of an adverse outcome (Figure 2(a)). Figure 2(b) reveals that patients with hypoxic-ischemic encephalopathy (HIE) or anoxic brain injury (ABI) experienced a substantial 35.9 percentage point decrease in the likelihood of an adverse outcome, highlighting those who benefited most from our estimated optimal treatment policies.
We compare and contrast the optimal regimes with the administered regimes for each drug. We consider the variability of each drug's regime with respect to patients' pre-treatment prognosis measured as APACHE II score (Knaus et al., 1986) and Glasgow coma scale (GCS) (Jain and Iverson, 2018). APACHE II score quantifies disease severity in ICU patients and GCS measures impaired consciousness in acute medical and trauma patients. Both of these measures are clinically relevant for deciding treatment strategies (Mumtaz et al., 2023). Table 2 displays mortality rates from Knaus et al. (1986) and estimated \(Y\) under administered and optimal regimes for different APACHE II scores. The optimal regime improves outcomes across all levels, with the most benefits seen in patients with high APACHE II scores (i.e., with worse prognoses).
**Propofol Regimes.** Figures 3(a) and 3(b) show that, on average, the estimated optimal propofol dose for individuals with low EA burden is generally lower than the administered dose, especially for those with worse prognoses (lower GCS or higher APACHE II scores). Conversely, when patients have a severe EA burden in the last hour and an APACHE II score below 30, the optimal dose is marginally higher than the administered dose. Also, one
\begin{table}
\begin{tabular}{l|c|c c} \hline APACHE & Death & Est. & Est. \\ II Score & Rate & \(\mathbb{E}[Y_{i}(\pi_{i})]\) & \(\mathbb{E}[Y_{i}(\widehat{\pi}_{i}^{*})]\) \\ \hline
0 to 4 & 4\% & 17\% & 6\% \\
5 to 9 & 8\% & 22\% & 8\% \\
10 to 14 & 15\% & 35\% & 17\% \\
15 to 19 & 24\% & 48\% & 25\% \\
20 to 24 & 40\% & 56\% & 31\% \\
25 to 29 & 55\% & 61\% & 35\% \\
30 to 34 & 73\% & 73\% & 36\% \\ \hline \end{tabular}
\end{table}
Table 2: APACHE II scores and corresponding non-operative mortality or death rate from Knaus et al. (1986), as well as estimated \(Y\) under estimated administered regime and optimal regime.
must adjust propofol dosages based on patients' PK/PD, specifically, based on the ED50 values - a PD parameter quantifying the amount of drug required to reduce the EA burden by 50%. When the EA burden is low, we recommend increasing the dosage for patients with low ED50 values to alleviate EA and decreasing it for those with high ED50 values, as an excess of propofol may lead to adverse effects (see Figure 3(c)).
**Levetiracetam Regimes.** The optimal and administered levetiracetam regimes generally align, except for patients with sustained 12-hour EA burden. In such cases, the optimal regime recommends a lower dose (0.50 mg/kg on average) compared to the administered regime (0.82 mg/kg on average). For dementia patients, the difference is more pronounced, with the optimal regime suggesting a dose of 4.2 mg/kg lower (see Figure 4(a)). Conversely, subarachnoid hemorrhage patients with a 6-hour sustained EA burden receive a 1 mg/kg higher dose with the optimal regime (see Figure 4(b)).
To summarize, our findings indicate that patients in this study would, on average, be less likely to have an adverse outcome under the optimal regimes estimated by using our method. These optimal regimes would lead us to advocate for an assertive approach to managing the high EA burden in more critically ill patients while reducing propofol and levetiracetam dosages for relatively healthier patients or those with mild EA.
## 8 Discussion & Conclusion
We present an approach that is capable of handling many challenges with real-world observational data like variable timesteps, missing states, a continuous action space, and small data size. Our approach balances accuracy and interpretability and demonstrates superior performance
Figure 3: Difference in the propofol drug doses between the optimal and the administered regimes for mild and severe EA burden in last 1h for (a) patients on various levels of Glasgow coma scale (GCS); (b) patients with various levels of APACHE II scores; and (c) patients with various levels of ED50 for propofol, an important pharmacodynamic parameter determining the amount of drug required to reduce EA burden by 50%.
through simulation. We ultimately operationalize our approach to learn treatment regimes for ICU patients with EA, showcasing its ability to solve real-world problems.
**Clinical Relevance.** The current absence of evidence-based guidelines to inform ASM regimes (drug type and dosing) in patients with EA results in frequent overprescription of ASMs in response to EA (Zafar et al., 2020; Rubinos et al., 2018). High EA-burden is frequently treated with escalating doses of ASMs and anesthetics, and many of these patients are also discharged on ASM treatment (Zafar et al., 2018; Tabaeizadeh et al., 2020; Dhakar et al., 2022; Alvarez et al., 2017; Kilbride et al., 2009; Punia et al., 2020). Our findings suggest that not all patients may benefit from such ASM escalation. Thus, careful consideration of the baseline illness severity, injury type, and patient comorbidities is important to determine the risk-benefit trade-off of initiating treatment and selecting treatment intensity. For example, patients with cognitive impairment and dementia have a higher risk of ASM adverse effects (Mendez and Lim, 2003; Cretin, 2021) and may require lower-intensity treatment, which is supported by our findings. Finally, as shown in Figure 3 and Figure 4, heterogeneous treatment responses need to be considered in selecting drug dosing. Current clinical practice relies on population-level pharmacological data to infer standardized dosing regimens used for all patients. However, this one-size-fits-all approach is suboptimal due to the patient-level PK/PD heterogeneity shown in our study (see Figure 3(c)). Our findings strongly support the need for _clinical trials_ to reveal heterogeneous causal effects and construct individualized optimal treatment. Such efforts can guide evidence-based clinical
Figure 4: Difference in the levetiracetam doses between the optimal and the administered regimes for (a) patients with and without dementia experiencing a sustained EA burden for 12 hours; and (b) patients with and without subarachnoid hemorrhage experiencing a sustained EA burden for 6 hours.
practice and improve patient care in the ICU.
**Limitations.** Like all causal research, our study relies on untestable assumptions. We assume there are no hidden variables affecting both EA burden and patient discharge outcomes, though unmeasured disease characteristics might violate this assumption. Additionally, the misspecification of our predefined policy template, intended for doctor interpretability, could affect real-world drug administration, akin to issues discussed in recent work by Savje (2023). Furthermore, while we focus on point estimation for personalized optimal treatment regimes, handling uncertainty, especially when estimating the exposure map from observed data, remains an open question.
**Future Direction.** Addressing the limitations inherent in our approach, we identify two promising areas for future work. First, there is a need for research into uncertainty quantification for estimated personalized optimal treatment regimes, with broader implications for situations where exposure mapping is data-driven. Second, developing a non-parametric approach for sensitivity analysis and partial identification has the potential to advance research in this area.
## References
* Alvarez et al. (2017) Alvarez, V., Ruiz, A. A. R., LaRoche, S., Hirsch, L. J., Parres, C., Voinescu, P. E., Fernandez, A., Petroff, O. A., Rampal, N., Haider, H. A., et al. (2017). The use and yield of continuous eeg in critically ill patients: a comparative study of three centers. _Clinical Neurophysiology_, 128(4):570-578.
* Arora and Doshi (2021) Arora, S. and Doshi, P. (2021). A survey of inverse reinforcement learning: Challenges, methods and progress. _Artificial Intelligence_, 297:103500.
* Blumlein et al. (2022) Blumlein, T., Persson, J., and Feuerriegel, S. (2022). Learning optimal dynamic treatment regimes using causal tree methods in medicine. In _Machine Learning for Healthcare Conference_, pages 146-171. PMLR.
* Boggs (2002) Boggs, J. (2002). Seizures and organ failure. In _Seizures_, pages 71-83. Springer.
* Chakraborty et al. (2010) Chakraborty, B., Murphy, S., and Strecher, V. (2010). Inference for non-regular parameters in optimal dynamic treatment regimes. _Statistical methods in medical research_, 19(3):317-343.
* Clifton and Laber (2020) Clifton, J. and Laber, E. (2020). Q-learning: Theory and applications. _Annual Review of Statistics and Its Application_, 7:279-301.
* Cretin (2021) Cretin, B. (2021). Treatment of seizures in older patients with dementia. _Drugs & Aging_, 38(3):181-192.
* De Wit et al. (2016) De Wit, F., Van Vliet, A., De Wilde, R., Jansen, J., Vuyk, J., Aarts, L., De Jonge, E., Veelo, D., and Geerts, B. (2016). The effect of propofol on haemodynamics: cardiac output, venous return, mean systemic filling pressure, and vascular resistances. _British Journal of Anaesthesia_, 116(6):784-789.
* Devroye et al. (1994) Devroye, L., Gyorfi, L., Krzyzak, A., and Lugosi, G. (1994). On the strong universal consistency of nearest neighbor regression function estimates. _The Annals of Statistics_, 22(3):1371-1385.
* Dhakar et al. (2022) Dhakar, M. B., Sheikh, Z., Kumari, P., Lawson, E. C., Jeanneret, V., Desai, D., Ruiz, A. R., and Haider, H. A. (2022). Epileptiform abnormalities in acute ischemic stroke: impact on clinical management and outcomes. _Journal of Clinical Neurophysiology_, 39(6):446-452.
* Diamond and Sekhon (2013) Diamond, A. and Sekhon, J. S. (2013). Genetic matching for estimating causal effects: A general multivariate matching method for achieving balance in observational studies. _Review of Economics and Statistics_, 95(3):932-945.
* Einmahl and Mason (2005) Einmahl, U. and Mason, D. M. (2005). Uniform in bandwidth consistency of kernel-type function estimators.
* Ernst et al. (2005) Ernst, D., Geurts, P., and Wehenkel, L. (2005). Tree-based batch mode reinforcement learning. _Journal of Machine Learning Research_, 6.
* Ertefaie and Strawderman (2018) Ertefaie, A. and Strawderman, R. L. (2018). Constructing dynamic treatment regimes over indefinite time horizons. _Biometrika_, 105(4):963-977.
* Farrokh et al. (2018) Farrokh, S., Tahsili-Fahadan, P., Ritzl, E. K., Lewin, J. J., and Mirski, M. A. (2018). Antiepileptic drugs in critically ill patients. _Critical Care_, 22(1):1-12.
* Ferraty et al. (2010) Ferraty, F., Laksaci, A., Tadj, A., and Vieu, P. (2010). Rate of uniform consistency for nonparametric estimates with functional variables. _Journal of Statistical planning and inference_, 140(2):335-352.
* Fujimoto et al. (2018a) Fujimoto, S., Hoof, H., and Meger, D. (2018a). Addressing function approximation error in actor-critic methods. In _International conference on machine learning_, pages 1587-1596. PMLR.
* Fujimoto et al. (2018b) Fujimoto, S., Meger, D., and Precup, D. (2018b). Off-policy deep reinforcement learning without exploration. _arXiv preprint arXiv:1812.02900_.
* Ganesan and Hahn (2019) Ganesan, S. L. and Hahn, C. D. (2019). Electrographic seizure burden and outcomes following pediatric status epilepticus. _Epilepsy & Behavior_, 101:106409.
* Goldberg and Kosorok (2012) Goldberg, Y. and Kosorok, M. R. (2012). Q-learning with censored data. _Annals of statistics_, 40(1):529.
* Guo et al. (2022) Guo, H., Li, J., Liu, H., and He, J. (2022). Learning dynamic treatment strategies for coronary heart diseases by artificial intelligence: real-world data-driven study. _BMC Medical Informatics and Decision Making_, 22(1):1-16.
* Haarnoja et al. (2018) Haarnoja, T., Zhou, A., Hartikainen, K., Tucker, G., Ha, S., Tan, J., Kumar, V., Zhu, H., Gupta, A., Abbeel, P., et al. (2018). Soft actor-critic algorithms and applications. _arXiv preprint arXiv:1812.05905_.
* Hahn et al. (2020) Hahn, P. R., Murray, J. S., and Carvalho, C. M. (2020). Bayesian regression tree models for causal inference: Regularization, confounding, and heterogeneous effects (with discussion). _Bayesian Analysis_, 15(3):965-1056.
* Hill (1910) Hill, A. V. (1910). The possible effects of the aggregation of the molecules of hemoglobin on its dissociation curves. _j. physiol._, 40:iv-vii.
* Ho et al. (2007) Ho, D. E., Imai, K., King, G., and Stuart, E. A. (2007). Matching as nonparametric preprocessing for reducing model dependence in parametric causal inference. _Political analysis_, 15(3):199-236.
* Holloway et al. (2020) Holloway, S., Laber, E., Linn, K., Zhang, B., Davidian, M., and Tsiatis, A. (2020). Dyn-txregime: Methods for estimating optimal dynamic treatment regimes. _R package version_, 49:3.
* Huang et al. (2022) Huang, Y., Cao, R., and Rahmani, A. (2022). Reinforcement learning for sepsis treatment: A continuous action space solution. In _Machine Learning for Healthcare Conference_, pages 631-647. PMLR.
* Ishwaran (2015) Ishwaran, H. (2015). The effect of splitting on random forests. _Machine learning_, 99:75-118.
* Jain and Iverson (2018) Jain, S. and Iverson, L. M. (2018). Glasgow coma scale.
* Jiang (2019) Jiang, H. (2019). Non-asymptotic uniform rates of consistency for k-nn regression. In _Proceedings of the AAAI Conference on Artificial Intelligence_, volume 33, pages 3999-4006.
* Kalweit and Boedecker (2017) Kalweit, G. and Boedecker, J. (2017). Uncertainty-driven imagination for continuous deep reinforcement learning. In _Conference on Robot Learning_, pages 195-206. PMLR.
* Kang et al. (2023) Kang, Y., Shi, D., Liu, J., He, L., and Wang, D. (2023). Beyond reward: Offline preference-guided policy optimization. _arXiv preprint arXiv:2305.16217_.
* Kara et al. (2017) Kara, L.-Z., Laksaci, A., Rachdi, M., and Vieu, P. (2017). Data-driven knn estimation in nonparametric functional data analysis. _Journal of Multivariate Analysis_, 153:176-188.
* Kilbride et al. (2009) Kilbride, R. D., Costello, D. J., and Chiappa, K. H. (2009). How seizure detection by continuous electroencephalographic monitoring affects the prescribing of antiepileptic medications. _Archives of Neurology_, 66(6):723-728.
* Kim et al. (2018) Kim, J. A., Boyle, E. J., Wu, A. C., Cole, A. J., Staley, K. J., Zafar, S., Cash, S. S., and Westover, M. B. (2018). Epileptiform activity in traumatic brain injury predicts post-traumatic epilepsy. _Annals of Neurology_, 83(4):858-862.
* Knaus et al. (1986) Knaus, W. A., Draper, E. A., Wagner, D. P., and Zimmerman, J. E. (1986). Apache ii-a severity of disease classification system: Reply. _Critical Care Medicine_, 14(8):755.
* Koenig and Simmons (1996) Koenig, S. and Simmons, R. G. (1996). The effect of representation and knowledge on goal-directed exploration with reinforcement-learning algorithms. _Machine Learning_, 22:227-250.
* Kondrup et al. (2023) Kondrup, F., Jiralerspong, T., Lau, E., de Lara, N., Shkrob, J., Tran, M. D., Precup, D., and Basu, S. (2023). Towards safe mechanical ventilation treatment using deep offline reinforcement learning. In _Proceedings of the AAAI Conference on Artificial Intelligence_, volume 37, pages 15696-15702.
* Kudraszow and Vieu (2013) Kudraszow, N. L. and Vieu, P. (2013). Uniform consistency of knn regressors for functional variables. _Statistics & Probability Letters_, 83(8):1863-1870.
* Kumar et al. (2020) Kumar, A., Zhou, A., Tucker, G., and Levine, S. (2020). Conservative q-learning for offline reinforcement learning. _Advances in Neural Information Processing Systems_, 33:1179-1191.
* Lanners et al. (2023) Lanners, Q., Parikh, H., Volfovsky, A., Rudin, C., and Page, D. (2023). Variable importance matching for causal inference. In _Uncertainty in Artificial Intelligence_, pages 1174-1184. PMLR.
* Lee et al. (2013) Lee, M. H., Kong, D.-S., Seol, H. J., Nam, D.-H., and Lee, J.-I. (2013). Risk of seizure and its clinical implication in the patients with cerebral metastasis from lung cancer. _Acta neurochirurgica_, 155(10):1833-1837.
* Li (1984) Li, K.-C. (1984). Consistency for cross-validated nearest neighbor estimates in nonparametric regression. _The Annals of Statistics_, pages 230-240.
* Lillicrap et al. (2015) Lillicrap, T. P., Hunt, J. J., Pritzel, A., Heess, N., Erez, T., Tassa, Y., Silver, D., and Wierstra, D. (2015). Continuous control with deep reinforcement learning. _arXiv preprint arXiv:1509.02971_.
* Lucke-Wold et al. (2015) Lucke-Wold, B. P., Nguyen, L., Turner, R. C., Logsdon, A. F., Chen, Y.-W., Smith, K. E., Huber, J. D., Matsumoto, R., Rosen, C. L., Tucker, E. S., et al. (2015). Traumatic brain injury and epilepsy: underlying mechanisms leading to seizure. _Seizure_, 33:13-23.
* Luo et al. (2023) Luo, Y., Kay, J., Grefenstette, E., and Deisenroth, M. P. (2023). Finetuning from offline reinforcement learning: Challenges, trade-offs and practical solutions. _arXiv preprint arXiv:2303.17396_.
* Lyu et al. (2023) Lyu, L., Cheng, Y., and Wahed, A. S. (2023). Imputation-based q-learning for optimizing dynamic treatment regimes with right-censored survival outcome. _Biometrics_.
* Mataric (1994) Mataric, M. J. (1994). Reward functions for accelerated learning. In _Machine learning proceedings 1994_, pages 181-189. Elsevier.
* Matzkin (2007) Matzkin, R. L. (2007). Nonparametric identification. _Handbook of econometrics_, 6:5307-5368.
* Mendez and Lim (2003) Mendez, M. F. and Lim, G. T. (2003). Seizures in elderly patients with dementia: epidemiology and management. _Drugs & aging_, 20:791-803.
* Mnih et al. (2013) Mnih, V., Kavukcuoglu, K., Silver, D., Graves, A., Antonoglou, I., Wierstra, D., and Riedmiller, M. (2013). Playing atari with deep reinforcement learning. _arXiv preprint arXiv:1312.5602_.
* Moodie et al. (2012) Moodie, E. E., Chakraborty, B., and Kramer, M. S. (2012). Q-learning for estimating optimal dynamic treatment rules from observational data. _Canadian Journal of Statistics_, 40(4):629-645.
* Moodie et al. (2014) Moodie, E. E., Dean, N., and Sun, Y. R. (2014). Q-learning: Flexible learning about useful utilities. _Statistics in Biosciences_, 6:223-243.
* Moodie and Richardson (2010) Moodie, E. E. and Richardson, T. S. (2010). Estimating optimal dynamic regimes: Correcting bias under the null. _Scandinavian Journal of Statistics_, 37(1):126-146.
* Mumtaz et al. (2023) Mumtaz, H., Ejaz, M. K., Tayyab, M., Vohra, L. I., Sapkota, S., Hasan, M., and Saqib, M. (2023). Apache scoring as an indicator of mortality rate in icu patients: a cohort study. _Annals of Medicine and Surgery_, 85(3):416.
* Murphy (2003) Murphy, S. A. (2003). Optimal dynamic treatment regimes. _Journal of the Royal Statistical Society Series B: Statistical Methodology_, 65(2):331-355.
* Murphy (2005) Murphy, S. A. (2005). A generalization error for q-learning.
* Murphy et al. (2007) Murphy, S. A., Oslin, D. W., Rush, A. J., and Zhu, J. (2007). Methodological challenges in constructing effective treatment sequences for chronic psychiatric disorders. _Neuropsychopharmacology_, 32(2):257-262.
* Murray et al. (2018) Murray, T. A., Yuan, Y., and Thall, P. F. (2018). A bayesian machine learning approach for optimizing dynamic treatment regimes. _Journal of the American Statistical Association_, 113(523):1255-1267.
* Nelson et al. (2008) Nelson, D. L., Lehninger, A. L., and Cox, M. M. (2008). _Lehninger principles of biochemistry_. Macmillan.
* Nembrini et al. (2018) Nembrini, S., Konig, I. R., and Wright, M. N. (2018). The revival of the gini importance? _Bioinformatics_, 34(21):3711-3718.
* Ng et al. (2000) Ng, A. Y., Russell, S., et al. (2000). Algorithms for inverse reinforcement learning. In _Icml_, volume 1, page 2.
* Parikh et al. (2023) Parikh, H., Hoffman, K., Sun, H., Zafar, S. F., Ge, W., Jing, J., Liu, L., Sun, J., Struck, A., Volfovsky, A., et al. (2023). Effects of epileptiform activity on discharge outcome in critically ill patients in the usa: a retrospective cross-sectional study. _The Lancet Digital Health_.
* Parikh et al. (2022) Parikh, H., Rudin, C., and Volfovsky, A. (2022). Malts: Matching after learning to stretch. _The Journal of Machine Learning Research_, 23(1):10952-10993.
* Punia et al. (2020) Punia, V., Chandan, P., Fesler, J., Newey, C. R., and Hantus, S. (2020). Post-acute symptomatic seizure (pass) clinic: a continuity of care model for patients impacted by continuous eeg monitoring. _Epilepsia Open_, 5(2):255-262.
* Qian and Murphy (2011) Qian, M. and Murphy, S. A. (2011). Performance guarantees for individualized treatment rules. _Annals of statistics_, 39(2):1180.
* Ratkovic (2019) Ratkovic, M. T. (2019). Rehabilitating the regression: Honest and valid causal inference through machine learning.
* Riachi et al. (2021) Riachi, E., Mamdani, M., Fralick, M., and Rudzicz, F. (2021). Challenges for reinforcement learning in healthcare. _arXiv preprint arXiv:2103.05612_.
* Robins (2000) Robins, J. M. (2000). Robust estimation in sequentially ignorable missing data and causal inference models. In _Proceedings of the American Statistical Association_, volume 1999, pages 6-10. Indianapolis, IN.
* Robins (2004) Robins, J. M. (2004). Optimal structural nested models for optimal sequential decisions. In _Proceedings of the Second Seattle Symposium in Biostatistics: analysis of correlated data_, pages 189-326. Springer.
* Rubin (1974) Rubin, D. B. (1974). Estimating causal effects of treatments in randomized and nonrandomized studies. _Journal of educational Psychology_, 66(5):688.
* Rubinos et al. (2018) Rubinos, C., Reynolds, A. S., and Claassen, J. (2018). The ictal-interictal continuum: to treat or not to treat (and how)? _Neurocritical care_, 29:3-8.
* Savje (2023) Savje, F. (2023). Causal inference with misspecified exposure mappings: separating definitions and assumptions. _Biometrika_, page asad019.
* Schulte et al. (2014) Schulte, P. J., Tsiatis, A. A., Laber, E. B., and Davidian, M. (2014). Q-and a-learning methods for estimating optimal dynamic treatment regimes. _Statistical science: a review journal of the Institute of Mathematical Statistics_, 29(4):640.
* Seno and Imai (2022) Seno, T. and Imai, M. (2022). d3rlpy: An offline deep reinforcement learning library. _Journal of Machine Learning Research_, 23(315):1-20.
* Shargel et al. (1999) Shargel, L., Andrew, B., and Wu-Pong, S. (1999). _Applied biopharmaceutics & pharmacokinetics_, volume 264. Appleton & Lange Stamford.
* Singh et al. (2010) Singh, S., Lewis, R. L., Barto, A. G., and Sorg, J. (2010). Intrinsically motivated reinforcement learning: An evolutionary perspective. _IEEE Transactions on Autonomous Mental Development_, 2(2):70-82.
* Song et al. (2015) Song, R., Wang, W., Zeng, D., and Kosorok, M. R. (2015). Penalized q-learning for dynamic treatment regimens. _Statistica Sinica_, 25(3):901.
* Stuart (2010) Stuart, E. A. (2010). Matching methods for causal inference: A review and a look forward. _Statistical science: a review journal of the Institute of Mathematical Statistics_, 25(1):1.
* Tabaeizadeh et al. (2020) Tabaeizadeh, M., Aboul Nour, H., Shoukat, M., Sun, H., Jin, J., Javed, F., Kassa, S., Edhi, M., Bordbar, E., Gallagher, J., et al. (2020). Burden of epileptiform activity predicts discharge neurologic outcomes in severe acute ischemic stroke. _Neurocritical care_, 32:697-706.
* Volkow (2020) Volkow, N. D. (2020). Personalizing the treatment of substance use disorders. _American Journal of Psychiatry_, 177(2):113-116.
* Wang et al. (2020) Wang, Z., Novikov, A., Zolna, K., Merel, J. S., Springenberg, J. T., Reed, S. E., Shahriari, B., Siegel, N., Gulcehre, C., Heess, N., et al. (2020). Critic regularized regression. _Advances in Neural Information Processing Systems_, 33:7768-7778.
* Weiss (1997) Weiss, J. N. (1997). The hill equation revisited: uses and misuses. _The FASEB Journal_, 11(11):835-841.
* Zafar et al. (2018) Zafar, S. F., Postma, E. N., Biswal, S., Boyle, E. J., Bechek, S., O'Connor, K., Shenoy, A., Kim, J., Shafi, M. S., Patel, A. B., et al. (2018). Effect of epileptiform abnormality burden on neurologic outcome and antiepileptic drug management after subarachnoid hemorrhage. _Clinical Neurophysiology_, 129(11):2219-2227.
* Zafar et al. (2020) Zafar, S. F., Subramaniam, T., Osman, G., Herlopian, A., and Struck, A. F. (2020). Electrographic seizures and ictal-interictal continuum (iic) patterns in critically ill patients. _Epilepsy & Behavior_, 106:107037.
* Zhang et al. (2012) Zhang, B., Tsiatis, A. A., Davidian, M., Zhang, M., and Laber, E. (2012). Estimating optimal treatment regimes from a classification perspective. _Stat_, 1(1):103-114.
* Zhang et al. (2018) Zhang, Y., Laber, E. B., Davidian, M., and Tsiatis, A. A. (2018). Interpretable dynamic treatment regimes. _Journal of the American Statistical Association_, 113(524):1541-1549.
* Zhao et al. (2015) Zhao, Y.-Q., Zeng, D., Laber, E. B., and Kosorok, M. R. (2015). New statistical learning methods for estimating optimal dynamic treatment regimes. _Journal of the American Statistical Association_, 110(510):583-598.
* Zhao et al. (2020) Zhao, Y.-Q., Zhu, R., Chen, G., and Zheng, Y. (2020). Constructing dynamic treatment regimes with shared parameters for censored data. _Statistics in medicine_, 39(9):1250-1263.
* Zhou and Kosorok (2017) Zhou, X. and Kosorok, M. R. (2017). Causal nearest neighbor rules for optimal treatment regimes. _arXiv preprint arXiv:1711.08451_.
## Appendix A Dynamic Treatment Regime & Reinforcement Learning Literature Survey
There are a number of different techniques for estimating optimal treatment regimes. Prior methods include parametric, semi-parametric, and non-parametric modeling approaches and are often combined with reinforcement learning (RL) frameworks such as \(\mathcal{Q}\)-learning and policy gradient. We categorize the existing methods into four categories: _Finite Timestep Backward Induction Methods_, _Infinite Time Horizon and Censored Data Methods_, _Deep Reinforcement Learning_, and _Causal Nearest Neighbors_. Methods from each of these categories excel in certain settings. However, in this section, we highlight the limitations of each approach that ultimately make them unsuitable for our complex, high-stakes problem.
Finite timestep backward induction methods make up the majority of optimal treatment policy estimation methods. Murphy (2003) and Robins (2004) were some of the first ones to utilize backward induction in a semiparametric approach using approximate dynamic programming. Murphy (2005) introduced the now widely used Q-learning approach, of which initial extensions focused on using parametric and semi-parametric modeling of the Q-functions (Moodie and Richardson, 2010; Chakraborty et al., 2010; Song et al., 2015). These approaches can produce interpretable policies and can be easier to implement. However, correct specification of the Q-functions can be difficult, particularly with observational data (Moodie et al., 2014). This can lead to poor estimates of the optimal policy when a misspecified linear model is used. For this reason, recent work has focused on using flexible non-parametric machine learning methods (Zhang et al., 2012; Zhao et al., 2015; Murray
et al., 2018; Blumlein et al., 2022), particularly within the Q-learning framework (Qian and Murphy, 2011; Moodie et al., 2014; Zhang et al., 2018). While these methods are less prone to model misspecification, they often result in complex treatment regimes for which the rationale behind the treatment decision is difficult to discern. That said, Blumlein et al. (2022) and Zhang et al. (2018) have proposed more explainable nonparametric approaches.
The majority of backward induction methods assume all patients have the same number of fixed timesteps, which presents difficulty when working with variable timesteps across patients and unobserved states. Infinite horizon methods, like fitted Q-iteration (Ernst et al., 2005; Clifton and Laber, 2020) and Ertefaie and Strawderman (2018)'s Q-learning approach, are better suited to handle these complexities. However, these methods necessitate a reward value to be associated with every action taken by each unit. These reward values are often assumed to be intrinsically linked to the optimization problem and a measurable value. However, when this is not the case, they need to be calculated using a predefined function over the observed variables. Having to create such a reward function is often a difficult task that can lead to poor optimal regime estimates (Mataric, 1994; Koenig and Simmons, 1996; Singh et al., 2010). Other work has investigated using backward induction with censored data (Goldberg and Kosorok, 2012; Lyu et al., 2023; Zhao et al., 2020). However, these methods have focused on survival analysis time-to-event tasks, which differ from our setup where we have a labeled outcome for each patient.
Regardless of time-step constraints, all of the methods discussed thus far assume that there are a discrete number of treatment options at each time point. Furthermore, while there is extensive work on backward induction methods for observational data (Moodie et al., 2012), many methods impose a strong positivity assumption over all of the treatments at each timepoint (Qian and Murphy, 2011; Zhao et al., 2015; Blumlein et al., 2022). This assumption is often broken in observational data, as patient care is under the supervision of a trained professional, and thus, unless randomized, at any given time point a patient in a particular state may have a near-zero chance of receiving a particular treatment. While approaches like Schulte et al. (2014) do employ weaker positivity assumptions, there is limited discussion on how various backward induction methods handle extremal propensity scores.
Deep reinforcement learning (RL) methods are a fast-growing area of research for optimal treatment regime estimation. Deep RL methods can be categorized as online or offline and on-policy or off-policy. In real-world high-stakes settings online and on-policy methods are infeasible, limiting the scope of applicable methods to offline, off-policy approaches. Mnih et al. (2013) introduced Deep Q-Learning as an effective method for off-policy RL
and Lillicrap et al. (2015) extended this method to a continuous action space with deep deterministic policy gradient (DDPG). More recent work has focused on improving upon DDPG by improving sampling efficiency (Haarnoja et al., 2018), limiting overestimation bias (Fujimoto et al., 2018; Kumar et al., 2020), overcoming extrapolation error (Fujimoto et al., 2018), and using a critic-regularized approach (Wang et al., 2020).
Deep RL methods are capable of learning complex optimal treatment regimes and can handle variable and infinite timesteps. However, these methods are significantly more data- and resource-hungry than non-deep-learning approaches, although methods like that of Haarnoja et al. (2018) offer improvements in this area. A larger issue with deep RL is that it requires reward values to be associated with each action, which can cause problems similar to those discussed for infinite horizon methods. A possible solution when reward values are unavailable, for both infinite horizon and deep RL methods, is to use inverse reinforcement learning to learn a good reward function (Ng et al., 2000; Arora and Doshi, 2021). However, such an approach adds an additional layer of complexity to the estimation procedure, and in the case of deep RL it further exacerbates what is already its most crucial limitation: its inherent lack of interpretability. The black-box nature of deep RL makes it a poor choice for optimal treatment regime estimation in high-stakes applications.
In general, all of the methods discussed either assume the correct specification of a reward function or that there are no missing states or actions leading up to the final outcome. However, these assumptions do not align with our real-world scenario.
Matching is an intuitive method for optimal treatment regime estimation. Despite its inherent interpretability, little work has been done in this area. Zhou and Kosorok (2017) used a nearest-neighbor approach that examined the causal treatment effects within neighborhoods of similar patients to estimate optimal treatment regimes. While mentioning that their method can be extended to observational studies, they focus on randomized controlled trials - lacking theoretical or experimental results for the observational setting. Furthermore, they only consider a singular timestep with discrete treatment options and use a limited univariate approach for matching in high dimensions. Ultimately, their matching approach shows promise as an accurate and interpretable approach to optimal treatment regime estimation but is unable to handle the complexities commonly found in real-world problems.
Ideally, we want a method that can handle continuous action and state spaces, missing timesteps, does not require a reward function to be specified, and can be trained on a small number of samples. Furthermore, we want a method that is interpretable given the
high-stakes setting. Table 2, in the main text, summarizes the different optimal treatment regime estimation approaches in regard to these desired attributes. In Section 5, we present our matching approach for optimal treatment regime estimation. We subsequently present results showing our method's superior performance over a number of comparison approaches across various settings (see Section 6 and Appendix F.1). Ultimately, to the best of our knowledge, our method is the only approach that possesses all of the qualities needed to effectively address our problem.
## Appendix B Distance Metric Learning and Almost Exact Matching
In this section, we discuss some recent and relevant work in the almost exact matching literature and in distance metric learning. In an ideal scenario, we would achieve exact matches for some units. However, in high-dimensional contexts with continuous covariates, exact matches are rare. When performing nearly exact matching with a caliper of \(r\), the objective is to achieve a close match on relevant features while not being overly concerned about matching on irrelevant ones. Therefore, especially in cases with limited data, the choice of the distance metric \(d\) for matching becomes crucial. Recent matching approaches have focused on distance metric learning before the matching process. One such approach, Genetic Matching (Diamond and Sekhon, 2013), employs a genetic algorithm to learn an appropriate distance metric. However, it has been found to perform poorly for individualized estimation and is limited to binary or categorical exposures. Another method, Matching After Learning to Stretch (MALTS) (Parikh et al., 2022), is effective for individualized estimation but struggles to converge in high-dimensional settings with small datasets. A recent approach called Variable Importance Matching (VIM) (Lanners et al., 2023) uses a highly regularized model like LASSO or a shallow decision tree to model \(\mathbb{E}[Y|\mathbf{V}]\). It then utilizes the variable importance scores from the fitted model to guide the selection of the distance metric. This approach is both fast and interpretable and works well in high-dimensional scenarios, making it well-suited for our problem.
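To make the VIM idea concrete, the following is a minimal sketch of how a variable-importance-based distance metric could be learned and then used for weighted nearest-neighbor matching. It is an illustration in Python with scikit-learn, not the reference implementation of Lanners et al. (2023); the function names and the choice of LASSO are ours.

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.neighbors import NearestNeighbors

def learn_vim_weights(V, y):
    """Fit a highly regularized model of E[Y|V] and use the absolute
    coefficients as per-covariate importance weights."""
    model = LassoCV(cv=5).fit(V, y)
    return np.abs(model.coef_)

def weighted_nn_match(V, weights, query, k=5):
    """Nearest-neighbor matching under a weighted Euclidean distance:
    covariates with larger importance contribute more to the distance."""
    nn = NearestNeighbors(n_neighbors=k).fit(V * weights)
    dist, idx = nn.kneighbors(query.reshape(1, -1) * weights)
    return idx[0], dist[0]
```

Under this weighted distance, covariates that the regularized model deems irrelevant receive (near-)zero weight, so close agreement is effectively only enforced on the covariates that matter for the outcome.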
## Appendix C Pharmacokinetics and Pharmacodynamics
In this section, we discuss our modeling choice for PK and PD mechanistic models.
**Pharmacokinetics.** We use a one-compartment PK model to estimate the concentration of drug \(j\) for patient \(i\) at time \(t\) (\(D_{j,i,t}\)) as:
\[g_{i}(\{\mathbf{D}_{i,t^{\prime}}\}_{t^{\prime}=1}^{t-1},\mathbf{Z}_{i,t})=e^{- \gamma_{j,i}}D_{j,i,t-1}+Z_{j,i,t}, \tag{1}\]
where pharmacokinetic parameter \(\gamma_{j,i}\) is proportional to the half-life of the drug \(j\) in patient \(i\).
**Pharmacodynamics.** We model PD using Hill's model (Nelson et al., 2008) to estimate the short-term effectiveness of the ASMs in reducing EA burden:
\[f_{i}(\{E_{i,t^{\prime}}\}_{t^{\prime}=1}^{t-1},\mathbf{D}_{i,t})=\beta_{i} \left(1-\sum_{j}\frac{D_{j,i,t}^{\alpha_{j,i}}}{D_{j,i,t}^{\alpha_{j,i}}+ED50_{ j,i}^{\alpha_{j,i}}}\right), \tag{2}\]
where \(\beta_{i}\) is patient \(i\)'s EA burden when no drugs are administered, \(\alpha_{j,i}\) models the affinity of drug \(j\)'s ligand to a receptor for patient \(i\), and \(ED50_{j,i}\) is the amount of drug concentration necessary to reduce EA burden by 50% from the maximum level.
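For concreteness, the sketch below translates Equations (1) and (2) directly into code. It is a minimal illustration; the variable names and the example parameter values are ours and are only meant to show how the recursion is evaluated.

```python
import numpy as np

def pk_update(D_prev, Z_t, gamma):
    """One-compartment PK update (Eq. 1): the previous concentration decays
    exponentially and the newly administered dose Z_t is added."""
    return np.exp(-gamma) * D_prev + Z_t

def pd_burden(D_t, beta, alpha, ed50):
    """Hill-equation PD (Eq. 2): expected EA burden given the current
    concentrations D_t of each drug j (arrays over drugs)."""
    D_t, alpha, ed50 = map(np.asarray, (D_t, alpha, ed50))
    suppression = np.sum(D_t**alpha / (D_t**alpha + ed50**alpha))
    return beta * (1.0 - suppression)

# Illustrative single-drug example: a fixed 2 mg/kg dose at every timestep.
gamma, beta, alpha, ed50 = 1.0, 100.0, 1.0, 15.0
D, burden_trace = 0.0, []
for t in range(6):
    D = pk_update(D, Z_t=2.0, gamma=gamma)
    burden_trace.append(pd_burden([D], beta, [alpha], [ed50]))
```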
## Appendix D Data Generative Mechanism for the Simulation Study
We base our synthetic data experiments on our real world application where patients experiencing seizures are treated with anti-seizure medications. For our synthetic experiments, we let the first-order pharmacological state-transition model outlined in Appendix C be the true model for each patient's drug response and EA burden progression.
For each patient \(i\in\{1,\ldots,n\}\), the PK/PD model is defined by the following parameters: \(\beta_{i}\), \(\gamma_{i,j}\), \(\alpha_{i,j}\), and \(ED50_{i,j}\) for each drug \(j\). For simplicity, and to allow for comparison to more methods, we consider a setting with only one drug. Associated with each patient are \(p\) pre-treatment covariates, \(X_{i,1},\ldots,X_{i,p}\stackrel{{ iid}}{{\sim}}\text{Normal}(0,1)\). We let the PK/PD parameters be correlated with the pre-treatment covariates \(\mathbf{X}_{i}\) such that \(\beta_{i}\sim\text{Normal}\left(100+10X_{i,1},5\right)\) and \(ED50_{i}\sim\text{Normal}\left(15-2X_{i,3},1\right)\). Further, \(\gamma_{i},\alpha_{i}\stackrel{{ iid}}{{\sim}}\text{Normal}(1,0.1)\).
From here, we let the total number of timesteps, \(\tau_{i}\), be a random integer in [\(T_{min}\), \(T_{max}\)] and set the number of observed states as \(T_{i}=\tau_{i}-m_{i}\), where \(m_{i}\) is the number of unobserved timesteps and is a random integer in [\(M_{min}\), \(M_{max}\)]. Finally, \(E_{i,0}\), the initial burden for patient \(i\), is sampled as \(E_{i,0}\sim\text{Normal}(75+5X_{i,2},5)\), and is lower bounded by 0 and upper
bounded by \(\beta_{i}\).
We simulate a complete sequence of states \(\{E_{i,t}\}_{t=1}^{\tau_{i}}\) and actions \(\{Z_{i,t}\}_{t=1}^{\tau_{i}}\) given the initial burden \(E_{i,0}\), a policy \(\pi_{i}\), and the patient's corresponding PK/PD parameters. We use the same PK/PD equations outlined in Appendix C with a small amount of noise added to the patient's EA burden at each timestep. In particular, we calculate the EA burden for patient \(i\) at timestep \(t\) by slightly modifying Equation 2 so that
\[E_{i,t}=\beta_{i}\left(1-\sum_{j}\frac{D_{i,t}^{\alpha_{i}}}{D_{i,t}^{\alpha_ {i}}+ED50_{i}^{\alpha_{i}}}\right)+\epsilon_{E_{i,t}}. \tag{3}\]
where \(\epsilon_{E_{i,t}}\sim\text{Normal}(0,2.5)\). This produces a series of EA burdens \(\{E_{i,t}\}_{t=1}^{\tau_{i}}\), drug doses \(\{Z_{i,t}\}_{t=1}^{\tau_{i}}\), and drug concentrations \(\{D_{i,t}\}_{t=1}^{\tau_{i}}\) corresponding to each patient \(i\). The outcome is related to the patient's pre-treatment covariates, EA burdens, and drug concentrations - thus inducing a level of confounding. In particular, we calculate the continuous outcome value as
\[O_{i}=\frac{1}{\tau_{i}}\left[\exp\left(\sum_{j=1}^{2}\frac{X_{i,j}}{2}\right) \left(\sum_{t=1}^{\tau_{i}}\exp\left(\frac{E_{i,t}}{50}\right)-1\right)+\exp \left(\sum_{j=3}^{4}\frac{X_{i,j}}{2}\right)\left(\sum_{t=1}^{\tau_{i}}\exp \left(\frac{D_{i,t}}{50}\right)-1\right)\right] \tag{4}\]
Note that we desire a smaller continuous outcome value. This outcome function represents a scenario where patients with a large average value in \(X_{i,1}\) and \(X_{i,2}\) are more at risk from high levels of EA burden, whereas patients with a large average value in \(X_{i,3}\) and \(X_{i,4}\) are more at risk from high drug concentrations. Finally, to emulate the real-world setting where we observe a binary outcome, we discretize the continuous outcomes to a binary outcome for each patient, setting \(Y_{i}=\mathbb{1}\left[O_{i}>3\right]\).
_Remark 4_.: Three was chosen as our cutoff value for the binary outcomes to create a setting where about 50% of patients experience a bad outcome (i.e. \(Y=1\)). By using a static value, we could more easily compare the binary outcomes across a variety of data generation setups.
Ultimately, the observed data for each patient \(i\) is \(\{X_{i},\{E_{i,t}\}_{t=1}^{T_{i}},\{Z_{i,t}\}_{t=1}^{T_{i}},Y_{i}\}\). Note that the observed history only includes the states and actions up to timestep \(T_{i}\), not \(\tau_{i}\), and only includes the binary outcome \(Y_{i}\), not \(O_{i}\).
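The per-patient generative loop described above can be summarized by the condensed sketch below (single drug, no missing-timestep truncation). It is an illustrative re-implementation of Equations (1), (3), and (4), not the exact simulation code; `policy` stands in for any function mapping the observed history to a dose, and the covariate indexing assumes \(X_{i,1}\) is stored at index 0.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_patient(X, policy, tau, gamma, beta, alpha, ed50):
    """Simulate one patient's trajectory and binary outcome (single drug)."""
    E0 = np.clip(rng.normal(75 + 5 * X[1], 5), 0, beta)   # initial burden E_{i,0}
    E_hist, Z_hist, D_hist = [E0], [], []
    D = 0.0
    for t in range(1, tau + 1):
        Z = policy(E_hist, Z_hist)                          # dose Z_{i,t}
        D = np.exp(-gamma) * D + Z                          # PK update (Eq. 1)
        E = beta * (1 - D**alpha / (D**alpha + ed50**alpha)) \
            + rng.normal(0, 2.5)                            # noisy PD update (Eq. 3)
        E_hist.append(E); Z_hist.append(Z); D_hist.append(D)
    E_arr, D_arr = np.array(E_hist[1:]), np.array(D_hist)
    O = (np.exp(X[:2].sum() / 2) * (np.exp(E_arr / 50).sum() - 1)
         + np.exp(X[2:4].sum() / 2) * (np.exp(D_arr / 50).sum() - 1)) / tau  # Eq. 4
    return E_hist, Z_hist, int(O > 3)                       # Y_i = 1[O_i > 3]
```

In the observed dataset, only the first \(T_{i}=\tau_{i}-m_{i}\) states and actions of each simulated trajectory and the binary outcome \(Y_{i}\) would then be retained.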
### Data Generation Process Setups
We vary the data generation process in five important aspects to create a comprehensive synthetic experiment under these conditions.
1. Number of pre-treatment covariates.
2. Number of total timesteps.
3. Number of missing timesteps.
4. Size of the action space.
5. Policy creation method (i.e. how we generate \(\pi_{i}\)).
For each of these five aspects, we consider two separate settings. **We enumerate over all possible combinations for a total of 32 experimental setups.** To align with our real-world dataset size, we set the number of patients \(n=1000\) for all setups. We outline the two options for each aspect below.
1. Number of pre-treatment covariates. (a) 10 pre-treatment covariates (\(p=10\)). (b) 100 pre-treatment covariates (\(p=100\)).
2. Number of total timesteps. (a) Each patient has two total timesteps (\(\tau_{i}=2\) for all \(i\)). (b) Each patient has between 10 and 15 total timesteps (\(T_{min}=10\), \(T_{max}=15\)).
3. Number of missing timesteps. (a) No missing timesteps for any patients (\(T_{i}=\tau_{i}\) for all \(i\)). (b) Patients are missing a variable number of timesteps. If the number of total timesteps is 2(a), then patients are missing between zero and one timesteps (\(M_{min}=0\), \(M_{max}=1\)). Otherwise, if the total number of timesteps is 2(b), then patients are missing between two and five timesteps (\(M_{min}=2\), \(M_{max}=5\)).
4. Size of the action space. (a) A continuous action space with drug doses allowed in \([0,100]\). (b) A binary action space with only two drug doses allowed, \(\{0,50\}\).
5. Policy creation method (i.e., how we generate \(\pi_{i}\)). (a) Random policy. If the action space is continuous, 4(a), then \(\pi_{i}\left(\{E_{i,t^{\prime}}\}_{t^{\prime}=1}^{t-1},\{Z_{i,t^{\prime}}\}_{t^{\prime}=1}^{t-1}\right)=\epsilon_{\pi_{i,t}}\) where \(\epsilon_{\pi_{i,t}}\sim\text{Uniform}(0,100)\). If the action space is binary, 4(b), then \(\pi_{i}\left(\{E_{i,t^{\prime}}\}_{t^{\prime}=1}^{t-1},\{Z_{i,t^{\prime}}\}_{t^{\prime}=1}^{t-1}\right)=\epsilon_{\pi_{i,t}}\) where \(\epsilon_{\pi_{i,t}}\sim\text{Uniform}(\{0,50\})\). (b) An informed policy that is an additive model using ten binary features \(F^{1},\ldots,F^{10}\). For a patient \(i\) at timestep \(t\), the ten features are calculated as:
* \(F_{i,t}^{1}=\mathbb{1}\left[E_{i,t-1}>10\right]\)
* \(F_{i,t}^{2}=\mathbb{1}\left[E_{i,t-1}>20\right]\)
* \(F_{i,t}^{3}=\mathbb{1}\left[E_{i,t-1}>30\right]\)
* \(F_{i,t}^{4}=\mathbb{1}\left[E_{i,t-1}>40\right]\)
* \(F_{i,t}^{5}=\mathbb{1}\left[E_{i,t-1}>60\right]\)
* \(F_{i,t}^{6}=\mathbb{1}\left[E_{i,t-1}>80\right]\)
* \(F_{i,t}^{7}=\mathbb{1}\left[Z_{i,t-1}>25\right]\)
* \(F_{i,t}^{8}=\mathbb{1}\left[Z_{i,t-1}>50\right]\)
* \(F_{i,t}^{9}=\mathbb{1}\left[t\geq 3\right]\mathbb{1}\left[E_{i,t-1}>40\right] \mathbb{1}[\frac{1}{3}\sum_{t^{\prime}=t-3}^{t-1}E_{i,t^{\prime}}>20]\)
* \(F_{i,t}^{10}=\mathbb{1}\left[t\geq 3\right]\mathbb{1}\left[Z_{i,t-1}>40\right] \mathbb{1}[\frac{1}{3}\sum_{t^{\prime}=t-3}^{t-1}Z_{i,t^{\prime}}>20]\)
Then, \(\pi_{i}\left(\left\{E_{i,t^{\prime}}\right\}_{t^{\prime}=1}^{t-1},\left\{Z_{i,t^{\prime}}\right\}_{t^{\prime}=1}^{t-1}\right)=\pi_{i}\left(\left\{F_{i,t}^{j}\right\}_{j=1}^{10}\right)=\sum_{j=1}^{10}\zeta_{j}F_{i,t}^{j}\), where \(\zeta_{1},\ldots,\zeta_{10}\) are determined by the type of policy assigned to patient \(i\) (a short code sketch of this dosing rule follows this list). We define three separate policy types: aggressive (\(\pi_{i}^{a}\)), moderate (\(\pi_{i}^{m}\)), and conservative (\(\pi_{i}^{c}\)). Depending on the size of the action space, the coefficients corresponding to each of the policy types are shown in Table 3.
We then assign a policy to each patient \(i\) such that if the patient has a larger average value in \(X_{i,1}\) and \(X_{i,2}\) then they are assigned an aggressive policy with high probability. And similarly, if the patient has a larger average value in \(X_{i,3}\) and \(X_{i,4}\) then they are assigned a conservative policy with high probability.
Finally, to emulate a doctor occasionally deviating from the informed policy, at each timestep there is a small chance that the administered dose does not follow the assigned policy \(\pi_{i}\). In particular, if the action space is continuous, 4(a), there is a 5% chance that \(\mathbf{Z}_{i,t}\sim\text{Normal}(E_{i,t},10)\). And if the action space is binary, 4(b), there is a 5% chance that \(\mathbf{Z}_{i,t}\sim\text{Uniform}(\{0,50\})\).
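As referenced in 5(b) above, the informed policy is simply a dot product between the coefficient vector \(\zeta\) and the ten binary features; a minimal sketch of that dosing rule is given below. The function signature and the handling of the initial burden are our own conventions, and the trailing-average features fall back to whatever history is available.

```python
import numpy as np

def informed_dose(E_hist, Z_hist, zeta, t):
    """Informed policy: dose = sum_j zeta_j * F^j_{i,t} over the ten binary
    features defined above. E_hist is assumed to start with E_{i,0}."""
    E_prev = E_hist[-1]
    Z_prev = Z_hist[-1] if Z_hist else 0.0
    F = np.zeros(10)
    F[:6] = [E_prev > 10, E_prev > 20, E_prev > 30,
             E_prev > 40, E_prev > 60, E_prev > 80]
    F[6], F[7] = Z_prev > 25, Z_prev > 50
    if t >= 3:  # F^9 and F^10 additionally require a recent-history average
        F[8] = (E_prev > 40) and (np.mean(E_hist[-3:]) > 20)
        F[9] = (Z_prev > 40) and (np.mean(Z_hist[-3:]) > 20)
    return float(np.dot(zeta, F))
```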
Varying these five aspects of the data generation process, we generate a suite of results
| **Coefficient** | Aggressive (Continuous) | Aggressive (Binary) | Moderate (Continuous) | Moderate (Binary) | Conservative (Continuous) | Conservative (Binary) |
|---|---|---|---|---|---|---|
| \(\zeta_{1}\) | \(10+\epsilon_{\zeta a1}\) | 0 | \(\epsilon_{\zeta m1}\) | 0 | \(\epsilon_{\zeta c1}\) | 0 |
| \(\zeta_{2}\) | \(10+\epsilon_{\zeta a2}\) | 50 | \(\epsilon_{\zeta m2}\) | 0 | \(\epsilon_{\zeta c2}\) | 0 |
| \(\zeta_{3}\) | \(20+\epsilon_{\zeta a3}\) | 0 | \(10+\epsilon_{\zeta m3}\) | 0 | \(\epsilon_{\zeta c3}\) | 0 |
| \(\zeta_{4}\) | \(20+\epsilon_{\zeta a4}\) | 0 | \(10+\epsilon_{\zeta m4}\) | 0 | \(\epsilon_{\zeta c4}\) | 0 |
| \(\zeta_{5}\) | \(20+\epsilon_{\zeta a5}\) | 0 | \(20+\epsilon_{\zeta m5}\) | 50 | \(10+\epsilon_{\zeta c5}\) | 50 |
| \(\zeta_{6}\) | \(20+\epsilon_{\zeta a6}\) | 0 | \(20+\epsilon_{\zeta m6}\) | 0 | \(20+\epsilon_{\zeta c6}\) | 0 |
| \(\zeta_{7}\) | \(\epsilon_{\zeta a7}\) | 0 | \(-10+\epsilon_{\zeta m7}\) | 0 | \(-10+\epsilon_{\zeta c7}\) | -50 |
| \(\zeta_{8}\) | \(\epsilon_{\zeta a8}\) | 0 | \(-20+\epsilon_{\zeta m8}\) | 0 | \(-20+\epsilon_{\zeta c8}\) | 0 |
| \(\zeta_{9}\) | \(20+\epsilon_{\zeta a9}\) | 0 | \(20+\epsilon_{\zeta m9}\) | 0 | \(20+\epsilon_{\zeta c9}\) | 0 |
| \(\zeta_{10}\) | \(\epsilon_{\zeta a10}\) | 0 | \(-20+\epsilon_{\zeta m10}\) | 0 | \(-20+\epsilon_{\zeta c10}\) | 0 |

Table 3: Coefficient values for aggressive, moderate, and conservative policies. All \(\epsilon_{\zeta_{*}}\overset{iid}{\sim}\text{Normal}(0,1)\) and are added to emulate the liberty that experts take to slightly deviate from the preset policies.
that provide a comprehensive analysis of the strengths and weaknesses of a variety of optimal policy estimation methods. We outline the methods we compare to, and provided implementation details, in Appendix E. Results for all experiments are shown in Appendix F.
## Appendix E Comparison Methods and Implementation Details
We compare our matching method to _Finite Timestep Backward Induction Methods_, _Infinite Time Horizon Methods_, and _Deep Reinforcement Learning Methods_. Many of the methods we compare to are not configured to handle all of the complexities present in our data. For this reason, we make adaptations to each of the methods where necessary. In this section, we outline the methods we implemented and any adaptations we made. We omit censored data methods due to their focus on survival analysis time-to-event tasks. We also omit the matching method of Zhou and Kosorok (2017) as they do not consider multiple timesteps and only discuss discrete treatment options.
_Note One: Many of the methods we compare to can only handle binary or discrete action spaces. For binary action space methods, we let \(Z_{i,t}\in\left\{0,50\right\}\) and we binarize the doses such that \(Z_{i,t}=50\left(1\left[Z_{i,t}>25\right]\right)\). For discrete action space methods, we let \(Z_{i,t}\in\left\{0,25,50,75,100\right\}\) and we discretize the doses such that \(Z_{i,t}=25\left(1\left[Z_{i,t}>12.5\right]+1\left[Z_{i,t}>37.5\right]+1\left[Z_{i,t}>62.5\right]+1\left[Z_{i,t}>87.5\right]\right)\)._
_Note Two: The optimal treatment regime estimation literature normally focuses on maximizing outcomes, not minimizing like we do in our setup. We flip the outcomes in our data for methods that try to maximize in order to account for this._
_Note Three: A number of the methods we compare to require a reward value corresponding to each patient \(i\) at timestep \(t\), \(\left\{R_{i,t^{\prime}}\right\}_{t^{\prime}=1}^{T_{i}}\). To calculate these values, we define three separate reward functions: naive, insightful, and oracle. The naive reward function prioritizes reducing EA burden while avoiding large drug doses, but does not consider the patient's pre-treatment covariates. The insightful reward function considers the interaction between \(X_{i,1}\) and EA burdens and \(X_{i,3}\) and drug doses, but assumes a linear relationship and does not account for \(X_{i,2}\) nor \(X_{i,4}\). The oracle reward function is of the same form as our outcome function defined in Equation 4. We compare to three configurations of each method that requires reward values, where each configuration uses reward values calculated from a different reward function. The exact reward functions are outlined below. Note that all methods aim to maximize the reward function._
* **Finite Timestep Backward Induction Methods:** We compare to a wide array of finite timestep backward induction methods. The methods we compare to are: Q-learning Murphy (2005); Moodie et al. (2012); Clifton and Laber (2020), BOWL Zhao et al. (2015), and optimal classifier Zhang et al. (2012). We used the R package DynTxRegime Holloway et al. (2020) to implement each of these methods. These methods all require a discrete treatment space and the DynTxRegime package only handles the binary case. Given that there is a large literature on Q-learning for discrete action spaces with more than two actions, we also implement our own version of Q-learning for multilevel treatments. For these methods, we followed the Q-learning implementation for observational data as outlined by Moodie et al. (2012). Finite timestep backward induction methods assume full observation of all states and actions for each patient and that the number of timesteps for each patient is the same. To implement these methods when patients have varying numbers of observable timesteps, we truncate the state and action space to only include the timesteps for which all samples have observed data, \(\hat{T}=\min_{i\in\{1,\ldots,n\}}T_{i}\). We then carry out each method on this subset of the data to generate estimated optimal treatments for timesteps \(t\in\{1,\ldots,\hat{T}\}\). From here, we use the model generated at the last observed timestep, \(\hat{T}\), to estimate optimal treatments for the remaining \(t\in\{\hat{T},\ldots,\tau_{i}\}\) for each patient \(i\). For the binary Q-learning methods implemented using the DynTxRegime R package we run two versions. One where the contrasts model is a linear model and one where the contrasts model is a decision tree model. For both versions, we use a linear model for the main effects component of the outcome regression. This results in two binary Q-learning varieties. For the optimal classifier method we also run two versions. One where the contrasts model is a linear model and one where the contrasts model is a decision tree model. For both versions, we use a linear model for the propensity score model and main
effects component of the outcome regression. We use a decision tree classifier for the classification model. This results in two optimal classification varieties. BOWL requires reward values and thus we run a version for each of the three reward functions. We also run a linear kernel and second-degree polynomial kernel version of BOWL for each reward function. All versions use a linear model for the propensity score model. This results in six BOWL varieties. For the multilevel Q-learning methods, we incorporate the propensity score at each timestep as a term in our Q-function model (Moodie et al., 2012). All propensity scores are estimated with a linear model. We consider three cases: linear model Q-functions, support vector machine Q-functions with RBF kernels, and random forest Q-functions. This results in three multilevel Q-learning varieties. **In total, we generate results from 13 varieties of finite timestep backward induction methods.**
* **Infinite Time Horizon Methods:** We compare to infinite time horizon Q-learning. We implement this method using _Fitted Q-iteration_ as outlined in Algorithm 2 of Section 4 of Clifton and Laber (2020) (a minimal sketch of this loop follows this list). Similar to multilevel backward induction Q-learning, we use a linear model to estimate propensity scores and include them as a term in the Q-function. We consider using three different types of models for the Q-functions: linear models, support vector machines with RBF kernels, and random forests. For each model type, we also consider the case of binarizing the doses into \(\{0,50\}\) and discretizing the doses into \(\{0,25,50,75,100\}\). Finally, infinite horizon methods need a reward for each action, so we run each configuration under each of the three reward functions. **In total, we generate results from 18 varieties of infinite time horizon methods.**
* one for each of the three reward functions. We set the number of steps for each model to 10,000 and kept the remaining parameters at their default values.
**In total, we generate results from 18 varieties of deep reinforcement learning methods.**
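For reference, the fitted Q-iteration loop mentioned in the infinite time horizon bullet above can be sketched as follows. This is a generic illustration over a discrete dose grid with a random-forest Q-function; it assumes the trajectories have already been flattened into (state, action, reward, next state) tuples with rewards computed from one of the reward functions, and it omits the propensity-score term included in our actual implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

DOSES = (0, 25, 50, 75, 100)

def fitted_q_iteration(S, A, R, S_next, terminal, gamma=0.9, n_iter=50):
    """Batch fitted Q-iteration (cf. Clifton and Laber, 2020, Alg. 2).
    S: (n, d) states, A: (n,) doses, R: (n,) rewards,
    terminal: (n,) flags marking the last transition of each trajectory."""
    X = np.column_stack([S, A])
    q = RandomForestRegressor(n_estimators=200).fit(X, R)  # Q_0 ~ immediate reward
    for _ in range(n_iter):
        # Bellman targets: r + gamma * max_a' Q(s', a'), cut off at terminal states.
        q_next = np.stack(
            [q.predict(np.column_stack([S_next, np.full(len(S_next), a)]))
             for a in DOSES], axis=1)
        targets = R + gamma * (1 - terminal) * q_next.max(axis=1)
        q = RandomForestRegressor(n_estimators=200).fit(X, targets)
    return q

def greedy_dose(q, state):
    """Recommended dose: argmax of the fitted Q-function over the dose grid."""
    feats = np.array([np.append(state, a) for a in DOSES])
    return DOSES[int(np.argmax(q.predict(feats)))]
```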
We compare to a number of additional baselines in addition to the optimal treatment regime estimation methods outlined above.
* **Expert:** This baseline is meant to emulate an educated doctor strictly following the informed policy with no deviation. Here we assign policies to each patient \(i\) as done in the informed policy creation method 5(b). However, we remove all the noise we added in 5(b). In particular, \(\epsilon_{\zeta_{*}}=0\) and there is a 0% chance that the doctor deviates from the assigned policy at each timestep.
* **Random:** Random dosing at each timestep. If the action space is continuous, 4(a), then \(Z_{i,t}\sim\text{Uniform}(0,100)\) for all \(i\) and \(t\). Otherwise, if the action space is binary, 4(b), then \(Z_{i,t}\sim\text{Uniform}(\{0,50\})\).
* **Inaction:** No drug is administered to any patients at any timesteps. \(Z_{i,t}=0\) for all \(i\) and \(t\).
* **Full Dosing:** If the action space is continuous, 4(a), then a dose of 100 is given at every timestep. \(Z_{i,t}=100\) for all \(i\) and \(t\). If the action space is binary, 4(b), then a dose of 50 is given at every timestep. \(Z_{i,t}=50\) for all \(i\) and \(t\).
We implement our method as outlined in Section 5. Since here we know the true underlying PK/PD parameters, we omit Step 1 from our method to ensure a fair comparison. We first estimate each patient's observed regime with a linear model, using the ten features in 5(b) of Appendix D as our policy template. We then learn a distance metric with a linear model and use that distance metric to perform nearest neighbors matching. We create matched groups of size five for each patient, where we match to the five closest patients with good outcomes. Finally, we perform linear interpolation over the patients' policies in each matched group to estimate the optimal policy, \(\hat{\pi}_{i}^{*}\), for each patient \(i\).
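The core of this pipeline can be compressed into the following sketch. It assumes each patient's trajectory has already been expanded into the ten policy-template features (one row per timestep), and it stands in for Steps 2-4 only; the helper names, the use of LASSO to learn the distance metric, and plain averaging as the form of interpolation are our simplifications.

```python
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression
from sklearn.neighbors import NearestNeighbors

def estimate_observed_policies(feature_histories, dose_histories):
    """Step 2: summarize each patient's observed regime by the coefficients of
    a linear model from the ten template features to the administered dose."""
    return np.array([LinearRegression().fit(F, z).coef_
                     for F, z in zip(feature_histories, dose_histories)])

def matched_optimal_policies(V, Y, policies, k=5):
    """Steps 3-4: learn covariate weights, match each patient to the k nearest
    patients with good outcomes (Y == 0), and average their policy coefficients."""
    weights = np.abs(Lasso(alpha=0.1).fit(V, Y).coef_)      # distance metric
    good = np.flatnonzero(Y == 0)
    nn = NearestNeighbors(n_neighbors=k).fit(V[good] * weights)
    _, idx = nn.kneighbors(V * weights)
    return np.array([policies[good[ix]].mean(axis=0) for ix in idx])
```

The returned coefficient vectors play the role of \(\hat{\pi}_{i}^{*}\): applying them to the template features of a new history yields the recommended dose for that patient at each timestep.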
## Appendix F Synthetic Data Experiments: Additional Results and Implementation Details
In Section 6 we present just a small selection of the results from our synthetic data experiment. Here we provide all of our results and further implementation details. We give a comprehensive analysis of key findings in Section F.1. We provide additional experimental implementation details in Section F.2. Given the number of approaches (54) and data generation process setups (32) we ran tests for, we include our full results in separate csv
files. These files are in the Supplementary Materials. We outline each file and its contents in Section F.3.
### F.1 Additional Results for Synthetic Data Experiments
**Summary of our Analysis.** We first compare our method with the 39 approaches that do not use the oracle reward function and are not a preset policy. Looking at the 8 setups where we have 10-15 timesteps and 2-5 missing timesteps, our method outperforms all other approaches in the majority of setups (5 of 8) and is always among the top four performing approaches - never more than 4.5 percentage points worse than the best approach. As noted in Section 6, in the 16 setups with 10-15 timesteps we are the best performing method 9 of 16 times and among the top 4 approaches 16 of 16 times - never more than 7 percentage points worse than the top approach. When compared on all 32 simulation setups, where some are specifically designed for finite-timestep backward induction methods to perform well, our method outperforms all of the comparison approaches in 17 of the 32 setups and is among the top 4 approaches 29 times - never more than 10.1 percentage points worse than the top approach.
When we also consider the oracle reward functions, our method is never more than 12 percentage points worse than the top performing approach on the 8 setups with 10-15 timesteps and 2-5 missing timesteps and never more than 15 percentage points worse than the top performing approach across all 32 setups.
All of these upper limits on the number of percentage points between our method and the top performing approach are the lowest such values for any method. Ultimately, our simulation results show that our method is frequently the best approach and that its performance is consistent across a variety of scenarios.
In the remainder of this section, we perform an in-depth analysis comparing our method to each of the categories of existing DTR and RL methods that we implemented. We focus on finite-timestep backward induction methods Q-learning and optimal classifier in Appendix F.1.1, infinite horizon methods in Appendix F.1.2, Deep RL in Appendix F.1.3, and BOWL in Appendix F.1.4. In each subsection, we comment on the strengths and weaknesses of the methods, ultimately highlighting how our approach is superior for estimating optimal treatment regimes in complex high-stakes settings.
#### F.1.1 Analyzing Q-learning and Optimal Classifier Performance
**Q-learning** and **Optimal Classifier** methods implemented using the _DynTxRegime_ R package struggle in complex settings for what we presume is a variety of reasons. Figure 5 details the performance of our method, Q-learning, and optimal classifier with varying action spaces (binary vs. continuous), number of timesteps (2 vs. 10-15), and missing states (missing vs. no missing). These plots highlight how our method drastically outperforms Q-learning and optimal classifier in continuous action spaces. It makes sense that Q-learning and optimal classifier struggle with continuous action spaces, as they are forced to binarize continuous action spaces, thereby losing important information. Note that the best results across all the plots in Figure 5 are achieved by our method when we allow the doses to be continuous, suggesting that binarizing the treatment is not a good strategy to optimize outcomes for patients.
We also note that our method is far superior in settings with longer time horizons. This aligns with the fact that previous work on finite-timestep backward induction methods has largely focused on the two timestep setting, paying less attention to longer time horizons (Clifton and Laber, 2020). Furthermore, as outlined in Appendix E, when implementing these methods we truncate all of the states to only include timesteps for which all individuals have an observed state and action. This removes a large amount of information from the data and most likely impacts the performance of these methods.
We further note that the finite-timestep backward induction methods perform better, on average, when there are no missing timesteps. Whereas, our method is quite robust to the missingness of states.
As a sanity check, we show the performance of Q-learning and optimal classifier under the conditions they were primarily designed for in Figure 6. These results show how effective Q-learning and optimal classifier can be in a more conducive setting, with all varieties outperforming our method. However, this performance does not translate to our challenging high-stakes setting, ultimately making these methods ill-suited for our application.
One obvious way to try to improve finite-timestep backward induction Q-learning is to decrease the amount of information loss by discretizing the continuous treatments into more bins. While the _DynTxRegime_ R package does not support multilevel treatments, we implemented our own version of Q-learning to handle this. We outline our implementation in Appendix E.
Figure 7 shows a comparison between binary Q-learning with two treatment options and discrete Q-learning with five treatment options. All plots in Figure 7 are in settings with 10
pre-treatment covariates, a continuous action space, and observed data generated using an informed policy. While we do see a gain in performance, this gain is less substantial when there are more timesteps and missing states. Ultimately, the multi-level treatment form of Q-learning still fails to match the performance of our method. This suggests that while Q-learning can improve by increasing the number of discrete dose options, it still struggles with long time horizons and missing states. Furthermore, at some point the small sample size limits the gain in performance Q-learning can achieve by creating more treatment bins.
Figure 5: Percent of patients with poor outcomes under different proposed policies (_lower is better_). Boxplots show the distribution of the average outcomes over 20 iterations. _Observed_ shows average observed outcomes. _Expert_ shows outcomes under the expert policies. _Linear_ and _DTree Q-learning_ are finite-timestep backward induction Q-learning using either linear models or decision trees. _Linear_ and _DTree OptClass_ are optimal classifier using either linear models or decision trees. See Appendix E for further details of each method. Note that not all backward induction methods converged for all 20 iterations of each setup. See all_sims_nan.csv and Appendix F.3 for details.
#### f.1.2 Analyzing Infinite Horizon Performance
**Infinite Horizon** methods can overcome the issue finite-timestep backward induction methods face with longer time horizons and missing states/actions. Figure 8 shows a comparison of our method, the infinite horizon method fitted Q-iteration (see Clifton and Laber (2020)), and the finite-timestep backward induction methods Q-learning and optimal classifier. The subplots in this figure show how each method performs with different numbers of missing states and different size action spaces.
Figure 8 highlights how infinite horizon methods can handle long time horizons and missing states much better than finite-timestep backward induction Q-learning and optimal classifier. In fact, fitted Q-iteration can outperform our method when the action space is binary and does especially well when the observed data is generated from a random policy.
However, we still see that fitted Q-iteration struggles with a continuous action space. This is particularly true when the observed data is generated from an informed policy (plots in the first row of Figure 8). Conversely, our method can handle these added complexities, producing much better results in the setups most resembling a complex real-world setting.
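For reference, a minimal fitted Q-iteration loop with a linear function approximator might look like the following sketch. The transition format, discrete action grid, discount factor, and iteration count are illustrative assumptions rather than the configuration used for the results above.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def fitted_q_iteration(transitions, action_grid, gamma=0.95, n_iters=50):
    """transitions: tuples (s, a, r, s_next, done); rewards are assumed to be
    'higher is better' (negate a poor-outcome indicator if necessary)."""
    S = np.array([t[0] for t in transitions])
    A = np.array([t[1] for t in transitions], dtype=float).reshape(-1, 1)
    R = np.array([t[2] for t in transitions], dtype=float)
    S2 = np.array([t[3] for t in transitions])
    done = np.array([t[4] for t in transitions], dtype=bool)

    model = LinearRegression().fit(np.hstack([S, A]), R)   # start from the 1-step reward
    for _ in range(n_iters):
        # Bellman target: r + gamma * max_a' Q(s', a'), with no bootstrap past terminal steps.
        q_next = np.stack([
            model.predict(np.hstack([S2, np.full((len(S2), 1), a)])) for a in action_grid
        ])
        target = R + gamma * np.where(done, 0.0, q_next.max(axis=0))
        model = LinearRegression().fit(np.hstack([S, A]), target)

    def policy(s):
        q = [model.predict(np.hstack([s.reshape(1, -1), [[a]]]))[0] for a in action_grid]
        return action_grid[int(np.argmax(q))]
    return policy
```

Because transitions from every timestep are pooled, this style of estimator is indifferent to trajectory length and missing timesteps, which is exactly the advantage noted above.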
Similar to Figure 7 for backward induction Q-learning, Figure 9 shows how infinite horizon
Figure 6: Percent of patients with poor outcomes under different proposed policies (_lower is better_) in a setting more conducive to finite-timestep backward induction methods. Here we set the (i) number of covariates to 10, (ii) number of timesteps to 2, (iii) have no missing states, (iv) only allow binary doses, and (v) generate the observed data from a random policy. Boxplots show the distribution of the average outcomes over 20 iterations. _Observed_ shows average observed outcomes. _Expert_ shows outcomes under expert policies. _Inaction_ and _Max Dosing_ administer no drugs and the max amount of drugs to each patient at each timestep, respectively. _Linear_ and _DTree Q-learning_ are finite-timestep backward induction Q-learning using either linear models or decision trees. _Linear_ and _DTree OptClass_ are optimal classifier using either linear models or decision trees. See Appendix E for further details of each method.
methods can alleviate the problem of a continuous action space by using a multi-level treatment version of fitted Q-iteration instead of a binary version. The bottom row of Figure 9 shows the strong performance of infinite horizon methods when using observational data generated from a random policy. Fitted Q-iteration outperforms our method in these setups. However, when the training data is generated from an informed policy (top row of Figure 9), as observational data typically is, our method has superior performance. This is most likely due to the fact that infinite horizon methods have to deal with the notion of exploration vs. exploitation (Clifton and Laber, 2020), leading to worse performance when the data is collected following a relatively stagnant and educated policy. Observed data collected under such policies essentially has less "exploration" built into it. This struggle could also be due to the fact that infinite horizon methods often work under the assumption that the data are collected from a random policy.
Figure 7: Percent of patients with poor outcomes under different proposed policies (_lower is better_). In all plots, the setup has (i) 10 pre-treatment covariates, (iv) a continuous action space, and (v) generates the observed data using an informed policy. Boxplots show the distribution of the average outcomes over 20 iterations. _Observed_ shows average observed outcomes. _Expert_ shows outcomes under expert policies. _Linear Q-learning_ binarizes the continuous treatment values into two values and _Linear Q-learning Multi_ discretizes the treatments into five bins. Both methods use linear models.
That is, there is a non-zero probability of each action at each timestep (Ertefaie and Strawderman, 2018). However, under the informed policy there are certain states for which certain actions are near-impossible.
Infinite horizon methods are a promising technique, but face a key challenge in our data setup as they require a reward value to be assigned to each action. In our setup, we only observe an outcome at the end of a patient's timesteps. Therefore, we are forced to define a reward function ourselves. We outline the three different reward functions we consider in Appendix E. Figure 9 showed results using the oracle reward function. Figure 10 depicts the stark differences in performance we see using infinite horizon methods with different reward functions. We observe that the performance of infinite horizon methods suffers as the reward
Figure 8: Percent of patients with poor outcomes under different proposed policies (_lower is better_). In all plots, the setup has (i) 10 pre-treatment covariates and (ii) 10-15 total timesteps. Boxplots show the distribution of the average outcomes over 20 iterations. _Observed_ shows average observed outcomes. _Expert_ shows outcomes under expert policies. _Linear Q-learning_ is finite-timestep backward induction Q-learning using linear models. _Linear OptClass_ is optimal classifier using linear models. _Linear Inf_ is the infinite horizon method fitted Q-iteration using linear models. _Linear Inf_ uses the oracle reward function. See Appendix E for further details of each method and the reward functions.
function gets farther away from the truth. Researchers typically do not know the oracle, or true, reward function. While we can compare the different reward functions here because we know the underlying simulation setup, this is not possible with real-world observational data. Thus, the researcher has to carefully consider the reward function when using infinite horizon methods. This ultimately limits the usefulness of infinite horizon methods in high-stakes applications where reward values are not available for each action that is observed.
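To make the issue concrete, the small sketch below shows two ways a per-step reward can be backed out when only an end-of-stay outcome is observed: assigning the (negated) outcome at the final timestep only, or substituting a per-step surrogate. The EA-burden surrogate is purely a hypothetical stand-in for the reward functions of Appendix E.

```python
def terminal_only_reward(outcomes, lengths):
    # Reward 0 at every intermediate step; the (negated) final outcome at the last step.
    return [[0.0] * (L - 1) + [-float(y)] for y, L in zip(outcomes, lengths)]

def surrogate_reward(ea_burden):
    # Hypothetical per-step surrogate: penalize the EA burden observed at each timestep.
    return [[-float(e) for e in patient] for patient in ea_burden]
```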
#### f.1.3 Analyzing Deep RL Performance
The performance capabilities of **Deep Reinforcement Learning** are already depicted in Figure 1 of Section 6. While DDPG, SAC, and TD3 struggle with the smaller sample size,
Figure 9: Percent of patients with poor outcomes under different proposed policies (_lower is better_). In all plots, the setup has (i) 100 pre-treatment covariates, (ii) 10-15 total timesteps, and (iv) a continuous action space. Boxplots show the distribution of the average outcomes over 20 iterations. _Observed_ shows average observed outcomes. _Expert_ shows outcomes under expert policies. _Linear Inf_ is Fitted Q-iteration where the treatments are binarized and _Linear Inf Multi_ is Fitted Q-iteration where the treatments are discretized into five bins. Both methods use linear models and the oracle reward function (see Appendix E for details on reward functions).
the more modern architectures like BCQ, CQL, and CRR perform well, although slightly worse than our method, on a simulated dataset that resembles our real-world data. However, we note that these Deep RL methods struggle when a random policy is used to generate the observed data. Figure 11 shows how BCQ, CQL, and CRR perform worse when the training data is generated from a random policy. While our method's performance also suffers in this setting, the dip in performance is less severe than for the Deep RL methods. We hypothesize that Deep RL struggles when using data generated from a random policy because these methods all use an evaluation set to guide the learning process (Seno and Imai, 2022). Thus, with limited data generated via a random policy, it is difficult to evaluate the model's performance. Deep RL methods would likely improve if we had significantly more randomly generated data or had
Figure 10: Percent of patients with poor outcomes under different proposed policies (_lower is better_). In all plots, the setup has (i) 100 pre-treatment covariates, (ii) 10-15 total timesteps, and (iv) a continuous action space. Boxplots show the distribution of the average outcomes over 20 iterations. _Observed_ shows average observed outcomes. _Expert_ shows outcomes under expert policies. All _Linear Inf Multi_ methods are fitted Q-iteration approaches that discretize the treatment into five bins and use linear models. _Naive_, _Insightful_, or _Oracle_ at the end of each _Linear Inf Multi_ method specifies which reward function is used to calculate the reward values. See Appendix E for details on reward functions.
the ability to do online learning (Luo et al., 2023).
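For reference, a minimal offline-training sketch with _d3rlpy_ is shown below. The `trajectories` variable is a hypothetical list of per-patient dicts with aligned arrays, the reward entries would come from one of the Appendix E reward functions, and the argument names assume the d3rlpy 1.x API; none of this reflects the exact training configuration used for the figures.

```python
import numpy as np
from d3rlpy.dataset import MDPDataset   # assuming the d3rlpy 1.x API
from d3rlpy.algos import CQL

# `trajectories` is a hypothetical list of per-patient dicts with aligned arrays.
observations = np.concatenate([p["states"] for p in trajectories]).astype(np.float32)
actions = np.concatenate([p["doses"] for p in trajectories]).astype(np.float32)
rewards = np.concatenate([p["rewards"] for p in trajectories]).astype(np.float32)
terminals = np.concatenate([p["terminals"] for p in trajectories])

dataset = MDPDataset(observations, actions, rewards, terminals)
cql = CQL(use_gpu=False)          # conservative Q-learning trained purely offline
cql.fit(dataset, n_steps=10000)   # argument names follow the 1.x docs (an assumption)
proposed_doses = cql.predict(observations[:5])
```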
Deep RL approaches, like infinite horizon methods, also require a reward to be specified for each action. Figure 12 shows the performance of the best Deep RL methods when using each of the three different reward functions outlined in Appendix E. We observe that the performance is stable across these three reward functions when training on data generated from informed policies. We note, though, that all three reward functions are at least slightly related to the outcome, and thus performance could suffer if the reward function were badly misspecified.
Ultimately, deep reinforcement learning methods show promise for optimal treatment regime estimation from observational data generated by domain experts. The main drawback of Deep RL in our setting is its fundamental lack of interpretability. The inability to explain the estimates generated by Deep RL makes it ill-suited for high-stakes applications in the medical field.
As an aside, we also note that Deep RL methods require substantially more compute power to train than our method and any of the other methods we compare to. We train these models using significantly more compute power and GPUs. Even with the enhanced computing power, these methods had substantially longer runtimes as well. See Appendix F.2 for further details.
Figure 11: Percent of patients with poor outcomes under different proposed policies (_lower is better_). In all plots, the setup has (i) 100 pre-treatment covariates, (ii) 10-15 total timesteps, (iii) 2-5 missing states, and (iv) a continuous action space. Boxplots show the distribution of the average outcomes over 20 iterations. _Observed_ shows average observed outcomes. _Expert_ shows outcomes under expert policies. _CQL_, _CRR_, and _BCQ_ are all Deep RL methods using the insightful reward function. See Appendix E for details on reward functions.
#### f.1.4 Analyzing BOWL Performance
The final method we compare to is Backward Outcome Weighted Learning, **BOWL**. We find that the BOWL method implemented in _DynTxRegime_ struggles to consistently converge, especially when the training data has more timesteps. We show the frequency with which BOWL fails to run for different configurations and reward functions in Figure 13. We further discuss the most likely reasons for these runtime issues, and the steps we took to avoid them, in Appendix F.2. The instability of BOWL for the vast majority of our data configuration setups makes it difficult to discern which aspects of our data cause it the most problems. We ultimately conclude that BOWL, as implemented in the _DynTxRegime_ R package, is ill-equipped to handle the challenges present in our simulated data.
### Additional Implementation Details for Synthetic Data Experiments
Code to reproduce the results in this paper is available at [will include GitHub link in final version].
We run each of our methods outlined in Section E for a total of 20 iterations for each data generation setup. Tests are run on a Slurm cluster with VMware, where each VM is an
Figure 12: Percent of patients with poor outcomes under different proposed policies (_lower is better_). In all plots, the setup has (i) 100 pre-treatment covariates, (ii) 10-15 total timesteps, (iii) 2-5 missing states, (iv) a continuous action space, and (v) generates data from an informed policy. Boxplots show the distribution of the average outcomes over 20 iterations. _Observed_ shows average observed outcomes. _Expert_ shows outcomes under expert policies. _CQL_, _CRR_, and _BCQ_ are all Deep RL methods where _Naive_, _Insightful_, or _Oracle_ at the end of each method specifies which reward function is used to calculate the reward values. See Appendix E for details on reward functions.
Intel(R) Xeon(R) Gold CPU (either 5317 @ 3.00GHz, 5320 @ 2.20GHz, 6142 @ 2.60GHz, 6152 @ 2.10GHz, 6226 @ 2.70GHz, or 6252 @ 2.10GHz). Deep RL methods are run on machines with RTX2080 GPUs. Slurm jobs are allocated a single core with 2 GB of RAM for non-Deep RL methods and 16 GB of RAM for Deep RL methods. We set the random seed to match the iteration number for each.
We split the dataset into 5 folds to perform estimation using our method and Deep RL methods. For our method, we use 1 fold to learn the distance metric and perform estimation on the remaining 4 folds. We then average across the 4 outcomes for each sample. For Deep RL methods, we use 4 folds for training and perform estimation on the remaining fold - performing this 5 times to get estimates for each sample.
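A small sketch of the index bookkeeping behind this splitting scheme is shown below; the metric-learning and estimation steps themselves are not reproduced here, and the sample size and seed are placeholders.

```python
import numpy as np
from sklearn.model_selection import KFold

def split_for_estimation(n, n_folds=5, seed=0):
    """Index sets mirroring the scheme above: one fold to learn the distance
    metric, the remaining folds for matching and outcome estimation."""
    folds = list(KFold(n_splits=n_folds, shuffle=True, random_state=seed).split(np.arange(n)))
    metric_idx = folds[0][1]                                      # held-out fold
    estimation_idx = np.concatenate([test for _, test in folds[1:]])
    return metric_idx, estimation_idx

metric_idx, estimation_idx = split_for_estimation(n=1000)
```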
There are some data generation processes for which we did not generate results for each method for all 20 iterations. You can find details on which methods failed to run for which setups in the all_sims_nan.csv file described in Appendix F.3. We outline which methods we are missing results for and explain the reasons below:
* Q-learning, Optimal Classifier, and BOWL implemented using the _DynTxRegime_ R package: Each of these methods is missing results for some of the setups because they failed to converge or produced a runtime error. We attempted to alleviate these issues by running both Q-learning and optimal classifier with decision trees and linear models and running BOWL with a linear kernel and a second degree polynomial kernel. We
Figure 13: Percentage of total simulation iterations for which different BOWL variations produced either a runtime or convergence error. _Linear_ or _Poly_ refer to the kernel type BOWL uses. _Naive_, _Insightful_, or _Oracle_ at the end of each method specifies which reward function is used to calculate the reward values. See Appendix E for details on BOWL implementation and reward functions. See Appendix F.2 for further details on BOWL and DynTxRegime errors. See all_sims_nan.csv and Appendix F.3 for full results on which simulation setups BOWL failed to run for.
performed five-fold cross-validation to choose the lambda for BOWL. However, the package errored out if any of the folds failed to converge. We added exception handling to account for this, where we attempted to fit BOWL with preset lambda values of 2 and then 0.5 if it produced an error while performing cross-validation. After investigation, we hypothesized that Q-learning and optimal classification failed to converge for extremal propensity scores in observational data. This is supported by the fact that their errors only occurred when the policy was semi-random. For this policy choice, there are timesteps where a patient's next dose is mostly predetermined by their current state - thus leading to very small or large propensity scores. We believe that BOWL struggles with a similar issue, given that it also employs the use of a propensity score. However, BOWL failed to converge for a number of the setups that used a random policy. We acknowledge that the software package made it difficult to discern if the errors were being produced due to an issue with how we were implementing it in the _DynTxRegime_ R package or with the BOWL method itself. Thus, we are less sure of the exact reasons why BOWL struggled for so many of our setups.
* Deep RL methods implemented using the _d3rlpy_ Python package: The Deep RL methods we compare to only accept continuous action spaces. Therefore, we do not have results for any of the setups where the action space was discrete. Also, for two of the setups with continuous action spaces, 2 total timesteps, and 0-1 missing timesteps, the number of realized doses was such that the methods interpreted the action space as discrete in some of the iterations, causing them to error out.
### Synthetic Data Experiments Results Files
We include files with results for all 54 approaches and 32 simulation setups in the Supplementary Materials. The files use seven columns to indicate the settings of the data generation process for that run.
* Sim: Indicates the assigned simulation number. All rows with the same sim number are run under the same data generation configuration, except for the random seed.
* Iter: Indicates the iteration number of the corresponding Sim. The Iter value is also used as the random seed for that run.
* Covs: The number of pre-treatment covariates.
* T Setting: The number of total timesteps setting, where a corresponds to setup 2(a) and b corresponds to setup 2(b) (in Appendix D).
* T Drop Setting: The number of unobserved timesteps setting, where a corresponds to setup 3(a) and b corresponds to setup 3(b) (in Appendix D).
* Binary Dose: Whether the treatment space is binary or not (if FALSE then treatment space is continuous).
* Policy: The policy used to generate the observed data. If random, then a random policy was used to generate the data. Else if informed, then an informed policy was used.
We outline the contents of each file below.
* all_sims_binary_outcomes.csv: This file contains the average binary outcome value, \(\frac{1}{n}\sum_{i=1}^{n}Y_{i}\), under the proposed policies of each approach. Each row corresponds to the average value for a single iteration of the specified simulation setup.
* all_sims_cont_outcomes.csv: This file contains the average continuous outcome value under the proposed policies of each approach. The continuous outcome is simply \(O_{i}\) rather than \(Y_{i}\) in our data generation process outlined in Appendix D. We can report these values since we know the true underlying data generation process. Each row corresponds to the average value for a single iteration of the specified simulation setup.
* all_sims_nan.csv: This file contains the number of iterations that each approach failed to produce policy estimates for the 32 simulation setups. See details in Appendix F.2 on why methods failed.
* sims_binary_outcomes_mean.csv: This file contains the average binary outcome value across all iterations of each simulation setup for each method. Note that not all methods ran for 20 iterations for each setup. See all_sims_nan.csv.
* sims_binary_outcomes_std.csv: This file contains the standard deviation of the average binary outcome value across all iterations of each simulation setup for each method. Note that not all methods ran for 20 iterations for each setup. See all_sims_nan.csv.
* sims_binary_outcomes_median.csv: This file contains the median of the average binary outcome value across all iterations of each simulation setup for each method. Note that not all methods ran for 20 iterations for each setup. See all_sims_nan.csv.
We also include sims_cont_outcomes_mean.csv, sims_cont_outcomes_std.csv, and sims_cont_outcomes_median.csv, which contain the same content but for the continuous outcome.
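As a convenience, a short sketch for recomputing the per-setup summaries from the raw per-iteration file is given below; the setting column names follow the listing above, and the remaining columns are assumed to be the methods, so adjust if the actual files differ.

```python
import pandas as pd

# Recompute per-setup summaries from the raw per-iteration file.
raw = pd.read_csv("all_sims_binary_outcomes.csv")
setting_cols = ["Sim", "Iter", "Covs", "T Setting", "T Drop Setting", "Binary Dose", "Policy"]
method_cols = [c for c in raw.columns if c not in setting_cols]

summary = raw.groupby("Sim")[method_cols].agg(["mean", "std", "median"])
summary.to_csv("sims_binary_outcomes_summary.csv")
```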
\begin{table}
\begin{tabular}{l c} \hline \hline
**Variable** & **Value** \\ \hline Age, year, median (IQR) & 61 (48 – 73) \\ \hline Male gender, n (\%) & 475 (47.7\%) \\ \hline Race & \\ \hline Asian, n (\%) & 33 (3.3\%) \\ \hline Black / African American, n (\%) & 72 (7.2\%) \\ \hline White / Caucasian, n (\%) & 751 (75.5\%) \\ \hline Other, n (\%) & 50 (5.0\%) \\ \hline Unavailable / Declined, n (\%) & 84 (8.4\%) \\ \hline Married, n (\%) & 500 (50.3\%) \\ \hline Premorbid mRS before admission, median (IQR) & 0 (0 – 3) \\ \hline APACHE II in first 24h, median (IQR) & 19 (11 – 25) \\ \hline Initial GCS, median (IQR) & 11 (6 – 15) \\ \hline Initial GCS is with intubation, n (\%) & 415 (41.7\%) \\ \hline Worst GCS in first 24h, median (IQR) & 8 (3 – 14) \\ \hline Worst GCS in first 24h is with intubation, n (\%) & 511 (51.4\%) \\ \hline Admitted due to surgery, n (\%) & 168 (16.9\%) \\ \hline Cardiac arrest at admission, n (\%) & 79 (7.9\%) \\ \hline Seizure at presentation, n (\%) & 228 (22.9\%) \\ \hline Acute SDH at admission, n (\%) & 146 (14.7\%) \\ \hline Take anti-epileptic drugs outside hospital, n (\%) & 123 (12.4\%) \\ \hline Highest heart rate in first 24h, /min, median (IQR) & 92 (80 – 107) \\ \hline Lowest heart rate in first 24h, /min, median (IQR) & 71 (60 – 84) \\ \hline Highest systolic BP in first 24h, mmHg, median (IQR) & 153 (136 – 176) \\ \hline Lowest systolic BP in first 24h, mmHg, median (IQR) & 116 (100 – 134) \\ \hline Highest diastolic BP in first 24h, mmHg, median (IQR) & 84 (72 – 95) \\ \hline Lowest diastolic BP in first 24h, mmHg, median (IQR) & 61 (54 – 72) \\ \hline Mechanical ventilation on the first day of EEG, n (\%) & 572 (57.5\%) \\ \hline Systolic BP on the first day of EEG, mmHg, median (IQR) & 148 (130 – 170) \\ \hline GCS on the first day of EEG, median (IQR) & 8 (5 – 13) \\ \hline History & \\ \hline \hline \end{tabular}
\begin{tabular}{l c} \hline Stroke, n (\%) & 192 (19.3\%) \\ \hline Hypertension, n (\%) & 525 (52.8\%) \\ \hline Seizure or epilepsy, n (\%) & 182 (18.3\%) \\ \hline Brain surgery, n (\%) & 109 (11.0\%) \\ \hline Chronic kidney disorder, n (\%) & 112 (11.3\%) \\ \hline Coronary artery disease and myocardial infarction, n (\%) & 160 (16.1\%) \\ \hline Congestive heart failure, n (\%) & 90 (9.0\%) \\ \hline Diabetes mellitus, n (\%) & 201 (20.2\%) \\ \hline Hypersensitivity lung disease, n (\%) & 296 (29.7\%) \\ \hline Peptic ulcer disease, n (\%) & 50 (5.0\%) \\ \hline Liver failure, n (\%) & 46 (4.6\%) \\ \hline Smoking, n (\%) & 461 (46.3\%) \\ \hline Alcohol abuse, n (\%) & 231 (23.2\%) \\ \hline Substance abuse, n (\%) & 119 (12.0\%) \\ \hline Cancer (except central nervous system), n (\%) & 180 (18.1\%) \\ \hline Central nervous system cancer, n (\%) & 85 (8.5\%) \\ \hline Peripheral vascular disease, n (\%) & 41 (4.1\%) \\ \hline Dementia, n (\%) & 45 (4.5\%) \\ \hline Chronic obstructive pulmonary disease or asthma, n (\%) & 139 (14.0\%) \\ \hline Leukemia or lymphoma, n (\%) & 22 (2.2\%) \\ \hline AIDS, n (\%) & 12 (1.2\%) \\ \hline Connective tissue disease, n (\%) & 47 (4.7\%) \\ \hline Primary diagnosis & \\ \hline Septic shock, n (\%) & 131 (13.2\%) \\ \hline Ischemic stroke, n (\%) & 85 (8.5\%) \\ \hline Hemorrhagic stroke, n (\%) & 163 (16.4\%) \\ \hline Subarachnoid hemorrhage (SAH), n (\%) & 188 (18.9\%) \\ \hline Subdural hematoma (SDH), n (\%) & 94 (9.4\%) \\ \hline SDH or other traumatic brain injury including SAH, n (\%) & 52 (5.2\%) \\ \hline Traumatic brain injury including SAH, n (\%) & 21 (2.1\%) \\ \hline Seizure/status epilepticus, n (\%) & 258 (25.9\%) \\ \hline Brain tumor, n (\%) & 113 (11.4\%) \\ \hline CNS infection, n (\%) & 64 (6.4\%) \\ \hline Ischemic encephalopathy or Anoxic brain injury, n (\%) & 72 (7.2\%) \\ \hline Toxic metabolic encephalopathy, n (\%) & 104 (10.5\%) \\ \hline Primary psychiatric disorder, n (\%) & 35 (3.5\%) \\ \hline Structural-degenerative diseases, n (\%) & 35 (3.5\%) \\ \hline Spell, n (\%) & 5 (0.5\%) \\ \hline Respiratory disorders, n (\%) & 304 (30.6\%) \\ \hline Cardiovascular disorders, n (\%) & 153 (15.4\%) \\ \hline Kidney failure, n (\%) & 65 (6.5\%) \\ \hline Liver disorder, n (\%) & 30 (3.0\%) \\ \hline Gastrointestinal disorder, n (\%) & 18 (1.8\%) \\ \hline Genitourinary disorder, n (\%) & 34 (3.4\%) \\ \hline Endocrine emergency, n (\%) & 28 (2.8\%) \\ \hline Non-head trauma, n (\%) & 13 (1.3\%) \\ \hline Malignancy, n (\%) & 65 (6.5\%) \\ \hline Primary hematological disorder, n (\%) & 24 (2.4\%) \\ \hline \hline \end{tabular}
\end{table}
Table 4: Full cohort characteristics and data description.
## Appendix H Anti-Seizure Medications and Policy Templates
### Anti-Seizure Medications
Two drugs were studied: propofol and levetiracetam. Propofol is a sedative antiseizure medication and is given as a continuous infusion, while levetiracetam is a non-sedative antiseizure medication given as a bolus. The doses are normalized by body weight (kg). We use half-lives from the literature to estimate the drug concentrations \(\mathbf{D}\) and estimate the PD parameters using \(E\) and \(\mathbf{D}\) for each patient in our cohort (see Table 5 and Figure 14).
\begin{table}
\begin{tabular}{c c c c} \hline \hline
**Drug** & **Half-Life** & **avg.** \(\widehat{ED}_{50}\) & **avg.** \(\widehat{\alpha}\) \\ \hline Propofol & 20 minutes & 2.41 mg/kg/hr & 2.96 \\ Levetiracetam & 8 hours & 2.26 mg/kg & 3.33 \\ \hline \hline \end{tabular}
\end{table}
Table 5: PK and the estimated average PD parameters for the anti-seizure medications.
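As a rough sketch of how the half-lives in Table 5 translate administered doses into drug-level estimates, and how a sigmoidal PD curve with parameters \(ED_{50}\) and \(\alpha\) maps drug level to expected effect, consider the following. The exponential-decay accumulation and Hill-type response are standard PK/PD assumptions used here for illustration, not the exact estimation procedure of this paper, and the dose sequence is hypothetical.

```python
import numpy as np

def concentration(doses, half_life_hr, dt_hr=1.0):
    """Accumulated drug level D_t from per-interval doses, assuming first-order
    exponential decay governed by the given half-life."""
    decay = 0.5 ** (dt_hr / half_life_hr)
    level, D = 0.0, np.zeros(len(doses))
    for t, dose in enumerate(doses):
        level = level * decay + dose
        D[t] = level
    return D

def pd_response(D, ed50, alpha):
    # Hill-type sigmoid: fraction of the maximal drug effect at level D.
    return D**alpha / (ed50**alpha + D**alpha)

# Example with the propofol values from Table 5 (half-life 20 min = 1/3 h).
prop_doses = np.array([0.0, 2.0, 2.0, 0.0, 1.0])   # mg/kg/hr, hypothetical
D = concentration(prop_doses, half_life_hr=20 / 60)
effect = pd_response(D, ed50=2.41, alpha=2.96)
```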
### Policy Templates
The regime determining the dose for patient \(i\), for propofol at time \(t\) is given by:
\[\pi_{i}^{prop}\left(\{E_{i,t^{\prime}}\}_{t^{\prime}=1}^{t-1},\{\mathbf{Z}_{i,t^{\prime}}\}_{t^{\prime}=1}^{t-1};\mathbf{a}_{i}^{p}\right)\] \[=a_{1,i}^{p}\mathbbm{1}[E_{i,t-1hr}>25\%]+a_{2,i}^{p}\mathbbm{1}[E_{i,t-1hr}>50\%]\] \[+a_{3,i}^{p}\mathbbm{1}[E_{i,t-1hr}>75\%]\] \[+a_{4,i}^{p}\mathbbm{1}[E_{i,t-6hr}>25\%]+a_{5,i}^{p}\mathbbm{1}[E_{i,t-6hr}>50\%]\] \[+a_{6,i}^{p}\mathbbm{1}[E_{i,t-1hr}>25\%]\mathbbm{1}[E_{i,t-6hr}>25\%]\] \[+a_{7,i}^{p}\mathbbm{1}[E_{i,t-6hr}>25\%]\mathbbm{1}[E_{i,t-12hr}>25\%], \tag{8}\]
where \(\mathbf{a}^{p}\) is a vector of the parameters for propofol's regime, \(E_{i,t-t^{\prime}}\) is the average EA burden between time \(t-t^{\prime}\) and \(t\), and \(Z_{i,j^{\prime},t-t^{\prime}}\) is the total dose of drug \(j^{\prime}\) administered between time \(t-t^{\prime}\) and \(t\).
The regime determining the dose for patient \(i\), for levetiracetam at time \(t\) is given by:
\[\pi_{i}^{lev}\left(\{E_{i,t^{\prime}}\}_{t^{\prime}=1}^{t-1},\{\mathbf{Z}_{i,t^{\prime}}\}_{t^{\prime}=1}^{t-1};\mathbf{a}_{i}^{l}\right)\] \[=\mathbbm{1}\left[Z_{\text{lev},i,t-12hr}=0\right]\times\] \[\Big{(}a_{0,i}^{l}+a_{1,i}^{l}\mathbbm{1}[E_{i,t-1hr}>25\%]\] \[+a_{2,i}^{l}\mathbbm{1}[E_{i,t-1hr}>50\%]+a_{3,i}^{l}\mathbbm{1}[E_{i,t-1hr}>75\%]\] \[+a_{4,i}^{l}\mathbbm{1}[E_{i,t-6hr}>25\%]+a_{5,i}^{l}\mathbbm{1}[E_{i,t-6hr}>50\%]\] \[+a_{6,i}^{l}\mathbbm{1}[E_{i,t-1hr}>25\%]\mathbbm{1}[E_{i,t-6hr}>25\%]\] \[+a_{7,i}^{l}\mathbbm{1}[E_{i,t-6hr}>25\%]\mathbbm{1}[E_{i,t-12hr}>25\%]\Big{)}, \tag{9}\]
Figure 14: Boxplots showing the distribution of the estimated pharmacodynamics parameters.
where \(\mathbf{a}^{l}\) is a vector of the parameters for levetiracetam's regime.
Thus, the regime for patient \(i\) is denoted by

\[\pi_{i}=\begin{Bmatrix}\pi_{i}^{prop}\left(\{E_{i,t^{\prime}}\}_{t^{\prime}=1}^{t-1},\{\mathbf{Z}_{i,t^{\prime}}\}_{t^{\prime}=1}^{t-1};\mathbf{a}_{i}^{p}\right)\\ \pi_{i}^{lev}\left(\{E_{i,t^{\prime}}\}_{t^{\prime}=1}^{t-1},\{\mathbf{Z}_{i,t^{\prime}}\}_{t^{\prime}=1}^{t-1};\mathbf{a}_{i}^{l}\right)\end{Bmatrix}.\]
We estimate \(\mathbf{a}^{p}\) and \(\mathbf{a}^{l}\) by minimizing the mean squared error loss between the predicted drug doses and the observed drug doses, \(Z_{\mathrm{prop},i,t}\) and \(Z_{\mathrm{lev},i,t}\) at each time \(t\).
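A small sketch of how the propofol template of Eq. (8) can be evaluated and its coefficients fit by least squares is given below. The feature construction mirrors the indicator terms of the equation, while the array layout of the EA-burden histories is an illustrative assumption.

```python
import numpy as np

def propofol_features(E_1hr, E_6hr, E_12hr):
    """Indicator features matching the terms of Eq. (8); inputs are trailing-window
    average EA burdens expressed as fractions in [0, 1]."""
    f = np.column_stack([
        E_1hr > 0.25, E_1hr > 0.50, E_1hr > 0.75,
        E_6hr > 0.25, E_6hr > 0.50,
        (E_1hr > 0.25) & (E_6hr > 0.25),
        (E_6hr > 0.25) & (E_12hr > 0.25),
    ])
    return f.astype(float)

def fit_template(E_1hr, E_6hr, E_12hr, observed_doses):
    # Least-squares fit of the template coefficients, i.e., minimizing the MSE
    # between predicted and observed doses as described in the text.
    X = propofol_features(E_1hr, E_6hr, E_12hr)
    a, *_ = np.linalg.lstsq(X, observed_doses, rcond=None)
    return a

def propose_dose(a, e1, e6, e12):
    return (propofol_features(np.array([e1]), np.array([e6]), np.array([e12])) @ a)[0]
```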
### Goodness of fit
Next, we study how well the estimated administered regimes predict the observed drug doses. Recall that we learn a policy per patient. To keep the discussion brief, we show here the distribution of the \(R^{2}\) values for the propofol policy fits (see Figure 15(a)) and a scatterplot of predicted and observed propofol doses for all patients overlaid on a single plot (see Figure 15(b)). We find that while most of the fitted models have a high \(R^{2}\), there are a few which do not fit well. We also see similar behavior in the scatter plot of observed and predicted drug doses.
## Appendix I Consistency Theorem and Proof
**Theorem** (Consistency of Treatment Regime Estimator).: _Consider a nested sequence of datasets \(\{\mathcal{D}_{n}\}\) such that \(|\mathcal{D}_{n}|=n\). Then, given the conditional ignorability, local positivity, and smooth outcomes assumptions,_
\[\lim_{n\to\infty}\mathbb{E}[Y_{i}(\widehat{\pi}_{i}^{*,(n)})\mid\mathbf{V}_{i}]=\mathbb{E}[Y_{i}(\pi_{i}^{*})\mid\mathbf{V}_{i}],\]
_where \(\widehat{\pi}_{i}^{*,(n)}\) is the estimate of the optimal treatment regime for unit \(i\) estimated using the caliper nearest neighbors interpolation on dataset \(\mathcal{D}_{n}\) with caliper \(r_{n}\)._
Proof. Let \(\mu_{i}(\mathbf{v},\pi):=\mathbb{E}[Y_{i}(\pi)\mid\mathbf{V}_{i}=\mathbf{v}]\) be the expected potential outcome for unit \(i\), for which we are interested in estimating the optimal policy, and
\[A_{i}^{(n)}:=\mu_{i}(\mathbf{V}_{i},\widehat{\pi}_{i}^{*,(n)})-\mu_{i}( \mathbf{V}_{i},\pi_{i}^{*}).\]
Figure 15: Goodness of fit for the policy for propofol doses. (a) Histogram of \(R^{2}\) values for all the model fits per patient, and (b) scatter plot of predicted vs. observed doses for all patients overlaid on one plot.
By conditional ignorability, \(\mu_{i}(\mathbf{v},\pi)=\mathbb{E}[Y_{i}\mid\mathbf{V}_{i}=\mathbf{v},\pi_{i}=\pi]\). Also, let \(\widehat{\mu}_{i}^{(n)}(\mathbf{v},\pi)\) denote the \(r_{n}\)-caliper nearest neighbor estimate of \(\mu_{i}(\mathbf{v},\pi)\) on dataset \(\mathcal{D}_{n}\) and \(MG_{i}^{(n)}\) denote the set of all units in \(\mathcal{D}_{n}\) that are at max \(r_{n}\) distance away from \(\mathbf{V}_{i}\). Then, by definition, \(\pi_{i}^{*}\) is the policy that, given \(\mathbf{V}_{i}\), minimizes \(\mu_{i}(\cdot,\cdot)\), and \(\widehat{\pi}_{i}^{*,(n)}\) is the policy that, given \(\mathbf{V}_{i}\), minimizes \(\widehat{\mu}_{i}^{(n)}(\cdot,\cdot)\). Thus,
\[A_{i}^{(n)} \leq \left(\mu_{i}(\mathbf{V}_{i},\widehat{\pi}_{i}^{*,(n)})-\widehat {\mu}_{i}^{(n)}(\mathbf{V}_{i},\widehat{\pi}_{i}^{*,(n)})\right)-\left(\mu_{i} (\mathbf{V}_{i},\pi_{i}^{*})-\widehat{\mu}_{i}^{(n)}(\mathbf{V}_{i},\pi_{i}^{ *})\right)\] \[\leq \left|\left(\mu_{i}(\mathbf{V}_{i},\widehat{\pi}_{i}^{*,(n)})- \widehat{\mu}_{i}^{(n)}(\mathbf{V}_{i},\widehat{\pi}_{i}^{*,(n)})\right)- \left(\mu_{i}(\mathbf{V}_{i},\pi_{i}^{*})-\widehat{\mu}_{i}^{(n)}(\mathbf{V}_ {i},\pi_{i}^{*})\right)\right|\] \[\leq \left|\left(\mu_{i}(\mathbf{V}_{i},\pi_{i}^{*})-\widehat{\mu}_{i }^{(n)}(\mathbf{V}_{i},\pi_{i}^{*})\right)\right|+\left|\left(\mu_{i}(\mathbf{ V}_{i},\widehat{\pi}_{i}^{*,(n)})-\widehat{\mu}_{i}^{(n)}(\mathbf{V}_{i}, \widehat{\pi}_{i}^{*,(n)})\right)\right|.\]
As \(n\rightarrow\infty\) we shrink \(r_{n}\to 0\) such that \(|MG_{i}^{(n)}|\rightarrow\infty\). Then, by the consistency of the caliper nearest-neighbors estimator under smoothness of outcomes, \(\widehat{\mu}_{i}^{(n)}(\mathbf{V}_{i},\pi)\rightarrow\mu_{i}(\mathbf{V}_{i},\pi)\) (see Remark 5). This implies that, as \(n\rightarrow\infty\), \(A_{i}^{(n)}=\mu_{i}(\mathbf{V}_{i},\widehat{\pi}_{i}^{*,(n)})-\mu_{i}(\mathbf{V}_{i},\pi_{i}^{*})\to A_{i}^{(\infty)}\leq 0\). Further, by the definition of \(\pi_{i}^{*}\), \(\mu_{i}(\mathbf{V}_{i},\pi_{i}^{*})\leq\mu_{i}(\mathbf{V}_{i},\widehat{\pi}_{i}^{*,(n)})\). Thus, we get \(\mu_{i}(\mathbf{V}_{i},\widehat{\pi}_{i}^{*,(n)})\rightarrow\mu_{i}(\mathbf{V}_{i},\pi_{i}^{*})\) as \(n\rightarrow\infty\). **QED.**
_Remark 5_.: The consistency of the caliper nearest-neighbors estimator is a standard and well-explored result in the literature (Parikh et al., 2022; Devroye et al., 1994; Kudraszow and Vieu, 2013; Li, 1984; Jiang, 2019; Ferraty et al., 2010; Kara et al., 2017; Einmahl and Mason, 2005). Our context is similar to the one discussed in Theorem 1 of Parikh et al. (2022) and Theorem 2 of Kudraszow and Vieu (2013).
_Remark 6_.: Theorem 2.2 of Zhou and Kosorok (2017) shows a similar consistency result for the optimal treatment regime estimator.
|
2301.03210 | Probing the structural evolution along the fission path in the
superheavy nucleus $^{256}$Sg | The evolution of structure property along the fission path in the superheavy
nucleus $^{256}$Sg is predicted through the multi-dimensional
potential-energy(or Routhian)-surface calculations,in which the
phenomenological deformed Woods-Saxon potential is adopted. Calculated nuclear
deformations and fission barriers for $^{256}_{106}$Sg$_{150}$ and its
neighbors, e.g., $^{258,260}$Sg, $^{254}$Rf and $^{252}$No are presented and
compared with other theoretical results. A series of energy maps and curves are
provided and used to evaluate the corresponding shape-instability properties,
especially in the directions of triaxial $\gamma$ and different hexadecapole
deformations (e.g., $\alpha_{40}$, $\alpha_{42}$ and $\alpha_{44}$). It is
found that the triaxial deformation may help the nucleus bypass the first
fission-barrier of the axial case. After the first minimum in the nuclear
energy surface, the fission pathway of the nucleus can be affected by $\gamma$
and hexadecapole deformation degrees of freedom. In addition, microscopic
single-particle structure, pairing and Coriolis effects are briefly
investigated and discussed. | Ting-Ting Li, Hua-Lei Wang, Zhen-Zhen Zhang, Min-Liang Liu | 2023-01-09T09:00:48Z | http://arxiv.org/abs/2301.03210v1 | # Probing the structural evolution along the fission path in the superheavy nucleus \({}^{256}\)Sg
###### Abstract
The evolution of structure property along the fission path in the superheavy nucleus \({}^{256}\)Sg is predicted through the multi-dimensional potential-energy(or Routhian)-surface calculations, in which the phenomenological deformed Woods-Saxon potential is adopted. Calculated nuclear deformations and fission barriers for \({}^{256}_{106}\)Sg\({}_{150}\) and its neighbors, e.g., \({}^{258,260}\)Sg, \({}^{254}\)Rf and \({}^{252}\)No are presented and compared with other theoretical results. A series of energy maps and curves are provided and used to evaluate the corresponding shape-instability properties, especially in the directions of triaxial \(\gamma\) and different hexadecapole deformations (e.g., \(\alpha_{40}\), \(\alpha_{42}\) and \(\alpha_{44}\)). It is found that the triaxial deformation may help the nucleus bypass the first fission-barrier of the axial case. After the first minimum in the nuclear energy surface, the fission pathway of the nucleus can be affected by \(\gamma\) and hexadecapole deformation degrees of freedom. In addition, microscopic single-particle structure, pairing and Coriolis effects are briefly investigated and discussed.
**Keywords: structure evolution, fission path; fission barrier; superheavy nuclei; macroscopic-microscopic model.**
## 1 Introduction
The evolution of nuclear structure properties with some degree of freedom (e.g., nucleon number, spin, temperature) is one of the most significant issues in nuclear physics [1], especially towards the superheavy mass region. Great progress has been made in the synthesis of superheavy nuclei with the development of radioactive beam facilities, heavy-ion accelerators and highly effective detector systems [2; 3; 4]. Spontaneous fission is usually one of the important decay modes of a superheavy nucleus, and the barrier along the fission path is critical for understanding the fission process [5; 6]. For instance, the survival probability of a synthesized superheavy nucleus in a heavy-ion fusion reaction is directly related to such a barrier: during the cooling process of a compound nucleus, it plays a decisive role in the competition between nucleon evaporation and fission (a small change of the fission barrier may result in several orders of magnitude difference in survival probability) [7]. Nevertheless, it is still rather difficult to give an accurate description of the fission barrier. To a large extent, the barrier size and shape are determined by the fission path in the nuclear energy surface.
Up to now, there are several types of models which are widely used for investigating nuclear fission phenomena, including, e.g., the macroscopic-microscopic (MM) models [8; 9; 10; 11; 12], the non-relativistic energy density functionals based on zero-range Skyrme and finite-range Gogny interactions [13; 14; 15; 16; 17; 18], the extended Thomas-Fermi plus Strutinsky integral methods [19; 20], and the covariant density functional theory [5; 21; 22]. The MM methods usually combine high descriptive power with simplicity of calculation and are thus still used by many researchers. In such an approach, an empirical one-body nuclear mean-field (e.g., the Nilsson or Woods-Saxon potential) Hamiltonian is used to obtain the microscopic single-particle levels and wave functions, and a macroscopic liquid-drop model (e.g., the standard liquid-drop model [23], the finite-range droplet model [24], and the Lublin-Strasbourg drop model [25], etc) is combined with it to describe the nuclear bulk property. In recent years, the model parameters, including their uncertainties and propagations, in both the phenomenological Woods-Saxon potential and the macroscopic liquid-drop model are still being studied and optimized, e.g., cf. Refs. [11; 26; 27; 28; 29; 30]. Indeed, the parameters of MM models are mainly obtained from fits to the available single-particle levels of several spherical nuclei and several thousand nuclear-mass data. They are generally successful near the \(\beta\)-stability line, especially in the medium and heavy nuclear regions. Without preconceived knowledge, e.g., about the measured densities and single-particle energies, it needs to be tested whether the modeling and model parameters of a phenomenological one-body potential are still valid enough. Part of the aim of this work is to test the theoretical method in these respects.
Prior to this work, 16 Sg isotopes from \(A=258\) to 273 were synthesized by the fusion-evaporation reactions, e.g., \({}^{238}\)U(\({}^{30}\)Si,\(xn\))\({}^{268-x}\)Sg [31]. It was reported that the lightest even-even Sg isotope, \({}^{258}\)Sg, has a revised half-life of \(2.8^{+0.8}_{-0.5}\)\(ms\)[32]. Naturally, one expects that based on the fusion-evaporation mechanism, the superheavy nuclide \({}^{256}\)Sg will be synthesized as the next
candidate, which is the nearest even-even nucleus to the known ones in this isotopic chain. Keeping this in mind, we predict the properties of structure evolution along the possible fission path for the superheavy nuclide \({}^{256}\)Sg in this project. In our previous studies, we systematically investigated the octupole correlation properties for 42 even-even nuclei with \(102\leq Z\leq 112\)[33] and the triaxial effects on the inner fission barriers in 95 transuranium even-even nuclei with \(94\leq Z\leq 118\)[34]. The triaxiality and Coriolis effects on the fission barrier in isovolumic nuclei with \(A=256\) were also investigated; there, \({}^{256}\)Sg was calculated, but the focus was only on the first (inner) fission barrier [35]. In Ref. [36], we investigated the effects of various deformations (e.g., \(\beta_{2}\), \(\gamma\) and \(\beta_{4}\)) on the first barrier in even-even nuclei with \(N=152\) and \(94\leq Z\leq 108\). In addition, we studied the collective rotational effects in the \(\alpha\)-decay-chain nuclei (from \({}^{216}\)Po to \({}^{272}\)Cn) [37] and in \({}^{254-258}\)Rf [38] by similar calculations. The primary purpose of this study is to investigate the effects of different deformation parameters, especially the axial and non-axial hexadecapole deformations, on the fission path of \({}^{256}\)Sg by analyzing the topography of the energy surfaces calculated in a reasonable subspace of collective coordinates (it is impossible to calculate in the full deformation space). Probing the shape evolution along the fission path on the energy landscape will be useful for understanding the formation mechanism of the fission barrier. We also provide an analysis of the single-particle structures and of the shell and pairing evolutions, especially at the minima and saddles. Sobiczewski et al. [39] systematically investigated the static inner barriers of the heaviest nuclei with proton number \(98\leq Z\leq 126\) and neutron number \(134\leq N\leq 192\) in a multidimensional deformation space and pointed out that the inclusion of the non-axial hexadecapole shapes lowers the barrier by up to about 1.5 MeV. In the synthesis of superheavy nuclei, nuclear hexadecapole deformations were revealed to have an important influence on the production cross sections, e.g., by affecting the driving potentials and the fusion probabilities [40; 41].
This paper is organized as follows: In Sect.2, we briefly describe the outline of the theoretical framework and the details of the numerical calculations. The results of the calculations and their relevant discussion are given in Sect.3. Finally, the concluding remarks will be given in Sect.4.
## 2 Theoretical framework
In what follows, we recall the unified procedure and give the necessary references related to the present theoretical calculation, which may help clarify some details for the reader (e.g., the various variants of the pairing-energy contribution within the framework of the macroscopic-microscopic method). We employ potential-energy (or Routhian) surface calculations to study the present project. This method is based on the macroscopic-microscopic model [42; 43] and the cranking approximation [44; 45; 46], and is one of the widely used and powerful tools in nuclear structure research, especially for rotating nuclei. The usual expression for the total energy in the rotating coordinate frame (namely, the so-called total Routhian) reads [47]
\[E^{\omega}(Z,N,\hat{\beta})\;=\;E^{\omega}_{macr}(Z,N,\hat{\beta})+\delta E^{ \omega}_{micro}(Z,N,\hat{\beta}), \tag{1}\]
where \(E^{\omega}(Z,N,\hat{\beta})\) represents the total Routhian of a nucleus (\(Z\), \(N\)) at frequency \(\omega\) and deformation \(\hat{\beta}\). The first term on the right-hand side in Eq. (1) denotes the macroscopic (liquid drop, or LD) energy with the rigid-body moment of inertia calculated classically at a given deformation, assuming a uniform density distribution; \(\delta E^{\omega}_{micro}\) represents the contribution due to the microscopic effects under rotation. After rearrangement employing elementary transformations [48; 49; 50; 51; 52], the total Routhian can be rewritten as,
\[E^{\omega}(Z,N,\hat{\beta}) = E^{\omega=0}(Z,N,\hat{\beta}) \tag{2}\] \[+ [\langle\hat{H}^{\omega}(Z,N,\hat{\beta})\rangle-\langle\hat{H}^ {\omega=0}(Z,N,\hat{\beta})\rangle]\] \[- \frac{1}{2}\omega^{2}[{\cal J}_{macr}(A,\hat{\beta})-{\cal J}_{ Stru}(Z,N,\hat{\beta})].\]
The notations for the quantities in Eq. (2) are standard [47; 53]. The term \(E^{\omega=0}(Z,N,\hat{\beta})\) is the static total energy (corresponding to \(\omega=0\)), which consists of a macroscopic LD part \(E_{LD}(Z,N,\hat{\beta})\), a shell correction \(\delta E_{shell}(Z,N,\hat{\beta})\), and a pairing-energy contribution \(\delta E_{pair}(Z,N,\hat{\beta})\) (neglecting the superscript \(\omega=0\)). The second term in the square brackets represents the energy change of the cranked Hamiltonian \(\hat{H}^{\omega}(Z,N,\hat{\beta})\) due to rotation [47; 53]. In Eq. (2), it is usually and reasonably assumed that the average pairing energy of the liquid-drop term and the Strutinsky-smeared pairing energy cancel each other [47]. Therefore, one can further write Eq. (2) as [cf. Ref. [54] and references therein],
\[E^{\omega}(Z,N,\hat{\beta}) = E_{LD}(Z,N,\hat{\beta}) \tag{3}\] \[+ \delta E_{shell}(Z,N,\hat{\beta})+\delta E_{pair}(Z,N,\hat{\beta})\] \[+ [\langle\hat{H}^{\omega}(Z,N,\hat{\beta})\rangle-\langle\hat{H}^ {\omega=0}(Z,N,\hat{\beta})\rangle].\]
As is known, several phenomenological LD models (such as the standard liquid drop model [23], the finite-range droplet model [42], and the Lublin-Strasbourg drop model [25]) with slight differences have been developed for calculating the smoothly varying part. In these LD models, the dominating terms are mainly associated with the volume energy, the surface energy and the Coulomb energy. In the present work, the macroscopic energy is given by the standard LD model with the parameters used by Myers and Swiatecki [23].
The single-particle levels used below are calculated by solving numerically the Schrodinger equation with the Woods-Saxon (WS) Hamiltonian [55]
\[H_{WS} = T+V_{\rm cent}(\vec{r};\hat{\beta})+V_{\rm so}(\vec{r},\vec{p}, \vec{s};\hat{\beta}) \tag{4}\] \[+V_{\rm Coul}(\vec{r},\hat{\beta}),\]
where the Coulomb potential \(V_{\rm Coul}(\vec{r},\hat{\beta})\) defined as a classical electrostatic potential of a uniformly charged drop is added for protons. The central part of the WS potential is calculated as
\[V_{\rm cent}(\vec{r},\hat{\beta})=\frac{V_{0}[1\pm\kappa(N-Z)/(N+Z)]}{1+\exp[{ \rm dist}_{\Sigma}(\vec{r},\hat{\beta})/a]}, \tag{5}\]
where the plus and minus signs hold for protons and neutrons, respectively, and the parameter \(a\) denotes the diffuseness of the nuclear surface. The term \(\mbox{dist}_{\Sigma}(\vec{r},\hat{\beta})\) represents the distance of a point \(\vec{r}\) from the nuclear surface \(\Sigma\), parameterized in terms of the multipole expansion of spherical harmonics \(Y_{\lambda\mu}(\theta,\phi)\) (which are convenient to describe the geometrical properties), that is,
\[\Sigma:R(\theta,\phi)=r_{0}A^{1/3}c(\hat{\beta})\Big{[}1+\sum_{\lambda}\sum_{ \mu=-\lambda}^{+\lambda}\alpha_{\lambda\mu}Y_{\lambda\mu}^{*}(\theta,\phi) \Big{]}, \tag{6}\]
where the function \(c(\hat{\beta})\) ensures the conservation of the nuclear volume with a change in the nuclear shape and \(\hat{\beta}\) denotes the set of all the deformation parameters \(\{\alpha_{\lambda\mu}\}\). For a given nucleus with mass number \(A\), a limiting value of \(\lambda<A^{1/3}\) is often estimated. In the present shape parametrization, we consider quadrupole and hexadecapole degrees of freedom, including non-axial deformations, namely, \(\hat{\beta}\equiv\{\alpha_{20}\), \(\alpha_{2\pm 2}\), \(\alpha_{40}\), \(\alpha_{4\pm 2}\), \(\alpha_{4\pm 4}\}\). The quantity \(R(\theta,\phi)\) denotes the distance of any point on the nuclear surface from the origin of the coordinate system. Because only the even \(\lambda\) and even \(\mu\) components are taken into account, the present parametrization preserves three symmetry planes. By requiring the hexadecapole degrees of freedom to be functions of the scalars in the quadrupole tensor \(\alpha_{2\mu}\), one can reduce the number of independent coefficients to three, namely, \(\beta_{2}\), \(\gamma\) and \(\beta_{4}\), which obey the relationships [56]
\[\left\{\begin{array}{l}\alpha_{20}=\beta_{2}\cos\gamma\\ \alpha_{22}=\alpha_{2-2}=-\frac{1}{\sqrt{2}}\beta_{2}\sin\gamma\\ \alpha_{40}=\frac{1}{6}\beta_{4}(5\cos^{2}\gamma+1)\\ \alpha_{42}=\alpha_{4-2}=-\frac{1}{12}\sqrt{30}\beta_{4}\sin 2\gamma\\ \alpha_{44}=\alpha_{4-4}=\frac{1}{12}\sqrt{70}\beta_{4}\sin^{2}\gamma.\end{array}\right. \tag{7}\]
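For completeness, a small sketch implementing the relations of Eq. (7) is given below; it simply maps \((\beta_{2},\gamma,\beta_{4})\) to the \(\alpha_{\lambda\mu}\) coefficients used in the surface expansion, with \(\gamma\) taken in degrees as an assumed convention.

```python
import numpy as np

def alpha_from_beta(beta2, gamma_deg, beta4):
    """Expansion coefficients alpha_{lambda,mu} from (beta2, gamma, beta4), Eq. (7)."""
    g = np.radians(gamma_deg)
    return {
        "alpha20": beta2 * np.cos(g),
        "alpha22": -beta2 * np.sin(g) / np.sqrt(2.0),                 # = alpha_{2,-2}
        "alpha40": beta4 * (5.0 * np.cos(g) ** 2 + 1.0) / 6.0,
        "alpha42": -np.sqrt(30.0) / 12.0 * beta4 * np.sin(2.0 * g),   # = alpha_{4,-2}
        "alpha44": np.sqrt(70.0) / 12.0 * beta4 * np.sin(g) ** 2,     # = alpha_{4,-4}
    }

# For an axially symmetric shape (gamma = 0) only alpha20 and alpha40 survive:
print(alpha_from_beta(0.24, 0.0, 0.05))
```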
The (\(\beta_{2},\gamma,\beta_{4}\)) parametrization has all the symmetry properties of Bohr's (\(\beta_{2},\gamma\)) parametrization [57]. The spin-orbit potential, which can strongly affect the level order, is defined by
\[V_{\rm so}(\vec{r},\vec{p},\vec{s};\hat{\beta}) = -\lambda\Big{[}\frac{\hbar}{2mc}\Big{]}^{2}\] \[\times \left\{\nabla\frac{V_{0}[1\pm\kappa(N-Z)/(N+Z)]}{1+exp[dist_{ \Sigma_{so}}(\vec{r},\hat{\beta})/a_{so}]}\right\}\times\vec{p}\cdot\vec{s},\]
where \(\lambda\) denotes the strength parameter of the effective spin-orbit force acting on the individual nucleons. The new surface \(\Sigma_{so}\) is different from the one in Eq. (6) due to the different radius parameter. In the present work, the WS parameters are taken from Refs. [56, 58], as listed in Table 1.
In computing the Woods-Saxon Hamiltonian matrix, the eigenfunctions of the axially deformed harmonic oscillator potential in the cylindrical coordinate system are adopted as the basis functions [59],
\[|n_{\rho}n_{z}\Lambda\Sigma\rangle=\psi^{\Lambda}_{n_{\rho}}(\rho)\psi_{n_{z}}(z) \psi_{\Lambda}(\varphi)\chi(\Sigma), \tag{9}\]
where
\[\left\{\begin{array}{ll}\psi^{\Lambda}_{n_{\rho}}(\rho)&=\frac{\sqrt{n_{\rho}!}}{\sqrt{(n_{\rho}+|\Lambda|)!}}(2m\omega_{\rho}/\hbar)^{1/2}\\ &\quad\quad\times e^{-\frac{\eta}{2}}\eta^{|\Lambda|/2}L^{|\Lambda|}_{n_{\rho}}(\eta),\\ \psi_{n_{z}}(z)&=\frac{1}{\sqrt{\sqrt{\pi}\,2^{n_{z}}\,n_{z}!}}(2m\omega_{z}/\hbar)^{1/4}\\ &\quad\quad\times e^{-\frac{\xi^{2}}{2}}H_{n_{z}}(\xi),\\ \psi_{\Lambda}(\varphi)&=\frac{1}{\sqrt{2\pi}}e^{i\Lambda\varphi},\end{array}\right. \tag{10}\]
and \(\chi(\Sigma)\) represents the spin wave functions, cf. e.g., Sec. 3.1 in Ref. [59] for more details. In our calculation, the eigenfunctions with the principal quantum number \(N\leq\) 12 and 14 have been chosen as a basis for protons and neutrons, respectively. It is found that, by such a basis cutoff, the results are sufficiently stable with respect to a possible enlargement of the basis space. In addition, the time reversal (resulting in the Kramers degeneracy) and spatial symmetries (e.g., the existence of three symmetry \(x-y\), \(y-z\) and \(z-x\) planes) are used for simplifying the Hamiltonian matrix calculation.
The shell correction \(\delta E_{shell}(Z,N,\hat{\beta})\), as seen in Eq. (3), is usually the most important correction to the LD energy. Strutinsky first proposed a phenomenological expression,
\[\delta E_{shell}(Z,N,\hat{\beta})=\sum e_{i}-\int e\tilde{g}(e)de, \tag{11}\]
where \(e_{i}\) denotes the calculated single-particle levels and \(\tilde{g}(e)\) is the so-called smooth level density. Obviously, the smooth level distribution function is the most important quantity, which was originally defined as,
\[\tilde{g}(e,\gamma)\equiv\frac{1}{\gamma\sqrt{\pi}}\sum_{i}\exp[-\frac{(e-e_ {i})^{2}}{\gamma^{2}}], \tag{12}\]
where \(\gamma\) indicates the smoothing parameter without much physical significance. To eliminate any possibly strong \(\gamma\)-parameter dependence for the final result, the mathematical form of the smooth level density \(\tilde{g}(e)\) has been optimized by introducing a phenomenological curvature-correction
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline V\({}_{0}\) (MeV) & \(\kappa\) & r\({}_{0}\) (fm) & \(a\)(fm) & \(\lambda\) & (r\({}_{0}\))\({}_{so}\) (fm) & \(a_{\rm so}\) (fm) \\ \hline
53.754 & 0.791 & 1.190 & 0.637 & 29.494 & 1.190 & 0.637 \\ \hline \hline \end{tabular}
\end{table}
Table 1: The adopted WS parameters for both protons and neutrons (for more details, cf e.g, Ref. [56]). Note that nuclear shape does not sensitively depend on the parameter sets in well-deformed nuclei, especially those with large stiffness.
polynomial \(P_{p}(x)\)[49; 60; 61; 62]. Then, the \(\tilde{g}(e)\) expression will take the form
\[\tilde{g}(e,\gamma,p)=\frac{1}{\gamma\sqrt{\pi}}\sum_{i=1}P_{p}(\frac{e-e_{i}}{ \gamma})\times\exp[-\frac{(e-e_{i})^{2}}{\gamma^{2}}], \tag{13}\]
where the corrective polynomial \(P_{p}(x)\) can be expanded in terms of the Hermite or Laguerre polynomials. The corresponding coefficients of the expansion can be obtained by using the orthogonality properties of these polynomials and the Strutinsky condition (see the Appendix in Ref. [63]). In fact, this method can by now be considered standard. For instance, the integration in Eq. (12) can be calculated as follows (see Ref. [64] for more details),
\[\int e\tilde{g}(e,\gamma,p)de = \int\tilde{e}(n)dn \tag{14}\] \[= \sum_{i=1}\{\frac{1}{2}e_{i}[1+{\rm erf}(\frac{\tilde{\lambda}-e _{i}}{\gamma})]\] \[-\frac{1}{2\sqrt{\pi}}\gamma{\rm exp}[-\frac{(\tilde{\lambda}-e _{i})^{2}}{\gamma^{2}}]\] \[-\frac{1}{\sqrt{\pi}}{\rm exp}[-\frac{(\tilde{\lambda}-e_{i})^{2 }}{\gamma^{2}}]\] \[\times\sum_{m=1}^{p}c_{m}[\frac{1}{2}\gamma H_{m}(\frac{\tilde{ \lambda}-e_{i}}{\gamma})\] \[+e_{i}H_{m-1}(\frac{\tilde{\lambda}-e_{i}}{\gamma})\] \[+m\gamma H_{m-2}(\frac{\tilde{\lambda}-e_{i}}{\gamma})]\}.\]
Of course, there are some other methods developed for the shell correction calculations, e.g., the semiclassical Wigner-Kirkwood expansion method [65] and the Green's function method [66]. In this work, the widely used Strutinsky method is adopted despite its known problems, which appear for mean-field potentials of finite depth as well as for nuclei close to the proton or neutron drip lines. The smooth density is calculated with a sixth-order Hermite polynomial and a smoothing range \(\gamma=1.20\hbar\omega_{0}\), where \(\hbar\omega_{0}=41/A^{1/3}\) MeV, which ensures a satisfactory independence of the shell correction from the parameters \(\gamma\) and \(p\) [64].
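A compact numerical sketch of the Strutinsky shell correction of Eqs. (11)-(13) is shown below. It builds the smoothed level density with a sixth-order curvature correction and evaluates the smoothed single-particle sum by direct quadrature rather than the closed-form expression of Eq. (14); the sample level spectrum is purely illustrative and not a realistic Woods-Saxon level scheme.

```python
import numpy as np
from numpy.polynomial.hermite import hermval
from scipy.integrate import quad
from scipy.optimize import brentq

# Sixth-order curvature-correction coefficients c_m of P_6(x) = sum_m c_m H_m(x),
# using the standard values (-1)^k / (k! 2^{2k}) for m = 2k and zero otherwise.
C = np.zeros(7)
C[[0, 2, 4, 6]] = [1.0, -0.25, 1.0 / 32.0, -1.0 / 384.0]

def g_smooth(e, levels, gamma):
    x = (e - levels) / gamma
    return np.sum(np.exp(-x**2) * hermval(x, C)) / (gamma * np.sqrt(np.pi))

def shell_correction(levels, n_particles, gamma):
    """delta E_shell = sum_i e_i - int e g~(e) de; each entry of `levels` is taken
    to hold a single particle (pass doubly degenerate levels twice)."""
    lo = levels.min() - 8.0 * gamma
    particle_number = lambda lam: quad(lambda e: g_smooth(e, levels, gamma), lo, lam, limit=200)[0]
    lam = brentq(lambda l: particle_number(l) - n_particles, levels.min(), levels.max())
    smooth_sum = quad(lambda e: e * g_smooth(e, levels, gamma), lo, lam, limit=200)[0]
    sharp_sum = np.sort(levels)[:n_particles].sum()
    return sharp_sum - smooth_sum

# Purely illustrative spectrum (MeV).
rng = np.random.default_rng(1)
levels = np.sort(rng.uniform(-45.0, 5.0, 200))
print(shell_correction(levels, n_particles=100, gamma=1.2 * 41.0 / 256 ** (1.0 / 3.0)))
```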
Besides the shell correction, the pairing-energy contribution is also one of the important single-particle corrections. Due to the short-range interaction of nucleon pairs in time-reversed orbitals, the total potential energy in nuclei always decreases relative to the energy without pairing. There exist various variants of the pairing-energy contribution in microscopic-energy calculations, as recently pointed out in Ref. [11]. Typically, several kinds of phenomenological pairing energy expressions (namely, pairing correlation and pairing correction energies, employing or not employing the particle number projection technique) are widely adopted in applications of the macroscopic-microscopic approach [11]. To avoid confusion, it is useful
to briefly review the 'standard' definitions of pairing correlation and pairing correction, e.g., cf. Refs. [11; 64]. For instance, the former is given by the difference between, e.g., the BCS energy of the system at pairing \(\Delta\neq 0\) and its partner expression at \(\Delta=0\); similar to the Strutinsky shell correction, the latter represents the difference between the above pairing correlation and its Strutinsky-type smoothed-out partner.
In the present work, the contribution \(\delta E_{pair}(Z,N,\hat{\beta})\) in Eq. (3) is the pairing correlation energy as mentioned above. The pairing is treated by the Lipkin-Nogami (LN) method [67], which helps to avoid not only the spurious pairing phase transition but also the particle number fluctuation encountered in the simpler BCS calculation. The LN technique [53; 67] aims at minimizing the expectation value of the following model Hamiltonian
\[\hat{\mathcal{H}}=\hat{H}_{WS}+\hat{H}_{pair}-\lambda_{1}\hat{N}-\lambda_{2} \hat{N}^{2}. \tag{15}\]
Here, \(\hat{H}_{pair}\) indicates the pairing interaction Hamiltonian including monopole and doubly stretched quadrupole pairing forces [68, 69, 70]:
\[\bar{v}_{\alpha\beta\gamma\delta}^{(\lambda\mu)}=-G_{\lambda\mu}g_{\alpha \bar{\beta}}^{(\lambda\mu)}g_{\gamma\bar{\delta}}^{*(\lambda\mu)}, \tag{16}\]
where
\[g_{\alpha\bar{\beta}}^{(\lambda\mu)}=\left\{\begin{array}{cc}\delta_{\bar{ \alpha}\beta}&\lambda=0,\mu=0,\\ \langle\alpha|\widetilde{Q}_{\mu}|\bar{\beta}\rangle&\lambda=2,\mu=0,1,2.\end{array}\right. \tag{17}\]
The monopole pairing strength \(G_{00}\) is determined by the average gap method [68] and the quadrupole pairing strengths \(G_{2\mu}\) are obtained by restoring the Galilean invariance broken by the seniority pairing force [70]. To some extent, the quadrupole pairing can affect rotational band-head energies, moments of inertia, band-crossing frequencies and signature inversion in odd-odd nuclei [69; 71; 72; 73]. The pairing window, which includes dozens of single-particle levels (the respective states, e.g., half of the particle number \(Z\) or \(N\), just below and above the Fermi energy), is adopted empirically for both protons and neutrons. The pairing gap \(\Delta\), Fermi energy \(\lambda\) (namely, \(\lambda_{1}+2\lambda_{2}(N_{total}+1)\)), particle number fluctuation constant \(\lambda_{2}\), occupation probabilities \(v_{k}^{2}\), and shifted single-particle energies \(\varepsilon_{k}\) can be determined from the following \(2(N_{2}-N_{1})+5\) coupled nonlinear equations [67; 68],
\[\left\{\begin{array}{l}N_{total}=2\sum_{k=N_{1}}^{N_{2}}v_{k}^{2}+2(N_{1}-1),\\ \Delta=G\sum_{k=N_{1}}^{N_{2}}u_{k}v_{k},\\ v_{k}^{2}=\frac{1}{2}\left[1-\frac{\varepsilon_{k}-\lambda}{\sqrt{(\varepsilon_{k}-\lambda)^{2}+\Delta^{2}}}\right],\\ \varepsilon_{k}=e_{k}+(4\lambda_{2}-G)v_{k}^{2},\\ \lambda_{2}=\frac{G}{4}\left[\frac{(\sum_{k=N_{1}}^{N_{2}}u_{k}^{3}v_{k})(\sum_{k=N_{1}}^{N_{2}}u_{k}v_{k}^{3})-\sum_{k=N_{1}}^{N_{2}}u_{k}^{4}v_{k}^{4}}{(\sum_{k=N_{1}}^{N_{2}}u_{k}^{2}v_{k}^{2})^{2}-\sum_{k=N_{1}}^{N_{2}}u_{k}^{4}v_{k}^{4}}\right],\end{array}\right. \tag{18}\]
where \(u_{k}^{2}=1-v_{k}^{2}\) and \(k=N_{1},N_{1}+1,\cdots,N_{2}\). The LN pairing energy for the system of even-even nuclei at "paired solution" (pairing gap \(\Delta\neq 0\)) can be given by [42; 67]
\[E_{LN} = \sum_{k}2{v_{k}}^{2}e_{k}-\frac{\Delta^{2}}{G}-G\sum_{k}{v_{k}}^{4} \tag{19}\] \[-4\lambda_{2}\sum_{k}{u_{k}}^{2}{v_{k}}^{2},\]
where \({v_{k}}^{2}\), \(e_{k}\), \(\Delta\) and \(\lambda_{2}\) represent the occupation probabilities, single-particle energies, pairing gap and number-fluctuation constant, respectively. Correspondingly, the partner expression at "no-pairing solution" (\(\Delta=0\)) reads
\[E_{LN}(\Delta=0)\;=\;\sum_{k}2e_{k}-G\frac{N}{2}. \tag{20}\]
The pairing correlation is defined as the difference between the paired solution \(E_{LN}\) and the no-pairing solution \(E_{LN}(\Delta=0)\).
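To make this prescription concrete, the short sketch below solves the simpler BCS limit of the above equations (i.e., dropping the \(\lambda_{2}\) terms of the LN scheme) for a toy, equidistant single-particle spectrum and then evaluates the pairing correlation as the difference between the paired and no-pairing energies, cf. Eqs. (19)-(20). All numerical values (level spacing, pairing strength \(G\), window size) are arbitrary illustrative choices, not the parameters used in this work.

```python
import numpy as np
from scipy.optimize import fsolve

# Toy, equidistant single-particle spectrum (arbitrary units) inside the pairing window
e = np.linspace(-5.0, 5.0, 20)   # 20 doubly degenerate levels
G = 0.4                          # monopole pairing strength (toy value)
N = 20                           # particle number in the window (half filling)

def bcs_equations(x):
    delta, lam = x
    Ek = np.sqrt((e - lam)**2 + delta**2)
    v2 = 0.5 * (1.0 - (e - lam) / Ek)     # occupation probabilities v_k^2
    uv = delta / (2.0 * Ek)               # u_k v_k
    return [delta - G * np.sum(uv),       # gap equation
            N - 2.0 * np.sum(v2)]         # particle-number condition

delta, lam = fsolve(bcs_equations, x0=[1.0, 0.0])
Ek = np.sqrt((e - lam)**2 + delta**2)
v2 = 0.5 * (1.0 - (e - lam) / Ek)

# "Paired" energy, cf. Eq. (19) in the BCS limit (lambda_2 = 0)
E_paired = np.sum(2.0 * v2 * e) - delta**2 / G - G * np.sum(v2**2)
# "No-pairing" solution, cf. Eq. (20): lowest N/2 levels doubly occupied
E_unpaired = 2.0 * np.sum(np.sort(e)[:N // 2]) - G * N / 2.0

print(f"Delta = {delta:.3f}, lambda = {lam:.3f}")
print(f"pairing correlation = {E_paired - E_unpaired:.3f}")
```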
In the cranking calculation, we only consider the one-dimensional approximation, supposing that the nuclear system is constrained to rotate around a fixed axis (e.g., the \(x\)-axis with the largest moment of inertia) at a given frequency \(\omega\). The cranking Hamiltonian takes the form
\[H^{\omega}=H_{WS}+H_{pair}-\omega j_{x}-\lambda_{1}\hat{N}-\lambda_{2}\hat{N}^ {2}. \tag{21}\]
The resulting cranking LN equation takes the form of the well-known Hartree-Fock-Bogolyubov-like (HFB) equation, which can be solved by using the HFB cranking (HFBC) method [74] (see also, e.g., Ref. [1] for a detailed description). The HFB-like equations have the following form (see, e.g., Ref. [53]):
\[\left\{\begin{array}{l}\sum_{\beta>0}\left\{\left[\left(e_{ \alpha}-\lambda\right)\delta_{\alpha\beta}-\omega(j_{x})_{\alpha\beta}-G\rho_ {\bar{\alpha}\bar{\beta}}^{*}+4\lambda_{2}\rho_{\alpha\beta}\right]\right.\\ \left.\times U_{\beta k}-\Delta\delta_{\alpha\beta}V_{\beta k}\right\}=E_{k} U_{\alpha k},\\ \sum_{\beta>0}\left\{\left[\left(e_{\alpha}-\lambda\right)\delta_{\alpha\beta }-\omega(j_{x})_{\alpha\beta}-G\rho_{\alpha\beta}+4\lambda_{2}\rho_{\bar{ \alpha}\bar{\beta}}^{*}\right]\right.\\ \left.\times V_{\bar{\beta}k}+\Delta^{*}\delta_{\alpha\beta}U_{\beta k} \right\}=E_{k}V_{\bar{\alpha}k},\end{array}\right. \tag{22}\]
where \(\Delta=G\sum_{\alpha>0}\kappa_{\alpha\bar{\alpha}}\), \(\lambda=\lambda_{1}+2\lambda_{2}(N+1)\) and \(E_{k}=\varepsilon_{k}-\lambda_{2}\). Further, \(\varepsilon_{k}\) is the quasi-particle energy and \(\alpha\) (\(\bar{\alpha}\)) denotes the states of signature \(r=-i\) (\(r=+i\)). The quantities \(\rho\) and \(\kappa\) respectively correspond to the density matrix and pairing tensor. While solving the HFBC equations, pairing is treated self-consistently at each frequency \(\omega\) and each grid point in the selected deformation space (namely, pairing self-consistency). Symmetries of the rotating potential are used to simplify the cranking equations. For instance, in the present reflection-symmetric case, both
signature, \(r\), and intrinsic parity, \(\pi\), are good quantum numbers. Finally, the energy in the rotating frame can be given by
\[E^{\omega} = {\rm Tr}(e-\omega j_{x})\rho-\frac{\Delta^{2}}{G}-G\sum_{\alpha, \beta>0}\rho_{\alpha,\beta}\rho_{\tilde{\alpha},\tilde{\beta}} \tag{23}\] \[-2\lambda_{2}{\rm Tr}\rho(1-\rho).\]
Accordingly, one can obtain the energy relative to the non-rotating (\(\omega=0\)) state, as seen in the last term of Eq. (3). It should be mentioned that the above derivations are given for the quasi-particle vacuum configuration of an even-even nuclear system. However, it is convenient to extend the formalism to one or many quasi-particle excited configuration(s) by only modifying the density matrix and pairing tensor while keeping the form of all the equations untouched. After the numerically calculated Routhians at any fixed \(\omega\) are interpolated using, e.g., a cubic spline function between the lattice points, the equilibrium deformation can be determined by minimizing the multi-dimensional potential-energy map.
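The interpolate-and-minimize step described above can be sketched as follows; the "Routhian" values on the lattice are a made-up analytic stand-in (not a real calculation), and only the workflow — cubic-spline interpolation between lattice points followed by minimization — is meant to be illustrative.

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline
from scipy.optimize import minimize

# Lattice of deformation points (a regular grid in beta2 and gamma)
beta2 = np.linspace(0.0, 0.6, 25)
gamma = np.linspace(-60.0, 60.0, 25)          # degrees
B2, GM = np.meshgrid(beta2, gamma, indexing="ij")

# Stand-in "Routhian" values on the lattice (a toy double well, NOT a real calculation)
E = 40.0 * (B2 - 0.25)**2 * (B2 - 0.55)**2 + 0.02 * (GM - 10.0 * B2)**2

# Cubic-spline interpolation between the lattice points
spline = RectBivariateSpline(beta2, gamma, E, kx=3, ky=3)

# Equilibrium deformation: minimize the interpolated surface
res = minimize(lambda x: spline(x[0], x[1])[0, 0],
               x0=[0.2, 0.0],
               bounds=[(beta2[0], beta2[-1]), (gamma[0], gamma[-1])])
print("equilibrium (beta2, gamma) =", res.x, " E_min =", res.fun)
```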
### 3. Results and Discussion
The calculations of nuclear potential-energy and/or Routhian surfaces are very helpful for understanding the structure properties (including the fission path) of nuclei. It is well known that the theoretical description of fission is usually based on the analysis of the topography of the energy maps. The evolution of the potential energy surface as a function of the collective coordinates is therefore of importance. We performed the nuclear potential-energy calculations using the deformed Woods-Saxon mean-field Hamiltonian in the deformation spaces (\(\beta_{2}\), \(\gamma\), \(\alpha_{4\mu=0,2,4}\)) and (\(\beta_{2}\), \(\gamma\), \(\beta_{4}\)). A more elaborate investigation will include the parameters related to reflection-asymmetric shapes because they are required for the description of the asymmetry in the fission-fragment mass distribution [75]. In Fig. 1, the potential energy surfaces projected on the (\(\beta_{2}\), \(\gamma\)) plane and respectively minimized over the hexadecapole deformations \(\alpha_{40}\), \(\alpha_{42}\), \(\alpha_{44}\) and \(\beta_{4}\) are illustrated for \({}^{256}_{106}\)Sg\({}_{150}\). In these maps, the \(\beta_{2}\) and \(\gamma\) deformation variables are directly presented as the horizontal and vertical coordinates in a Cartesian coordinate system, instead of the usual Cartesian quadrupole coordinates [\(X=\beta_{2}\)sin(\(\gamma+30^{\circ}\)), \(Y=\beta_{2}\)cos(\(\gamma+30^{\circ}\))] and the (\(\beta_{2}\), \(\gamma\)) plane in the polar coordinate system. For the static energy surfaces, to guide the eye, the \(\gamma\) domain [\(-60^{\circ}\), \(60^{\circ}\)] is adopted though, in principle, half of it is enough. One can see that two minima (at \(\beta_{2}\approx 0.24\) and 0.7) appear and the double-humped barrier is reproduced, but the second peak is lower than those in the actinide region [76]. The calculated energy map shows that the hexadecapole deformation has no influence on the first minimum but can decrease the second minimum. It is found that the \(\gamma\) deformation strongly changes the fission path, especially between the two minima.
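For reference, the mapping between the (\(\beta_{2}\), \(\gamma\)) polar representation and the Cartesian quadrupole coordinates quoted above is a one-line transformation; the hypothetical helper below (not code from this work) performs the conversion, e.g., for re-plotting such maps in the conventional Cartesian form.

```python
import numpy as np

def polar_to_cartesian_quadrupole(beta2, gamma_deg):
    """Map (beta2, gamma) to the usual Cartesian quadrupole coordinates
    X = beta2*sin(gamma + 30 deg), Y = beta2*cos(gamma + 30 deg)."""
    g = np.deg2rad(gamma_deg + 30.0)
    return beta2 * np.sin(g), beta2 * np.cos(g)

# Example: the first minimum quoted in the text, beta2 ~ 0.24 at gamma = 0
X, Y = polar_to_cartesian_quadrupole(0.24, 0.0)
print(f"X = {X:.3f}, Y = {Y:.3f}")   # X = 0.120, Y = 0.208
```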
In order to understand how the calculated total energies depend on these hexadecapole deformations \(\alpha_{4\mu=0,2,4}\) (we focus here on the even-\(\mu\) components), Figure 2 illustrates the corresponding
2D maps projected on the (\(\beta_{2}\), \(\alpha_{4\mu=0,2,4}\)) and (\(\beta_{2}\), \(\beta_{4}\)) planes for \({}^{256}_{106}\)Sg\({}_{150}\). To separately investigate the effects of the different hexadecapole deformation parameters on the energy surfaces, in the left four subfigures of Fig. 2 we performed the calculations in 2D deformation spaces spanned by the horizontal and vertical coordinates, ignoring other degrees of freedom. It needs to be stressed that the hexadecapole deformation \(\beta_{4}\) involves fixed relationships between \(\{\alpha_{4\mu=0,2,4}\}\) and \(\gamma\), cf. Eq. 7. For instance, the three deformation parameters \(\{\alpha_{4\mu=0,2,4}\}\) can be determined in terms of a pair of given \(\beta_{4}\) and \(\gamma\) values. It can be seen from the left panel of Fig. 2 that only the \(\alpha_{40}\) (equivalently \(\beta_{4}\) at \(\gamma=0^{\circ}\)) deformation changes the fission pathway. It seems that the non-axial deformation parameters \(\alpha_{42}\) and \(\alpha_{44}\) have no influence on the fission trajectory at this moment. In the right four subfigures of Fig. 2, at each deformation point of the corresponding map, the minimization was performed over the triaxial deformation \(\gamma\). Indeed, one can find that non-zero \(\{\alpha_{4\mu=0,2,4}\}\) values appear along the fission pathway, indicating that the three \(\{\alpha_{4\mu=0,2,4}\}\) deformations play a role during the calculations; see, e.g., Fig. 2(e)-(g). To simplify the calculation while simultaneously including the effects of these three hexadecapole deformation parameters, the total-energy projection on the (\(\beta_{2}\), \(\beta_{4}\)) plane is illustrated in Fig. 2(h), minimized over \(\gamma\). It was often suggested that the 3-dimensional space (\(\beta_{2},\gamma,\beta_{4}\)) is the most important one, e.g., cf. Ref. [39]. Similar to the \(\gamma\) deformation, the \(\beta_{4}\) deformation has an obvious influence on the fission pathway after the first minimum for this nucleus. Moreover, the \(\beta_{4}\) deformation always keeps a non-zero value after the first minimum.
From the 2D energy \(\beta_{2}\) vs \(\gamma\) and \(\beta_{2}\) vs \(\beta_{4}\) maps, we can obtain the further energy projection, e.g., on the \(\beta_{2}\) direction. By such an operation, the total energy curve is obtained, which is usually useful for extracting information on the fission barrier. Figure 3 illustrates four types of total energy curves as functions of \(\beta_{2}\) for five selected nuclei \({}^{256,258,260}\)Sg, \({}^{254}\)Rf and \({}^{252}\)No. Note that the blue, grey, red and green lines respectively correspond to those curves whose energies are minimized over \(\gamma\) and \(\beta_{4}\); \(\gamma\); \(\beta_{4}\); and none. From these, one can see the evolution of the energy curves
Figure 2: Similar to Fig. 1 but projections on the (\(\beta_{2}\),\(\alpha_{40}\)), (\(\beta_{2}\),\(\alpha_{42}\)), (\(\beta_{2}\),\(\alpha_{44}\)) and (\(\beta_{2}\),\(\beta_{4}\)) planes for \({}^{256}_{106}\)Sg\({}_{150}\). Note that in the right four subfigures (e), (f), (g) and (h), the minimization was performed over the triaxial deformation \(\gamma\) at each mesh grid. In the (a), (b), (c) and (d) subplots, the triaxial deformation was not considered. See text for more information.
from both the isotopic and isotonic directions. It seems that, from the isotonic direction, \({}^{256}_{106}\)Sg\({}_{150}\) is the critical nucleus in which the hexadecapole deformation \(\beta_{4}\) always plays a role after the first minimum. From this figure, we can obtain the equilibrium deformations of the different minima and maxima and, further, the heights of the fission barriers. The impact of the triaxial and hexadecapole deformations on the energy curves can be clearly evaluated. The inclusion of different deformation parameters can affect not only the height but also the shape of the fission barrier. As noted in Ref. [75], the tunneling probability through the fission barrier will depend exponentially on the square root of its height times its width, when approximated by a square potential barrier. One can find that the triaxial deformation can decrease the barrier height, especially for the inner barrier, e.g. in \({}^{256}\)Sg. Nevertheless, the hexadecapole deformation (responsible for necking [77]) decreases both the height and the width of the fission barrier. Indeed, as seen in \({}^{256,258}\)Sg, the least-energy fission path is strongly modified by the hexadecapole deformation after their first minima. After the second saddles, the effect of the hexadecapole deformation becomes significant in all selected nuclei. However, it was found that the octupole deformation will play an important role at the second saddle and beyond, leading to a change of the obtained mass asymmetry at the scission point [7, 33, 75].
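Extracting barrier heights from such one-dimensional energy curves amounts to locating the alternating minima and maxima of \(E(\beta_{2})\). A minimal sketch is given below for a synthetic double-humped curve; the analytic form and the numbers are illustrative only and do not reproduce the calculated curves of Fig. 3.

```python
import numpy as np
from scipy.signal import argrelextrema

# Synthetic double-humped energy curve (illustrative only)
beta2 = np.linspace(0.0, 1.0, 401)
E = 6.0 * np.sin(2.0 * np.pi * (beta2 - 0.1))**2 * np.exp(-1.5 * beta2) - 4.0 * beta2

i_min = argrelextrema(E, np.less)[0]      # indices of local minima
i_max = argrelextrema(E, np.greater)[0]   # indices of local maxima (saddles in 1D)

# Inner fission barrier: first maximum above the first (ground-state) minimum
Bf_inner = E[i_max[0]] - E[i_min[0]]
print(f"1st minimum at beta2 = {beta2[i_min[0]]:.2f}, "
      f"inner barrier B_f = {Bf_inner:.2f} (same units as E)")
```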
In Table 2, the present results (calculated quadrupole deformation \(\beta_{2}\) and fission barrier \(B_{f}\)) for five selected nuclei are confronted with other accepted theories (the experimental data are scarce so far), including the results of the heavy-nuclei (HN) model [9, 78], the folded-Yukawa (FY) single-particle potential and the finite-range droplet model (FRDM) [79], the Hartree-Fock-BCS (HF
Figure 3: Four types of deformation energy curves as functions of the quadrupole axial deformation \(\beta_{2}\) for \({}^{256}_{106}\)Sg\({}_{150}\) and its two isotopic and isotonic neighbours, namely, \({}^{258}_{106}\)Sg\({}_{152}\), \({}^{260}_{106}\)Sg\({}_{154}\), \({}^{254}_{104}\)Rf\({}_{150}\) and \({}^{252}_{102}\)No\({}_{150}\). At each \(\beta_{2}\) point, the minimization was performed over \(\gamma\) and/or \(\beta_{4}\). The legends denote whether or not the total energy at each \(\beta_{2}\) was minimized and, if so, with respect to which deformation parameter(s). See text for further explanations.
BCS) [80], the folded-Yukawa (FY) single-particle potential and the finite-range liquid-drop model (FRLDM) [8], and the extended Thomas-Fermi plus Strutinsky integral (ETFSI) [81; 20] methods. The comparison shows that these results are somewhat model-dependent but in good agreement with each other to a large extent. It can be found that the HFBCS calculation gives the larger equilibrium deformations and our calculation gives the higher inner fission barriers. Our calculated deformations may be underestimated to some extent, cf. Ref. [82]. As discussed by Dudek et al. [83], the underestimated quadrupole deformation \(\beta_{2}\) should be slightly modified by the empirical relationship \(1.10\beta_{2}\)-\(0.03(\beta_{2})^{3}\). Within the framework of the same model, it can be seen that the selected five nuclei have almost the same \(\beta_{2}\) in the PES, HN and FF (FY+FRDM) [79] calculations. In the HFBCS and ETFSI calculations, the nucleus \({}^{256}\)Sg has the largest and the smallest \(\beta_{2}\) values of the five nuclei, respectively, but the differences are still rather small. Concerning the inner fission barriers, it seems that the present calculation may relatively overestimate the barriers. However, the present calculation shows the same trends as the results given by the HN and FFL (FY+FRLDM) [8] calculations. For instance, the nucleus \({}^{256}\)Sg has the smallest inner barrier of these five nuclei, in good agreement with the HN and FFL calculations. In our previous publication [34], a lower \(B_{f}\) of about 4.8 MeV was obtained by using the universal parameter set. This value is about 1 MeV lower than the present calculation (5.88 MeV, as seen in the table) and lower than the values from the HN and FFL calculations. Further experimental information is desirable. Interestingly, though the inner barrier of \({}^{256}\)Sg is the lowest, its outer barrier (\(\sim 2.72\) MeV) is higher than those in its isotopic neighbors \({}^{258,260}\)Sg (\(\sim 2.52\) and 2.29 MeV). It is certainly expected that the outer barrier of \({}^{256}\)Sg can relatively increase the survival probability of this superheavy nucleus, benefiting its experimental observation to some extent.
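As a worked example of the empirical correction quoted above, applying it to the calculated ground-state deformation of \({}^{256}\)Sg from Table 2 gives

\[1.10\,\beta_{2}-0.03\,\beta_{2}^{3}=1.10\times 0.243-0.03\times(0.243)^{3}\approx 0.267-0.0004\approx 0.267,\]

i.e., a shift of roughly 10% toward larger deformation, bringing the value closer to the ETFSI result of 0.27 listed in Table 2.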
In the macroscopic-microscopic model, as is well known, the total energy is mainly determined by the liquid-drop energy and the shell correction. In Fig. 4, to understand their evolution from light to heavy nuclei, we show the macroscopic energy and the microscopic shell correction for nine arbitrarily selected nuclei along the \(\beta\)-stability line (cf. Ref. [29]). As expected, one can see
\begin{table}
\begin{tabular}{c c c c c c c c c c} \hline \hline & \multicolumn{5}{c}{\(\beta_{2}\)} & \multicolumn{4}{c}{\(B_{f}\) (MeV)} \\ \cline{2-10} Nuclei & PES & HN [78] & FF [79] & HFBCS [80] & ETFSI [81] & PES & HN [9] & FFL [8] & ETFSI [20] \\ \({}^{260}_{106}\)Sg\({}_{154}\) & 0.243 & 0.247 & 0.242 & 0.31 & 0.25 & 6.49 & 6.28 & 5.84 & 4.6 \\ \({}^{258}_{106}\)Sg\({}_{152}\) & 0.242 & 0.247 & 0.252 & 0.27 & 0.25 & 6.16 & 6.22 & 5.93 & 4.7 \\ \({}^{256}_{106}\)Sg\({}_{150}\) & 0.243 & 0.246 & 0.252 & 0.25 & 0.27 & 5.88 & 5.46 & 5.30 & — \\ \({}^{254}_{104}\)Rf\({}_{150}\) & 0.243 & 0.247 & 0.252 & 0.27 & 0.27 & 6.44 & 5.74 & 5.87 & 5.3 \\ \({}^{252}_{102}\)No\({}_{150}\) & 0.243 & 0.249 & 0.250 & 0.30 & 0.26 & 7.01 & 6.52 & 6.50 & 5.8 \\ \hline \hline \end{tabular}
\end{table}
Table 2: The results of potential-energy-surface (PES) calculations for ground-state equilibrium deformation parameter \(\beta_{2}\) and inner fission barriers \(B_{f}\) for the 5 selected even-even nuclei, together with some other theoretical calculations for comparison; see the text for more descriptions.
that with increasing mass number \(A\) the macroscopic energy (an important contribution to the fission barrier) decreases at a given \(\beta_{2}\) deformation (e.g., \(\sim 0.4\), about the position of the first barrier; cf. Fig. 3), indeed almost approaching zero in the superheavy region [e.g., with \(Z\gtrsim 104\); see \({}^{276}_{106}\)Sg\({}_{170}\) in Fig. 4(a), indicating the disappearance of the macroscopic fission barrier]. In particular, the calculated liquid-drop energy rapidly descends with increasing \(\beta_{2}\) in the "heavier" superheavy nucleus \({}^{312}_{118}\)Og\({}_{194}\), which indicates that it is more difficult to bind such a heavy nucleon system. Figure 4(b) illustrates the corresponding shell corrections for the selected nuclei mentioned above. Indeed, the energy staggering is rather large and, combined with the smooth macroscopic energy, potential pocket(s) can appear, which is the formation mechanism of superheavy nuclei.
In Fig. 5, we provide further information on the evolution of the total energy and its different components as functions of the quadrupole deformation \(\beta_{2}\) for \({}^{256}_{106}\)Sg\({}_{150}\). Figure 5(a) illustrates the total energy, together with the macroscopic liquid-drop energy \(E_{ld}\), shell correction \(\delta E_{shell}\) and pairing correlation \(\delta E_{pair}\). For simplicity, other deformation degrees of freedom are ignored. In this nucleus, as seen, the macroscopic energy makes essentially no contribution to the fission barrier. The barrier is mainly formed by the quantum shell effect. The inclusion of the short-range pairing interaction always decreases the total energy, showing an irregular but relatively smooth change (decreasing the barrier here). With increasing \(\beta_{2}\), the shell effect tends to disappear. In Fig. 5(b), we show the total Routhian and the rotational contribution at the ground state and two
Figure 4: Macroscopic energies (a) and shell-correction energies (b) as functions of the quadrupole axial deformation \(\beta_{2}\) for several selected nuclei (see the legends, or cf. Ref. [29]) along the \(\beta\)-stability line. Note that during the calculation other deformation parameters are set to zero.
selected frequencies \(\hbar\omega=\) 0.15 and 0.30 MeV, aiming to see the effect of the Coriolis force. One can see that, similar to the trend of the pairing correlation, the energy due to rotation will decrease the barrier because the energy difference, e.g., between the positions of the first barrier and the first minimum, is a negative value. It should be noted that the selected rotational frequencies respectively correspond to values before and after the first band-crossing frequency in such a normal-deformed superheavy nucleus, e.g., cf. Ref. [38]. Along the curve, the ground-state or yrast configuration of the nucleus may be rather different (see, e.g., Fig. 6; the occupied single-particle levels below the Fermi surface will generally be rather different). In Fig. 5(c), the total energy and its pairing correlation energy are illustrated for different pairing strengths by adjusting the factor \(F\) (e.g., in \(G=FG_{0}\), where \(G_{0}\) is the original pairing strength). It can be noticed that the pairing correlation energy will decrease with increasing pairing strength \(G\). Both at the barrier and at the minimum, the effects seem to be very similar. In the large-deformation region, the pairing correlation tends to a constant.
Figure 5: (a) Total energy \(E_{tot}\) curve (together with its macroscopic liquid-drop energy \(E_{ld}\) and microscopic shell correction and pairing correlation energies, namely, \(\delta E_{shell}\) and \(\delta E_{pair}\)) vs \(\beta_{2}\) deformation for the nucleus \({}^{256}_{106}\)Sg\({}_{150}\). For simplicity, other deformation degrees of freedom were not considered in the calculation. (b) Similar to (a) but for the total Routhian (\(E_{rou}\)) curves and the corresponding rotational contribution \(\delta\)H at three selected frequencies \(\hbar\omega=0.00,0.15\) and 0.30 MeV. (c) Similar to (a) but for the total energy and the corresponding pairing correlation \(\delta E\) at three selected pairing-strength factors \(F=0.95,1.00\) and 1.05 (the adjusted pairing strength \(G=FG_{0}\)).
The microscopic structure of nuclei is primarily determined by the single-particle levels, especially near the Fermi level [84]. Experimentally, one can detect and investigate single-particle states by, e.g., electron-induced knockout reactions [like \((e,e^{\prime}p)\)], direct stripping and pick-up reactions [typically \((p,d)\) and \((d,p)\) reactions], \(\beta\)-decay rates, and so on [85; 86]. Because the measured single-particle states may not be pure, a rigorous definition of these states is given by the Green's function formalism (cf. Ref. [84]), showing that it is necessary to extract the spectroscopic factor. Such a quantity provides an illustration of how much a single-particle level can be considered as a pure state and whether or not correlations (e.g., short- and long-range ones) beyond the mean field appear. Theoretically, the single-particle levels correspond to the eigenstates of the mean-field Hamiltonian (e.g., the Woods-Saxon-type one in this work). They are also the building blocks of the many-body wave functions, e.g., in a self-consistent Hartree-Fock calculation. In Fig. 6, the single-particle levels near the proton and neutron Fermi surfaces are respectively illustrated in parts (a) and (b). A set of conserved quantum numbers (associated with a complete set of commuting observables) is usually used for labeling the corresponding single-particle levels and wave functions. For instance, the spherical single-particle levels are denoted by the spherical quantum numbers \(n,l\) and \(j\) (corresponding to the principal quantum number, the orbital angular momentum, and the total angular momentum, respectively). Similar to atomic spectroscopy, the notations \(s\), \(p\), \(d\), \(f\), \(g\), \(h\)\(\cdots\) (corresponding to \(l=0\), \(1\), \(2\), \(3\), \(4\), \(5\)\(\cdots\), respectively) are used. Due to the strong spin-orbit coupling, the single-particle state with orbital angular momentum \(l\) splits into two states with \(j\) = \(l\)\(\pm\) 1/2 (the degeneracy of each spherical single-particle level is \(2j+1\)). In the present work, one can see that the expected shell structure and shell closures are well reproduced. When a deformed shape occurs, the \(2j\)+1 degeneracy is broken and the spherical single-particle level splits into \(j+1/2\) components (each one is
Figure 6: Calculated proton (a) and neutron (b) single-particle energies as functions of the quadrupole deformation \(\beta_{2}\) for \({}^{256}_{106}\)Sg\({}_{150}\), focusing on the domain near the Fermi surface. The levels with positive and negative parities are respectively denoted by red solid and blue dotted lines. Spherical single-particle orbitals (i.e., at \(\beta_{2}\) = 0.0) in the window of interest are labeled by the quantum numbers \(nlj\).
typically doubly degenerate due to Kramers degeneracy). These deformed single-particle levels are generally described by the asymptotic Nilsson quantum numbers \(\Omega^{\pi}[Nn_{z}\Lambda]\), where \(N\) is the total oscillator shell quantum number; \(n_{z}\) stands for the number of oscillator quanta in the \(z\) direction (the direction of the symmetry axis); \(\Lambda\) is the projection of the orbital angular momentum along the symmetry axis; \(\Sigma\) is the projection of the intrinsic spin along the symmetry axis; and \(\Omega\) is the projection of the total angular momentum \(j\) (including orbital \(l\) and spin \(s\)) on the symmetry axis, with \(\Omega=\Lambda+\Sigma\). Note that the Nilsson labels are not given owing to space limitations. Similar to a magnetic field, in the rotating coordinate system the Coriolis force resulting from the non-inertial reference frame can also break the time-reversal symmetry and mix the Nilsson states. Then, the single-particle Routhians can only be labeled by the conserved parity and signature, \((\pi,\alpha)\) or \((\pi,r)\) (cf. Ref. [1] for the rigorous definition). It should be pointed out here that we did not perform the virtual crossing removal [87] of single-particle levels with the same symmetries in these plots, but this will not affect the identification of the single-particle levels. From Fig. 6, one can see that the shell gaps appear at the energy-minimum positions with lower level densities and the higher level densities occur at the saddle positions (cf. e.g., Fig. 5). The deformed neutron shells at \(N=152\) and 162 are reproduced [4].
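The counting rules used above — spin-orbit splitting into \(j=l\pm 1/2\), spherical degeneracy \(2j+1\), and splitting into \(j+1/2\) doubly degenerate levels at deformation — can be made concrete with a few lines of bookkeeping. The snippet below simply enumerates them for two example shells and is purely illustrative (the shells chosen are not necessarily those labeled in Fig. 6).

```python
# Spectroscopic letters for orbital angular momentum l = 0 ... 6
LETTERS = "spdfghi"

def split_orbital(n, l):
    """List the spin-orbit partners of an (n, l) orbital with their degeneracies."""
    partners = []
    for twoj in ([2 * l + 1] if l == 0 else [2 * l - 1, 2 * l + 1]):
        partners.append({
            "label": f"{n}{LETTERS[l]}{twoj}/2",
            "j": twoj / 2,
            "degeneracy": twoj + 1,             # 2j + 1 magnetic substates
            "deformed_levels": (twoj + 1) // 2  # j + 1/2 doubly degenerate levels
        })
    return partners

for n, l in [(1, 6), (2, 3)]:   # two example shells: 1i and 2f
    for p in split_orbital(n, l):
        print(p)
```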
For a clearer display of the level density near the minimum and saddle points, Figure 7 presents the proton and neutron single-particle levels at the corresponding deformation points. Note that the Fermi levels (the green levels) at the four typical points \(A,B,C\) and \(D\) are shifted to zero for comparison. The levels in Fig. 7(a) and (c) correspond to the same deformation conditions as those in Figs. 5 and 6, where only the \(\beta_{2}\) deformation is considered. In the right two subfigures of Fig. 7, at each \(\beta_{2}\) point the "realistic" \(\beta_{4}\) value is taken into account (the equilibrium deformation is adopted after potential-energy minimization over \(\beta_{4}\)). Relative to the left two subfigures, the levels are rearranged to some extent by the hexadecapole deformation degree of freedom. As expected, the level density is lower (higher) near the minimum (saddle) point, indicating the occurrence of a large negative (positive) shell correction.
Figure 8 illustrates the total Routhian surfaces projected on the \(\beta\) vs \(\gamma\) plane for \({}^{256}_{106}\)Sg\({}_{150}\) at several typical rotational frequencies. At each grid point in the maps, the minimization of the total Routhian was performed over \(\beta_{4}\). It needs to be stressed that the energy domains denoted by the color palettes are different in Figs. 8(c) and (d) for a better display. Under rotation, the triaxial deformation parameter \(\gamma\) covers the range from \(-120^{\circ}\) to \(60^{\circ}\) because the three sectors (\(-120^{\circ}\), \(-60^{\circ}\)), (\(-60^{\circ}\), \(0^{\circ}\)) and (\(0^{\circ}\), \(60^{\circ}\)) represent rotation about the long, medium and short axes, respectively, for a nucleus with triaxial shape. A nucleus with one of the four \(\gamma\) values \(-120^{\circ}\), \(-60^{\circ}\), \(0^{\circ}\) and \(60^{\circ}\) has an axially symmetric shape but a different rotational orientation (cf. e.g., Ref. [88]). For instance, the triaxial deformation parameter \(\gamma=-120^{\circ}\) during rotation denotes a prolate nucleus with non-collective rotation (namely, rotating around its symmetry axis; see, e.g., the low-frequency part of the fission path in Fig. 8(d)). The cranking is restricted to the 1D approximation in the present study. From this figure, one can see the evolution properties of the triaxiality and rotation axis for
both the equilibrium shape and states along the fission path.
To investigate the hexadecapole-deformation effect under rotation, the total Routhian surfaces projected on the (\(\beta_{2}\), \(\beta_{4}\)) plane are shown in Fig. 9 for \({}^{256}_{106}\)Sg\({}_{150}\) at four selected rotational frequencies \(\hbar\omega=0.0\), 0.1, 0.2 and 0.3 MeV, respectively. Note that the color palettes are slightly adjusted, similar to those in Fig. 8. It can be seen that the hexadecapole deformation \(\beta_{4}\) can strongly decrease the total Routhian along the fission path, especially at high rotational frequency and large quadrupole deformation. In other words, the fission pathway is modified by the hexadecapole deformation \(\beta_{4}\). It should be pointed out that, from this figure, part of the fission pathway evolves along the border (with \(\beta_{4}\)=0.30) of the calculation domain, indicating that the nucleus may possess an even larger \(\beta_{4}\) at this point. Figure 10 illustrates the total Routhian curves as functions of \(\beta_{2}\) for \({}^{256}_{106}\)Sg\({}_{150}\) at the selected rotational frequencies mentioned above. The size and shape of the inner and outer barriers and their evolution with rotation can be evaluated con
Figure 7: (a) Calculated proton single-particle levels for \({}^{256}_{106}\)Sg\({}_{150}\) at the four typical \(\beta_{2}\) deformation points (\(A\), the 1st minimum; \(B\), the 1st maximum; \(C\), the 2nd minimum; and \(D\), the 2nd maximum) along the energy curve, see e.g., Fig. 3. In this plot, only the \(\beta_{2}\) deformation is considered for simplicity, corresponding to the blue energy curve in Fig. 3. (b) Similar to (a) but, in this plot, the energy is minimized over \(\beta_{4}\) for each \(\beta_{2}\) point, corresponding to the red energy curve in Fig. 3. (c) Similar to (a) but for neutron single-particle levels. (d) Similar to (b) but for neutron single-particle levels.
Figure 8: Similar to Fig. 1(d) but for total Routhian projections of \({}^{256}_{106}\)Sg\({}_{150}\) at rotational frequencies \(\hbar\omega=0.0\) (a), 0.1 (b), 0.2 (c) and 0.3 (d) MeV, respectively.
Figure 9: Similar to Fig. 2(h) but for total Routhian projections of \({}^{256}_{106}\)Sg\({}_{150}\) at rotational frequencies \(\hbar\omega=0.0\) (a), 0.1 (b), 0.2 (c) and 0.3 (d) MeV, respectively.
veniently. In previous studies, e.g., Refs. [6; 7; 33], it was pointed out, based on PES calculations and fission-fragment analysis, that the octupole correlation may further decrease the outer barrier in this mass region. The outer barrier for this nucleus may therefore end up being very low. It remains an open problem whether it can still play a role in blocking the fission process.
## 4. Conclusions
We evaluate the structure evolution along the fission pathway of \({}^{256}\)Sg by using multi-dimensional potential-energy (or Routhian) surface calculations, focusing on the effects of the triaxial and hexadecapole deformations and the Coriolis force. The nuclear shape and the microscopic single-particle structure are investigated and analyzed. The present results are compared with other theories. The properties of the nuclear shape and fission barrier are analyzed by comparing with the neighboring even-even nuclei, showing a reasonable agreement. Based on the deformation energy or Routhian curves, the fission barriers are analyzed, focusing on their shapes, heights, and evolution with rotation. It is found that the triaxial deformation \(\gamma\) decreases the potential energy on the landscape near the saddles, but the hexadecapole deformation \(\beta_{4}\) (especially the axial \(\alpha_{40}\) component) modifies the least-energy fission path after the first minimum, especially in \({}^{256}\)Sg. In addition, in contrast to the inner barrier, the outer barriers seem to show an increasing trend from \({}^{260}\)Sg to \({}^{256}\)Sg, which may help to block the fission of \({}^{256}\)Sg to some extent. Next, it will be necessary to simultaneously consider the reflection asymmetry in a more suitable deformation subspace.
## Acknowledgement
This work was supported by the National Natural Science Foundation of China (Nos. 11975209, U2032211 and 12075287), the Physics Research and Development Program of
Figure 10: The calculated total Routhian curves against \(\beta_{2}\) for \({}^{256}_{106}\)Sg\({}_{150}\) at four selected frequencies \(\hbar\omega=0.0\), 0.1, 0.2 and 0.3 MeV. At each \(\beta_{2}\) point, the minimization was performed over \(\gamma\) and \(\beta_{4}\).
Zhengzhou University (No. 32410017), and the Project of Youth Backbone Teachers of Colleges and Universities of Henan Province (No. 2017GGJS008). Some of the calculations were conducted at the National Supercomputing Center in Zhengzhou.
### Conflict of Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. |
2303.03469 | On the Origin of Dust Structures in Protoplanetary Disks: Constraints
from the Rossby Wave Instability | High resolution sub-mm observations of protoplanetary disks with ALMA have
revealed that dust rings are common in large, bright disks. The leading
explanation for these structures is dust-trapping in a local gas pressure
maximum, caused by an embedded planet or other dynamical process. Independent
of origin, such dust traps should be stable for many orbits to collect
significant dust. However, ring-like perturbations in gas disks are also known
to trigger the Rossby Wave Instability (RWI). We investigate whether
axisymmetric pressure bumps can simultaneously trap dust and remain stable to
the RWI. The answer depends on the thermodynamic properties of pressure bumps.
For isothermal bumps, dust traps are RWI-stable for widths from ${\sim}1$ to
several gas scale-heights. Adiabatic dust traps are stable over a smaller range
of widths. For temperature bumps with no surface density component, however,
all dust traps tend to be unstable. Smaller values of disk aspect ratio allow
stable dust trapping at lower bump amplitudes and over a larger range of
widths. We also report a new approximate criterion for RWI. Instability occurs
when the radial oscillation frequency is $\lesssim75$\% of the Keplerian
frequency, which differs from the well-known Lovelace necessary (but not
sufficient) criterion for instability. Our results can guide ALMA observations
of molecular gas by constraining the resolution and sensitivity needed to
identify the pressure bumps thought to be responsible for dust rings. | Eonho Chang, Andrew N. Youdin, Leonardo Krapp | 2023-03-06T19:52:22Z | http://arxiv.org/abs/2303.03469v1 | # On the Origin of Dust Structures in Protoplanetary Disks: Constraints from the Rossby Wave Instability
###### Abstract
High resolution sub-mm observations of protoplanetary disks with ALMA have revealed that dust rings are common in large, bright disks. The leading explanation for these structures is dust-trapping in a local gas pressure maximum, caused by an embedded planet or other dynamical process. Independent of origin, such dust traps should be stable for many orbits to collect significant dust. However, ring-like perturbations in gas disks are also known to trigger the Rossby Wave Instability (RWI). We investigate whether axisymmetric pressure bumps can simultaneously trap dust and remain stable to the RWI. The answer depends on the thermodynamic properties of pressure bumps. For isothermal bumps, dust traps are RWI-stable for widths from \(\sim\)1 to several gas scale-heights. Adiabatic dust traps are stable over a smaller range of widths. For temperature bumps with no surface density component, however, all dust traps tend to be unstable. Smaller values of disk aspect ratio allow stable dust trapping at lower bump amplitudes and over a larger range of widths. We also report a new approximate criterion for RWI. Instability occurs when the radial oscillation frequency is \(\lesssim 75\%\) of the Keplerian frequency, which differs from the well-known Lovelace necessary (but not sufficient) criterion for instability. Our results can guide ALMA observations of molecular gas by constraining the resolution and sensitivity needed to identify the pressure bumps thought to be responsible for dust rings.
Astrophysical fluid dynamics(101) -- Circumstellar dust(236) -- Planet formation(1241) -- Protoplanetary disks(1300) -- Submillimeter astronomy(1647) -- Hydrodynamics(1963) +
Footnote †: journal: ApJL: 02/01/23 (Accepted: 03/03/23)
## 1 Introduction
High resolution observations of protoplanetary disks by the Atacama Large Millimeter/submillimeter Array (ALMA) have revealed a variety of substructures, including axisymmetric features such as rings and gaps, as well as non-axisymmetric vortex-shaped or crescent-shaped traps (van der Marel et al., 2013; ALMA Partnership et al., 2015; Andrews et al., 2018).
These regions of enhanced continuum emission correspond to locations where dust has concentrated and/or become heated. The leading hypothesis is that these structures form when dust drifts into local maxima in gas pressure (Whipple, 1972; Pinilla and Youdin, 2017). However, alternate explanations have been proposed, including: the concentration of dust by a "secular" gravitational instability of the dust layer (Youdin, 2011; Takahashi and Inutsuka, 2016); a thermal shadowing instability of the disk (Ueda et al., 2021); and changes in dust properties near condensation fronts, i.e. "snow lines" (Zhang et al., 2015; Okuzumi et al., 2016). These mechanisms, and related ones, are reviewed in Bae et al. (2022).
For the leading hypothesis of dust concentration in pressure bumps, the pressure bumps could have a planetary or non-planetary origin. The outer edge of planet-carved gaps can trap dust in a pressure maximum (Paardekooper and Mellema, 2004; Lyra et al., 2009; Pinilla et al., 2012). Without planets, a variety of dynamical mechanisms could also create a pressure bump. These include: zonal flows arising in magneto-rotational turbulence (Johansen et al., 2009; Krapp et al., 2018); dead zone boundaries (Lyra et al., 2008; Ruge et al., 2016); magnetized disk winds (Suriano et al., 2017; Riols and
Lesur, 2019); and the vertical shear instability (Nelson et al., 2013; Lin & Youdin, 2015; Flock et al., 2017).
Therefore, identifying whether or not dust is concentrated in gas pressure maxima will aid our understanding of the nature of dust substructures and their role in planet formation. ALMA observations of molecular lines, especially of CO, combined with chemical models, constrain the mass and temperature distribution of disk gas (Oberg et al., 2021). However the spatial and velocity resolution is not sufficiently high in current observations to clearly confirm or rule out gas pressure maxima as the source of dust structures. The goal of this letter is to theoretically constrain the properties of gas structures that can trap dust, to aid the planning and interpretation of ALMA observations. Specifically, we require that dust-trapping pressure bumps be dynamically stable.
Specifically, the Rossby Wave Instability (RWI, Lovelace et al., 1999, hereafter L99) is triggered by narrow, ring-like gas structures. There are two main nonlinear outcomes of the RWI. First, the RWI can trigger the formation of vortices (Li et al., 2001), whether the initial ring-like perturbation was formed by a planet (Koller et al., 2003) or by another source such as a dead zone boundary (Varniere & Tagger, 2006). Second, after vortices decay, ring-like structures spread out to a RWI-stable state (Hammer et al., 2017). In either case, an axisymmetric pressure bump should not persist in a RWI-unstable state.
Thus if the dust rings observed by ALMA are caused by pressure trapping, the pressure bump should be RWI-stable, or at most marginally unstable. In this letter, we use this constraint to place limits on the amplitudes and widths of gas bumps that could produce observed dust rings. We believe that this work provides the first systematic comparison of the conditions for pressure trapping and hydrodynamic stability. Most similarly, Yang & Menou (2010) considered the effect of the axisymmetric Rayleigh instability on gas bumps and steps. However the non-axisymmetric RWI is more readily triggered (see Section 3.4) and thus places more stringent constraints on gas rings. Moreover, neither that work, nor other previous works (to our knowledge), have addressed the main question we are asking: _Which stable gas rings can also trap dust?_
In Section 2, we describe our model of a disk with a bump and the methods of our stability analysis. Section 3 presents our results for the properties of stable, dust-trapping rings. In Section 4, we discuss the implications and possible extensions of our work.
## 2 Methods
### Disk-bump model
We consider a set of simple, but flexible models of a protoplanetary disk with a bump. These axisymmetric models have surface mass density \(\Sigma\) and (vertically isothermal) temperature \(T\) that vary with radius \(R\) as
\[\begin{split}\Sigma(R)&\equiv\Sigma_{0}\left[\left( \frac{R}{R_{0}}\right)^{n}+A_{\Sigma}g(R-R_{0},W)\right],\\ T(R)&\equiv T_{0}\left[\left(\frac{R}{R_{0}} \right)^{q}+A_{T}g(R-R_{0},W)\right],\end{split} \tag{1}\]
where the background disk slope is given by the exponents \(n\) and \(q\). The bumps are Gaussian-shaped, with \(g(R-R_{0},W)\equiv\exp[-(R-R_{0})^{2}/(2W^{2})]\), centered on \(R_{0}\) with width \(W\). The bump amplitudes are \(A_{\Sigma}\) and \(A_{T}\).1 In the absence of a bump, the disk would have the background values, \(\Sigma_{0}\) and \(T_{0}\), at \(R_{0}\).
Footnote 1: Unlike some previous works (e.g. Lovelace et al., 1999; Li et al., 2000), our bump is not multiplied by the background power-law. This choice allows our large amplitude bumps to be independent of the background slope.
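A minimal numerical implementation of the bump model of Equation 1 is sketched below in scaled units with \(R_{0}=\Sigma_{0}=T_{0}=1\); the default parameter values correspond to the fiducial case introduced later in this section, and the function names are illustrative, not taken from any published code.

```python
import numpy as np

def bump(x, W):
    """Gaussian bump g(R - R0, W) from Equation 1."""
    return np.exp(-x**2 / (2.0 * W**2))

def sigma_T_profiles(R, R0=1.0, Sigma0=1.0, T0=1.0, n=-1.0, q=-0.5,
                     A_sigma=1.0, A_T=0.0, W=0.075):
    """Surface density and temperature of the disk-bump model (Equation 1)."""
    g = bump(R - R0, W)
    Sigma = Sigma0 * ((R / R0)**n + A_sigma * g)
    T = T0 * ((R / R0)**q + A_T * g)
    return Sigma, T

R = np.linspace(0.3, 3.0, 3000)      # radial grid spanning 0.3-3 R0
Sigma, T = sigma_T_profiles(R)       # "Strong" bump: A_sigma = 1, W = 1.5 H0 = 0.075 R0
```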
We assume an ideal gas, with pressure \(P\propto\rho T\) for mass density \(\rho\). The structure of \(P\) and \(\rho\) with vertical distance \(z\) from the disk midplane follows from hydrostatic balance. We neglect disk self-gravity and use the vertical gravitational acceleration of a thin disk, \(g_{z}=-\Omega_{\rm K}^{2}z\) with the Keplerian frequency \(\Omega_{\rm K}\propto R^{-3/2}\). The midplane density and pressure can then be written
\[\begin{split}\rho_{\rm m}&=\frac{\Sigma}{\sqrt{2 \pi}H}\,,\\ P_{\rm m}&=\frac{\Sigma}{\sqrt{2\pi}}H\Omega_{\rm K }^{2}\,.\end{split} \tag{2}\]
For the gas scale-height \(H\), we specify the aspect ratio
\[h\equiv\frac{H}{R}=h_{0}\sqrt{\frac{T}{T_{0}}\frac{R}{R_{0}}}\,. \tag{3}\]
The midplane pressure \(P_{\rm m}\) is used to determine the location of dust traps, because dust will accumulate at a pressure maximum with \(dP_{\rm m}/dR=0\)(Whipple, 1972; Youdin, 2010). For the RWI analysis, we use a height integrated disk model, where the relevant pressure is \(P_{\rm H}=\int_{-\infty}^{\infty}Pdz=P_{\rm m}\sqrt{2\pi}H\). We henceforth drop the subscript "m" from midplane values for convenience.
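Continuing the sketch above, the snippet below evaluates the midplane pressure of Equation 2 and tests for a dust trap, i.e., a reversal in the sign of \(dP_{\rm m}/dR\); it assumes the fiducial \(h_{0}=0.05\) and reuses the illustrative arrays `R`, `Sigma` and `T` defined in the previous snippet.

```python
import numpy as np

h0 = 0.05
Omega_K = R**(-1.5)                        # Keplerian frequency in units of Omega_0
H = h0 * np.sqrt(T) * R**(1.5)             # H = h*R with h = h0*sqrt(T/T0)*sqrt(R/R0)
P_mid = Sigma / np.sqrt(2.0 * np.pi) * H * Omega_K**2   # Equation 2

dlnP_dlnR = np.gradient(np.log(P_mid), np.log(R))
traps = dlnP_dlnR > 0.0                    # reversed pressure gradient -> dust trap
if traps.any():
    print("dust trap: pressure maximum near R =", R[traps][-1])
else:
    print("no pressure maximum; dust drifts inward everywhere")
```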
Choosing scaled units of \(R_{0},\Sigma_{0}\) and \(\Omega_{0}\equiv\Omega_{\rm K}(R_{0})\), we specify our model by six dimensionless parameters \(n,q,h_{0},A_{\Sigma},A_{T}\) and \(W/H_{0}=W/(R_{0}h_{0})\). We further fix an effective, height-integrated adiabatic index, \(\Gamma=4/3\), which is approximately equivalent to a standard (3D) adiabatic index of 7/5, appropriate for diatomic molecules (Ostriker et al., 1992).
Our fiducial model includes a bump in surface density but not in temperature (i.e. \(A_{\Sigma}>0,\,A_{T}=0\)). The
parameters of the fiducial case are \(n=-1\), \(q=-0.5\), \(h_{0}=0.05\) and \(A_{T}=0\), with different choices of \(A_{\Sigma}\) and \(W/H_{0}\). In Section 3.2 we explore deviations from these background disk parameters, and in Section 3.3 we consider heated bumps with \(A_{T}>0\).
Our choices of background disk parameters span most theoretical expectations, as well as observational constraints, especially in the \(R_{0}\simeq 50-100\) AU region where ALMA has observed prominent dust rings, e.g. with the DSHARP survey (Dullemond et al., 2018). Gas parameters are constrained by ALMA observations of molecular lines plus thermo-chemical modelling. Zhang et al. (2021) fit ALMA MAPS survey data to a disk model with an exponentially truncated power-law disk in \(\Sigma\). The local slope \(n=d\ln\Sigma/d\ln R\) of those models vary from \(-1\) to \(-2\) at \(R_{0}\simeq 50-100\) AU. Their fits also give values from \(q\simeq-0.3\) to \(-0.8\) throughout the disk, and values \(h_{0}\simeq 0.03\) in the inner disk (\(R_{0}\simeq 10\) AU) and \(h_{0}\simeq 0.1\) in the outer disk (\(R_{0}\simeq 150\) AU), consistent with our choices.
Figure 1 illustrates examples of ring-like bumps in our fiducial disk model. The red solid curves represent a relatively strong and narrow bump with \(A_{\Sigma}=1,W/H_{0}=1.5\). A weaker bump (purple dotted curves) and a wider bump (yellow dashed curves) are shown for comparison. The top two panels show the bumps in surface density and in midplane pressure, respectively. The bumps appear less prominent in pressure than in \(\Sigma\) due to the steeper background power-law slope of the pressure. The third panel shows the logarithmic pressure gradient, i.e. the local power-law slope. Both the weak and wide bumps have negative slope everywhere, implying that dust drift is directed solely inward due to sub-Keplerian gas orbits. For the strong bump, the slope briefly becomes positive and dust can collect in the local pressure maximum.
In the weak and wide cases, the reduction of inward drift speeds -- where radial pressure gradients are weak, but still negative -- would increase the dust density as a traffic jam effect (Carrera et al., 2021). We focus on the stronger dust concentrations that occur in local pressure maxima.
The bottom panel of Figure 1 shows the disk's radial oscillation frequency squared, a combination of the epicyclic frequency, \(\kappa\), and radial buoyancy frequency, \(N_{R}\), computed as in L99. Negative values of \(\kappa^{2}+N_{R}^{2}\) would imply instability by the Solberg-Hoiland criterion (Lin & Youdin, 2015). While this quantity remains positive, it is reduced near \(R_{0}\) by the pressure bump. As we show in Section 3.4, even a partial reduction could trigger the RWI. Specifically, we find \(\kappa^{2}+N_{R}^{2}\lesssim 0.60\Omega_{\rm K}^{2}\) somewhere to be an approximate criterion for the RWI. The strong and narrow bump which traps particles also causes the largest reduction of \(\kappa^{2}+N_{R}^{2}\), which makes it closer to triggering the RWI. This particular bump turns out to be stable to the RWI, and the goal of this letter is to explore systematically when this is true in different circumstances.
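The radial oscillation frequency shown in the bottom panel can likewise be estimated directly from the model profiles. The sketch below does so for the height-integrated disk with \(\Gamma=4/3\), using simple finite differences and the standard expressions for the epicyclic and radial buoyancy frequencies; it continues the earlier snippets (reusing `R`, `Sigma`, `H`, `P_mid` and `Omega_K`) and is meant only to illustrate the bookkeeping, not to reproduce the figure, whose quantities are computed as in L99.

```python
import numpy as np

Gamma = 4.0 / 3.0
P_H = np.sqrt(2.0 * np.pi) * H * P_mid          # height-integrated pressure

dPH_dR = np.gradient(P_H, R)
Omega2 = Omega_K**2 + dPH_dR / (R * Sigma)      # radial force balance
kappa2 = np.gradient(R**4 * Omega2, R) / R**3   # epicyclic frequency squared

dlnPH_dR  = np.gradient(np.log(P_H), R)
dlnSig_dR = np.gradient(np.log(Sigma), R)
NR2 = -(dPH_dR / Sigma) * (dlnPH_dR / Gamma - dlnSig_dR)  # radial buoyancy squared

ratio = (kappa2 + NR2) / Omega_K**2
print("min (kappa^2 + N_R^2)/Omega_K^2 =", ratio.min())
# Values somewhere below ~0.6 flag likely RWI per the approximate criterion quoted above
```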
Figure 1: The behavior of three different bumps in our fiducial disk model: “Strong” (which is also narrow), “Weak” and “Wide” with \(A_{\Sigma}=(1,0.4,1)\), \(W/H_{0}=(1.5,1.5,3)\), respectively. _Top two panels_: Surface density, \(\Sigma\), and midplane pressure, \(P\), in the vicinity of the bump. _Third panel_: The pressure gradient, which only becomes positive in the “Strong” case, indicating a dust-trapping pressure maximum. _Bottom panel_: the disk’s squared radial oscillation frequency, relative to the squared Keplerian frequency. These radial frequencies are related to disk stability.
### RWI Stability Analysis
This work studies the RWI stability of disks with bumps, as parameterized in Equation 1, for the purpose of comparing to the conditions for trapping dust. However, determining RWI stability requires a numerical calculation as no general, analytic criterion for the RWI exists. The well-known Lovelace criterion (L99) provides a necessary, but not sufficient, condition for instability.
Many works, starting with Li et al. (2000), have computed the linear growth of the RWI. Of these, Ono et al. (2016, hereafter O16) performed the most thorough analysis to date of the RWI stability boundary, finding the amplitudes and widths needed for bumps to trigger instability. O16 also considered gaps and steps, which we ignore here, partly because the results are similar, and also because we wish to more thoroughly examine the bump case in this initial study.
Comparing to O16, our work has two key distinctions. First and foremost, we are comparing to the conditions for dust trapping. Second, O16 considered disks with a flat background (\(n=q=0\)) and a barotropic equation of state (\(T\propto\Sigma^{\Gamma-1}\)). We study non-barotropic disks as well, to consider a wider, and more realistic, range of background disk slopes and also to compare bumps in surface density to bumps in temperature. We thus note that -- as a means to the end of better understanding dust trapping pressure bumps -- our results build on O16 by performing the most thorough investigation to date of the RWI stability boundary for non-barotropic disks.
Our stability analysis uses the original height-integrated, linearized equations of L99. These equations are non-barotropic which allows for disk entropy gradients, and thus radial buoyancy. Furthermore, these equations assume adiabatic perturbations, i.e. no cooling. Specifically, we solve the ODE of Equation (10) in L99, which describes the behavior of linear perturbations to an equilibrium disk model. We use our disk model, Equation 1, as the equilibrium, using the \(P_{\rm H}\) as the relevant, height-integrated pressure.
We solve the governing ODE using the same method and boundary conditions as those of O16, described in their Appendix. The wave frequencies, \(\omega\), and RWI growth rates, \(\gamma\), are found as the complex eigenvalues of the resulting linear system, using Muller's method. For all linear stability calculations, we use \(N=3000\) grid points uniformly spaced in the radial domain \(R\in[0.3R_{0},3R_{0}].\) As a check on our calculations, we reproduced the stability boundary that O16 found for their bump cases, (iii) and (iv).
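Muller's method, used here to locate the complex eigenfrequencies, iterates a quadratic fit through three trial points and can converge to complex roots even from real starting guesses. A generic, self-contained sketch is given below, applied to a toy polynomial rather than to the actual RWI eigenvalue problem.

```python
import cmath

def muller(f, x0, x1, x2, tol=1e-12, maxiter=100):
    """Find a (possibly complex) root of f using Muller's method."""
    for _ in range(maxiter):
        f0, f1, f2 = f(x0), f(x1), f(x2)
        h1, h2 = x1 - x0, x2 - x1
        d1, d2 = (f1 - f0) / h1, (f2 - f1) / h2
        a = (d2 - d1) / (h2 + h1)
        b = a * h2 + d2
        disc = cmath.sqrt(b * b - 4.0 * a * f2)
        # choose the denominator with the larger magnitude for numerical stability
        denom = b + disc if abs(b + disc) > abs(b - disc) else b - disc
        x3 = x2 - 2.0 * f2 / denom
        if abs(x3 - x2) < tol:
            return x3
        x0, x1, x2 = x1, x2, x3
    return x3

# Toy example: a quadratic with complex roots 1 +/- 2i, starting from real guesses
root = muller(lambda z: z**2 - 2.0 * z + 5.0, 0.0, 0.5, 1.0)
print(root)   # approximately (1+2j) or (1-2j)
```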
While the RWI has been analyzed in 3D (Meheut et al., 2010; Lin, 2013), even the linear calculations are considerably more computationally intensive. Fortunately, growth rates appear similar in 2D and 3D (Meheut et al., 2012), though an investigation of the RWI stability boundary in 3D is left to future work.
A technical difficulty in studying the marginal stability to the RWI is that when \(\gamma=0\), the governing ODE is singular at corotation.2 To avoid the singularity, we are restricted to finding solutions near marginal stability, and our main results are for \(\gamma=5\times 10^{-3}\Omega_{0}\). Such growth rates are sufficiently slow for two reasons. First, RWI growth rates increase rapidly away from the stability boundary, so the precise threshold chosen for \(\gamma/\Omega_{0}\ll 1\) has little effect on the inferred boundary. Second, linear growth that is slower than hundreds of orbits is unlikely to be astrophysically relevant, as it becomes a significant fraction of the disk lifetime in the outer disk and unlikely to be the dominant dynamical effect.
Footnote 2: Corotation is where the wave’s pattern speed, \(\omega/m\), matches the disk’s orbital frequency, \(\Omega(R)\), and as a result the Doppler-shifted frequency vanishes: \(\Delta\omega\equiv\omega-m\Omega(R)=0\)(Tsang & Lai, 2008). In practice, the corotation radius is located near the center of the bump.
O16 were able to remove the corotation singularity of order \(\mathcal{O}(1/\Delta\omega)\) for marginally stable states (see their Section 5.2) and confirm that the \(\gamma=0\) and small \(\gamma\) boundaries were indistinguishable. Their technique works for barotropic disks with no radial entropy gradients, but could not be applied to our non-barotropic model. Moreover, for our non-barotropic case, the corotation singularity is of higher order, \(\mathcal{O}(1/\Delta\omega^{2})\), which means that we require higher grid resolution to solve for a given small growth rate.
In finding the stability boundary, we fix the azimuthal wavenumber to \(m=1\). This mode was found to be the most unstable near marginal stability by O16, in the sense of giving instability for the smallest bump amplitudes, at a given width. We also investigated whether this result held for our models, which are non-barotropic and include smaller values of \(h_{0}\). With our fiducial model, we confirmed that \(m=1\) is the fastest growing mode as \(\gamma\to 0\). However near the \(\gamma=5\times 10^{-3}\Omega_{0}\) threshold used in this work, \(m>1\) modes can be the fastest growing, but only for narrow widths (\(W/H_{0}\lesssim 0.5\)). The RWI stability boundary would thus move to somewhat smaller amplitudes at narrow widths if \(m>1\) modes were included. But this shift would not affect our results because amplitudes are already too low for dust-trapping in this region (as shown in Figure 2).
## 3 Results
We find the properties of ring-shaped bumps in gas disks which could explain the bright dust rings observed by ALMA, because they can both trap dust in a pressure maximum, and remain stable to the RWI. A dust trap should be stable for hundreds of orbital times (\(\Omega_{\rm K}^{-1}\)) for significant amounts of dust to accumulate.
The radial drift timescale is at least \(\sim\)\(1/h_{0}^{2}\Omega_{0}^{-1}\simeq 400\Omega_{0}^{-1}\) if dust over a large radial scale \(\sim\)\(R_{0}\) accumulates in a ring. Drift times are longer if particles are not of the optimum size (\(\sim\)cm) at which the drag and orbital timescales match (Adachi et al., 1976; Chiang & Youdin, 2010). Despite uncertainties in grain size and ring feeding zone, this drift timescale is similar to, or longer than, the adopted "marginal" growth timescale, \(1/\gamma=200\Omega_{0}^{-1}\) in our RWI analysis. Our stability constraints would become somewhat tighter if pressure traps need to survive for even longer to accumulate dust. However, as noted in Section 2.2, our results are not very sensitive to the choice of growth rate, as long as \(\gamma/\Omega_{0}\ll 1\).
Results for our fiducial disk model are in Section 3.1. We vary the background disk power-laws and aspect ratio in Section 3.2 and then consider the effect of the temperature on the pressure bump in Section 3.3. Section 3.4 presents an approximate, empirical, and apparently rather general stability criterion for the RWI.
### Fiducial case
Our fiducial model considers a pressure bump parameterized by choices of the bump amplitude in surface density, \(A_{\Sigma}\), and the width, \(W\). This bump has the temperature of the background disk (i.e. \(A_{T}=0\)) with background disk parameters given in Section 2.1.
The ability of these bumps to produce pressure maxima, and thus trap dust, is shown in the left panel of Figure 2. The colored region shows that bumps with larger amplitudes and narrower widths produce dust traps. The critical curve (in purple) shows the minimum amplitude needed for a dust trap at a given width, so that \(dP/dR\geq 0\) somewhere. At low amplitudes, the critical amplitude increases linearly with width, simply because a bump's maximum pressure gradient scales as \(A_{\Sigma}/W\). For \(A_{\Sigma}\gtrsim 1\), however, the critical amplitude increases more sharply. To explain this steepening, we look at the effect of the bump on \(d\ln P/d\ln R\). The logarithmic gradient is the most relevant one since the pressure gradient is affected by other power-laws, such as the Keplerian rotation \(\Omega_{\rm K}\propto R^{-3/2}\). We find the dependence of the power-law slope \(d\ln\Sigma/d\ln R\) (and consequently \(d\ln P/d\ln R\)) on \(A_{\Sigma}\) to diminish for \(A_{\Sigma}\gtrsim 1\). Thus, dust-trapping pressure bumps are unlikely to be wider than a few scale-heights since the required bump amplitudes would be extremely large.
Figure 2: The ability of ring-like surface density bumps — with amplitude, \(A_{\Sigma}\), and width relative to local gas scale-height, \(W/H_{0}\) — to trap dust and/or trigger hydrodynamic instability is shown, for our fiducial disk model. _Left:_ the shaded region, above and to the left of the purple solid curve, denotes bumps that reverse the sign of the background pressure gradient and thus can trap dust. _Middle:_ the shaded region, below and to the right of the red dashed curve, denotes bumps that are stable to the RWI, or with a low growth rate. _Right:_ the yellow shaded region shows where the previous shaded regions overlap, giving a region where the bumps can both trap dust and not trigger significant RWI. Gas rings with these properties could produce observed dust rings. Bumps with amplitude and width outside the shaded region will fail to trap dust and/or be modified by the RWI.

The middle panel of Figure 2 shows the amplitudes and widths that are either unstable or stable to the RWI. Qualitatively, the RWI-unstable region consists of larger amplitudes and narrower widths, similar to the dust-trapping region. This general similarity is the reason we investigate these regions more quantitatively. The curve of marginal RWI-stability (red dashed) was determined numerically, as described in Section 2.2, and we explain its detailed shape in Section 3.4.
To roughly explain the behavior of the RWI stability curve, we note that the instability is mainly driven by shear from the pressure gradient -- or more specifically, the gradient of the pressure gradient. Orbital shear thus scales as \(A_{\Sigma}/W^{2}\) at low amplitudes, explaining the \(A_{\Sigma}\propto W^{2}\) slope at low amplitudes. At large amplitudes, the strength of pressure gradients saturates with increasing amplitude, since \(\Sigma^{-1}dP_{\mathrm{H}}/dR\propto d\ln\Sigma/dR\) with no temperature component to the bump. Again, the weak dependence of \(d\ln\Sigma/dR\) on \(A_{\Sigma}\) at \(A_{\Sigma}\gtrsim 1\) explains the steepening of the RWI stability curve in \(A_{\Sigma}\) vs. \(W\) at large amplitudes.
Since the stability boundary crosses the critical curve for pressure trapping, an overlap for RWI-stable pressure bumps exists as shown in the right panel of Figure 2 (shaded yellow). If pressure bumps are the cause of ring-shaped dust structures in ALMA disks, then the bumps should lie in this wedge-shaped region. Specifically, for the fiducial case, the region of stable dust traps occurs above a minimum bump amplitude and for a range of widths, starting at around a gas scale-height, \(W\sim H_{0}\). For larger amplitudes (above the minimum), the minimum width and the range of widths both increase, reaching a few \(H_{0}\).
However, the properties of stable dust traps, and even their existence, depend on disk properties that we vary in the next subsections.
### Background disk effects
We now probe how the region of stable dust traps depends on the properties of the background disk, namely the aspect ratio, \(h_{0}\), and the power-laws, \(n\) and \(q\), for the surface density and temperature, respectively. As with the fiducial case, we fix \(A_{T}=0\).
Specifically, we consider a range of values \(n\in\left\{-2,-1,0\right\},\,q\in\left\{-1,-0.5,0\right\},\,h_{0}\in\left\{0.03,0.05,0.1\right\}\), which are consistent with observational expectations as discussed in Section 2.1. While the flat \(n=0\) and \(q=0\) cases seem less realistic, they are included as an idealized control.
The effect of the background disk slopes, \(n\) and \(q\), on the properties of stable dust traps is shown in the left and middle panels, respectively, of Figure 3. Neither slope has a significant effect on the RWI stability boundary (dashed curves on the left). This independence is expected since smooth disk gradients do not introduce significant shear or vortensity. Both slopes do affect the pressure trapping boundary, on the right of the stable dust trapping region. This effect is also expected since a pressure maximum involves a competition between the pressure gradients caused by the background and bump. Thus with flatter background (e.g. \(n=0\), \(q=0\)), smaller and wider bumps can create pressure traps. The effect for the temperature slope \(q\) appears weaker for two reasons. First, the pressure has a weaker dependence on temperature, \(P\propto\Sigma\sqrt{T}\), when the scale-height is accounted for. Second, a smaller range of \(q\) is considered.
The disk aspect ratio, \(h_{0}\), has a strong effect, as shown in the right panel of Figure 3. Colder, thinner disks with smaller \(h_{0}\) have an expanded region of parameter space for stable, dust-trapping rings. To understand the effect of varying \(h_{0}\equiv H_{0}/R_{0}\), it is important to note that widths are plotted relative to \(H_{0}\). From this perspective it appears that with changing \(h_{0}\) the RWI stability boundary changes little, while the pressure trapping boundary expands for colder disks. However, the effect is perhaps easier to understand when considering bump widths relative to the radial length-scale of the disk, \(W/R_{0}\). From this perspective the pressure trapping boundary does not change, since the pressure gradients of the background and bump are described by the length-scales \(R_{0}\) and \(W\), independent of \(H_{0}\). Meanwhile, the RWI stability boundary moves to smaller \(W/R_{0}\) for smaller \(h_{0}\). This shift occurs because the strength of pressure gradients relative to Keplerian gravity scales as \(h_{0}^{2}\). Thus for a smaller value of \(h_{0}\), bumps have to be narrower to produce the same velocity deviation.
The fact that colder disks with smaller \(h_{0}\) can have lower-amplitude stable pressure traps is significant. Regardless of their origin, lower-amplitude bumps should be more readily produced in these disks.
### Effects of heated bumps
Thus far, we have considered isothermal bumps in surface density, i.e. \(A_{T}=0\). Since the thermodynamic properties of pressure bumps are not well-constrained, we consider bumps with a temperature component as well. Specifically, we examine two types of bumps: in surface density which are also heated, and in temperature alone, which Kim et al. (2020) showed were a possibility for the CR Cha disk. For reference, the left panel of Figure 4 shows the properties of stable dust traps in our fiducial, isothermal model. The other panels show the effects of heated bumps. The parameter space for stable dust traps is reduced or eliminated in heated bumps, as explained below.
The middle panel of Figure 4 considers bumps with \(T\propto\Sigma^{\Gamma-1}\), as by adiabatic compression with no cooling. For this case (only), the disk temperature (background and bump) is given not by Equation 1 but by the adiabatic relation \(T/T_{0}=(\Sigma/\Sigma_{0})^{\Gamma-1}\), with the fiducial values of \(n\) and \(h_{0}\).3
Footnote 3: The adiabatic case thus has \(q=n(\Gamma-1)=-1/3\) far away from the bump. We showed in Figure 3 that modest changes to \(q\) have little effect on our results.
Figure 3: The effect of the background disk on the properties of stable dust-trapping rings (indicated by the shaded regions) is shown, generalizing the fiducial case shown in Figure 2. The effects of slopes in the background surface density (\(n\), _left_) and temperature (\(q\), _middle_) and of the aspect ratio (\(h_{0}\), _right_) are shown. The most significant effect is that colder disks with smaller \(h_{0}\) can host stable dust trapping rings with smaller amplitudes, and over a greater range of widths.

Figure 4: The effect of bump heating on the existence of stable dust traps, where the bump amplitude on the \(y\)-axis is \(A_{T}\) in the right panel and \(A_{\Sigma}\) in the left and middle panels. _Left_: the red shaded region denotes stable dust traps in the fiducial case of isothermal bumps. This red region is repeated more transparently in the other panels for comparison, with arrows roughly showing the directions that the boundaries change. _Middle_: the red shaded region shows stable dust traps for bumps in both surface density and temperature, related adiabatically. The parameter space for stable dust traps is reduced. _Right_: the case of a temperature bump with no surface density variation. There is no shaded region, as stable dust traps do not exist in this case due to the relative locations of the pressure trapping boundary (_solid_) and the RWI boundary (_dashed_).

For this adiabatic case, the parameter space for stable dust traps is reduced, compared to the isothermal case. We roughly explain this result as follows. The boundary for pressure trapping (solid curve) expands to slightly larger widths. This expansion occurs because the pressure gradient from adiabatic bumps has an extra contribution from the temperature bump, in addition to the surface density contribution. The more significant effect is that the RWI-stable region contracts, also moving to larger widths.
This contraction of the RWI-stable region occurs for two reasons. First, as just noted, with the additional temperature component the bump produces stronger pressure gradients and thus more shear. Second, the pressure gradient acceleration, \(\Sigma^{-1}dP_{\rm H}/dR\propto\Sigma^{\Gamma-1}d\ln\Sigma/dR\), does not saturate with increasing \(A_{\Sigma}\), but continues to increase since \(\Gamma-1=1/3>0\). Thus the amplitude-width curve of marginal stability does not steepen like the isothermal case discussed in Section 3.1, or as seen in the left panel of Figure 4. The net result of both shifts is a smaller region of parameter space for stable dust traps, when the bump is adiabatically heated vs. remaining isothermal.
The right panel of Figure 4 considers a temperature bump with no surface density component (i.e. \(A_{T}>0\), \(A_{\Sigma}=0\)). The background disk parameters \(n,q\) and \(h_{0}\) are the same as those of the fiducial model. In this case, there are no dust traps that are stable to the RWI. The pressure trapping boundary contracts significantly to narrower widths, compared to a surface density bump. The effect arises because, with the scaling \(P\propto\Sigma\sqrt{T}\), a temperature bump produces weaker pressure gradients compared to a surface density bump. The RWI-stable region contracts, moving to wider widths. The main effect is again that at large amplitudes the pressure gradient acceleration, \(\Sigma^{-1}dP_{\rm H}/dR\propto dT/dR\), increases with the amplitude (now \(A_{T}\), instead of \(A_{\Sigma}\)) faster than either the isothermal or the adiabatic case, without any saturation. As a result, the marginal stability curve flattens to \(A_{T}\propto W\) at large amplitudes. The net effect of the shifts to both boundaries is that all bumps with a pressure maximum are in the RWI-unstable region.
The thermodynamics of any process that creates pressure bumps is crucial for understanding whether the dust traps can remain RWI-stable. Isothermal pressure bumps have the largest parameter space of stable dust traps, which is reduced for adiabatically heated bumps. A temperature bump with no accumulation of surface density is unlikely to create a stable dust trap. From Figure 3 (right panel), lower \(h_{0}\) values will introduce a region of stable dust traps for pure temperature bumps. Nevertheless, by significantly reducing the allowed parameter space, our results disfavor the hypothesis of dust-trapping in gas temperature bumps.
### New approximate stability criterion for RWI
Unfortunately, there is no analytic criterion for the RWI which is both necessary and sufficient. Such a criterion would greatly facilitate our exploration of stable dust traps, and many other applications of the RWI. We report an approximate empirical criterion here, which might prove useful or spur further developments. We first introduce some well-known stability criteria for context.
The Lovelace criterion states that a maximum in the fluid vortensity is a necessary, but not sufficient, condition for the RWI (L99). Figure 5 confirms that the Lovelace criterion (dot-dashed curve) lies well below the numerically determined RWI stability boundary (dashed curve). Recall that unstable (or potentially unstable in the case of the Lovelace criterion) regions lie above and to the left of stability boundaries.
The Rayleigh criterion, \(\kappa^{2}<0\), gives the axisymmetric condition for instability to radial oscillations for a barotropic rotating fluid, such as a disk (Chandrasekhar, 1961). The generalization to baroclinic fluids is one of the Solberg-Hoiland criteria, \(\kappa^{2}+N_{R}^{2}<0\) (Tassoul, 1978). It is well known that the non-axisymmetric RWI occurs when disks are stable to both of these axisymmetric criteria (L99). In summary, the RWI criterion lies between the Lovelace and Solberg-Hoiland criteria.
We find a simple modification of the Solberg-Hoiland criterion, that somewhere in the flow:
\[\kappa^{2}+N_{R}^{2}\lesssim 0.6\Omega_{\rm K}^{2}\,. \tag{4}\]
This approximate condition gives an imperfect, but surprisingly good description of the numerically determined RWI criterion. Physically, this criterion states that the squared radial oscillation frequency should be less than about 60% of the squared Keplerian frequency, somewhere, for the RWI.
Figure 5 compares the new approximate (dotted curve) and precise numerical (dashed curve) criteria for the same isothermal, adiabatic and heated bump cases as Figure 4. The approximate criterion underestimates instability at low amplitudes and narrow widths, and overestimates instability in the opposite regime. But at least on a logarithmic scale, the accuracy is reasonable.
Thus our approximate explanations of the shape of the RWI stability boundary could be made more precise by a consideration of radial oscillation frequency, which is dominated by \(\kappa\), with \(N_{R}\) a modest correction in our models, and zero in the adiabatic case. With \(\kappa^{2}=R^{-3}d(R^{4}\Omega^{2})/dR\) (and \(\Omega\) the orbital frequency including deviations from Keplerian due to pressure gradients), we justify that our arguments based on shear in \(\Omega\) apply to the RWI. We hope that our approximate criterion proves useful for similar interpretations or quick estimates, and especially that it might motivate deeper insights into the nature of the RWI.
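For quick estimates of this kind, the sketch below evaluates \(\min(\kappa^{2}/\Omega_{\rm K}^{2})\) for a Gaussian surface-density bump, neglecting \(N_{R}^{2}\) (a modest correction in our models and zero in the adiabatic case); approximate RWI instability is indicated when the minimum falls below roughly 0.6. The bump form, the background slopes, the value \(h_{0}=0.05\) and the vertically integrated momentum balance are illustrative assumptions, as in the earlier sketch.

```python
import numpy as np

def min_kappa2_over_OmegaK2(A_sigma, W_over_H0, h0=0.05, n=-1.0, q=-0.5, npts=8001):
    """Minimum of kappa^2 / Omega_K^2 for a Gaussian surface-density bump in a
    locally isothermal disk, with the rotation set by radial momentum balance."""
    GM = 1.0                                             # gravitational parameter (code units, R0 = 1)
    R = np.linspace(0.5, 1.5, npts)
    W = W_over_H0 * h0
    sigma = R ** n * (1.0 + A_sigma * np.exp(-(R - 1.0) ** 2 / (2.0 * W ** 2)))
    cs2 = h0 ** 2 * GM * R ** q                          # sound speed squared, cs = h0 v_K at R0
    Pi = sigma * cs2                                     # vertically integrated pressure
    OmegaK2 = GM / R ** 3
    Omega2 = OmegaK2 + np.gradient(Pi, R) / (R * sigma)  # Omega^2 = Omega_K^2 + (R Sigma)^-1 dPi/dR
    kappa2 = np.gradient(R ** 4 * Omega2, R) / R ** 3    # epicyclic frequency squared
    return float(np.min(kappa2 / OmegaK2))

# A bump is flagged as (approximately) RWI unstable when this drops below ~0.6.
print(min_kappa2_over_OmegaK2(A_sigma=1.0, W_over_H0=0.5))
```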
## 4 Conclusions
The leading hypothesis to explain the continuum rings imaged by ALMA is the trapping of dust in a disk bump with a pressure maximum. This letter constrains this hypothesis by investigating whether these bumps can be stable to the RWI. Regardless of their origin, the pressure bumps should remain dynamically stable for long enough to trap significant dust. We have shown that dust-trapping pressure bumps can be stable to the RWI, adding further theoretical support for the hypothesis. Moreover, our results could be used to plan and interpret searches for pressure bumps with ALMA, via the intensity and velocity shifts of molecular gas in the bumps.
Our stability analysis finds that low-amplitude pressure bumps cannot be stable dust traps. At low bump amplitudes, \(A_{\Sigma}\lesssim 0.2\) for our fiducial case, the narrow widths needed for a pressure maximum also trigger the RWI. For high enough bump amplitudes, however, stable dust traps exist for a range of bump widths that depends significantly on temperature. The temperature of the disk background and of the bump relative to this background are both important, especially with the background temperature parameterized as the disk aspect ratio, \(h_{0}\).
Cooler temperatures, in either bump or background disk, favor the existence of stable dust traps. For lower values of the disk aspect ratio, \(h_{0}\), stable dust traps are found for lower amplitude bumps and over a wider range of widths. Our stability constraints thus imply that dust traps should be more readily produced in thinner, colder disks.
Our analysis also constrains the allowed temperature of bumps relative to the disk background. Cooler pressure bumps, i.e. those that are isothermal with the disk's background temperature, can be stable dust traps over a large range of bump widths, from one to several disk scale-heights. As bump temperature increases, the range of stable widths decreases. For hot pressure bumps, i.e. those with no surface density excess, all bumps with a pressure maximum are unstable, for our fiducial disk model.
The background slopes of disk surface density and temperature are found to have a relatively modest effect on our results, over the relevant parameter range considered. This finding limits the impact of uncertainties in disk parameters. We also report a new approximate criterion for the RWI, that \(\kappa^{2}+N_{R}^{2}\lesssim 0.6\Omega_{\rm K}^{2}\) somewhere in the flow.
Figure 5: Our approximate stability criterion (_dotted_) is compared to our numerical results (_dashed_) for the same models as Figure 4. The approximate condition for marginal stability is \(\min((\kappa^{2}+N_{R}^{2})/\Omega_{\rm K}^{2})=0.6\). The Lovelace criterion (_dot-dashed_), a necessary but not sufficient criterion for the RWI, is shown for comparison. Our approximate criterion can be used to estimate conditions for the RWI to occur.

There are possible extensions that could also address some limitations of this initial study. Our analysis of dust traps in gas bumps could be extended to dust traps at gap edges. Moreover, the stability properties of gaps carved by planets could be analyzed (as in Lin & Papaloizou, 2010; Cimerman & Rafikov, 2023). Additional physical effects could be included such as 3D motions, radiative cooling, self-gravity of massive disks and magnetic fields. See Lesur et al. (2022) for a review of disk instabilities from these effects. For the disk bumps considered here, the RWI (and the related Papaloizou & Pringle (1985) instability for the barotropic case) is the instability that arises from the simplest and most general physical ingredients, and thus the natural starting point. The effect of radiative cooling on the linear RWI appears to be limited (slightly decreasing growth rates, Huang & Yu, 2022), even though cooling can affect the Rossby vortex lifetimes significantly (Fung & Ono, 2021). Nevertheless, more study is needed. Ultimately, radiative transfer models based on hydrodynamic numerical simulations with a distribution of dust grain sizes (as in Krapp et al., 2022) could give more detailed and realistic observational predictions for ALMA.
A key reason to better understand observed disk structures is to learn about how planets form. Some dust traps may be caused by already-formed planets, and others may arise from (magneto)hydrodynamic processes in disks. It is very important to understand which case is more prevalent. However, in either case, dust that concentrates in these bumps is likely to grow into planetesimals. Such growth could occur by enhanced collisional growth, direct gravitational collapse and/or dust concentration by the streaming instability (Chiang & Youdin, 2010; Johansen et al., 2014). The streaming instability is a mechanism to create dust overdensities from the mutual aerodynamic coupling of dust and gas in disks (Youdin & Goodman, 2005). These overdensities can then collapse gravitationally into planetesimals, typically in binary pairs (Nesvorny et al., 2019). However, particle concentration by the streaming instability already requires locally elevated values of the dust/gas ratio (Johansen et al., 2009; Li & Youdin, 2021). This requirement can become even more stringent when a broad dust size distribution is accounted for (Krapp et al., 2019). Several studies show that the streaming instability is most likely to be triggered in overdense dust rings, caused by ice lines and/or pressure bumps (e.g. Drazkowska et al., 2013; Schoonenberg & Ormel, 2017; Drazkowska & Dullemond, 2018; Ida et al., 2021). And the streaming instability has been studied in the specific context of pressure bumps (Onishi & Sekiya, 2017; Carrera et al., 2021). A better understanding of the dust structures observed by ALMA is thus of crucial importance for theoretical models of planet formation.
The authors acknowledge support from NASA through TCAN grant 80NSSC21K0497, and thank TCAN team members, including Wladimir Lyra, Chao-Chin Yang, and Jacob Simon for constructive comments. L. K. acknowledges support by the Heising-Simons 51 Pegasi b postdoctoral fellowship. The authors are thankful to Feng Long, Ilaria Pascucci and the Star and Planet Formation Theory Group at Steward Observatory for useful discussions and feedback.
|
2305.07581 | Nonparametric data segmentation in multivariate time series via joint characteristic functions | Modern time series data often exhibit complex dependence and structural changes which are not easily characterised by shifts in the mean or model parameters. We propose a nonparametric data segmentation methodology for multivariate time series termed NP-MOJO. By considering joint characteristic functions between the time series and its lagged values, NP-MOJO is able to detect change points in the marginal distribution, but also those in possibly non-linear serial dependence, all without the need to pre-specify the type of changes. We show the theoretical consistency of NP-MOJO in estimating the total number and the locations of the change points, and demonstrate the good performance of NP-MOJO against a variety of change point scenarios. We further demonstrate its usefulness in applications to seismology and economic time series. | Euan T. McGonigle, Haeran Cho | 2023-05-12T16:15:41Z | http://arxiv.org/abs/2305.07581v3 |

# Nonparametric data segmentation in multivariate time series via joint characteristic functions
###### Abstract
Modern time series data often exhibit complex dependence and structural changes which are not easily characterised by shifts in the mean or model parameters. We propose a nonparametric data segmentation methodology for multivariate time series termed NP-MOJO. By considering joint characteristic functions between the time series and its lagged values, NP-MOJO is able to detect change points in the marginal distribution, but also those in possibly non-linear serial dependence, all without the need to pre-specify the type of changes. We show the theoretical consistency of NP-MOJO in estimating the total number and the locations of the change points, and demonstrate the good performance of NP-MOJO against a variety of change point scenarios. We further demonstrate its usefulness in applications to seismology and economic time series.
_Keywords:_ change point detection, joint characteristic function, moving sum, multivariate time series, nonparametric
## 1 Introduction
Change point analysis has been an active area of research for decades, dating back to Page (1954). Literature on change point detection continues to expand rapidly due to its prominence in numerous applications, including biology (Jewell et al., 2020), financial analysis (Lavielle and Teyssiere, 2007) and environmental sciences (Carr et al., 2017). Considerable efforts have been made for developing computationally and statistically efficient methods for data segmentation, a.k.a. multiple change point detection, in the mean of univariate data under independence (Killick et al., 2012; Frick et al., 2014; Fryzlewicz, 2014) and permitting serial dependence (Tecuapetla-Gomez and Munk, 2017; Dette et al., 2020; Cho and Kirch,
2022; Cho and Fryzlewicz, 2022). There also exist methods for detecting changes in the covariance (Aue et al., 2009; Wang et al., 2021), parameters under linear regression (Bai and Perron, 1998; Xu et al., 2022) or other models (Fryzlewicz and Subba Rao, 2014; Safikhani and Shojaie, 2022) in fixed and high dimensions. For an overview, see Truong et al. (2020) and Cho and Kirch (2023).
Any departure from distributional assumptions such as independence and Gaussianity tends to result in poor performance of change point algorithms. Furthermore, it may not be realistic to assume any knowledge of the type of change point that occurs, or to make parametric assumptions on the data generating process, for time series that possess complex structures and are observed over a long period. Searching for change points in one property of the data (e.g. mean), when the time series instead undergoes changes in another (e.g. variance), may lead to misleading conclusions and inference on such data. Therefore, it is desirable to develop flexible, nonparametric change point detection techniques that are applicable to detect general changes in the underlying distribution of serially dependent data.
There are several strategies for the nonparametric change point detection problem, such as those based on the empirical cumulative distribution and density functions (Carlstein, 1988; Zou et al., 2014; Haynes et al., 2017; Padilla et al., 2021; Vanegas et al., 2022; Padilla et al., 2022, 2023), kernel transforms of the data (Harchaoui et al., 2009; Celisse et al., 2018; Arlot et al., 2019; Li et al., 2019) or \(U\)-statistics measuring the 'energy'-based distance between different distributions (Matteson and James, 2014; Chakraborty and Zhang, 2021; Boniece et al., 2022). There also exist graph-based methods applicable to non-Euclidean data (Chen and Zhang, 2015; Chu and Chen, 2019). All these methods can only detect changes in the marginal distribution of the data and apart from Padilla et al. (2023), assume serial independence. We also mention Cho and Fryzlewicz (2012), Preuss et al. (2015) and Korkas and Fryzlewicz (2017) where the problem of detecting changes in the second-order structure is addressed, but their methods do not have power against changes in non-linear dependence.
We propose NP-MOJO, a **n**on**p**arametric **mo**ving sum (MOSUM) procedure for detecting changes in the **jo**int characteristic function, which detects multiple changes in serial, possibly non-linear dependence as well as in the marginal distribution of a multivariate time series \(\{X_{t}\}_{t=1}^{n}\). We adopt a moving sum procedure to scan the data for multiple change points. The moving sum methodology has successfully been applied to a variety of change point testing (Chu et al., 1995; Huskova and Slaby, 2001) and data segmentation problems (Eichinger and Kirch, 2018). Here, we combine it with a detector statistic carefully designed to detect changes in complex dependence structure beyond those detectable from considering the marginal distribution only. Specifically, we utilise an energy-based distributional discrepancy that measures any change in the joint characteristic function of the time series at some lag \(\ell\geq 0\), which allows for detecting changes in the joint distribution of \((X_{t},X_{t+\ell})\) beyond the changes in their linear dependence. To the best of our knowledge, NP-MOJO is the first nonparametric
methodology which is able to detect changes in non-linear serial dependence in multivariate time series.
We establish that NP-MOJO achieves consistency in estimating the number and locations of the change points for a given lag, and propose a methodology that extends this desirable property of single-lag NP-MOJO to multiple lags. Combined with a dependent multiplier bootstrapping procedure, NP-MOJO and its multi-lag extension perform well across a wide range of change point scenarios in simulations and real data applications.
The remainder of the article is organised as follows. Section 2 introduces the piecewise stationary time series model and describes the measure of change in serial dependence. In Section 3, we propose the NP-MOJO procedure for detecting changes in the joint distribution of \((X_{t},X_{t+\ell})\) at a given \(\ell\geq 0\), as well as its multi-lag extension, and establish their consistency in multiple change point detection. In Section 4, we discuss recommendations for the practical implementation of the method, followed by simulation studies (Section 5) and applications to seismology and economic data sets (Section 6). Accompanying R software implementing NP-MOJO is available from [https://github.com/EuanMcGonigle/CptNonPar](https://github.com/EuanMcGonigle/CptNonPar).
## 2 Model and measure of discrepancy
We observe a multivariate time series \(\{X_{t}\}_{t=1}^{n}\) of (finite) dimension \(p\), where
\[X_{t}=\sum_{j=0}^{q}X_{t}^{(j)}\cdot\mathbb{I}\{\theta_{j}+1\leq t \leq\theta_{j+1}\} \tag{1}\]
with \(X_{t}=(X_{t1},\ldots,X_{tp})^{\top}\). For each sequence \(\{X_{t}^{(j)}:\ t\geq 1\},\,j=0,\ldots,q\), there exists an \(\mathbb{R}^{p}\)-valued measurable function \(g^{(j)}(\cdot)=(g_{1}^{(j)}(\cdot),\ldots,g_{p}^{(j)}(\cdot))^{\top}\) such that \(X_{t}^{(j)}=g^{(j)}(\mathcal{F}_{t})\) with \(\mathcal{F}_{t}=\sigma(\varepsilon_{s}:s\leq t)\), and i.i.d. random elements \(\varepsilon_{t}\). We assume that \(g^{(j-1)}\neq g^{(j)}\) for all \(j=1,\ldots,q\), such that under the model (1), the time series undergoes \(q\) change points at locations \(\Theta=\{\theta_{1},\ldots,\theta_{q}\}\), with the notational convention that \(\theta_{0}=0\) and \(\theta_{q+1}=n\). That is, \(\{X_{t}\}_{t=1}^{n}\) consists of \(q+1\) stationary segments where the \(j\)-th segment is represented in terms of a segment-dependent 'output' \(g^{(j)}(\mathcal{F}_{t})\), with the common 'input' \(\mathcal{F}_{t}\) shared across segments such that dependence across the segments is not ruled out. Each segment has a non-linear Wold representation as defined by Wu (2005); this representation includes commonly adopted time series models including ARMA and GARCH processes.
Denote the inner product of two vectors \(x\) and \(y\) by \(\langle x,y\rangle=x^{\top}y\), and let \(\imath\) denote the imaginary unit with \(\imath^{2}=-1\). For some integer \(\ell\), define the joint characteristic function of \(\{X_{t}^{(j)}\}_{t\in\mathbb{Z}}\) at lag \(\ell\) as
\[\phi_{\ell}^{(j)}(u,v)=\mathbb{E}\left[\exp\left(\imath\langle u,X_{1}^{(j)}\rangle+\imath\langle v,X_{1+\ell}^{(j)}\rangle\right)\right], \quad 0\leq j\leq q.\]
We propose to measure the size of changes between adjacent segments under (1), using an 'energy-based' distributional discrepancy given by
\[d_{\ell}^{(j)}=\int_{\mathbb{R}^{p}}\int_{\mathbb{R}^{p}}\left|\phi_{\ell}^{(j)}(u,v)-\phi_{\ell}^{(j-1)}(u,v)\right|^{2}w(u,v)dudv,\quad 1\leq j\leq q, \tag{2}\]
where \(w(u,v)\) is a positive weight function for which the above integral exists. For given lag \(\ell\geq 0\), the quantity \(d_{\ell}^{(j)}\) measures the weighted \(L_{2}\)-norm of the distance between the lag \(\ell\) joint characteristic functions of \(\{X_{t}^{(j-1)}\}_{t\in\mathbb{Z}}\) and \(\{X_{t}^{(j)}\}_{t\in\mathbb{Z}}\). A discrepancy measure of this form is a natural choice for nonparametric data segmentation, since:
**Lemma 1**.: We have \(d_{\ell}^{(j)}=0\) for all \(\ell\geq 0\) if and only if \(g^{(j)}=g^{(j-1)}\).
Lemma 1 extends the observation made in Matteson and James (2014) about the correspondence between the characteristic function and marginal distribution. It shows that by considering the joint characteristic functions \(\phi_{\ell}^{(j)}(u,v)\) at multiple lags \(\ell\geq 0\), the discrepancy \(d_{\ell}^{(j)}\) is able to capture changes in the serial dependence as well as those in the marginal distribution of \(\{X_{t}\}_{t=1}^{n}\).
Let \(\|x\|\) denote the Euclidean norm of a vector \(x\). For some choices of the weight function \(w(u,v)\), the discrepancy \(d_{\ell}^{(j)}\) is associated with the expectations of the kernel-based transforms of \(Y_{t}^{(j)}=(X_{t}^{(j)},X_{t+\ell}^{(j)})\) and \(\tilde{Y}_{t}^{(j)}=(\tilde{X}_{t}^{(j)},\tilde{X}_{t+\ell}^{(j)})\), where \(\tilde{X}_{t}^{(j)}=g^{(j)}(\tilde{\mathcal{F}}_{t})\) with \(\tilde{\mathcal{F}}_{t}=\sigma(\tilde{\varepsilon}_{s}:s\leq t)\) and \(\tilde{\varepsilon}_{t}\) is an independent copy of \(\varepsilon_{t}\).
**Lemma 2**.:
* For any \(\beta>0\), suppose that \(d_{\ell}^{(j)}\) in (2) is obtained with respect to the following weight function: \[w_{1}(u,v)=C_{1}(\beta,p)^{-2}\exp\left(-\frac{1}{2\beta^{2}}\left(\|u\|^{2} +\|v\|^{2}\right)\right)\ \ \text{with}\ \ C_{1}(\beta,p)=(2\pi)^{p/2}\beta^{p}.\] Then, the function \(h_{1}:\mathbb{R}^{2p}\times\mathbb{R}^{2p}\to[0,1]\) defined as \(h_{1}(x,y)=\exp(-\beta^{2}\|x-y\|^{2}/2)\) for \(x,y\in\mathbb{R}^{2p}\), satisfies \[d_{\ell}^{(j)}=\mathbb{E}\left[h_{1}\left(Y_{1}^{(j)},\tilde{Y}_{1}^{(j)} \right)\right]+\mathbb{E}\left[h_{1}\left(Y_{1}^{(j-1)},\tilde{Y}_{1}^{(j-1)} \right)\right]-2\mathbb{E}\left[h_{1}\left(\tilde{Y}_{1}^{(j)},Y_{1}^{(j-1)} \right)\right].\]
* For any \(\delta>0\), suppose that \(d_{\ell}^{(j)}\) is obtained with \[w_{2}(u,v)=C_{2}(\delta,p)^{-2}\prod_{s=1}^{p}u_{s}^{2}v_{s}^{2}\exp\left(- \delta\left(u_{s}^{2}+v_{s}^{2}\right)\right)\ \ \text{with}\ \ C_{2}(\delta,p)=\frac{\pi^{p/2}}{2^{p}\delta^{3p/2}}.\] Then, the function \(h_{2}:\mathbb{R}^{2p}\times\mathbb{R}^{2p}\to[-2e^{-2/3},1]\) defined as \[h_{2}(x,y)=\prod_{r=1}^{2p}\frac{\left(2\delta-(x_{r}-y_{r})^{2}\right)\exp \left(-\frac{1}{4\delta}(x_{r}-y_{r})^{2}\right)}{2\delta}\]
for \(x=(x_{1},\ldots,x_{2p})^{\top}\) and \(y=(y_{1},\ldots,y_{2p})^{\top}\), satisfies
\[d_{\ell}^{(j)}=\mathbb{E}\left[h_{2}\left(Y_{1}^{(j)},\tilde{Y}_{1}^{(j)} \right)\right]+\mathbb{E}\left[h_{2}\left(Y_{1}^{(j-1)},\tilde{Y}_{1}^{(j-1)} \right)\right]-2\mathbb{E}\left[h_{2}\left(\tilde{Y}_{1}^{(j)},Y_{1}^{(j-1)} \right)\right].\]
The weight function \(w_{1}\) is commonly referred to as the Gaussian weight function. Both \(w_{1}\) and \(w_{2}\) are unit integrable and separable in their arguments, such that \(d_{\ell}^{(j)}\) is well-defined due to the boundedness of the characteristic function. We provide an alternative weight function in Appendix A.2 and also refer to Fan et al. (2017) for other suitable choices.
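For reference, a minimal sketch of the two kernels of Lemma 2 is given below (Python/NumPy; the accompanying software mentioned in Section 1 is an R package, so this is purely illustrative). The inputs are the lagged pairs \(Y_{t}=(X_{t},X_{t+\ell})\) viewed as vectors in \(\mathbb{R}^{2p}\), and the tuning constants \(\beta\) and \(\delta\) are user-chosen, e.g. via the median heuristic discussed in Section 4.

```python
import numpy as np

def h1(x, y, beta=1.0):
    """Kernel of Lemma 2 (i): h1(x, y) = exp(-beta^2 ||x - y||^2 / 2)."""
    d = np.asarray(x, float) - np.asarray(y, float)
    return float(np.exp(-0.5 * beta ** 2 * np.sum(d ** 2)))

def h2(x, y, delta=1.0):
    """Kernel of Lemma 2 (ii): a product over the 2p coordinates of x - y."""
    d = np.asarray(x, float) - np.asarray(y, float)
    return float(np.prod((2.0 * delta - d ** 2) * np.exp(-d ** 2 / (4.0 * delta)) / (2.0 * delta)))
```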
_Remark 1_.: From Lemma 2, \(d_{\ell}^{(j)}\) can be viewed as the squared maximum mean discrepancy (MMD) on a suitably defined reproducing kernel Hilbert space with the associated kernel function; see Lemma 6 of Gretton et al. (2012) and Section 2.6 of Celisse et al. (2018). We also note the literature on the (auto)distance correlation for measuring and testing dependence in multivariate (Szekely et al., 2007) and time series (Zhou, 2012; Fokianos and Pitsillou, 2017; Davis et al., 2018) settings.
## 3 Methodology
### NP-MOJO: nonparametric MOSUM procedure for detecting changes in the joint characteristic function
The identities given in Lemma 2 allow for the efficient computation of the statistics approximating \(d_{\ell}^{(j)}\) and their weighted sums, which forms the basis for the NP-MOJO procedure for detecting multiple change points from a multivariate time series \(\{X_{t}\}_{t=1}^{n}\) under the model (1). Throughout, we present the procedure with a generic kernel \(h\) associated with some weight function \(w\). We first introduce NP-MOJO for the problem of detecting changes in the joint distribution of \(Y_{t}=(X_{t},X_{t+\ell})\) at a given lag \(\ell\geq 0\), and extend it to the multi-lag problem in Section 3.3.
For fixed bandwidth \(G\in\mathbb{N}\), NP-MOJO scans the data using a detector statistic computed on neighbouring moving windows of length \(G\), which approximates the discrepancy between the local joint characteristic functions of the corresponding windows measured analogously as in (2). Specifically, the detector statistic at location \(k\) is given by the following two-sample \(V\)-statistic:
\[T_{\ell}(G,k)=\frac{1}{(G-\ell)^{2}}\left(\sum_{s,t=k-G+1}^{k-\ell}h(Y_{s},Y_ {t})+\sum_{s,t=k+1}^{k+G-\ell}h(Y_{s},Y_{t})-2\sum_{s=k-G+1}^{k-\ell}\sum_{t= k+1}^{k+G-\ell}h(Y_{s},Y_{t})\right)\]
for \(k=G,\ldots,n-G\), as an estimator of the local discrepancy measure
\[\mathcal{D}_{\ell}(G,k)=\sum_{j=0}^{q}\left(\frac{G-\ell-|k-\theta_{j}|}{G- \ell}\right)^{2}d_{\ell}^{(j)}\cdot\mathbb{I}\{|k-\theta_{j}|\leq G-\ell\}. \tag{3}\]
We have \(\mathcal{D}_{\ell}(G,k)=0\) when the section of the data \(\{X_{t},\,|t-k|\leq G-\ell\}\) does not undergo a change and accordingly, \(T_{\ell}(G,k)\) is expected to be close to zero. On the other hand, if \(|k-\theta_{j}|<G-\ell\), then \(\mathcal{D}_{\ell}(G,k)\) increases and then decreases around \(\theta_{j}\) with a local maximum at \(k=\theta_{j}\), and \(T_{\ell}(G,k)\) is expected to behave similarly. We illustrate this using the following example.
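A direct, unoptimised implementation of the detector statistic is sketched below. It precomputes the full kernel matrix and evaluates each window sum from scratch, rather than using the \(O(G)\) sequential updates of Appendix A.1, and the function and argument names are our own; the `kernel` argument can be either of the kernels sketched after Lemma 2.

```python
import numpy as np

def np_mojo_detector(X, G, ell, kernel, **kwargs):
    """Compute T_ell(G, k) for k = G, ..., n - G from data X of shape (n, p),
    via a naive evaluation from the full kernel matrix K[s, t] = h(Y_s, Y_t)."""
    X = np.atleast_2d(np.asarray(X, float))
    if X.shape[0] < X.shape[1]:
        X = X.T                                        # ensure shape (n, p)
    n = X.shape[0]
    Y = np.hstack([X[: n - ell], X[ell:]])             # lagged pairs Y_t = (X_t, X_{t+ell})
    m = Y.shape[0]
    K = np.array([[kernel(Y[s], Y[t], **kwargs) for t in range(m)] for s in range(m)])
    ks = np.arange(G, n - G + 1)                       # candidate locations (1-based)
    T = np.empty(len(ks))
    for i, k in enumerate(ks):
        left = np.arange(k - G, k - ell)               # 0-based indices of {k-G+1, ..., k-ell}
        right = np.arange(k, k + G - ell)              # 0-based indices of {k+1, ..., k+G-ell}
        A = K[np.ix_(left, left)].sum()                # within the left window
        B = K[np.ix_(right, right)].sum()              # within the right window
        C = K[np.ix_(left, right)].sum()               # across the two windows
        T[i] = (A + B - 2.0 * C) / (G - ell) ** 2
    return ks, T
```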
**Example 1**.: A univariate time series \(\{X_{t}\}_{t=1}^{n}\) of length \(n=1000\) is generated as \(X_{t}=\mu_{t}+\varepsilon_{t}\), where \(\mu_{t}=0.7\cdot\mathbb{I}\{t>\theta_{1}\}\) and \(\varepsilon_{t}=\varepsilon_{t}^{(1)}\cdot\mathbb{I}\{t<\theta_{2}\}+ \varepsilon_{t}^{(2)}\cdot\mathbb{I}\{t\geq\theta_{2}\}\), with \(\theta_{1}=300\) and \(\theta_{2}=650\). Each \(\varepsilon_{t}^{(j)}\) is an autoregressive (AR) process of order 1: \(\varepsilon_{t}^{(1)}=0.5\varepsilon_{t-1}^{(1)}+W_{t}\) and \(\varepsilon_{t}^{(2)}=-0.5\varepsilon_{t-1}^{(2)}+W_{t}\), where \(\{W_{t}\}_{t\in\mathbb{Z}}\) is a white noise process with \(\mathsf{Var}(W_{t})=1-0.5^{2}\). This choice leads to \(\mathsf{Var}(X_{t})=1\) for all \(t\); see the top panel of Figure 1 for a realisation. Then, the mean shift at \(\theta_{1}\) is detectable at all lags while the autocorrelation change at \(\theta_{2}\) is detectable at odd lags only, i.e. \(d_{\ell}^{(2)}=0\) for even \(\ell\geq 0\). The bottom panel of Figure 1 plots \(T_{\ell}(G,k)\), \(G\leq k\leq n-G\), computed using kernel \(h_{2}\) in Lemma 2 (ii) with \(G=166\). At lag \(\ell=0\), the detector statistic forms a prominent peak around \(\theta_{1}\) but it is flat around \(\theta_{2}\); at lag \(\ell=1\), the statistic \(T_{1}(G,k)\) forms local maxima around both \(\theta_{j}\), \(j=1,2\).
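A short sketch reproducing the data-generating process of Example 1 is given below; the innovations \(W_{t}\) are taken to be Gaussian, which is an assumption since only the white noise variance is specified above. Feeding the output into the detector sketch above (with lag \(\ell=1\) and \(G=166\)) should reproduce the qualitative behaviour shown in Figure 1.

```python
import numpy as np

def simulate_example1(n=1000, theta1=300, theta2=650, seed=1):
    """Example 1: mean shift of 0.7 at theta1 and an AR(1) coefficient change
    (0.5 -> -0.5) at theta2, with Var(X_t) = 1 throughout; Gaussian innovations assumed."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=np.sqrt(1.0 - 0.5 ** 2), size=n)   # Var(W_t) = 1 - 0.5^2
    eps1 = np.zeros(n)
    eps2 = np.zeros(n)
    eps1[0] = W[0]
    eps2[0] = W[0]
    for t in range(1, n):
        eps1[t] = 0.5 * eps1[t - 1] + W[t]
        eps2[t] = -0.5 * eps2[t - 1] + W[t]
    t_idx = np.arange(1, n + 1)
    eps = np.where(t_idx < theta2, eps1, eps2)
    mu = 0.7 * (t_idx > theta1)
    return mu + eps

# e.g. ks, T1 = np_mojo_detector(simulate_example1(), G=166, ell=1, kernel=h2, delta=1.0)
# (in practice delta would be set by the median heuristic of Section 4)
```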
Based on these observations, it is reasonable to detect and locate the change points in the joint distribution of \((X_{t},X_{t+\ell})\) as significant local maximisers of \(T_{\ell}(G,k)\). We adopt the selection criterion, first considered by Eichinger and Kirch (2018) in the context of detecting mean shifts from univariate time series, for simultaneous estimation of multiple change points. For some fixed constant \(\eta\in(0,1)\) and a threshold \(\zeta_{\ell}(n,G)>0\), we identify any local maximiser of \(T_{\ell}(G,k)\), say \(\widehat{\theta}\), which satisfies
\[T_{\ell}(G,\widehat{\theta})>\zeta_{\ell}(n,G)\quad\text{and}\quad\widehat{ \theta}=\arg\max_{k:\,|k-\widehat{\theta}|\leq\eta G}T_{\ell}(G,k). \tag{4}\]
Figure 1: Top: time series of length \(n=1000\) with change points \(\theta_{1}=300\) and \(\theta_{2}=650\) (vertical dashed lines), see Example 1. Bottom: corresponding detector statistics \(T_{\ell}(G,k)\) computed at lags \(\ell=0\) (dashed) and \(\ell=1\) (solid).
We denote the set of such estimators fulfilling (4) by \(\widehat{\Theta}_{\ell}\) with \(\widehat{q}_{\ell}=|\widehat{\Theta}_{\ell}|\). The choice of \(\zeta_{\ell}(n,G)\) is discussed in Section 3.4.
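The selection rule (4) can be implemented directly once the detector path and a threshold are available; a minimal sketch follows, with the threshold \(\zeta_{\ell}(n,G)\) supplied externally (e.g. from the bootstrap of Section 3.4), and the additional pruning of spurious estimators described in Section 4 omitted.

```python
import numpy as np

def select_change_points(ks, T, threshold, G, eta=0.4):
    """Estimators satisfying (4): locations where T exceeds the threshold and is
    maximal over the surrounding window of half-width eta * G."""
    w = int(np.floor(eta * G))
    cps = []
    for i in range(len(ks)):
        lo, hi = max(0, i - w), min(len(T), i + w + 1)
        if T[i] > threshold and T[i] >= T[lo:hi].max():
            cps.append(int(ks[i]))
    return cps
```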
### Theoretical properties
For some finite integer \(\ell\geq 0\), we define the index set of the change points _detectable_ at lag \(\ell\) as \(\mathcal{I}_{\ell}=\{1\leq j\leq q:\,d_{\ell}^{(j)}\neq 0\}\), and denote its cardinality by \(q_{\ell}=|\mathcal{I}_{\ell}|\leq q\). Not all change points are detectable at all lags, see Example 1 where we have \(\mathcal{I}_{0}=\{1\}\) and \(\mathcal{I}_{1}=\{1,2\}\). In this section, we show that the single-lag NP-MOJO described in Section 3.1 consistently estimates the total number \(q_{\ell}\) and the locations \(\{\theta_{j},\,j\in\mathcal{I}_{\ell}\}\) of the change points detectable at lag \(\ell\), by \(\widehat{\Theta}_{\ell}\).
Writing \(g_{ti}(\cdot)=\sum_{j=0}^{q}g_{i}^{(j)}(\cdot)\cdot\mathbb{I}\{\theta_{j}+1 \leq t\leq\theta_{j+1}\}\), define \(X_{ti,\{t-s\}}=g_{ti}(\mathcal{F}_{t,\{t-s\}})\), where \(\mathcal{F}_{t,\{t-s\}}=\sigma(\ldots,\varepsilon_{t-s-1},\tilde{\varepsilon}_{t-s}, \varepsilon_{t-s+1},\ldots,\varepsilon_{t})\) is a coupled version of \(\mathcal{F}_{t}\) with \(\varepsilon_{t-s}\) replaced by its independent copy \(\tilde{\varepsilon}_{t-s}\). For a random variable \(Z\) and \(\nu>0\), let \(\|Z\|_{\nu}=(\mathbb{E}(|Z|^{\nu}))^{1/\nu}\). Analogously to Xu et al. (2022a), we define the element-wise functional dependence measure and its cumulative version as
\[\delta_{s,\nu,i}=\sup_{t\in\mathbb{Z}}\|X_{ti}-X_{ti,\{t-s\}}\|_{\nu}\ \ \text{and}\ \ \Delta_{m,\nu}=\max_{1\leq i\leq p}\sum_{s=m}^{\infty}\delta_{s,\nu,i},\ m\in \mathbb{Z}. \tag{5}\]
Then, we make the following assumptions on the degree of serial dependence in \(\{X_{t}\}_{t=1}^{n}\).
**Assumption 1**.: There exist some constants \(C_{F},C_{X}\in(0,\infty)\) and \(\gamma_{1}\in(0,2)\) such that
\[\sup_{m\geq 0}\exp(C_{F}m^{\gamma_{1}})\Delta_{m,2}\leq C_{X}.\]
**Assumption 2**.: The time series \(\{X_{t}\}_{t=1}^{n}\) is continuous and \(\beta\)-mixing with \(\beta(m)\leq C_{\beta}m^{-\gamma_{2}}\) for some constants \(C_{\beta}\in(0,\infty)\) and \(\gamma_{2}\geq 1\), where
\[\beta(m)=\sup_{t\in\mathbb{Z}}\left(\sup\frac{1}{2}\sum_{r=1}^{R}\sum_{s=1}^{S }|\mathsf{P}(A_{r}\cap B_{s})-\mathsf{P}(A_{r})\mathsf{P}(B_{s})|\right).\]
Here, the inner supremum is taken over all pairs of finite partitions \(\{A_{1},\ldots,A_{R}\}\) of \(\mathcal{F}_{t}=\sigma(\varepsilon_{u},\,u\leq t)\) and \(\{B_{1},\ldots,B_{S}\}\) of \(\sigma(\varepsilon_{u},\,u\geq t+m)\).
Assumptions 1 and 2 require the serial dependence in \(\{X_{t}\}_{t=1}^{n}\), measured by \(\Delta_{m,2}\) and \(\beta(m)\), to decay exponentially, and both are met by a range of linear and non-linear processes (Wu, 2005; Mokkadem, 1988). Under Assumption 1, we have \(\|X_{it}\|_{2}<\infty\) for all \(i\) and \(t\). Assumption 1 is required for bounding \(T_{\ell}(G,k)-\mathbb{E}[T_{\ell}(G,k)]\) uniformly over \(k\), while Assumption 2 is used for controlling the bias \(\mathbb{E}[T_{\ell}(G,k)]-\mathcal{D}_{\ell}(G,k)\) which is attributed to serial dependence. A condition similar to Assumption 2 is often found in the time series
literature making use of distance correlations, see e.g. Davis et al. (2018) and Yousuf and Feng (2022).
**Assumption 3**.: The kernel function \(h\) is symmetric and bounded, and can be written as \(h(x,y)=h_{0}(x-y)\) for some function \(h_{0}:\mathbb{R}^{2p}\to\mathbb{R}\) that is Lipschitz continuous with respect to \(\|\cdot\|\) with Lipschitz constant \(C_{h}\in(0,\infty)\).
Assumption 3 on the kernel function \(h\) is met by \(h_{1}\) and \(h_{2}\) introduced in Lemma 2, with constants \(C_{h}\) bounded by \(\beta e^{-1/2}\) and \(2\sqrt{2}p^{3/2}\delta^{-1/2}\), respectively.
**Assumption 4**.:
* \(G=G_{n}\,\text{satisfies}\;G^{-1}\log(n)\to 0\;\text{as}\;n\to\infty\), and \(\min_{0\leq j\leq q}(\theta_{j+1}-\theta_{j})\geq 2G\).
* \(\sqrt{G/\log(n)}\min_{j\in\mathcal{I}_{\ell}}d_{\ell}^{(j)}\to\infty\).
Recall that \(\mathcal{I}_{\ell}\) denotes the index set of detectable change points at lag \(\ell\), i.e. \(d_{\ell}^{(j)}>0\) iff \(j\in\mathcal{I}_{\ell}\). However, this definition of detectability is too weak to ensure that all \(\theta_{j},\,j\in\mathcal{I}_{\ell}\), are detected by NP-MOJO with high probability at lag \(\ell\), since we do not rule out the case of local changes where \(d_{\ell}^{(j)}\ \to\ 0\). Consider Example 1: the change in the autocorrelations results in \(d_{\ell}^{(2)}>0\) for all odd \(\ell\) but the size of change is expected to decay exponentially fast as \(\ell\) increases. Assumption 4 allows for local changes provided that \(\sqrt{G/\log(n)}d_{\ell}^{(j)}\) diverges sufficiently fast. Assumption 4 (i) on the minimum spacing of change points, is commonly imposed in the literature on change point detection using moving window-based procedures. Assumption 4 does not rule out \(G/n\to 0\) and permits the number of change points \(q\) to increase in \(n\). We discuss the selection of bandwidth in Section 4.
**Theorem 1**.: Let Assumptions 1, 2, 3 and 4 hold and \(\ell\geq 0\) be a finite integer, and set the threshold as \(\zeta_{\ell}(n,G)=c_{\zeta}\sqrt{\log(n)/G}\) for some constant \(c_{\zeta}>0\). Then, there exists \(c_{0}>0\), depending only on \(C_{F}\), \(C_{X}\), \(\gamma_{1}\), \(C_{\beta}\) and \(\gamma_{2}\), such that as \(n\to\infty\),
\[\mathsf{P}\left(\widehat{q}_{\ell}=q_{\ell},\,\max_{j\in\mathcal{I}_{\ell}} \min_{\widehat{\theta}\in\widehat{\Theta}_{\ell}}\;d_{\ell}^{(j)}|\widehat{ \theta}-\theta_{j}|\leq c_{0}\sqrt{G\log(n)}\right)\to 1.\]
Theorem 1 establishes that, for given \(\ell\), NP-MOJO correctly estimates the total number and the locations of the change points detectable at lag \(\ell\). In particular, by Assumption 4, the change point estimators satisfy
\[\min_{\widehat{\theta}\in\widehat{\Theta}_{\ell}}\;|\widehat{\theta}-\theta_ {j}|=O_{P}((d_{\ell}^{(j)})^{-1}\sqrt{G\log(n)})=o_{P}(\min(\theta_{j}-\theta _{j-1},\theta_{j+1}-\theta_{j}))\;\text{ for all }\;j\in\mathcal{I}_{\ell},\]
i.e. the change point estimators converge to the true change point locations in the rescaled time. Further, the rate of estimation is inversely proportional to the size of change \(d_{\ell}^{(j)}\), such that the change points associated with larger \(d_{\ell}^{(j)}\) are estimated with better accuracy. Also
making use of the energy-based distributional discrepancy, Matteson and James (2014) establish the consistency of their proposed E-Divisive method for detecting changes in (marginal) distribution under independence. In addition to detection consistency, we further derive the rate of estimation for NP-MOJO which is applicable to detect changes in complex time series dependence besides those in marginal distribution, in broader situations permitting serial dependence.
### Multi-lag extension of NP-MOJO
In this section, we address the problem of combining the results of the NP-MOJO procedure when it is applied with multiple lags. Let \(\mathcal{L}\subset\mathbb{N}_{0}=\{0,1,\ldots\}\) denote a (finite) set of non-negative integers. Recall that given \(\ell\in\mathcal{L}\), NP-MOJO returns a set of change point estimators \(\widehat{\Theta}_{\ell}\). Denote the union of change point estimators over all lags in \(\mathcal{L}\) by \(\widetilde{\Theta}=\bigcup_{\ell\in\mathcal{L}}\widehat{\Theta}_{\ell}=\{ \widetilde{\theta}_{j},\,1\leq j\leq Q:\,\widetilde{\theta}_{1}<\ldots< \widetilde{\theta}_{Q}\}\), and denote by \(\mathbb{T}(\widetilde{\theta})=\max_{\ell\in\mathcal{L}}T_{\ell}(G,\widetilde {\theta})\) the maximum detector statistic at \(\widetilde{\theta}\) across all \(\ell\in\mathcal{L}\). We propose to find a set of the final change point estimators \(\widehat{\Theta}\subset\widetilde{\Theta}\) by taking the following steps; we refer to this procedure as multi-lag NP-MOJO.
**Step 0.**: Set \(\widehat{\Theta}=\emptyset\) and select a constant \(c\in(0,2]\).
**Step 1.**: Set \(\widetilde{\Theta}_{1}=\widetilde{\Theta}\) and \(m=1\). Iterate Steps 2-4 for \(m=1,2,\ldots\), while \(\widetilde{\Theta}_{m}\neq\emptyset\).
**Step 2.**: Let \(\widetilde{\theta}_{m}=\min\,\widetilde{\Theta}_{m}\) and identify \(\mathcal{C}_{m}=\{\widetilde{\theta}\in\widetilde{\Theta}_{m}:\,\widetilde{ \theta}-\widetilde{\theta}_{m}<cG\}\).
**Step 3.**: Identify \(\widehat{\theta}_{m}=\arg\max_{\widetilde{\theta}\in\mathcal{C}_{m}}\mathbb{ T}(\widetilde{\theta})\); if there is a tie, we arbitrarily break it.
**Step 4.**: Add \(\widehat{\theta}_{m}\) to \(\widehat{\Theta}\) and update \(m\gets m+1\) and \(\widetilde{\Theta}_{m}=\widetilde{\Theta}_{m-1}\setminus\mathcal{C}_{m-1}\).
At iteration \(m\) of the multi-lag NP-MOJO, Step 2 identifies the minimal element from the current set of candidate change point estimators \(\widetilde{\Theta}_{m}\), and a cluster of estimators \(\mathcal{C}_{m}\) whose elements are expected to detect the identical change points from multiple lags. Then, Step 3 finds an estimator \(\widehat{\theta}\in\mathcal{C}_{m}\), which is associated with the largest detector statistic at some lag, and it is added to the set of final estimators. This choice is motivated by Theorem 1, which shows each \(\theta_{j}\) is estimated with better accuracy at the lag associated with the largest change in the lagged dependence (measured by \(d_{\ell}^{(j)}\)). Iterating these steps until all the elements of \(\widetilde{\Theta}\) are either added to \(\widehat{\Theta}\) or discarded, we obtain the set of final change point estimators.
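A compact sketch of Steps 0-4 is given below: candidates from all lags are pooled, clustered from the left within \(cG\), and one representative per cluster is retained according to a score, which can be the maximum detector statistic \(\mathbb{T}(\widetilde{\theta})\) as in Step 3 or the bootstrap importance score of Section 3.4. The data structures are our own choice, not those of the accompanying software.

```python
def merge_multilag(estimates, scores, G, c=1.0):
    """Merge per-lag estimators (Steps 0-4).  `estimates` maps lag -> list of
    estimated change points; `scores` maps lag -> list of matching scores."""
    pool = sorted((theta, scores[ell][i])
                  for ell, ests in estimates.items() for i, theta in enumerate(ests))
    merged = []
    while pool:
        anchor = pool[0][0]                              # left-most remaining candidate
        cluster = [item for item in pool if item[0] - anchor < c * G]
        merged.append(max(cluster, key=lambda item: item[1])[0])
        pool = pool[len(cluster):]                       # discard the whole cluster
    return merged
```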
We define a subset of \(\mathcal{L}\) containing the lags at which the \(j\)-th change point is detectable, as \(\mathcal{L}^{(j)}=\{\ell\in\mathcal{L}:\,d_{\ell}^{(j)}\neq 0\}\). Re-visiting Example 1, when we set \(\mathcal{L}=\{0,1\}\), it follows that \(\mathcal{L}^{(1)}=\{0,1\}\) and \(\mathcal{L}^{(2)}=\{1\}\). To establish the consistency of the multi-lag NP-MOJO, we formally assume that all changes points are detectable at some lag \(\ell\in\mathcal{L}\).
**Assumption 5**.: For \(\mathcal{L}\subset\mathbb{N}_{0}\) with \(L=|\mathcal{L}|<\infty\), we have \(\cup_{\ell\in\mathcal{L}}\mathcal{I}_{\ell}=\{1,\ldots,q\}\). Equivalently, \(\mathcal{L}^{(j)}\neq\emptyset\) for all \(j=1,\ldots,q\).
Under Assumptions 1-5, the consistency of the multi-lag NP-MOJO procedure is largely a consequence of Theorem 1. Assumption 4 (ii) requires that at any lag \(\ell\in\mathcal{L}\) and a given change point \(\theta_{j}\), we have either \(j\in\mathcal{I}_{\ell}\) with \(d_{\ell}^{(j)}\) large enough (in the sense that \(\sqrt{G/\log(n)}d_{\ell}^{(j)}\to\infty\)), or \(j\notin\mathcal{I}_{\ell}\) such that \(d_{\ell}^{(j)}=0\). Such a dyadic classification of the change points rules out the possibility that for some \(j\), we have \(d_{\ell}^{(j)}>0\) but \(d_{\ell}^{(j)}=O(\sqrt{\log(n)/G})\), in which case \(\theta_{j}\) may escape detection by NP-MOJO at lag \(\ell\). We therefore consider the following alternative:
**Assumption 6**.:
1. \(G=G_{n}\,\)satisfies \(G^{-1}\log(n)\to 0\,\)as \(n\to\infty\), and \(\min_{0\leq j\leq q}(\theta_{j+1}-\theta_{j})\geq 4G\).
2. \(\sqrt{G/\log(n)}\min_{1\leq j\leq q}\max_{\ell\in\mathcal{L}^{(j)}}d_{\ell}^{ (j)}\to\infty.\)
Compared to Assumption 4, Assumption 6 requires that the change points are further apart from one another relative to \(G\) by the multiplicative factor of two. At the same time, the latter only requires that for each \(j=1,\ldots,q\), there exists _at least one_ lag \(\ell\in\mathcal{L}\) at which \(d_{\ell}^{(j)}\) is large enough to guarantee the detection of \(\theta_{j}\) by NP-MOJO with large probability. Theorem 2 establishes the consistency of multi-lag NP-MOJO under either Assumption 4 or 6.
**Theorem 2**.: Suppose that Assumptions 1, 2, 3 and 5 hold and at each \(\ell\in\mathcal{L}\), we set \(\zeta_{\ell}(n,G)=c_{\zeta,\ell}\sqrt{\log(n)/G}\) with some constants \(c_{\zeta,\ell}>0\). Let \(\widehat{\Theta}=\{\widehat{\theta}_{j},\,1\leq j\leq\widehat{q}:\,\widehat{ \theta}_{1}<\ldots<\widehat{\theta}_{\widehat{q}}\}\) denote the set of estimators returned by multi-lag NP-MOJO with tuning parameter \(c\).
1. If Assumption 4 holds for all \(\ell\in\mathcal{L}\) and \(c=2\eta\) with \(\eta\in(0,1/2]\), then with \(c_{0}\) in Theorem 1, \[\mathsf{P}\left(\widehat{q}=q,\,\max_{1\leq j\leq q}\max_{\ell\in\mathcal{L}^ {(j)}}d_{\ell}^{(j)}\left|\widehat{\theta}_{j}-\theta_{j}\right|\leq c_{0} \sqrt{G\log(n)}\right)\to 1\ \ \text{as}\ \ n\to\infty.\]
2. If Assumption 6 holds and \(c=2\), then the conclusion of (i) holds.
Under Assumption 6 (ii), which is weaker than Assumption 4 (ii), we may encounter a situation where \(\sqrt{G/\log(n)}d_{\ell}^{(j)}=O(1)\) while \(d_{\ell}^{(j)}>0\) at some lag \(\ell\in\mathcal{L}\). Then, we cannot guarantee that such \(\theta_{j}\) is detected by NP-MOJO at lag \(\ell\) and, even so, we can only show that its estimator \(\widetilde{\theta}\in\widetilde{\Theta}_{\ell}\) satisfies \(|\widetilde{\theta}-\theta_{j}|=O(G)\). This requires setting the tuning parameter \(c\) maximally for the clustering in Step 2 of multi-lag NP-MOJO, see Theorem 2 (ii). At the same time, there exists a lag well-suited for the localisation of each change point and Step 3 identifies an estimator detected at such lag, and the final estimator inherits the rate of estimation attained at the favourable lag.
### Threshold selection via dependent wild bootstrap
Theorem 1 gives the choice of the threshold \(\zeta_{\ell}(n,G)=c_{\zeta}\sqrt{\log(n)/G}\) which guarantees the consistency of NP-MOJO in multiple change point estimation. The choice of \(c_{\zeta}\) influences
the finite sample performance of NP-MOJO but it depends on many unknown quantities involved in specifying the degree of serial dependence in \(\{X_{t}\}_{t=1}^{n}\) (see Assumptions 1 and 2), which makes the theoretical choice of little practical use. Resampling is popularly adopted for the calibration of change point detection methods including threshold selection. However, due to the presence of serial dependence, permutation-based approaches such as that adopted in Matteson and James (2014) or sample splitting adopted in Padilla et al. (2021) are inappropriate.
We propose to adopt the dependent wild bootstrap procedure proposed in Leucht and Neumann (2013), in order to approximate the quantiles of \(\max_{G\leq k\leq n-G}T_{\ell}(G,k)\) in the absence of any change point, from which we select \(\zeta_{\ell}(n,G)\).
Let \(\{W_{t}^{[r]}\}_{t=1}^{n-G}\) denote a bootstrap sequence generated as a Gaussian AR(1) process with \(\mathsf{Var}(W_{t}^{[r]})=1\) and the AR coefficient \(\exp(-1/b_{n})\), where the sequence \(\{b_{n}\}\) is chosen such that \(b_{n}=o(n)\) and \(\lim_{n\to\infty}b_{n}=\infty\). We construct bootstrap replicates using \(\{W_{t}^{[r]}\}_{t=1}^{n-G}\) as \(T_{\ell}^{[r]}=\max_{G\leq k\leq n-G}T_{\ell}^{[r]}(G,k)\), where
\[T_{\ell}^{[r]}(G,k)=\frac{1}{(G-\ell)^{2}}\left(\sum_{s,t=k-G+1 }^{k-\ell}\bar{W}_{s,k}^{[r]}\bar{W}_{t,k}^{[r]}h(Y_{s},Y_{t})+\sum_{s,t=k+1}^ {k+G-\ell}\bar{W}_{s-G,k}^{[r]}\bar{W}_{t-G,k}^{[r]}h(Y_{s},Y_{t})\right.\] \[\left.-2\sum_{s=k-G+1}^{k-\ell}\sum_{t=k+1}^{k+G-\ell}\bar{W}_{s,k }^{[r]}\bar{W}_{t-G,k}^{[r]}h(Y_{s},Y_{t})\right),\]
with \(\bar{W}_{t,k}^{[r]}=W_{t}^{[r]}-(G-\ell)^{-1}\sum_{u=k-G+1}^{k-\ell}W_{u}^{[r]}\). Independently generating \(\{W_{t}^{[r]}\}_{t=1}^{n-G}\) for \(r=1,\ldots,R\) (\(R\) denoting the number of bootstrap replications), we store \(T_{\ell}^{[r]}\) and select the threshold as \(\zeta_{\ell}(n,G)=q_{1-\alpha}(\{T_{\ell}^{[r]}\}_{r=1}^{R})\), the \((1-\alpha)\)-quantile of \(\{T_{\ell}^{[r]}\}_{r=1}^{R}\) for the chosen level \(\alpha\in(0,1]\). Additionally, we can compute the importance score for each \(\widehat{\theta}\in\widehat{\Theta}_{\ell}\) as
\[s(\widehat{\theta})=\frac{\left|\left\{1\leq r\leq R:\,T_{\ell}(G,\widehat{ \theta})\geq T_{\ell}^{[r]}\right\}\right|}{R+1}. \tag{6}\]
Taking a value between \(0\) and \(1\), the larger \(s(\widehat{\theta})\) is, the more likely it is that there exists a change point close to \(\widehat{\theta}\) empirically. The bootstrap procedure generalises to the multi-lag NP-MOJO straightforwardly. In practice, we observe that setting \(\widehat{\theta}_{j}=\arg\max_{\widehat{\theta}\in\mathcal{C}_{j}}s(\widehat{ \theta})\) (with some misuse of the notation, \(s(\cdot)\) is computed at the relevant lag for each \(\widetilde{\theta}\)) works well in Step 3 of multi-lag NP-MOJO. This is attributed to the fact that this score inherently takes into account the varying scale of the detector statistics at multiple lags and 'standardises' the importance of each estimator. In all numerical experiments, our implementation of multi-lag NP-MOJO is based on this choice of \(\widehat{\theta}_{j}\). We provide the algorithmic descriptions of NP-MOJO and its multi-lag extension in Algorithms 1 and 2 in Appendix A.3.
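A naive sketch of the dependent wild bootstrap and of the importance score (6) is given below. It recomputes each bootstrap statistic from the full kernel matrix, so it does not use the sequential updates mentioned in Section 4, and the function names are our own rather than those of the accompanying software.

```python
import numpy as np

def bootstrap_maxima(K, n, G, ell, R=499, b_n=None, seed=1):
    """Bootstrap replicates T^[r] = max_k T^[r]_ell(G, k) via the dependent wild
    bootstrap of Section 3.4; K[s, t] = h(Y_s, Y_t) is the 0-based kernel matrix."""
    rng = np.random.default_rng(seed)
    if b_n is None:
        b_n = 1.5 * n ** (1.0 / 3.0)                      # recommended choice of b_n
    a = np.exp(-1.0 / b_n)                                # AR(1) coefficient of the multipliers
    ks = np.arange(G, n - G + 1)
    maxima = np.empty(R)
    for r in range(R):
        Z = rng.normal(size=n - G)
        W = np.empty(n - G)                               # Gaussian AR(1) multipliers, Var = 1
        W[0] = Z[0]
        for t in range(1, n - G):
            W[t] = a * W[t - 1] + np.sqrt(1.0 - a ** 2) * Z[t]
        best = -np.inf
        for k in ks:
            left = np.arange(k - G, k - ell)              # left window (0-based)
            right = np.arange(k, k + G - ell)             # right window (0-based)
            Wb = W[left] - W[left].mean()                 # centred multipliers \bar W_{t,k}
            stat = (Wb @ K[np.ix_(left, left)] @ Wb
                    + Wb @ K[np.ix_(right, right)] @ Wb
                    - 2.0 * Wb @ K[np.ix_(left, right)] @ Wb) / (G - ell) ** 2
            best = max(best, stat)
        maxima[r] = best
    return maxima

def threshold_and_score(maxima, T_at_cp, alpha=0.1):
    """(1 - alpha) bootstrap threshold and the importance score (6) of an estimator."""
    zeta = float(np.quantile(maxima, 1.0 - alpha))
    score = float(np.sum(T_at_cp >= maxima)) / (len(maxima) + 1.0)
    return zeta, score
```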
## 4 Implementation of NP-MOJO
In this section, we discuss the computational aspects of NP-MOJO and provide recommendations for the choice of tuning parameters based on extensive numerical results.
**Computational complexity.** Owing to the MOSUM-based approach, the cost of sequentially computing \(T_{\ell}(G,k)\) from \(T_{\ell}(G,k-1)\) is \(O(G)\), giving the overall cost of computing \(T_{\ell}(G,k),\,G\leq k\leq n-G\), as \(O(nG)\). Exact details of the sequential update are given in Appendix A.1. The bootstrap procedure described in Section 3.4 is performed once per lag for simultaneously detecting multiple change points, in contrast with E-Divisive (Matteson and James, 2014) that requires the permutation-based testing to be performed for detecting each change point. With \(R\) bootstrap replications, the total computational cost is \(O(|\mathcal{L}|RnG)\) for multi-lag NP-MOJO using the set of lags \(\mathcal{L}\) and bootstrapping, as opposed to \(O(Rqn^{2})\) for E-Divisive.
**Kernel function.** Based on empirical performance, we recommend the use of the kernel function \(h_{2}\) in Lemma 2 (ii) with \(\delta\) set using the 'median trick', a common heuristic used in kernel-based methods (Li et al., 2019). Specifically, we set \(\delta\) to be half the median of all \(\|Y_{s}-Y_{t}\|^{2}\) involved in the calculation of \(T_{\ell}(G,k)\). For \(p\)-variate i.i.d. Gaussian data with common variance \(\sigma^{2}\), this corresponds to \(\delta\approx\sigma p\) as the dimension \(p\) increases (Ramdas et al., 2015).
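For illustration, a minimal sketch of the median heuristic is given below. The Gaussian-type kernel shown is only a stand-in (the exact form of \(h_{2}\) is the one given in Lemma 2 (ii)), and for simplicity the median is taken over all pairwise distances rather than only those entering \(T_{\ell}(G,k)\).

```python
import numpy as np
from scipy.spatial.distance import pdist, cdist

def median_heuristic_delta(Y):
    """delta = half the median of the pairwise squared Euclidean distances ||Y_s - Y_t||^2."""
    return 0.5 * np.median(pdist(Y, metric="sqeuclidean"))

def gaussian_kernel(A, B, delta):
    """Illustrative Gaussian-type kernel with bandwidth parameter delta (a stand-in for h_2)."""
    return np.exp(-cdist(A, B, metric="sqeuclidean") / delta)
```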
**Bandwidth.** Due to the nonparametric nature of NP-MOJO, it is advised to use a larger bandwidth than that shown to work well for the MOSUM procedure for univariate mean change detection (Eichinger and Kirch, 2018). In practice, the practitioner may have prior knowledge that aids the choice of \(G\). In our simulation studies and data applications, we set \(G=\lfloor n/6\rfloor\). It is often found that using multiple bandwidths and merging the results improves the adaptivity of moving window-based procedures, such as the 'bottom-up' merging proposed by Messer et al. (2014) or the localised pruning of Cho and Kirch (2022). We leave investigation into the multiscale extension of NP-MOJO for future research.
**Parameters for change point estimation.** We set \(\eta=0.4\) in (4) following the recommendation in Meier et al. (2021). For multi-lag NP-MOJO, we set \(c=1\) for clustering the estimators from multiple lags, a choice that lies between those recommended in Theorem 2 (i) and (ii), since we do not know whether Assumptions 4 or 6 hold in practice. To further guard against spurious estimators, we only accept those \(\widehat{\theta}\) that lie in intervals of length greater than \(\lfloor 0.02G\rfloor\) where the corresponding \(T_{\ell}(G,k)\) exceeds \(\zeta_{\ell}(n,G)\).
**Parameters for the bootstrap procedure.** The choice of \(b_{n}\) sets the level of dependence in the multiplier bootstrap sequences. Leucht and Neumann (2013) show that a necessary condition is that \(\lim_{n\to\infty}(b_{n}^{-1}+b_{n}n^{-1})=0\), giving a large freedom for choice of \(b_{n}\). We recommend \(b_{n}=1.5n^{1/3}\), which works well in practice. In all numerical experiments, we use \(R=499\) bootstrap replications with \(\alpha=0.1\).
**Set of lags \(\mathcal{L}\).** The choice of \(\mathcal{L}\) depends on the practitioner's interest and domain knowledge, a problem commonly faced by general-purpose change point detection methods, such as the choice of the quantile level in Vanegas et al. (2022), the parameter of interest in Zhao et al. (2022) and the estimating equation in Kirch and Reckruehm (2022). For example, for monthly data, using \(\mathcal{L}=\{0,3,12\}\) allows for detecting changes in the quarterly and yearly seasonality. Even when the interest lies in detecting changes in the marginal distribution only, it helps to jointly consider multiple lags, since any marginal distributional change is likely to result in changes in the joint distribution of \((X_{t},X_{t+\ell})\). In simulations, we use \(\mathcal{L}=\{0,1,2\}\) which works well not only for detecting changes in the mean and the second-order structure, but also for detecting changes in (non-linear) serial dependence and higher-order characteristics.
## 5 Simulation study
We conduct extensive simulation studies with varying change point scenarios (18 where \(q\geq 1\), 7 with \(q=0\)). We provide complete descriptions of the simulation studies in Appendix B where, for comparison, we consider not only nonparametric but also parametric data segmentation procedures well-suited to detect the types of changes in consideration, which include changes in the mean, second-order and higher-order moments and serial dependence. In this section, we briefly discuss a selection of the results and compare both single-lag and multi-lag NP-MOJO (denoted by NP-MOJO-\(\ell\) and NP-MOJO-\(\mathcal{L}\) respectively), with the nonparametric competitors: E-Divisive (Matteson and James, 2014), NWBS (Padilla et al., 2021), KCPA (Celisse et al., 2018; Arlot et al., 2019) and cpt.np (Haynes et al., 2017). E-Divisive and KCPA are applicable to multivariate data segmentation whilst NWBS and cpt.np are not. The scenarios are (all with \(n=1000\)):
1. (B5) \(X_{t}=\sum_{j=0}^{3}\Sigma_{j}^{1/2}\mathbb{I}\{\theta_{j}+1\leq t\leq\theta_{j+1}\}\cdot\varepsilon_{t}\), where \(\varepsilon_{t}=\left(\varepsilon_{1t},\varepsilon_{2t}\right)^{\top}\) with \(\varepsilon_{it}\sim_{\text{i.i.d.}}t_{5}\), \((\theta_{1},\theta_{2},\theta_{3})=(250,500,750)\), \(\Sigma_{0}=\Sigma_{2}=\left(\begin{smallmatrix}1&0\\ 0&1\end{smallmatrix}\right)\) and \(\Sigma_{1}=\Sigma_{3}=\left(\begin{smallmatrix}1&0.9\\ 0.9&1\end{smallmatrix}\right)\).
2. (C1) \(X_{t}=X_{t}^{(j)}=a_{j}X_{t-1}^{(j)}+\varepsilon_{t}\) for \(\theta_{j}+1\leq t\leq\theta_{j+1}\), where \(q=2\), \((\theta_{1},\theta_{2})=(333,667)\) and \((a_{0},a_{1},a_{2})=(-0.8,0.8,-0.8)\).
3. (C3) \(X_{t}=X_{t}^{(j)}=\sigma_{t}^{(j)}\varepsilon_{t}\) with \((\sigma_{t}^{(j)})^{2}=\omega_{j}+\alpha_{j}(X_{t-1}^{(j)})^{2}+\beta_{j}(\sigma_{t-1}^{(j)})^{2}\) for \(\theta_{j}+1\leq t\leq\theta_{j+1}\), where \(q=1\), \(\theta_{1}=500\), \((\omega_{0},\alpha_{0},\beta_{0})=(0.01,0.7,0.2)\) and \((\omega_{1},\alpha_{1},\beta_{1})=(0.01,0.2,0.7)\).
4. (D3) \(X_{t}=0.4X_{t-1}+\varepsilon_{t}\) where \(\varepsilon_{t}\sim_{\text{i.i.d.}}\mathcal{N}(0,0.5^{2})\) for \(t\leq\theta_{1}\) and \(t\geq\theta_{2}+1\), and \(\varepsilon_{t}\sim_{\text{i.i.d.}}\) Exponential\((0.5)-0.5\) for \(\theta_{1}+1\leq t\leq\theta_{2}\), with \(q=2\) and \((\theta_{1},\theta_{2})=(333,667)\).
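For concreteness, a minimal sketch of simulating scenario (C1) is given below. The innovations \(\varepsilon_{t}\) are taken to be i.i.d. standard normal here (the exact specification is given in Appendix B), and for simplicity the process is continued across segments rather than restarted.

```python
import numpy as np

def simulate_c1(n=1000, theta=(333, 667), coefs=(-0.8, 0.8, -0.8), seed=0):
    """Scenario (C1): piecewise AR(1) with sign changes in the autoregressive coefficient.

    The marginal variance 1/(1 - a_j^2) is the same on every segment, so only the
    serial dependence changes at the change points.
    """
    rng = np.random.default_rng(seed)
    bounds = (0,) + tuple(theta) + (n,)
    x = np.zeros(n)
    for j, a in enumerate(coefs):
        for t in range(bounds[j], bounds[j + 1]):
            x[t] = a * (x[t - 1] if t > 0 else 0.0) + rng.standard_normal()
    return x

X = simulate_c1()   # autocorrelation flips sign at t = 333 and t = 667
```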
The above scenarios consider: changes in the covariance of bivariate, non-Gaussian random vectors in (B5), changes in the autocorrelation (while the marginal variance stays unchanged) in (C1), a change in the parameters of a GARCH(1, 1) process in (C3), and changes in higher moments of serially dependent observations in (D3). For further discussions of these scenarios, see Appendix B.2.

Table 1 reports the distribution of the estimated number of change points and the average covering metric (CM) and V-measure (VM) over 1000 realisations. Taking values in \([0,1]\), values of CM and VM close to 1 indicate better accuracy in change point location estimation; see Appendix B.2 for their definitions. In the case of (C1), \(q_{\ell}=0\) except for \(q_{1}=2\), and thus we report \(\widehat{q}_{\ell}-q_{\ell}\) for single-lag NP-MOJO. Across all scenarios, NP-MOJO-\(\mathcal{L}\) shows good detection and estimation accuracy and demonstrates the efficacy of considering multiple lags; see (C3) and (D3) in particular. As the competitors are calibrated for the independent setting, they tend to either over- or under-detect the number of change points in the presence of serial dependence in (C1), (C3) and (D3). In Appendix B.2, we compare NP-MOJO against change point methods proposed for time series data, where it similarly performs well.
| Model | Method | \(\leq-2\) | \(-1\) | \(\mathbf{0}\) | \(1\) | \(\geq 2\) | CM | VM |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| (B5) | NP-MOJO-0 | 0.000 | 0.001 | **0.997** | 0.002 | 0.000 | 0.974 | 0.959 |
| | NP-MOJO-1 | 0.005 | 0.121 | **0.867** | 0.007 | 0.000 | 0.931 | 0.927 |
| | NP-MOJO-2 | 0.006 | 0.103 | **0.884** | 0.007 | 0.000 | 0.935 | 0.929 |
| | NP-MOJO-\(\mathcal{L}\) | 0.000 | 0.001 | **0.999** | 0.000 | 0.000 | 0.973 | 0.958 |
| | E-Divisive | **0.670** | 0.189 | 0.101 | 0.032 | 0.008 | 0.431 | 0.335 |
| | KCPA | 0.322 | 0.000 | **0.662** | 0.015 | 0.001 | 0.775 | 0.725 |
| (C1) | NP-MOJO-0 | – | – | **0.851** | 0.140 | 0.009 | – | – |
| | NP-MOJO-1 | 0.000 | 0.002 | **0.956** | 0.042 | 0.000 | 0.978 | 0.961 |
| | NP-MOJO-2 | – | – | **0.836** | 0.149 | 0.015 | – | – |
| | NP-MOJO-\(\mathcal{L}\) | 0.000 | 0.002 | **0.986** | 0.012 | 0.000 | 0.980 | 0.963 |
| | E-Divisive | 0.001 | 0.001 | 0.012 | 0.035 | **0.951** | 0.685 | 0.686 |
| | KCPA | **0.792** | 0.002 | 0.065 | 0.025 | 0.116 | 0.399 | 0.132 |
| | NWBS | 0.013 | 0.001 | 0.007 | 0.015 | **0.964** | 0.398 | 0.558 |
| | cpt.np | 0.000 | 0.000 | 0.002 | 0.003 | **0.995** | 0.593 | 0.647 |
| (C3) | NP-MOJO-0 | – | 0.409 | **0.533** | 0.056 | 0.002 | 0.744 | 0.484 |
| | NP-MOJO-1 | – | 0.236 | **0.682** | 0.081 | 0.001 | 0.819 | 0.633 |
| | NP-MOJO-2 | – | 0.299 | **0.626** | 0.073 | 0.002 | 0.787 | 0.571 |
| | NP-MOJO-\(\mathcal{L}\) | – | 0.210 | **0.727** | 0.062 | 0.001 | 0.823 | 0.645 |
| | E-Divisive | – | 0.032 | 0.327 | 0.211 | **0.430** | 0.742 | 0.602 |
| | KCPA | – | **0.418** | 0.262 | 0.171 | 0.149 | 0.667 | 0.370 |
| | NWBS | – | **0.895** | 0.048 | 0.020 | 0.037 | 0.525 | 0.069 |
| | cpt.np | – | 0.000 | 0.013 | 0.047 | **0.940** | 0.634 | 0.554 |

Table 1: Distribution of the estimated number of change points (\(\widehat{q}-q\), or \(\widehat{q}_{\ell}-q_{\ell}\) for single-lag NP-MOJO, reported in the columns \(\leq-2\) to \(\geq 2\)) and the average CM and VM over 1000 realisations. The modal value of \(\widehat{q}-q\) in each row is given in bold. Also, the best performance for each metric is underlined for each scenario.
## 6 Data applications
### California seismology measurements data set
We analyse a data set from the High Resolution Seismic Network, operated by the Berkeley Seismological Laboratory. Ground motion sensor measurements were recorded in three mutually perpendicular directions at 13 stations near Parkfield, California, USA for 740 seconds from 2am on December 23rd 2004. The data has previously been analysed in Xie et al. (2019) and Chen et al. (2022). Chen et al. (2022) pre-process the data by removing a linear trend and down-sampling, and the processed data is available in the ocd R package (Chen et al., 2020). According to the Northern California Earthquake Catalog, an earthquake of magnitude 1.47 Md hit near Atascadero, California (50 km away from Parkfield) at 02:09:54.01.
We analyse time series of dimension \(p=39\) and length \(n=2000\) by taking a portion of the data set between 544 and 672 seconds after 2am, which covers the time at which the earthquake occurred (594 seconds after). We apply the multi-lag NP-MOJO with tuning parameters selected as in Section 4, using \(G=333\) and the set of lags \(\mathcal{L}=\{0,\ldots,4\}\). We detect two changes at all lags; the first occurs between 603.712 and 603.968 seconds after
2am and may be attributed to the earthquake. As noted in Chen et al. (2022), P waves, which are the primary preliminary waves and arrive first after an earthquake, travel at up to 6 km/s in the Earth's crust, so covering the roughly 50 km from Atascadero to Parkfield takes about 8-9 seconds. This is consistent with the delay of approximately 9 seconds between the occurrence of the earthquake and the first change point detected by multi-lag NP-MOJO. We also note that, performing online change point analysis, Xie et al. (2019) and Chen et al. (2022) report a change at 603.584 and 603.84 seconds after 2am, respectively. The second change is detected between 626.176 and 626.496 seconds after 2am. It may correspond to the ending of the effect of the earthquake, as the sensors return to 'baseline' behaviour. Figure 2 plots the heat map of the data with each series standardised for ease of visualisation, along with the onset of the earthquake and the two change points detected by the multi-lag NP-MOJO. It suggests that, amongst other possible distributional changes, the time series undergoes mean shifts as found in Chen et al. (2022). We also examine the sample correlations computed on each of the three segments; see Figure 3, where the data exhibit a greater degree of correlation in segment 2 compared to the other two segments. Recalling that each station is equipped with three sensors, we notice that pairwise correlations from the sensors located at the same stations undergo greater changes in correlations. A similar observation is made about the sensors located at nearby stations.

Figure 2: Heat map of the standardised sensor data. Change points detected by multi-lag NP-MOJO are shown as vertical dashed lines, and the time of the earthquake is given by the solid vertical line.
### US recession data
We analyse the US recession indicator data set. Recorded quarterly between 1855 and 2021 (\(n=667\)), \(X_{t}\) is recorded as a 1 if any month in the quarter is in a recession (as identified by the Business Cycle Dating Committee of the National Bureau of Economic Research), and 0 otherwise. The data has previously been examined for change points under piecewise stationary autoregressive models for integer-valued time series in Hudecova (2013) and Diop and Kengne (2021). We apply the multi-lag NP-MOJO with \(G=111\) and \(\mathcal{L}=\{0,\ldots,4\}\). All
tuning parameters are set as recommended in Section 4, with one exception: \(\delta\) for the kernel \(h_{2}\). We select \(\delta=1\) for lag 0 and \(\delta=2\) for the other lags, since pairwise distances for binary data are either 0 or 1 when \(\ell=0\), such that the median heuristic would not work as desired.

Figure 3: Sample correlations from the three segments defined by the change point estimators.
At all lags, we detect a single change point located between 1933:Q1 and 1938:Q2. Multi-lag NP-MOJO estimates the change point at 1933:Q1, which is comparable to the previous analyses: Hudecova (2013) reports a change at 1933:Q1 and Diop and Kengne (2021) at 1932:Q4. The change coincides with the ending of the Great Depression and the beginning of World War II. The left panel of Figure 4 plots the detected change along with the sample average of \(X_{t}\) over the two segments (superimposed on \(\{X_{t}\}_{t=1}^{n}\)), showing that the frequency of recession is substantially lower after the change. The right panel plots the detector statistics \(T_{\ell}(G,k)\) at lags \(\ell\in\mathcal{L}\), divided by the respective threshold \(\zeta_{\ell}(n,G)\) obtained from the bootstrap procedure. The thus-standardised \(T_{4}(G,k)\), shown as a solid line, displays the change point with the most clarity, attaining the largest value over the widest interval above the threshold (standardised to be one). At lag 4, the detector statistic has the interpretation of measuring any discrepancy in the joint distribution of the recession indicator series and its yearly lagged values.
|
2302.12895 | Maximizing Miner Revenue in Transaction Fee Mechanism Design | Transaction fee mechanism design is a new decentralized mechanism design
problem where users bid for space on the blockchain. Several recent works
showed that the transaction fee mechanism design fundamentally departs from
classical mechanism design. They then systematically explored the mathematical
landscape of this new decentralized mechanism design problem in two settings:
in the plain setting where no cryptography is employed, and in a
cryptography-assisted setting where the rules of the mechanism are enforced by
a multi-party computation protocol. Unfortunately, in both settings, prior
works showed that if we want the mechanism to incentivize honest behavior for
both users as well as miners (possibly colluding with users), then the miner
revenue has to be zero. Although adopting a relaxed, approximate notion of
incentive compatibility gets around this zero miner-revenue limitation, the
scaling of the miner revenue is nonetheless poor.
In this paper, we show that if we make a mildly stronger reasonable-world
assumption than prior works, we can circumvent the known limitations on miner
revenue, and design auctions that generate optimal miner revenue. We also
systematically explore the mathematical landscape of transaction fee mechanism
design under the new reasonable-world and demonstrate how such assumptions can
alter the feasibility and infeasibility landscape. | Ke Wu, Elaine Shi, Hao Chung | 2023-02-24T21:13:15Z | http://arxiv.org/abs/2302.12895v3 | # Maximizing Miner Revenue in Transaction Fee Mechanism Design
###### Abstract
Transaction fee mechanism design is a new decentralized mechanism design problem where users bid for space on the blockchain. Several recent works showed that the transaction fee mechanism design fundamentally departs from classical mechanism design. They then systematically explored the mathematical landscape of this new decentralized mechanism design problem in two settings: in the plain setting where no cryptography is employed, and in a cryptography-assisted setting where the rules of the mechanism are enforced by a multi-party computation protocol. Unfortunately, in both settings, prior works showed that if we want the mechanism to incentivize honest behavior for both users as well as miners (possibly colluding with users), then the miner revenue has to be zero. Although adopting a relaxed, approximate notion of incentive compatibility gets around this zero miner-revenue limitation, the scaling of the miner revenue is nonetheless poor.
In this paper, we show that if we make a mildly stronger reasonable-world assumption than prior works, we can circumvent the known limitations on miner revenue, and design auctions that generate optimal miner revenue. We also systematically explore the mathematical landscape of transaction fee mechanism design under the new reasonable-world and demonstrate how such assumptions can alter the feasibility and infeasibility landscape.
###### Contents
* 1 Introduction
* 1.1 Our Results and Contributions
* 1.2 Additional Related Work
* 2 Technical Roadmap
* 2.1 Infinite Block Setting
* 2.2 Finite Block Setting
* 2.2.1 Limits of Strict Incentive Compatibility
* 2.2.2 Optimal Miner Revenue under Approximate Incentive Compatibility
* 2.3 Additional Results
* 3 Model and Definitions
* 3.1 MPC-Assisted Model
* 3.2 Defining Incentive Compatibility
* 4 Analysis of the LP-Based Mechanism
* 4.1 Preliminaries: Linear Algebra Tools
* 4.2 Proofs for the LP-Based Mechanism
* 5 Characterization for Finite Block Size
* 5.1 Characterization for Strict IC
* 5.1.1 Feasibility for \(c=1\)
* 5.1.2 Zero Social Welfare for Users When \(c\geq 2\)
* 5.2 Feasibility for Approximate IC: Diluted Threshold-Based Mechanism
* 6 Bounds on Miner Revenue
* 6.1 Optimality of \(\Theta(h)\)-Miner Revenue in \((h,\rho,c,d)\)-environment
* 6.2 Necessity of Bayesian Incentive Compatibility
* A Feasibility for Infinite Block Size
* A.1 MPC-Assisted, Parity-Based Mechanism
* A.2 MPC-Assisted, Threshold-Based Mechanism
## 1 Introduction
The transaction fee mechanism (TFM) is a new type of decentralized mechanism design problem in which users bid for space on the blockchain. Several recent works showed that TFM design fundamentally departs from classical mechanism design, and systematically explored this new design problem in two settings: the plain setting, where no cryptography is employed, and an MPC-assisted setting, where the rules of the mechanism are enforced by a multi-party computation protocol.
Chung and Shi [13] showed strong impossibility results in the plain model. One of the impossibilities they proved is the _0 miner-revenue_ limitation. Specifically, any TFM that simultaneously satisfies UIC and SCP must suffer from 0 miner revenue, i.e., all the payments from the users must be burnt rather than paid to the miner as transaction fee. Shi, Chung, and Wu [13] showed that the same 0 miner-revenue limitation holds even in the _MPC-assisted_ model, and even for _Bayesian_ (as opposed to ex post) notions of incentive compatibility. In both the plain and the MPC-assisted models, the 0 miner-revenue limitation holds regardless of whether the block size is finite or infinite, and even when the miner colludes with at most \(c=1\) user.
To broaden the design space, Shi et al. [13] additionally suggested relaxing the _strict_ incentive compatibility notion to _approximate_ incentive compatibility. Although with approximate incentive compatibility we can indeed circumvent the 0 miner-revenue barrier, unfortunately, Shi et al. showed a fundamental limitation: the miner revenue cannot scale proportionally as the magnitude of the bids increases [13].
In this paper, we ask the following natural question:
_How can we maximize the miner revenue in transaction fee mechanism design under reasonable assumptions?_
### Our Results and Contributions
Our hope is to consider a _mildly stronger "reasonable-world" type assumption_ than Shi et al. [13], thus allowing us to circumvent the 0 miner-revenue limitation in the MPC-assisted model (for strict incentive compatibility). More specifically, the mechanisms proposed by Shi et al. [13] _implicitly_ assume the following reasonable-world assumption: the strategic coalition controls no more than \(\rho\) fraction of the miners, and no more than \(c\) users3. We will consider a mildly stronger reasonable world which promises the following:
Footnote 3: Shi et al. [13] referred to this as achieving MIC against any \(\rho\)-sized miner coalition, achieving SCP against any \((\rho,c)\)-sized miner-user coalition.
1. Just like Shi et al. [13], we assume that the strategic coalition controls no more than \(\rho\) fraction of the miners, and no more than \(c\) users;
2. We additionally assume that there is an a-priori known lower bound \(h\) on the number of honest users; naturally, \(h\) is also a lower bound on the number of honest bids;
3. We additionally assume that the number of bids submitted by the strategic coalition is no more than some a-priori known parameter \(d\geq c\). In other words, here we are assuming while identities or pseudonyms may be cheap, they are not completely cost-free; and this is why the strategic coalition cannot make up arbitrarily many fake identities and submit arbitrarily many fake bids.
**New reasonable world: \((h,\rho,c,d)\)-environment.** Henceforth, if the above reasonable-world assumptions are promised to hold, we say that it is an \((h,\rho,c,d)\)-environment, where \(h\geq 1\), \(\rho\in(0,1)\), \(c\geq 1\), and \(d\geq c\) are a-priori known parameters. In this paper, we want to design mechanisms that satisfy UIC, MIC, and SCP in an \((h,\rho,c,d)\)-environment, while maximizing miner revenue.
In comparison, the mechanisms in the prior work of Shi et al. [13] receive only the a-priori parameters \(\rho\) and \(c\) as input, and there is no promise about \(h\) and \(d\), i.e., the resulting mechanisms must work for any choice of \(h\) and \(d\). Henceforth, we say that such mechanisms are _universal_ in the parameters \(h\) and \(d\), and that they satisfy UIC, MIC, and SCP in an \((*,\rho,c,*)\)_-environment_, where each \(*\) denotes a universal parameter.
An interesting observation is that the new strengthened reasonable world does not help us overcome the 0 miner-revenue limitation in the _plain_ model. By contrast, our extra assumptions indeed invalidate the proof of the 0 miner-revenue limitation in the _MPC-assisted_ model. In other words, _the 0 miner-revenue limitation actually holds in a stronger sense in the plain model [10] than the MPC-assisted model [11]_. In the plain model, the limitation holds even when 1) the only strategic play is to bid untruthfully, i.e., a strategic user (possibly colluding with the miner) never injects bids or drops out, and a strategic miner always implements the rules of the TFM honestly, and 2) the mechanism is promised that there are \(n-1\) honest bids for some fixed \(n>2\), and only one strategic bid. By contrast, in the MPC-assisted model, the proof of the 0 miner-revenue limitation (for Bayesian incentive compatibility) [11] critically depends on 1) a strategic user's ability to inject fake bids or drop out; and 2) the fact that the TFM must nonetheless provide (Bayesian) UIC, MIC, and SCP even when the world consists of only a single strategic user (possibly colluding with some miners), and no honest user at all.
Inspired by the above key insights, we prove several new results that jointly give a complete characterization of the miner revenue in \((h,\rho,c,d)\)-environments. Throughout the paper, we assume that in the MPC-assisted model, the total miner revenue is divided equally among the miners. Since we focus on Bayesian notions of equilibrium, we make the standard assumption that each honest user's bid is independently sampled from some a-priori known distribution \(\mathcal{D}\). Throughout, we assume that \(\mathcal{D}\) has bounded support.
**A complete characterization for the infinite block setting.** This setting is relevant because real-world blockchains like Ethereum use the recent past to adjust the reserve price, such that the vast majority of the time, the block size can accommodate all users who are willing to pay the reserve price, thus emulating an "infinite block size" scenario.
We show that assuming that block size is infinite and that \(c\leq d\leq\frac{1}{8}\sqrt{\frac{h}{2\log h}}\), one can indeed construct a TFM that guarantees Bayesian UIC, MIC, and SCP in an \((h,\rho,c,d)\)-environment, and meanwhile achieves \(\Theta(h)\) miner revenue in expectation (taken over the randomness of the mechanism as well as the honest users' bids). Importantly, we achieve _scalability_ in miner revenue in the sense that when we scale up the bid distribution \(\mathcal{D}\) by a factor of \(\alpha\), the miner revenue scales up by \(\alpha\) too. We also show that \(\Theta(h)\) miner revenue is asymptotically optimal for an \((h,\rho,c,d)\)-environment even for an arbitrarily small \(\rho\in(0,1)\), and even for \(c=d=1\). In particular, we cannot hope for asymptotically more than \(\Theta(h)\) expected miner revenue even when the actual number of honest users far exceeds the anticipated lower bound \(h\).
More formally, we prove the following theorems that give tightly matching upper and lower bounds on miner revenue for the infinite block setting.
**Theorem 1.1** (Achieving \(\Theta(h)\) miner revenue model under infinite block size).: _Suppose that the block size is infinite and that \(c\leq d\leq\frac{1}{8}\sqrt{\frac{h}{2\log h}}\). For any \(h\geq 1\), \(d\geq c\geq 1\), and \(\rho\in(0,1)\), there exists an MPC-assisted TFM that guarantees Bayesian UIC, MIC, and SCP in an \((h,\rho,c,d)\)-environment, and meanwhile the mechanism achieves \(\Theta(h\cdot M_{\mathcal{D}})\) expected miner revenue, and at least \(\Theta(\widetilde{h}\cdot C_{\mathcal{D}})\) expected social welfare for the users where \(\widetilde{h}\geq h\) is the actual number of honest users that show up, \(M_{\mathcal{D}}\) denotes the median of the bid distribution \(\mathcal{D}\), and \(C_{\mathcal{D}}=\mathbf{E}_{x\sim\mathcal{D}}[x-M_{\mathcal{D}}|x\geq M_{ \mathcal{D}}]\) is another constant related to the distribution \(\mathcal{D}\)._
**Theorem 1.2** (\(\Theta(h)\) miner revenue is optimal in \((h,\rho,c,d)\)-environments).: _Fix any \(h\geq 1\), \(d\geq c\geq 1\), and \(\rho\in(0,1)\). No MPC-assisted TFM that simultaneously satisfies Bayesian UIC, MIC, and SCP in an \((h,\rho,c,d)\)-environment can achieve more than \(h\cdot\mathbf{E}(\mathcal{D})\) expected miner revenue where
\(\mathbf{E}(\mathcal{D})\) denotes the expectation of the bid distribution \(\mathcal{D}\). Further, this limitation on miner revenue holds no matter whether the block size is finite or infinite._
**A complete characterization for the finite block setting.** In Theorem 1.1, we focus on the infinite block setting. An interesting question is whether we can get non-trivial results for the finite block setting too. We show how to modify the mechanism of Theorem 1.1 to work in the finite block setting, leading to the following theorem.
**Theorem 1.3** (Finite block, \(c=1\)).: _Suppose that the block size is \(k\). Fix \(c=1\), any \(h\geq 1\), any \(\rho\in(0,1)\), and any \(d\) such that \(c\leq d\leq\frac{1}{8}\sqrt{\frac{h}{2\log h}}\). Then, there exists an MPC-assisted TFM that satisfies ex post UIC, Bayesian MIC, and Bayesian SCP in an \((h,\rho,c,d)\)-environment; and moreover, the mechanism achieves \(\Theta(\min(h,k)\cdot M_{\mathcal{D}})\) expected miner revenue and at least \(\Theta(\min(\widetilde{h},k)\cdot C_{\mathcal{D}})\) expected social welfare for the users, where \(\widetilde{h}\), \(M_{\mathcal{D}}\), and \(C_{\mathcal{D}}\) defined in the same way as Theorem 1.1._
For \(c\geq 2\), it turns out that the answer is more subtle. The challenge for \(c\geq 2\) is that one user with a small true value can simply drop out to increase its friend's chances, and thus the aforementioned random down-selection idea would fail. On one hand, we show that there indeed exist some bid distributions \(\mathcal{D}\) such that for any choice of \(d\geq c\geq 2\) and \(\rho\in[0,1)\), using a slight modification of the mechanism of Theorem 1.1, we can get \(\Theta(h)\) expected miner revenue, while satisfying (Bayesian) UIC, MIC, and SCP in an \((h,\rho,c,d)\)-environment. Unfortunately, the resulting mechanism appears to degenerate in some respects in the sense that the social welfare for all users is \(0\). We prove that this is no accident: in fact, for any \(d\geq c\geq 2\) and \(\rho\in(0,1)\), any MPC-assisted TFM that simultaneously satisfies Bayesian UIC, MIC, and SCP must suffer from \(0\) total user social welfare when more than \(h\) users show up whose bids are sampled from \(\mathcal{D}\):
**Theorem 1.4** (Finite block, \(c\geq 2\): limit on social welfare).: _Suppose that the block size is finite, and fix any \(h\geq 1\), any \(d\geq c\geq 2\), and any \(\rho\in(0,1)\). Then, any MPC-assisted TFM that simultaneously satisfies Bayesian UIC, MIC and SCP in an \((h,\rho,c,d)\)-environment must suffer from \(0\) social welfare for the users under a bid vector \(\mathbf{b}\sim\mathcal{D}^{\ell}\) where \(\ell>h\)._
Due to the \(0\) social welfare limitation for strict incentive compatibility, it makes sense to consider approximate incentive compatibility for finite blocks and \(c\geq 2\). In this case, we show a feasibility result with asymptotic optimality in both miner revenue and social welfare, as stated below:
**Theorem 1.5** (Approximately incentive compatible mechanism for the finite block setting).: _Fix any \(h\geq 1\), \(c\geq 1\), and \(\rho\in(0,1)\). Suppose the block size is \(k\), and let \(\epsilon\geq M_{\mathcal{D}}\cdot\frac{h}{2}\cdot e^{-\frac{h}{16}}\). There is an MPC-assisted TFM that satisfies ex post UIC, Bayesian \(\epsilon\)-MIC, and Bayesian \(\epsilon\)-SCP in \((h,\rho,c,*)\)-environments. Moreover, for sufficiently large \(h\), the mechanism achieves \(\Theta(k\cdot M_{\mathcal{D}})\) expected miner revenue, and \(\Theta(k\cdot C_{\mathcal{D}})\) expected total user social welfare, where \(M_{\mathcal{D}}\) and \(C_{\mathcal{D}}\) are defined in the same way as Theorem 1.1._
**Additional results.** Interestingly, we show that all of our feasibility results, namely, Theorem 1.1, Theorem 1.3, and Theorem 1.5, critically rely on the Bayesian notion of equilibrium. As argued by Shi et al. [13], the Bayesian notion of equilibrium is suitable for the MPC-assisted model since the users cannot observe others' bids before submitting their own. Had we insisted on an _ex post_ notion of equilibrium in the MPC-assisted model, our additional reasonable-world assumptions would not help us overcome the previously known impossibility results. Specifically, we show that any MPC-assisted mechanism that simultaneously achieves ex post UIC and SCP in an
\((h,\rho,c,d)\)-environment must suffer from 0 miner revenue even for \(d=c=1\) and an arbitrarily small positive \(\rho\). More generally, for approximate but ex post notions of incentive compatibility, even the \((h,\rho,c,d)\)-environment is subject to the same miner revenue limit stated in Shi et al. [23]. Further, the above restrictions on miner revenue hold no matter whether the block size is finite or infinite.
### Additional Related Work
We now review some closely related recent works besides the prior works on transaction mechanism design [11, 12, 13, 14, 15, 16] already mentioned.
**TFM in a Bayesian setting.** The recent works of Gafni and Yaish [11] and Zhao, Chen, and Zhou [17] both consider TFM in a Bayesian setting. Although their works did not explicitly define the MPC-assisted model, from a practical standpoint, their results are in fact only relevant in an MPC-assisted (or a similar) model. As explained in Section 3.2 and Fact 3.3, plain-model TFMs that achieve _Bayesian_ equilibrium also achieve _ex post_ equilibrium, since in the plain-model game, the strategic player can decide its actions _after_ having observed honest users' bids.
Gafni and Yaish [11] suggest a mechanism that satisfies Bayesian UIC, while also satisfying MIC and OCA-proofness (short for offchain-agreement-proofness) even if the miner knows everyone's bid. Further, their mechanism works in the finite-block setting while achieving asymptotic optimality in social welfare and revenue. We stress that their result does not contradict the zero miner-revenue limitation proven by Shi et al. [23] since their OCA-proofness notion (originally defined by Roughgarden [14, 15]) is of a different nature from our side-contract-proofness (SCP) notion (originally defined by Chung and Shi [23]). Roughly speaking, OCA-proofness requires that a strategic coalition cannot enter an off-chain contract that increases _everyone_'s utility (_not just those in the coalition_) relative to what's achievable on-chain. In comparison, SCP is the notion that directly captures the cryptocurrency community's outpouring concerns about Miner Extractable Value (MEV). In particular, middleman platforms such as Flashbot facilitate the collusion of miners and users, where the coalition plays strategically to profit itself at the expense of other users. This is why we choose to use the SCP notion rather than OCA-proofness. Moreover, the reason why the cryptocurrency community is developing encrypted mempool techniques (which can be viewed as instantiations of the MPC-assisted model) is also because they care about SCP (i.e., resilience to MEV).
Zhao, Chen, and Zhou [17] suggest a mechanism that generates positive miner revenue while achieving Bayesian UIC and Bayesian 1-SCP even for the finite block setting. Their result does not contradict the 0-miner revenue limitation of Shi et al. [23], since Zhao et al. [17] consider only a restricted strategy space. In their work, a strategic user or a miner-user coalition can only deviate by bidding untruthfully; the coalition cannot inject fake bids, strategic users cannot drop out, and nor can strategic miners alter the inclusion rule. Due to their restricted strategy space, their results are only relevant under very stringent assumptions: 1) the TFM is implemented in the MPC-assisted (or similar) model; 2) the TFM is fully "permissioned" and allows only a set of pre-registered users to submit bids. In particular, the latter "permissioned" requirement is unrealistic for major decentralized cryptocurrencies today where any user can join and submit transactions.
**Cryptography meets game theory.** Prior to the advent of cryptocurrencies, a line of work explored the interplay between cryptography and game theory, including how cryptographic protocols can be used to replace the trusted mediator
in correlated equilibria [10]. Adopting game-theoretic fairness can allow us to circumvent lower bounds pertaining to the more stringent cryptographic notions of fairness [10, 11, 12, 13, 14]. Ferreira et al. [15] and Essaidi et al. [1] showed that cryptographic commitments can help us circumvent impossibilities pertaining to credible auctions. As Chung and Shi [13] explained in detail, credible auction is of a fundamentally different nature from transaction fee mechanism design.
## 2 Technical Roadmap
**Convention.** In both our paper and the prior work of Shi et al. [13], the \(\rho\) parameter is only needed to instantiate the underlying MPC protocol among the miners, since the MPC protocol needs to resist \(\rho\) fraction of corrupt miners.
As explained in Section 3, when we focus on the game theoretic analysis, it helps to abstract out the underlying MPC protocol and think of it as an ideal functionality (i.e., a trusted third party) whose job is to correctly compute the confirmation rule, payment rule, and miner revenue rule of the TFM. In this idealized model, all the mechanisms described in this paper as well as Shi et al. [13] actually achieve universality in the parameter \(\rho\). Henceforth throughout the paper, we will state the results in the idealized world with universality in \(\rho\).
### Infinite Block Setting
For the infinite block setting, we can achieve \(\Theta(h)\) miner revenue in \((h,\rho,c,d)\)-environments. To aid understanding, we break down the thought process into several intermediate steps, eventually leading to our final mechanism, the LP-based mechanism.
**Glimpse of hope.** First, consider the special case where we are promised that there is at least \(h=1\) honest user. In this case, the following simple parity-based mechanism satisfies ex post UIC, Bayesian MIC, and Bayesian SCP in \((1,*,*,*)\)-environments.
**MPC-assisted, parity-based mechanism**
_// Let \(m\) be the median of the distribution \(\mathcal{D}\) such that \(\Pr_{x\sim\mathcal{D}}[x\geq m]=1/2\)._
All bids that are at least \(m\) get confirmed and pay \(m\).
If the number of confirmed bids is odd, then the total miner revenue is \(m\); else the total miner revenue is \(0\).
In the above mechanism, as long as there is at least one honest bid, the expected miner revenue is always \(m/2\) no matter how the coalition behaves. With this key observation, it is not hard to see that the mechanism satisfies Bayesian MIC and Bayesian SCP (for an arbitrary \(c\)). Further, ex post UIC follows directly since the mechanism is a simple posted-price auction from a user's perspective.
**Remark 2.1**.: In the above, we assumed that the bid distribution \(\mathcal{D}\) has a median \(m\) such that \(\Pr_{x\sim\mathcal{D}}[x\geq m]=1/2\). In case the median \(m\) does not exactly equally divide the probability mass half and half, then it must be that \(\Pr_{x\sim\mathcal{D}}[x>m]<1/2\) and \(\Pr_{x\sim\mathcal{D}}[x<m]<1/2\). In this case, we can modify the above mechanism slightly as follows: if a user's bid is strictly greater than \(m\), then it is confirmed; if a user's bid is exactly \(m\), then we confirm it with some appropriate probability \(q\); else the user's bid is not confirmed. We can always pick a \(q\) such that a bid randomly sampled from \(\mathcal{D}\) is confirmed with probability exactly \(1/2\). Finally, the miner revenue rule is still decided the same way as before.
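For concreteness, a minimal sketch of the parity-based rules is given below (a plain simulation of the confirmation, payment, and miner-revenue rules, not of the MPC protocol itself; the bid values and the choice of \(m\) are illustrative).

```python
import random

def parity_based_mechanism(bids, m):
    """Confirm every bid >= m at price m; total miner revenue is m iff the number
    of confirmed bids is odd (all other payments are burnt)."""
    confirmed = [b for b in bids if b >= m]
    payments = [m] * len(confirmed)
    miner_revenue = m if len(confirmed) % 2 == 1 else 0.0
    return confirmed, payments, miner_revenue

# With at least one honest bid drawn from D (clearing m with probability 1/2), the
# parity of the confirmed set is a fair coin no matter what the other bids are, so
# the expected miner revenue is m/2 regardless of the coalition's behavior.
random.seed(1)
bids = [random.random() for _ in range(10)]   # illustrative bids; here m = 0.5 plays the median
print(parity_based_mechanism(bids, m=0.5))
```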
**Warmup: \(\Theta(h)\) revenue but approximate incentive compatibility.** The parity-based mechanism overcomes the \(0\) miner-revenue limitation of Shi et al. [13] by assuming the existence of at least \(h=1\) honest user. However, the drawback is obvious: the total miner revenue is severely restricted and does not increase w.r.t. the number of bids. A natural question is whether we can increase the expected miner revenue if we are promised more honest users. Unfortunately, there does not seem to be a straightforward way to extend the parity-based mechanism to achieve \(\Theta(h)\) revenue for a more general choice of \(h\).
As a stepping stone, we first consider an intermediate mechanism that aims only for _approximate_ incentive compatibility:
**MPC-assisted, threshold-based mechanism**
_// Let \(m\) be the median of the distribution \(\mathcal{D}\) such that \(\Pr_{x\sim\mathcal{D}}[x\geq m]=1/2\)._
* All bids that are at least \(m\) get confirmed and pay \(m\).
* If the number of confirmed bids is at least \(h/4\), then the miner revenue is \(m\cdot h/4\); else the total miner revenue is \(0\).
Due to the standard Chernoff bound, except with \(e^{-\Omega(h)}\) probability, the number of confirmed bids among the \(h\) (or more) honest bids is at least \(h/4\). Therefore, the above mechanism achieves at least \(m\cdot h/4\cdot(1-e^{-\Omega(h)})\) expected miner revenue. If the number of confirmed honest bids is \(h/4\) or higher, then the coalition cannot increase the miner revenue no matter how it behaves. Only when the number of confirmed honest bids is less than \(h/4\), is it possible for the coalition to influence the miner revenue by at most \(m\cdot h/4\). Therefore, it is not hard to see that the mechanism satisfies \(\epsilon\)-Bayesian MIC and \(\epsilon\)-Bayesian SCP in \((h,*,*,*)\)-environments, for \(\epsilon=m\cdot h\cdot e^{-\Omega(h)}/4\).
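For completeness, the Chernoff step can be spelled out as follows; the constant in the exponent matches the \(e^{-h/16}\) slack that reappears in Theorem 1.5. Each honest bid is confirmed independently with probability \(1/2\), so the number \(X\) of confirmed honest bids has mean \(\mu\geq h/2\), and by the multiplicative Chernoff bound,

\[\Pr\left[X<\frac{h}{4}\right]\leq\Pr\left[X\leq\left(1-\tfrac{1}{2}\right)\mu\right]\leq\exp\left(-\frac{(1/2)^{2}\mu}{2}\right)\leq e^{-h/16}.\]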
Just like before, in case the median \(m\) does not exactly divide the probability mass half and half, we can use the same approach of Remark 2.1 to modify the mechanism and make it work.
Finally, we remark that while in this paper, we use the threshold-based mechanism as a stepping stone towards getting strict incentive compatibility, the mechanism itself might be of interest in practical scenarios, due to its simplicity and the error tolerance in choosing the parameter \(m\).
**Strict incentive compatibility.** The drawback of the threshold-based mechanism is that it achieves only approximate incentive compatibility. Our final goal is to achieve \(\Theta(h)\) total miner revenue but with strict incentive compatibility. To achieve this, our idea is to devise a mechanism that is "close in distance" to the aforementioned threshold-based mechanism, but correcting the "error" such that we can achieve strict incentive compatibility.
Observe that the earlier threshold-based mechanism only needs an a-priori known lower bound \(h\) on the number of honest users, and it is universal in the parameters \(c\) and \(d\). To achieve strict incentive compatibility, we additionally assume that the number of bids contributed by the strategic coalition is upper bounded by some a-priori known parameter \(d\).
Now, consider the following mechanism that relies on linear programming to correct the error in the earlier threshold-based mechanism. For simplicity, we assume that the honest bid distribution \(\mathcal{D}\) has a median \(m\) such that \(\Pr_{x\sim\mathcal{D}}[x\geq m]=1/2\) -- if not, we can again use the technique of Remark 2.1 to modify the mechanism and make it work.
**MPC-assisted, LP-based mechanism**
_// Let \(m\) be the median of the distribution \(\mathcal{D}\), i.e., \(\Pr_{x\sim\mathcal{D}}[x\geq m]=1/2\)._
* All bids that are at least \(m\) get confirmed and pay \(m\).
* Let \(n\) be the length of the bid vector, let \(\mathbf{y}:=(y_{0},y_{1},\ldots,y_{n})\) be any feasible solution to the following linear program: \[\forall i\in[n]:\ 0\leq y_{i}\leq i\cdot m\] (1) \[\forall 0\leq j\leq d:\ \sum_{i=0}^{n-d}q_{i}\cdot y_{i+j}=\frac{m \cdot h}{4}\] (2) where \(q_{i}\) is the probability of observing \(i\) heads if we flip \(n-d\) independent fair coins.
* The total miner revenue is \(y_{s}\) where \(s\) is the number of bids confirmed.
In the above, Equation (1) expresses a _budget feasibility_ requirement, i.e., the total miner revenue cannot exceed the total user payment. Equation (2) expresses a _fixed-revenue requirement_ stipulating that the miner revenue must be exactly \(m\cdot h/4\) no matter how the strategic individual or coalition behaves (as long as it controls at most \(d\) bids). More specifically, Equation (2) contains one requirement for each \(j\in[0,d]\): conditioned on the fact that among the (at most) \(d\) bids controlled by the strategic individual or coalition, exactly \(j\) of them are confirmed, the expected miner revenue must be exactly \(m\cdot h/4\) where \(h\) is an a-priori known lower bound on the number of honest users.
**Remark 2.2**.: We know that the actual number of honest users that show up is at least \(\max(n-d,h)\). So if \(n-d>h\), it means that more honest users showed up than the anticipated number \(h\). Observe that on the left-hand side of Equation (2), we are tossing coins for \(n-d\) honest users' bids. However, it is important that the right-hand-side of Equation (2) use the a-priori known \(h\) rather than the observed \(n-d\); otherwise, injecting extra (but up to \(d-c\)) fake \(0\)-bids can increase the expected miner revenue, which violates MIC and SCP.
If the LP in the above mechanism indeed has a feasible solution, then we can prove that the resulting mechanism satisfies ex post UIC, Bayesian MIC, and Bayesian SCP in \((h,*,c,d)\)-environments. The formal proofs are presented in [11].
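As a sanity check (and a practical way to instantiate the mechanism for concrete parameters), the feasible solution can be searched for with an off-the-shelf LP solver; the sketch below is ours, the parameter values are illustrative only, and the function returns `None` when the solver reports infeasibility.

```python
import numpy as np
from scipy.optimize import linprog
from scipy.stats import binom

def find_revenue_rule(n, d, h, m):
    """Search for y = (y_0, ..., y_n) satisfying Equations (1) and (2)."""
    q = binom.pmf(np.arange(n - d + 1), n - d, 0.5)   # q_i: probability of i heads in n-d fair flips
    A_eq = np.zeros((d + 1, n + 1))                   # one equality constraint per j = 0, ..., d
    for j in range(d + 1):
        A_eq[j, j:j + n - d + 1] = q                  # sum_i q_i * y_{i+j} = m*h/4
    b_eq = np.full(d + 1, m * h / 4)
    bounds = [(0.0, i * m) for i in range(n + 1)]     # budget feasibility: 0 <= y_i <= i*m
    res = linprog(c=np.zeros(n + 1), A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
    return res.x if res.success else None

y = find_revenue_rule(n=200, d=2, h=150, m=1.0)       # illustrative parameters
```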
The key technical challenge is to answer the question of why the LP has a feasible solution. Intuitively, the earlier threshold-based mechanism gives an "approximate" solution \(\widehat{\mathbf{y}}:=(\widehat{y}_{0},\ldots,\widehat{y}_{n})\) to the LP, where \(\widehat{y}_{i}=0\) for \(i\leq(n-d)/4\) and \(\widehat{y}_{i}=\frac{m\cdot h}{4}\) otherwise. With the approximate solution \(\widehat{\mathbf{y}}\), the equality constraints in Equation (2) may be satisfied with some small error. We want to show that we can adjust the vector \(\widehat{\mathbf{y}}\) slightly such that we can correct the error, yet without violating the budget feasibility constraints (Equation (1)).
To achieve this, we will take a constructive approach. We first guess that a feasible solution is of the form \(\mathbf{y}=\widehat{\mathbf{y}}+\mathbf{e}\) where \(\mathbf{e}\) is a correction vector that is zero everywhere except in the coordinates \(\tau,\tau+1,\ldots,\tau+d\) for some appropriate choice of \(\tau\) that is close to \((n-d)/2\). Henceforth, let \(\boldsymbol{\delta}:=\mathbf{e}[\tau:\tau+d]/\left(\frac{m\cdot h}{4}\right)\) be the non-zero coordinates of the correction vector \(\mathbf{e}\) scaled by \(\frac{m\cdot h}{4}\).
By Equation (2), we know that the correction vector \(\boldsymbol{\delta}\) must satisfy the following system of linear equations where \(t:=\frac{n-d}{4}\):
\[\begin{pmatrix}\binom{n-d}{\tau}&\binom{n-d}{\tau+1}&\ldots&\binom{n-d}{\tau+d}\\ \binom{n-d}{\tau-1}&\binom{n-d}{\tau}&\ldots&\binom{n-d}{\tau+d-1}\\ \vdots&\vdots&\ddots&\vdots\\ \binom{n-d}{\tau-d}&\binom{n-d}{\tau-d+1}&\ldots&\binom{n-d}{\tau}\end{pmatrix}\cdot\boldsymbol{\delta}=\begin{pmatrix}\sum_{i=0}^{t}\binom{n-d}{i}\\ \sum_{i=0}^{t-1}\binom{n-d}{i}\\ \vdots\\ \sum_{i=0}^{t-d}\binom{n-d}{i}\end{pmatrix}. \tag{3}\]
In Lemma 4.5, we prove that as long as \(d\leq\frac{1}{8}\sqrt{\frac{h}{2\log h}}\) and \(\tau\) is an appropriate choice close to \(n/2\), the solution \(\boldsymbol{\delta}\) to the linear system in Equation (3) has a small infinity norm -- specifically, \(\|\boldsymbol{\delta}\|_{\infty}\leq 1\) -- such that the resulting \(\mathbf{y}\) vector respects the budget feasibility constraints, i.e., Equation (1). The actual proof of this bound is somewhat involved and thus deferred to Section 4. In particular, a key step is to bound the _smallest singular value_ of the matrix in Equation (3) (henceforth denoted \(A\)) appropriately -- to achieve this, we first bound \(A\)'s determinant, and then use an inequality proven by [13] which relates the smallest singular value and the determinant.
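To complement the proof sketch, the linear system in Equation (3) can also be set up and solved numerically, so that \(\|\boldsymbol{\delta}\|_{\infty}\) can be inspected directly for given parameters; the following sketch is ours, and the parameter values are illustrative only.

```python
import numpy as np
from scipy.special import comb

def correction_vector(n, d, tau=None):
    """Solve Equation (3) for the correction vector delta (floating point, small d only)."""
    N = n - d
    t = N // 4                                        # t := (n-d)/4, rounded down here
    tau = N // 2 if tau is None else tau              # tau is chosen close to (n-d)/2
    A = np.array([[comb(N, tau - j + r) for r in range(d + 1)] for j in range(d + 1)])
    rhs = np.array([sum(comb(N, l) for l in range(t - j + 1)) for j in range(d + 1)])
    return np.linalg.solve(A, rhs)

delta = correction_vector(n=200, d=2)                 # Lemma 4.5 concerns when ||delta||_inf <= 1
print(np.max(np.abs(delta)))
```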
### Finite Block Setting
#### 2.2.1 Limits of Strict Incentive Compatibility
**Feasibility for \(c=1\).** The LP-based mechanism confirms any bid that offers to pay at least \(m\). Thus, the total number of confirmed bids may be unbounded. Therefore, when the block size \(k\) is finite, we cannot directly run the LP-based mechanism. We suggest the following modification to the LP-based mechanism such that it works in the finite-block setting:
**MPC-assisted, LP-based mechanism for finite blocks**
_// Let \(k\) be the block size, let \(m\) be the median \(\mathcal{D}\) such that \(\Pr_{x\sim\mathcal{D}}[x\geq m]=1/2\)._
* All bids offering at least \(m\) are candidates. If there are more than \(k\) candidates, randomly select \(k\) of them to confirm; else confirm all candidates. Every confirmed bid pays \(m\).
* Let \(n\) be the length of the bid vector, let \(\mathbf{y}=(y_{0},y_{1},\ldots,y_{n})\) be any feasible solution to the following linear program: \[\forall i\in[n]:\ 0\leq y_{i}\leq\min(i,k)\cdot m\] (4) \[\forall 0\leq j\leq d:\ \sum_{i=0}^{n-d}q_{i}\cdot y_{i+j}=\frac{m \cdot\min(h,k)}{4}\] (5) where \(q_{i}\) is the probability of observing \(i\) heads if we flip \(n-d\) independent fair coins.
* The total miner revenue is \(y_{s}\) where \(s\) is the number of _candidates_.
In comparison with the earlier LP-based mechanism, we modify the budget feasibility constraints (Equation (4)) to make sure that the total miner revenue is constrained by the actual number of confirmed bids which is now \(\min(i,k)\) if the number of candidates is \(i\). Further, we modify the expected miner revenue (Equation (5)) to be \(\frac{m\cdot\min(h,k)}{4}\) which takes into account the block size \(k\). In Section 5.1.1, we prove that as long as \(c\leq d\leq\frac{1}{8}\sqrt{\frac{h}{2\log h}}\), the above LP indeed has a feasible solution, and the resulting mechanism satisfies ex post UIC, Bayesian MIC, and Bayesian SCP in \((h,*,1,d)\)-environments.
**Infeasibility for \(c\geq 2\).** Unfortunately, the above approach fails for \(c\geq 2\). In this case, two users Alice and Bob may be in the same coalition. Alice can now help Bob simply by dropping out and not posting a bid, thus effectively increasing Bob's chance of getting confirmed. In the event that
Alice's true value is very small and Bob's true value is sufficiently large, this strategic action can increase the coalition's joint utility.
Interestingly, it turns out that this is no accident. In fact, we prove that for any \(h\geq 1\), \(\rho\in(0,1)\), and \(d\geq c\geq 2\), no "interesting" mechanism can simultaneously achieve Bayesian UIC and SCP in \((h,\rho,c,d)\)-environments -- any such mechanism must suffer from \(0\) total social welfare for the users if the actual number of bids received is greater than \(h\) (see Theorem 1.4). We can regard Theorem 1.4 as a generalization of Shi et al. [13]'s Theorem 5.5: they show that any MPC-assisted mechanism that achieves Bayesian UIC, MIC, and SCP in \((*,\rho,c,*)\)-environments for \(c\geq 2\) must suffer from \(0\) social welfare for users.
**Proof roadmap.** We use the following blueprint to prove Theorem 1.4. Below, consider any TFM that satisfies Bayesian UIC and Bayesian SCP in \((h,\rho,c,d)\)-environments where \(d\geq c\geq 2\).
1. First, in Lemma 5.3, using techniques inspired by Goldberg and Hartline [14] we prove the following: provided that there are at least \(h\) honest users (not including \(i\) and \(j\)) whose bids are sampled at random from \(\mathcal{D}\), then a strategic user \(i\) changing its bid should not affect the utility of another user \(j\), if user \(j\)'s bid is also sampled at random from \(\mathcal{D}\).
2. Next, in Lemma 5.3, we prove that a strategic user \(i\) dropping out should not affect another user \(j\)'s utility, assuming that there are at least \(h\) bids (excluding user \(i\)) sampled at random from \(\mathcal{D}\).
3. Next, in Corollary 5.4, we show that in a world of at least \(h\) random bids (excluding user \(i\)) sampled from \(\mathcal{D}\), user \(i\)'s expected utility when its bid is sampled randomly from \(\mathcal{D}\) depends only on \(i\)'s identity, and does not depend on the identities of the other random bids. Therefore, henceforth we can use \(U_{i}\) to denote this expected utility.
4. Next, in Lemma 5.5, we show that for any two identities \(i,j\), it must be that \(U_{i}=U_{j}\), otherwise, it violates the assumption that the mechanism is weakly symmetric (see definition of weak symmetry below).
5. Next, we can show that \(U_{i}=0\): imagine a world with \(K\) bids sampled independently from \(\mathcal{D}\) whose support is bounded. There must exist some user whose confirmation probability is upper bounded by \(k/K\). This user's expected utility must be arbitrarily small when \(K\) is arbitrarily large. With a little more work, we can show that if the world consists of more than \(h\) bids sampled independently at random from \(\mathcal{D}\), it must be that every user's expected utility is \(0\).
One technicality that arises in the full proof (see Section 5.1.2) is the usage of the weak symmetry assumption. In particular, the proof would have been much easier if we could instead assume _strong symmetry_ which, unfortunately, is too stringent. In strong symmetry, we assume that any two users who bid the same amount will receive the same treatment. While it is a good approach for gaining intuition about the proof, it is too stringent since there could well be more bids offering the same value than the block size \(k\) -- in this case, a non-trivial mechanism would treat them differently, i.e., confirm some while rejecting others. Our actual proof of Theorem 1.4 needs only a _weak symmetry_ assumption which is a standard assumption made in prior works [13, 13], that is, if two input bid vectors \(\mathbf{b},\mathbf{b}^{\prime}\) of length \(n\) are permutations of each other, then the joint distribution of the _set_ of outcomes \(\{(x_{i},p_{i})\}_{i\in[n]}\) must be identical. This implies that if the input bid vectors are permutations of each other, then the vector of expected utilities are permutations of each other too.
#### 2.2.2 Optimal Miner Revenue under Approximate Incentive Compatibility
Because of the limitation shown in Theorem 1.4, we relax the notion to approximate incentive compatibility, and ask if we can achieve optimal miner revenue in the finite block setting. Consider the following LP-based TFM.
**MPC-assisted, diluted threshold-based Mechanism**
_/* Let_ \(k\) _be the block size, let_ \(m\) _be the median of_ \(\mathcal{D}\) _such that_ \(\Pr_{x\sim\mathcal{D}}[x\geq m]=1/2\)_, let_ \(T\) _be the maximum value of the distribution_ \(\mathcal{D}\)_. */_
* Let \(R:=\max\left(2c\sqrt{\frac{kT}{\epsilon}},k\right)\). All bids offering at least \(m\) are candidates. If the number of candidates \(s\leq R\), randomly select \(\frac{k}{R}\cdot s\) candidates to confirm; else, randomly select \(k\) candidates to confirm. Every confirmed bid pays \(m\).
* If \(s\geq\frac{h}{4}\), then the total miner revenue is \(\min(\frac{h}{4}\cdot\frac{k}{R},k)\cdot m\). Otherwise, the miners get nothing.
Intuitively, here we are modifying the earlier threshold-based mechanism to 1) make it compatible with a finite block size, and 2) make sure that up to \(c\) users dropping out can only minimally increase their friends' probability of getting confirmed. In particular, resilience to drop-out is achieved by artificially diluting the probability that a user is confirmed when the number of eligible bids (i.e., those offering at least \(m\)) is small. With the dilution, we guarantee that a coalition of \(c\) users cannot noticeably alter their own probability of getting confirmed, nor their friends' probability. This implies that a strategic coalition has little influence over the expected utility of all users in the coalition. Moreover, we guarantee that a strategic coalition has very little influence on the miner revenue as well: similar to the threshold-based mechanism, except with \(\exp(-\Omega(h))\) probability, the miner revenue is an a-priori fixed amount, that is, \(\min(\frac{h}{4}\cdot\frac{k}{R},k)\cdot m\). Summarizing the above, we can show that the mechanism satisfies ex post UIC, Bayesian \(\epsilon\)-MIC, and Bayesian \(\epsilon\)-SCP in \((h,*,c,*)\)-environments, as long as \(\epsilon\geq m\cdot\frac{h}{2}\cdot e^{-\frac{h}{16}}\).
Finally, for sufficiently large \(h\geq\max(4k,8c\sqrt{\frac{kT}{\epsilon}})\), the mechanism achieves \(k\cdot m\) total miner revenue and \(k\cdot C_{\mathcal{D}}\) user social welfare where \(C_{\mathcal{D}}\) is defined in Theorem 1.1. For example, suppose we are willing to tolerate \(\epsilon=0.01T\), then we just need \(h\geq\max(4k,80c\cdot\sqrt{k})\) to achieve asymptotic optimality in miner revenue and social welfare. The full proof is deferred to Section 5.2.
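To see the dilution numerically, note that whenever the number of candidates \(s\) is at most \(R\), each individual candidate is confirmed with probability \(\frac{k}{R}\cdot s/s=\frac{k}{R}\), independent of \(s\). The short sketch below makes this explicit; the numbers are purely illustrative (they are not tied to the paper's parameter bounds), and integrality of \(\frac{k}{R}\cdot s\) is ignored.

```python
def per_candidate_confirm_prob(s, k, R):
    """Probability that a fixed candidate is confirmed when there are s candidates (sketch)."""
    if s == 0:
        return 0.0
    if s <= R:
        return (k / R * s) / s   # = k/R, independent of s: dropping out does not help friends
    return k / s                 # only k slots remain once s exceeds R

k, R = 10, 100                   # illustrative values only
print([per_candidate_confirm_prob(s, k, R) for s in (1, 20, 100, 200)])
# -> [0.1, 0.1, 0.1, 0.05]
```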
### Additional Results
Lower bound on miner revenue. In Section 6.1, we prove that \(\Theta(h)\) revenue is optimal in \((h,\rho,c,d)\)-environments (Theorem 1.2). The proof is a generalization of the techniques proposed by Shi et al. [1]. Specifically, they proved that any mechanism that satisfies Bayesian UIC, MIC, and SCP in \((*,\rho,1,*)\)-environments must suffer from \(0\) miner revenue. In their proof, they argue that if we remove one bid, the miner revenue must be unaffected. In our case, because the mechanism is promised a lower bound \(h\) on the number of honest users, we can repeat this argument till there are \(h\) honest bids left, and no more. This gives rise to an \(O(h)\) limit on miner revenue.
Necessity of Bayesian equilibrium. As mentioned, our reasonable-world assumptions (formalized through the definition of an \((h,\rho,c,d)\)-environment) would not have helped had we insisted on ex post notions of equilibrium (for all of UIC, MIC, and SCP). In Section 6.2, we explain why for ex post notions of incentive compatibility, even mechanisms in the \((h,\rho,c,d)\)-environment are subject to the same miner-revenue limitations of universal mechanisms, as proved by Shi et al. [1].
## 3 Model and Definitions
Imagine that there are \(n_{0}\) users, each with a transaction that it wants to have confirmed. For \(i\in[n_{0}]\), let \(v_{i}\) be user \(i\)'s true valuation of getting its transaction confirmed. We want to design a transaction fee mechanism (TFM) such that no individual user, miner, or coalition thereof has any incentive to deviate from honest behavior. Throughout the paper, we consider a single-parameter environment, i.e., each user's bid is represented by a single, non-negative real number.
Chung and Shi [20]'s results ruled out the existence of interesting TFMs in the plain model without cryptography. First, they show a _zero miner-revenue_ bound: any TFM that guarantees incentive compatibility for each individual user as well as for a miner-user coalition must suffer from zero miner revenue. The zero miner-revenue bound holds no matter whether the block size is infinite or finite, and even when the miner is allowed to collude with only one user. Second, they prove a _finite-block impossibility_: assuming finite block size, no TFM can simultaneously guarantee incentive compatibility for each individual user as well as for a miner-user coalition (even when the miner is allowed to collude with at most one user).
The subsequent work of Shi, Chung, and Wu [21] considered how cryptography can help circumvent these strong impossibilities [20]. They proposed the _MPC-assisted model_, where the rules of the TFM are enforced through a multi-party computation (MPC) protocol jointly executed among a set of miners. Unlike the plain model, the MPC-assisted model guarantees that a single miner cannot unilaterally decide which transactions to include in the block. This ties the hands of the strategic player(s), and Shi et al. [21] showed that this model indeed results in interesting mechanisms that achieve properties that would otherwise be impossible in the plain model. Below, we review the MPC-assisted model proposed by Shi, Chung, and Wu [21].
### MPC-Assisted Model
Ideal-world game. Recently, blockchain projects such as Ethereum are developing "encrypted mempool" techniques which can be viewed as concrete protocols that realize an MPC-assisted model for TFM. As Shi et al. [21] pointed out, however, for understanding the game theoretic landscape, it helps to abstract out the cryptography and think of it as a trusted _ideal functionality_ (henceforth denoted \(\mathcal{F}_{\text{mpc}}\)) that always honestly implements the rules of the TFM.
With the ideal functionality \(\mathcal{F}_{\text{mpc}}\), we can imagine the following game that captures an instance of the TFM:
1. Each user registers zero, one, or more identities with \(\mathcal{F}_{\text{mpc}}\), and submits exactly one bid on behalf of each identity.
2. Using the vector of input bids as input, \(\mathcal{F}_{\text{mpc}}\) executes the rules of the TFM. \(\mathcal{F}_{\text{mpc}}\) now sends to all miners and users the output of the mechanism, including the set of bids that are confirmed, how much each confirmed bid pays, and how much revenue the miner gets.
We make a couple of standard assumptions:
* _Individual rationality_: each confirmed bid should pay no more than the bid itself;
* _Budget feasibility_: the miner revenue should not exceed the total payment from all confirmed bids.
Using standard techniques in cryptography, we can instantiate the ideal functionality \(\mathcal{F}_{\text{mpc}}\) using an actual cryptographic protocol among the miners and users (see Appendix D of Shi et al. [21]).
Further, in the actual instantiation, the users only need to be involved in the input phase: they only need to verifiably secret-share their input bids among all miners, and they need not be involved in the remainder of the protocol. The miners then jointly run an MPC protocol to securely compute the outcome of the auction. We can use an MPC protocol that retains security even when all but one miner are corrupt [12]. Such protocols achieve a security notion called "security with abort", i.e., an adversary controlling a majority coalition can cause the protocol to abort without producing any outcome. Conceptually, one can imagine that in the ideal-world protocol where parties interact with \(\mathcal{F}_{\mathrm{mpc}}\), the adversary is allowed to send \(\bot\) to \(\mathcal{F}_{\mathrm{mpc}}\), in which case \(\mathcal{F}_{\mathrm{mpc}}\) will abort and output \(\bot\) to everyone. However, no strategic coalition should have an incentive to cause the protocol to abort -- in this case, no block will be mined and the coalition has a utility of \(0\). Thus, without loss of generality, we need not explicitly capture aborting as a possible strategy in our ideal-world game mentioned above.
Strategy space. An honest user will always register a single identity and submit only one bid reflecting its true value. A strategic user or miner (possibly colluding with others) can adopt the following strategies or a combination thereof:
* _Bid untruthfully_: a strategic user can misreport its value;
* _Inject fake bids_: a strategic user or miner can _inject fake bids_ by registering fake identities;
* _Drop out_: a strategic user can also _drop out_ by not registering its real identity.
In the real-world cryptographic instantiation, strategic miners can also deviate from the honest MPC protocol. However, as mentioned, the MPC protocol retains security (i.e., can be simulated by the ideal-world game) as long as at least one miner is honest. Therefore, we need not explicitly capture this deviation in the ideal-world game. Finally, strategic miners can cause the MPC protocol to abort without producing output, and as mentioned, this deviation never makes sense since it results in a utility of \(0\); thus we also need not explicitly capture it in the ideal-world game.
### Defining Incentive Compatibility
In the plain model without cryptography, users submit their bids in the clear over a broadcast channel, and a strategic coalition can decide its strategy after observing the remaining honest users' bids. By contrast, in the MPC-assisted model, bids are submitted to the ideal functionality \(\mathcal{F}_{\mathrm{mpc}}\) (in the actual cryptographic instantiation, the users verifiably secret-share their bids among the miners). This means that the strategic coalition must now submit its bids without having observed other users' bids. Therefore, in the MPC-model, it makes sense to consider a _Bayesian_ notion of equilibrium rather than an _ex post_ notion. In an _ex post_ setting, we require that a strategic individual or coalition's best response is to act honestly even after having observed others' actions. In a _Bayesian_ setting, we assume that every honest user's bid is sampled independently from some distribution \(\mathcal{D}\), and we require that acting honestly maximizes the strategic individual or coalition's expected utility where the expectation is taken over not just the random coins of the TFM itself, but also over the randomness in sampling the honest users' bids.
Notations. Henceforth, we use the notation \(\mathbf{b}\) to denote a bid vector. Since we allow strategic players to inject fake bids or drop out, the length of \(\mathbf{b}\) need not be the same as the number of users \(n_{0}\). We use the notation \(\mathcal{C}\) to denote a coalition, and we use \(\mathbf{b}_{-\mathcal{C}}\) to denote the bid vector belonging to honest users outside \(\mathcal{C}\). We use the notation \(\mathcal{D}_{-\mathcal{C}}\) to denote the joint distribution of \(\mathbf{b}_{-\mathcal{C}}\), that is, \(\mathcal{D}_{-\mathcal{C}}=\mathcal{D}^{h}\) where \(h\) denotes the number of honest users outside \(\mathcal{C}\). Similarly, if \(i\) is an individual
strategic user, then the notation \(\mathbf{b}_{-i}\) denotes the bid vector belonging to the remaining honest users in \([n_{0}]\backslash\{i\}\). We use the notation \(\mathcal{D}_{-i}\) to denote the joint distribution of the honest bid vector \(\mathbf{b}_{-i}\).
For generality, we define _approximate_ incentive compatibility parameterized by an additive slack \(\epsilon\). The case of _strict_ incentive compatibility can be viewed as the special case when \(\epsilon=0\).
#### Bayesian Incentive Compatibility
**Definition 3.1** (Bayesian incentive compatibility).: We say that an MPC-assisted TFM satisfies Bayesian \(\epsilon\)-incentive compatibility for a coalition or individual \(\mathcal{C}\), iff for any \(\mathbf{v}_{\mathcal{C}}\) denoting the true values of users in \(\mathcal{C}\), when \(\mathbf{b}_{-\mathcal{C}}\sim\mathcal{D}_{-\mathcal{C}}\) is sampled, no strategy can increase \(\mathcal{C}\)'s expected utility by more than \(\epsilon\) in comparison with honest behavior, where the expectation is taken over the randomness of the honest users' bids \(\mathbf{b}_{-\mathcal{C}}\), as well as the random coins consumed by the TFM. Specifically, we define the following notions depending on who the strategic individual or coalition is:
* _User incentive compatibility (UIC)._ We say that an MPC-assisted TFM satisfies Bayesian \(\epsilon\)-UIC in some environment \(\mathcal{E}\), iff for any \(n\), for any user \(i\in[n]\), for any true value \(v_{i}\in\mathbb{R}^{\geq 0}\) of user \(i\), for any strategic bid vector \(\mathbf{b}_{i}\) from user \(i\) which could be empty or consist of multiple bids, the following holds as long as the conditions required by the environment \(\mathcal{E}\) are respected: \[\underset{\mathbf{b}_{-i}\sim\mathcal{D}_{-i}}{\mathbf{E}}\left[\mathsf{util} ^{i}(\mathbf{b}_{-i},v_{i})\right]\geq\underset{\mathbf{b}_{-i}\sim\mathcal{ D}_{-i}}{\mathbf{E}}\left[\mathsf{util}^{i}(\mathbf{b}_{-i},\mathbf{b}_{i}) \right]-\epsilon\] where \(\mathsf{util}^{i}(\mathbf{b})\) denotes the expected utility (taken over the random coins of the TFM) of user \(i\) when the bid vector is \(\mathbf{b}\).
* _Miner incentive compatibility (MIC)._ We say that an MPC-assisted TFM satisfies Bayesian \(\epsilon\)-MIC in some environment \(\mathcal{E}\), iff for any miner coalition \(\mathcal{C}\), for any strategic bid vector \(\mathbf{b}^{\prime}\) injected by the miner, the following holds as long as the conditions required by the environment \(\mathcal{E}\) are respected: \[\underset{\mathbf{b}_{-\mathcal{C}}\sim\mathcal{D}_{-\mathcal{C}}}{\mathbf{E} }\left[\mathsf{util}^{\mathcal{C}}(\mathbf{b}_{-\mathcal{C}})\right]\geq \underset{\mathbf{b}_{-\mathcal{C}}\sim\mathcal{D}_{-\mathcal{C}}}{\mathbf{E} }\left[\mathsf{util}^{\mathcal{C}}(\mathbf{b}_{-\mathcal{C}},\mathbf{b}^{ \prime})\right]-\epsilon\] where \(\mathsf{util}^{\mathcal{C}}(\mathbf{b})\) denotes the expected utility (taken over the random coins of the TFM) of the coalition \(\mathcal{C}\) when the input bid vector is \(\mathbf{b}\).
* _Side-contract-proofness (SCP)._ We say that an MPC-assisted TFM satisfies Bayesian \(\epsilon\)-SCP in some environment \(\mathcal{E}\), iff for any miner-user coalition \(\mathcal{C}\), for any true value vector \(\mathbf{v}_{\mathcal{C}}\) of users in \(\mathcal{C}\), for any strategic bid vector \(\mathbf{b}_{\mathcal{C}}\) of the coalition (whose length may not be equal to the number of users in \(\mathcal{C}\)), the following holds as long as the requirements of the environment \(\mathcal{E}\) are respected: \[\underset{\mathbf{b}_{-\mathcal{C}}\sim\mathcal{D}_{-\mathcal{C}}}{\mathbf{E}}\left[\mathsf{util}^{\mathcal{C}}(\mathbf{b}_{-\mathcal{C}},\mathbf{v}_{\mathcal{C}})\right]\geq\underset{\mathbf{b}_{-\mathcal{C}}\sim\mathcal{D}_{-\mathcal{C}}}{\mathbf{E}}\left[\mathsf{util}^{\mathcal{C}}(\mathbf{b}_{-\mathcal{C}},\mathbf{b}_{\mathcal{C}})\right]-\epsilon\]

Henceforth, if a mechanism satisfies Bayesian \(\epsilon\)-UIC for \(\epsilon=0\) (i.e., the _strict_ incentive compatibility case), we often omit writing the \(\epsilon\), and simply say that the mechanism satisfies Bayesian UIC. The terms "Bayesian MIC" and "Bayesian SCP" are similarly defined.
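To illustrate how the Bayesian expectation in this definition is evaluated, the sketch below estimates a single user's expected utility by Monte Carlo under a toy posted-price rule, averaging over both the honest bids drawn from \(\mathcal{D}\) and the mechanism's own coins. The distribution (uniform on \([0,1]\)), the mechanism, and all names are placeholders chosen only for illustration; they are not the mechanisms studied in this paper.

```python
import random

def toy_mechanism(bids, m, k):
    """Toy rule: confirm up to k random bids offering at least m; confirmed bids pay m."""
    cand = [i for i, b in enumerate(bids) if b >= m]
    return set(cand if len(cand) <= k else random.sample(cand, k))

def expected_utility(my_bid, my_value, m, k, n_honest, trials=20000):
    """Monte Carlo estimate of E_{b_{-i} ~ D^{n_honest}}[util^i] when user i bids my_bid."""
    total = 0.0
    for _ in range(trials):
        honest = [random.random() for _ in range(n_honest)]   # D = Uniform[0, 1]
        confirmed = toy_mechanism(honest + [my_bid], m, k)
        if n_honest in confirmed:          # user i occupies the last position of the bid vector
            total += my_value - m
    return total / trials

# Truthful bidding vs. underbidding below the posted price:
print(expected_utility(0.9, 0.9, m=0.5, k=5, n_honest=20))   # roughly (0.9 - 0.5) * Pr[confirmed]
print(expected_utility(0.4, 0.9, m=0.5, k=5, n_honest=20))   # 0.0: never a candidate
```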
#### Ex Post Incentive Compatibility
**Definition 3.2** (Ex post incentive compatibility).: We say that a TFM satisfies ex post \(\epsilon\)-UIC, \(\epsilon\)-MIC, and \(\epsilon\)-SCP respectively, for a coalition or individual \(\mathcal{C}\), iff the following conditions hold, respectively:
* _User incentive compatibility (UIC)._ We say that an MPC-assisted TFM satisfies ex post \(\epsilon\)-UIC in some environment \(\mathcal{E}\), iff for any \(n\), for any user \(i\in[n]\), for any bid vector \(\mathbf{b}_{-i}\) denoting the bids of everyone else besides \(i\), for any true value \(v_{i}\in\mathbb{R}^{\geq 0}\) of user \(i\), for any strategic bid vector \(\mathbf{b}_{i}\) from user \(i\) which could be empty or consist of multiple bids, the following holds as long as the conditions required by the environment \(\mathcal{E}\) are respected: \[\mathsf{util}^{i}(\mathbf{b}_{-i},v_{i})\geq\mathsf{util}^{i}(\mathbf{b}_{-i},\mathbf{b}_{i})-\epsilon\] where \(\mathsf{util}^{i}(\mathbf{b})\) denotes the expected utility (taken over the random coins of the TFM) of user \(i\) when the bid vector is \(\mathbf{b}\).
* _Miner incentive compatibility (MIC)._ We say that an MPC-assisted TFM satisfies ex post \(\epsilon\)-MIC in some environment \(\mathcal{E}\), iff for any miner coalition \(\mathcal{C}\), for any bid vector \(\mathbf{b}_{-\mathcal{C}}\), for any strategic bid vector \(\mathbf{b}^{\prime}\) injected by the miner, the following holds as long as the conditions required by the environment \(\mathcal{E}\) are respected: \[\mathsf{util}^{\mathcal{C}}(\mathbf{b}_{-\mathcal{C}})\geq\mathsf{util}^{ \mathcal{C}}(\mathbf{b}_{-\mathcal{C}},\mathbf{b}^{\prime})-\epsilon\] where \(\mathsf{util}^{\mathcal{C}}(\mathbf{b})\) denotes the expected utility (taken over the random coins of the TFM) of the coalition \(\mathcal{C}\) when the input bid vector is \(\mathbf{b}\).
* _Side-contract-proofness (SCP)._ We say that an MPC-assisted TFM satisfies ex post \(\epsilon\)-SCP in some environment \(\mathcal{E}\), iff for any miner-user coalition, for any bid vector \(\mathbf{b}_{-\mathcal{C}}\) submitted by non-coalition-members, for any true value vector \(\mathbf{v}_{\mathcal{C}}\) of users in \(\mathcal{C}\), for any strategic bid vector \(\mathbf{b}_{\mathcal{C}}\) of the coalition (whose length may not be equal to the number of users in \(\mathcal{C}\)), the following holds as long as the requirements of the environment \(\mathcal{E}\) are respected: \[\mathsf{util}^{\mathcal{C}}(\mathbf{b}_{-\mathcal{C}},\mathbf{v}_{\mathcal{C}})\geq\mathsf{util}^{\mathcal{C}}(\mathbf{b}_{-\mathcal{C}},\mathbf{b}_{\mathcal{C}})-\epsilon\]

Henceforth, if a mechanism satisfies ex post \(\epsilon\)-UIC for \(\epsilon=0\) (i.e., the _strict_ incentive compatibility case), we often omit writing the \(\epsilon\), and simply say that the mechanism satisfies ex post UIC. The terms "ex post MIC" and "ex post SCP" are similarly defined.

Recall that in the game representing the plain model, strategic players can choose their actions _after_ having observed the bids submitted by honest users. This gives rise to the following fact which essentially says that it does not make sense to consider Bayesian notions of equilibrium in the plain model.

**Fact 3.3**.: _Any plain-model TFM that satisfies Bayesian \(\epsilon\)-UIC (or Bayesian \(\epsilon\)-MIC, Bayesian \(\epsilon\)-SCP resp.) in some environment \(\mathcal{E}\) must also satisfy ex post \(\epsilon\)-UIC (or ex post \(\epsilon\)-MIC, ex post \(\epsilon\)-SCP resp.)._
## 4 Analysis of the LP-Based Mechanism
### Preliminaries: Linear Algebra Tools
We first introduce some linear algebra tools needed for analyzing the LP-based mechanism. Throughout this section, all our indexing for vectors and matrices starts from \(0\). Given a vector \(\mathbf{b}=(b_{0},b_{1},\ldots,b_{n})\) and two integers \(i,j\) such that \(i\leq j\), we define \(\mathbf{b}[i:j]\) to be the subvector \((b_{i},\ldots,b_{j})\). We use \(A=(a_{ij})\in\mathbb{R}^{n\times m}\) to denote a matrix in which the entry in the \(i\)-th row and \(j\)-th column is \(a_{ij}\). Let \(A^{T}\) denote the transpose of \(A\), and \(A^{-1}\) denote the inverse of \(A\) if \(A\) is non-singular.
Norm. Define the _infinity-norm_ \(\|\mathbf{b}\|_{\infty}\) of a vector \(\mathbf{b}\) to be \(\|\mathbf{b}\|_{\infty}=\max\{|b_{i}|:0\leq i\leq n\}\). For a square \(n\times n\) matrix \(A=(a_{ij})\), define the following matrix norms:
* _Infinity norm_: \(\|A\|_{\infty}=\sup\limits_{\|x\|_{\infty}=1}\|Ax\|_{\infty}=\max_{i}\sum_{j= 1}^{n}|a_{ij}|\).
* \(\ell_{2}\)_-norm_: \(\|A\|_{2}=\sup\limits_{\|x\|_{2}=1}\|Ax\|_{2}\).
* _Frobenius norm_: \(\|A\|_{F}=\left(\sum_{i,j=0}^{n-1}a_{ij}^{2}\right)^{1/2}\).
It is easy to check that \(\|A\|_{\infty}\leq\|A\|_{2}\), and that \(\|Ax\|_{\infty}\leq\|A\|_{\infty}\|x\|_{\infty}\).
Singular value. For a square \(n\times n\) matrix \(A\), the singular values are the square roots of the eigenvalues of \(A^{T}A\).
**Fact 4.1**.: _Let \(A\in\mathbb{R}^{n\times n}\) be non-singular. Let \(\lambda_{1}\geq\cdots\geq\lambda_{n}\) be the singular values of \(A\). Then \(\|A^{-1}\|_{2}=\frac{1}{\lambda_{n}}\)._
**Lemma 4.2** (Yu and Gu [13]).: _Let \(A\in\mathbb{R}^{n\times n}\) be non-singular and \(\lambda\) be the smallest singular value of \(A\). Then_
\[\lambda\geq|\det(A)|\cdot\left(\frac{n-1}{\|A\|_{F}^{2}}\right)^{(n-1)/2}>0.\]
Determinant. The determinant of a matrix \(A=(a_{ij})\in\mathbb{R}^{n\times n}\) is \(\det(A)=\sum_{\sigma\in S_{n}}\mathsf{sgn}(\sigma)\prod_{i=0}^{n-1}a_{i,\sigma_{i}}\), where \(S_{n}\) is the set of all permutations \(\sigma\) over the set \(\{0,\ldots,n-1\}\). For each permutation \(\sigma\in S_{n}\), let \(\sigma_{i}\) denote the value of the \(i\)-th position after reordering by \(\sigma\). The signature \(\mathsf{sgn}(\sigma)\) of a permutation \(\sigma\) is \(+1\) if the permutation can be obtained by an even number of swaps between two entries and \(-1\) otherwise.
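Both Fact 4.1 and the bound of Lemma 4.2 are easy to sanity-check numerically; the following NumPy sketch does so on a random (almost surely non-singular) matrix. This is only a numerical illustration of the statements above, not part of the argument.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
A = rng.standard_normal((n, n))             # almost surely non-singular

sing = np.linalg.svd(A, compute_uv=False)   # singular values in descending order
lam_min = sing[-1]

# Fact 4.1: ||A^{-1}||_2 = 1 / (smallest singular value).
assert np.isclose(np.linalg.norm(np.linalg.inv(A), 2), 1.0 / lam_min)

# Lemma 4.2: lower bound on the smallest singular value.
bound = abs(np.linalg.det(A)) * ((n - 1) / np.linalg.norm(A, 'fro') ** 2) ** ((n - 1) / 2)
assert 0 < bound <= lam_min
print(lam_min, bound)
```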
### Proofs for the LP-Based Mechanism
We now prove that the MPC-assisted LP-based mechanism satisfies strict incentive compatibility in an \((h,*,c,d)\)-environment. Suppose that each user's true value is sampled i.i.d. from a distribution \(\mathcal{D}\). Recall that \(m\) denotes the median of the bid distribution \(\mathcal{D}\), and \(C_{\mathcal{D}}=\mathbf{E}_{x\sim\mathcal{D}}[x-m|x\geq m]\) is another constant related to the distribution \(\mathcal{D}\). Without loss of generality, we assume \(\Pr[x\geq m]=\frac{1}{2}\) (see Remark 2.1).
**Theorem 4.3** (Theorem 1.1 restated).: _Suppose that the block size is infinite. Fix any\({}^{4}\) \(h\geq 2\) and any \(c\leq d\leq\frac{1}{8}\sqrt{\frac{h}{2\log h}}\); then the MPC-assisted, LP-based mechanism guarantees ex post UIC, Bayesian MIC, and Bayesian SCP in an \((h,*,c,d)\)-environment, and meanwhile, the mechanism achieves \(\Theta(h\cdot m)\) expected miner revenue and at least \(\Theta(\widetilde{h}\cdot C_{\mathcal{D}})\) expected social welfare for the users, where \(\widetilde{h}\geq h\) is the actual number of honest users that show up._
Footnote 4: For the special case \(h=1\), we can just use the parity-based mechanism of Section 2.1.
We prove Theorem 4.3 in two steps. First, we show that if the linear program defined in Equations (1) and (2) has a feasible solution, then the resulting mechanism satisfies incentive compatibility, as formally stated below:
**Lemma 4.4**.: _When the linear program defined in Equations (1) and (2) has a feasible solution, the LP-based mechanism satisfies ex post UIC, Bayesian MIC, and Bayesian SCP in an \((h,*,*,d)\)-environment. Moreover, the expected miner revenue is \(\frac{h\cdot m}{4}\), and the user social welfare is \(\Theta(\widetilde{h}\cdot C_{\mathcal{D}})\)._
Proof.: First, it is easy to see that the expected total miner revenue is \(\frac{h\cdot m}{4}\), as guaranteed by the linear program Equations (1) and (2). Moreover, since the expected utility of a user with true value \(v\) is \(v-m\) if \(v\geq m\), the expected user social welfare is at least
\[\sum_{i\in H}\underset{v_{i}\sim\mathcal{D}}{\mathbf{E}}[v_{i}-m\mid v_{i}>m]= \widetilde{h}\cdot\underset{x\sim\mathcal{D}}{\mathbf{E}}[x-m\mid x\geq m],\]
where \(H\) is the set of all honest users.
Next, we prove that the mechanism is strictly incentive compatible if the linear program has a solution. UIC is easy to see, as in the proof of Lemma A.1. We only prove SCP below, since MIC follows from the same reasoning.
SCP. Since the confirmation and the payment of each bid are independent of other bids, and the mechanism is strict UIC, the coalition cannot increase the colluding users' expected utilities. Therefore, we only need to show that the coalition cannot increase the expected total miner revenue by deviating from the mechanism. Intuitively, the linear program Equations (1) and (2) ensures that for arbitrary \(d\) bids, the total miner revenue, taking an expectation over the remaining \(n-d\) bids, always remains \(\frac{h\cdot m}{4}\).
Formally, let \(\widetilde{h}\) denote the number of _real honest bids_ and \(\mathbf{b}_{-\mathcal{C}}\) denote the random variable of honest users' bids. Then \(\widetilde{h}\geq n-d=\gamma\). For any bid \(\mathbf{b}_{\mathcal{C}}\) controlled by the coalition, the expected total miner revenue is
\[\underset{\mathbf{b}_{-\mathcal{C}}\sim\mathcal{D}^{\widetilde{h}}}{\mathbf{ E}}[\mu(\mathbf{b}_{-\mathcal{C}},\mathbf{b}_{\mathcal{C}})]=\int\limits_{ \mathbf{t}\sim\mathcal{D}^{\widetilde{h}-\gamma}}\underset{\mathbf{b}\sim \mathcal{D}^{\gamma}}{\mathbf{E}}[\mu(\mathbf{b},\mathbf{t},\mathbf{b}_{ \mathcal{C}})]\,f(\mathbf{t})d\mathbf{t}, \tag{6}\]
where \(f(\cdot)\) is the p.d.f. for \(\mathcal{D}^{\widetilde{h}-\gamma}\). For any fixed \((\mathbf{t},\mathbf{b}_{\mathcal{C}})\), let \(I\) denote the number of bids that are larger than or equal to \(m\) in \((\mathbf{t},\mathbf{b}_{\mathcal{C}})\). Since the probability of an honest bid being at least \(m\) is exactly \(\frac{1}{2}\),
\[\underset{\mathbf{b}\sim\mathcal{D}^{\gamma}}{\mathbf{E}}[\mu(\mathbf{b}, \mathbf{t},\mathbf{b}_{\mathcal{C}})]=\sum_{i=0}^{\gamma}\frac{1}{2^{\gamma}} \binom{\gamma}{i}y_{i+I},\]
which is exactly \(\frac{h\cdot m}{4}\) as guaranteed by Equation (2). Substituting back into (6), for any bid \(\mathbf{b}_{\mathcal{C}}\), we have that
\[\underset{\mathbf{b}_{-\mathcal{C}}\sim\mathcal{D}^{\widetilde{h}}}{\mathbf{ E}}[\mu(\mathbf{b}_{-\mathcal{C}},\mathbf{b}_{\mathcal{C}})]=\int\limits_{ \mathbf{t}\sim\mathcal{D}^{\widetilde{h}-\gamma}}\frac{h\cdot m}{4}\cdot f( \mathbf{t})d\mathbf{t}=\frac{h\cdot m}{4}.\]
Therefore, for any \(d\) bids controlled by the coalition, the expected miner revenue remains \(\frac{h\cdot m}{4}\).
In the main body, we focus on proving the more challenging step, that is, as long as \(d\leq\frac{1}{8}\sqrt{\frac{h}{2\log h}}\), the linear program indeed has a feasible solution, formally stated below.
**Lemma 4.5**.: _For \(h\geq 2\) and \(d\leq\frac{1}{8}\sqrt{\frac{h}{2\log h}}\), the linear program specified by Equations (1) and (2) is guaranteed to have a feasible solution._
Proof.: We will give a constructive solution to the linear program Equations (1) and (2). Let \(\gamma:=n-d\) denote the number of bids that are sampled randomly from \(\mathcal{D}\). Let \(t=\lfloor\frac{\gamma}{4}\rfloor\), and \(\overline{\mu}\) be our target expected miner revenue \(\frac{m\cdot h}{4}\). We start from an "approximate" solution \(\widehat{\mathbf{y}}=(\widehat{y}_{0},\ldots,\widehat{y}_{n})\in\mathbb{R}^{n +1}\) such that \(\widehat{y}_{i}=0\) for any \(i\leq t\), and \(\widehat{y}_{i}=\overline{\mu}\) for any \(i>t\). Our goal is to find a correction \(\mathbf{e}=(e_{0},\ldots,e_{n})\in\mathbb{R}^{n+1}\) that is zero everywhere except for the indices \(i\in[z+d,z+2d]\) for some \(z\geq\frac{\gamma}{2}\) such that \(\widehat{\mathbf{y}}+\mathbf{e}\) is a feasible solution to the linear program Equations (1) and (2). Henceforth, let \(\boldsymbol{\delta}:=\mathbf{e}[z+d,z+2d]/\overline{\mu}\) be the non-zero coordinates of the correction, scaled by \(\overline{\mu}\). Then \(\boldsymbol{\delta}\) must satisfy the linear system \(A(z)\boldsymbol{\delta}=\boldsymbol{\Delta}\), where \(A(z)\) and \(\boldsymbol{\Delta}\) are defined as follows:
\[A(z)=\begin{pmatrix}\binom{\gamma}{z+d}&\binom{\gamma}{z+d+1}&\cdots&\binom{ \gamma}{z+2d}\\ \binom{\gamma}{z+d-1}&\binom{\gamma}{z+d}&\cdots&\binom{\gamma}{z+2d-1}\\ \vdots&\vdots&\ddots&\vdots\\ \binom{\gamma}{z}&\binom{\gamma}{z+1}&\cdots&\binom{\gamma}{z+d}\end{pmatrix}, \qquad\boldsymbol{\Delta}=\begin{pmatrix}\sum_{i=0}^{t}\binom{\gamma}{i}\\ \sum_{i=0}^{t-1}\binom{\gamma}{i}\\ \vdots\\ \sum_{i=0}^{t-d}\binom{\gamma}{i}\end{pmatrix}.\]
If there exists a \(z^{*}\in[\lceil\frac{\gamma}{2}\rceil,\lceil\frac{\gamma}{2}\rceil+2d^{2}]\) such that this linear system \(A(z^{*})\boldsymbol{\delta}=\boldsymbol{\Delta}\) has a solution \(\boldsymbol{\delta}\), then choosing \(\mathbf{e}\) such that \(\mathbf{e}[z^{*}+d:z^{*}+2d]=\overline{\mu}\cdot\boldsymbol{\delta}\) gives a solution \(\widehat{\mathbf{y}}+\mathbf{e}\) that satisfies Equation (2).
**Claim 4.6**.: _There exists a \(z^{*}\in[\lceil\frac{\gamma}{2}\rceil,\lceil\frac{\gamma}{2}\rceil+2d^{2}]\) such that the matrix \(A(z^{*})\) is non-singular, and_
\[\|A(z^{*})^{-1}\|_{\infty}\leq\frac{(z^{*}+2d)^{2d(d+1)}}{\binom{\gamma}{z^{* }}}\cdot\left(\frac{d+1}{\sqrt{d}}\right)^{d}. \tag{7}\]
When choosing this \(z^{*}\), we have a unique solution \(\boldsymbol{\delta}=A(z^{*})^{-1}\boldsymbol{\Delta}\). Moreover, under the given parameter range, the solution \(\boldsymbol{\delta}\) has bounded infinity norm:
**Claim 4.7**.: _For \(h\geq 2\) and \(d\leq\frac{1}{8}\sqrt{\frac{h}{2\log h}}\), we have \(\|\boldsymbol{\delta}\|_{\infty}\leq 1\)._
For now, we assume that Claim 4.6 and Claim 4.7 are true, and we show how they lead to Lemma 4.5. The proofs of the two claims appear right afterward. To prove Lemma 4.5, it suffices to show that \(\widehat{\mathbf{y}}+\mathbf{e}\) indeed satisfies the budget feasibility specified by Equation (1). Since for all \(i\notin[z^{*}+d,z^{*}+2d]\) we have \(\widehat{y}_{i}+e_{i}=\widehat{y}_{i}\leq i\cdot m\), we only need to show that budget feasibility is satisfied at the corrected positions \(z^{*}+d,\ldots,z^{*}+2d\). Substituting \(\|\boldsymbol{\delta}\|_{\infty}\leq 1\), for each \(i\in[z^{*}+d,z^{*}+2d]\),
\[0=\overline{\mu}-\overline{\mu}\leq\widehat{y}_{i}+e_{i}\leq 2\overline{\mu} \leq\frac{\gamma}{2}\cdot m\leq i\cdot m.\]
Lemma 4.5 thus follows.
Proof of Claim 4.6.: We separate the proof into two parts: we first show that there exists a \(z^{*}\in[\lceil\frac{\gamma}{2}\rceil,\lceil\frac{\gamma}{2}\rceil+2d^{2}]\) such that \(A(z^{*})\) is non-singular; then we show that the infinity norm of the inverse of \(A(z^{*})\) satisfies Equation (7).
Non-singularity. We show that there exists \(z^{*}\) in the given range such that \(\det(A(z^{*}))\neq 0\). Define
\[B(z)=\frac{A(z)}{\binom{\gamma}{z}}\cdot\prod_{i=1}^{2d}(z+i).\]
Since
\[\frac{\binom{\gamma}{z+j}}{\binom{\gamma}{z}}\cdot\prod_{i=1}^{2d}(z+i)=\frac {(\gamma-z-j+1)\ldots(\gamma-z)}{(z+1)\ldots(z+j)}\cdot\prod_{i=1}^{2d}(z+i)= \prod_{i=1}^{j}(\gamma-z-j+i)\cdot\prod_{i=j+1}^{2d}(z+i),\]
\(B(z)\) is equal to the following matrix:
\[\left(\begin{array}{cccc}\prod_{i=1}^{d}(\gamma-z-d+i)\prod_{i=d+1}^{2d}(z+i)& \prod_{i=1}^{d+1}(\gamma-z-d-1+i)\prod_{i=d+2}^{2d}(z+i)&\ldots&\prod_{i=1}^{2d}( \gamma-z-2d+i)\\ \prod_{i=1}^{d-1}(\gamma-z-d+1+i)\prod_{i=d}^{2d}(z+i)&\prod_{i=1}^{d}(\gamma- z-d+i)\prod_{i=d+1}^{2d}(z+i)&\ldots&\prod_{i=1}^{2d-1}(\gamma-z-d+1+i)(z+2d)\\ \vdots&\vdots&\ddots&\vdots\\ \prod_{i=1}^{2d}(z+i)&(\gamma-z)\prod_{i=2}^{2d}(z+i)&\ldots&\prod_{i=1}^{d}( \gamma-z-d+i)\prod_{i=d+1}^{2d}(z+i)\end{array}\right)\]
It is sufficient to show that there exists a \(z^{*}\in[\lceil\frac{\gamma}{2}\rceil,\lceil\frac{\gamma}{2}\rceil+2d^{2}]\) such that \(\det(B(z^{*}))\neq 0\). To show this, note that the determinant of \(B(z)\) is a polynomial \(q(z)\) of \(z\) with degree at most \(2d^{2}\). As long as \(q(z)\) is not a zero polynomial, \(q(z)\) has at most \(2d^{2}\) roots. That means, there must exist a \(z^{*}\in[\lceil\frac{\gamma}{2}\rceil,\lceil\frac{\gamma}{2}\rceil+2d^{2}]\) such that \(q(z^{*})\neq 0\). The non-singularity of \(A(z^{*})\) thus follows.
Hence, it suffices to show that \(q(z)\) is not a zero polynomial. Indeed, when \(z=\gamma-d\), the matrix \(B(z)\) becomes the following lower triangular matrix, which has a positive determinant.
\[\left(\begin{array}{cccc}\prod_{i=1}^{d}i\cdot\prod_{i=d+1}^{2d}(z+i)&0&\ldots&0\\ \prod_{i=1}^{d-1}(i+1)\cdot\prod_{i=d}^{2d}(z+i)&\prod_{i=1}^{d}i\cdot\prod_{i=d+1}^{2d}(z+i)&\ldots&0\\ \vdots&\vdots&\ddots&\vdots\\ \prod_{i=1}^{2d}(z+i)&d\cdot\prod_{i=2}^{2d}(z+i)&\ldots&\prod_{i=1}^{d}i\cdot\prod_{i=d+1}^{2d}(z+i)\end{array}\right)\]
This implies that \(q(z)\) is not a zero polynomial.
Infinity norm. For simplicity, we use \(A:=A(z^{*})\) in this part. By Fact 4.1, \(\|A^{-1}\|_{2}=\frac{1}{\lambda}\), where \(\lambda\) is the smallest singular value of \(A\). By Lemma 4.2, the smallest singular value \(\lambda\) satisfies
\[\lambda\geq|\det(A)|\cdot\left(\frac{d}{\|A\|_{F}^{2}}\right)^{\frac{d}{2}}.\]
By the definition of Frobenius norm and the fact that the largest term in \(A\) is \(\binom{\gamma}{z^{*}}\),
\[\|A\|_{F}^{2}=\sum_{i=0}^{d}\sum_{j=0}^{d}a_{ij}^{2}\leq(d+1)^{2}\cdot\binom{\gamma}{z^{*}}^{2}.\]
We only need to bound the determinant of \(A\). Let \(A^{\prime}=(a^{\prime}_{i,j})_{(d+1)\times(d+1)}\) where \(a^{\prime}_{i,j}=\frac{a_{i,j}}{\binom{\gamma}{z^{*}}}\). Then we have that \(|\det(A)|=\binom{\gamma}{z^{*}}^{(d+1)}\cdot|\det(A^{\prime})|\), where
\[A^{\prime}=\begin{pmatrix}\frac{(\gamma-z^{*}-d+1)\ldots(\gamma-z^{*})}{(z^{* }+1)\ldots(z^{*}+d)}&\frac{(\gamma-z^{*}-d)\ldots(\gamma-z^{*})}{(z^{*}+1) \ldots(z^{*}+d+1)}&\ldots&\frac{(\gamma-z^{*}-2d+1)\ldots(\gamma-z^{*})}{(z^{ *}+1)\ldots(z^{*}+2d)}\\ \frac{(\gamma-z^{*}-d+2)\ldots(\gamma-z^{*})}{(z^{*}+1)\ldots(z^{*}+d-1)}& \frac{(\gamma-z^{*}-d+1)\ldots(\gamma-z^{*})}{(z^{*}+1)\ldots(z^{*}+d)}& \ldots&\frac{(\gamma-z^{*}-2d+2)\ldots(\gamma-z^{*})}{(z^{*}+1)\ldots(z^{*}+2d -1)}\\ \vdots&\vdots&\ddots&\vdots\\ 1&\frac{\gamma-z^{*}}{z^{*}+1}&\ldots&\frac{(\gamma-z^{*}-d+1)\ldots(\gamma-z^ {*})}{(z^{*}+1)\ldots(z^{*}+d)}\end{pmatrix}\]
By the definition of determinant, \(\det(A^{\prime})=\sum_{\sigma\in S_{d+1}}\mathsf{sgn}(\sigma)\prod_{i=1}^{d+1 }a^{\prime}_{i,\sigma_{i}}\). For each \(\sigma\in S_{d+1}\), let \(q_{\sigma}\) denote the product \(\prod_{i=1}^{d+1}a^{\prime}_{i,\sigma_{i}}\). Since all the entries in \(A^{\prime}\) are rational numbers, for each \(\sigma\)
the product \(q_{\sigma}\) is also a rational number. Since the denominator of each entry is a factor of \(\prod_{i=1}^{2d}(z^{*}+i)\), we can write each \(q_{\sigma}=\frac{p_{\sigma}}{\prod_{i=1}^{2d}(z^{*}+i)^{(d+1)}}\) for an integer \(p_{\sigma}\). Thus,
\[|\det(A^{\prime})| =\left|\sum_{\sigma\in S_{d+1}}\mathsf{sgn}(\sigma)\prod_{i=1}^{d+1}a^{\prime}_{i,\sigma_{i}}\right|=\left|\sum_{\sigma\in S_{d+1}}\mathsf{sgn}(\sigma)\frac{p_{\sigma}}{\prod_{i=1}^{2d}(z^{*}+i)^{(d+1)}}\right|\] \[=\frac{|\sum_{\sigma\in S_{d+1}}\mathsf{sgn}(\sigma)p_{\sigma}|}{\prod_{i=1}^{2d}(z^{*}+i)^{(d+1)}}\geq\frac{1}{\prod_{i=1}^{2d}(z^{*}+i)^{(d+1)}},\]
where the last step follows from the fact that \(A^{\prime}\) is non-singular; thus the absolute value of the numerator is at least \(1\). Therefore,
\[\lambda\geq|\det(A)|\cdot\left(\frac{d}{\|A\|_{F}^{2}}\right)^{ \frac{d}{2}} \geq\binom{\gamma}{z^{*}}^{(d+1)}\cdot\frac{1}{\prod_{i=1}^{2d}( z^{*}+i)^{(d+1)}}\cdot\left(\frac{d}{(d+1)^{2}\binom{\gamma}{z^{*}}^{2}} \right)^{\frac{d}{2}}\] \[\geq\binom{\gamma}{z^{*}}\cdot\frac{1}{(z^{*}+2d)^{2d(d+1)}} \cdot\left(\frac{\sqrt{d}}{d+1}\right)^{d}.\]
The claim thus follows from the fact that \(\|A^{-1}\|_{\infty}\leq\|A^{-1}\|_{2}=\frac{1}{\lambda}\).
Proof of Claim 4.7.: Since \(A(z^{*})\) is non-singular, we have \(\boldsymbol{\delta}=A(z^{*})^{-1}\boldsymbol{\Delta}\). By properties of matrix norms, we have that
\[\|\boldsymbol{\delta}\|_{\infty}\leq\|A(z^{*})^{-1}\|_{\infty}\cdot\| \boldsymbol{\Delta}\|_{\infty}. \tag{8}\]
By Claim 4.6 and the fact that \(\|\boldsymbol{\Delta}\|_{\infty}\leq t\cdot\binom{\gamma}{t}\), we have
\[\|\boldsymbol{\delta}\|_{\infty}\leq\|A(z^{*})^{-1}\|_{\infty}\cdot\| \boldsymbol{\Delta}\|_{\infty}\leq\frac{(z^{*}+2d)^{2d(d+1)}}{\binom{\gamma}{ z^{*}}}\cdot\left(\frac{d+1}{\sqrt{d}}\right)^{d}\cdot t\cdot\binom{\gamma}{t}.\]
Because \(z^{*}+2d\leq\gamma\), \(\frac{d+1}{\sqrt{d}}\leq\gamma\) and \(t\leq\gamma\), we have
\[\|\boldsymbol{\delta}\|_{\infty}\leq\frac{(z^{*}+2d)^{2d(d+1)}}{\binom{\gamma} {z^{*}}}\cdot\left(\frac{d+1}{\sqrt{d}}\right)^{d}\cdot t\cdot\binom{\gamma}{ t}\leq\gamma^{2d^{2}+3d+1}\cdot\frac{\binom{\gamma}{t}}{\binom{\gamma}{z^{*}}}. \tag{9}\]
By the assumption that \(d\leq\frac{1}{8}\sqrt{\frac{h}{2\log h}}\), we have
\[(2d^{2}+3d+1)\cdot\log(h-d)+\frac{\log\frac{6}{5}}{4}\cdot d\leq 8d^{2} \cdot\log h\leq\frac{h}{4}\cdot\log\frac{6}{5}.\]
Rearranging the inequality and noticing that \(h-d\leq\gamma\), we have
\[2d^{2}+3d+1\leq\frac{(h-d)\log\frac{6}{5}}{4\log(h-d)}\leq\frac{\gamma\log \frac{6}{5}}{4\log\gamma}, \tag{10}\]
therefore,
\[\gamma^{2d^{2}+3d+1}\leq\left(\frac{6}{5}\right)^{\frac{\gamma}{4}}. \tag{11}\]
Next, note that for any integers \(a,b\) such that \(a<b\), we have \(\frac{\binom{\gamma}{a}}{\binom{\gamma}{b}}=\frac{(a+1)(a+2)\cdots b}{(\gamma-b+1)( \gamma-b+2)\cdots(\gamma-a)}\leq(\frac{b}{\gamma-a})^{b-a}\). Because \(z^{*}\in[\lceil\frac{\gamma}{2}\rceil,\lceil\frac{\gamma}{2}\rceil+2d^{2}]\), we have \(\binom{\gamma}{z^{*}}\geq\binom{\gamma}{\lceil\frac{\gamma}{2}\rceil+2d^{2}}\). Thus,
\[\frac{\binom{\gamma}{t}}{\binom{\gamma}{z^{*}}}\leq\frac{\binom{\gamma}{t}}{ \binom{\gamma}{\lceil\frac{\gamma}{2}\rceil+2d^{2}}}\leq\left(\frac{\lceil \frac{\gamma}{2}\rceil+2d^{2}}{\gamma-t}\right)^{\lceil\frac{\gamma}{2}\rceil+ 2d^{2}-t}\leq\left(\frac{\frac{\gamma}{2}+2d^{2}+1}{\frac{3}{4}\gamma}\right)^ {\lceil\frac{\gamma}{2}\rceil+2d^{2}-t}. \tag{12}\]
Because \(\gamma\geq h-d\), for any \(h\geq 2\) and \(d\leq\frac{1}{8}\sqrt{\frac{h}{2\log h}}\), it must be that \(\frac{\log\frac{6}{5}}{\log\gamma}<\frac{1}{2}\). By Equation (10), we have
\[2d^{2}+1\leq 2d^{2}+3d+1\leq\frac{\gamma\log\frac{6}{5}}{4\log\gamma}\leq \frac{\gamma}{8}.\]
By \(2d^{2}+1\leq\frac{\gamma}{8}\) and Equation (12), we have
\[\frac{\binom{\gamma}{t}}{\binom{\gamma}{z^{*}}}\leq\left(\frac{\frac{\gamma}{2 }+2d^{2}+1}{\frac{3}{4}\gamma}\right)^{\lceil\frac{\gamma}{2}\rceil+2d^{2}-t} \leq\left(\frac{5}{6}\right)^{\lceil\frac{\gamma}{2}\rceil+2d^{2}-t}\leq \left(\frac{5}{6}\right)^{\frac{\gamma}{4}}. \tag{13}\]
Combining Equations (9), (11), and (13), we have that \(\|\boldsymbol{\delta}\|_{\infty}\leq 1\).
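The construction in the proof of Lemma 4.5 is straightforward to reproduce numerically. The following Python sketch builds \(\widehat{\mathbf{y}}\), scans \(z\) in the prescribed range, solves \(A(z)\boldsymbol{\delta}=\boldsymbol{\Delta}\) (written here in terms of the probabilities \(q_i\) rather than raw binomial coefficients, which only rescales every row of the system), and checks budget feasibility and the revenue identities. The parameter values at the bottom are illustrative and were chosen to satisfy \(d\leq\frac{1}{8}\sqrt{\frac{h}{2\log h}}\); all function and variable names are our own.

```python
import numpy as np
from scipy.stats import binom

def construct_lp_solution(h, d, n, m=1.0):
    """Build y = y_hat + e as in the proof of Lemma 4.5 and verify Eqs. (1)-(2) numerically."""
    gamma = n - d                           # number of bids sampled from D
    t = gamma // 4
    mu_bar = m * h / 4                      # target expected miner revenue
    q = binom.pmf(np.arange(gamma + 1), gamma, 0.5)

    # Approximate solution: 0 up to index t, mu_bar afterwards.
    y = np.where(np.arange(n + 1) > t, mu_bar, 0.0)

    # Scan z in [ceil(gamma/2), ceil(gamma/2) + 2d^2]; Claim 4.6 guarantees a non-singular A(z*).
    z0 = -(-gamma // 2)
    for z in range(z0, z0 + 2 * d * d + 1):
        A = np.array([[q[z + d + r - j] for r in range(d + 1)] for j in range(d + 1)])
        if abs(np.linalg.det(A)) > 0:
            Delta = np.array([binom.cdf(t - j, gamma, 0.5) for j in range(d + 1)])
            y[z + d: z + 2 * d + 1] += mu_bar * np.linalg.solve(A, Delta)
            break

    # Budget feasibility (Eq. (1)) and the revenue identities (Eq. (2)).
    assert np.all(y >= -1e-9) and np.all(y <= np.arange(n + 1) * m + 1e-9)
    for j in range(d + 1):
        assert np.isclose(q @ y[j: j + gamma + 1], mu_bar)
    return y

y = construct_lp_solution(h=1000, d=1, n=1001)   # illustrative parameters
```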
## 5 Characterization for Finite Block Size
### Characterization for Strict IC
In this section, we give a characterization for strict incentive compatibility for finite block size. In an \((h,\rho,c,d)\)-environment, we can indeed circumvent the \(0\)-miner revenue impossibility result in [12]. However, it turns out that for \(c=1\) and \(c\geq 2\), the mechanisms are different. Specifically, for \(c\geq 2\), each user's utility has to be \(0\). Therefore, we separately give the mechanisms for \(c=1\) and \(c\geq 2\).
#### 5.1.1 Feasibility for \(c=1\)
For \(c=1\), the mechanism is simply the LP-based mechanism in Section 4 with a random selection process. Still, we assume that each user's true value is drawn i.i.d. from a distribution \(\mathcal{D}\), and the median \(m\) of the distribution satisfies that \(\Pr[x\geq m]=\frac{1}{2}\) (see Remark 2.1).
**MPC-assisted, LP-based mechanism for finite blocks**
**Parameters:** the block size \(k\), the environment parameter \((h,*,1,d)\), the distribution median \(m\).
**Input:** a bid vector \(\mathbf{b}=(b_{1},\ldots,b_{n})\).
**Mechanism:**
* _Confirmation Rule._ Let \(\widetilde{\mathbf{b}}=(\widetilde{b}_{1},\ldots,\widetilde{b}_{s})\) denote the bids that are at least \(m\). If \(s\leq k\), confirm all bids in \(\widetilde{\mathbf{b}}\). Otherwise, randomly select \(k\) bids from \(\widetilde{\mathbf{b}}\) to confirm.
* _Payment rule._ Each confirmed bid pays \(m\).
* _Miner revenue rule._ Let \(\mathbf{y}:=(y_{0},y_{1},\ldots,y_{n})\) be any feasible solution to the following linear program: \[\forall i\in[n]: 0\leq y_{i}\leq\min(i,k)\cdot m\] (14) \[\forall 0\leq j\leq d: \sum_{i=0}^{n-d}q_{i}\cdot y_{i+j}=\frac{m\cdot\min(h,k)}{4}\] (15) where \(q_{i}=\frac{1}{2^{n-d}}\binom{n-d}{i}\) is the probability of observing \(i\) heads if we flip \(n-d\) independent fair coins. The total miner revenue is \(y_{t}\) where \(t\) is the number of confirmed bids in the block.
**Theorem 5.1**.: _Suppose the block size is \(k\). Fix any\({}^{5}\) \(h\geq 2\), and any \(d\leq\frac{1}{8}\sqrt{\frac{h}{2\log h}}\). The MPC-assisted, LP-based mechanism for finite blocks is ex post UIC, Bayesian MIC, and Bayesian SCP in an \((h,*,1,d)\)-environment. Moreover, the expected miner revenue is \(\Theta(\min\{h,k\}\cdot m)\)._
Footnote 5: For the special case \(h=1\), we can just use the parity-based mechanism of Section 2.1 with the random selection.
Proof.: First, Equation (14) guarantees that the total miner revenue is at most the total payment of the confirmed users, so the mechanism satisfies budget feasibility.
Next, we show that when the linear program Equations (14) and (15) has a solution, the mechanism satisfies all three incentive-compatible properties.
* **UIC:** By the same reasoning as in Lemma A.1, overbidding or underbidding does not increase the user's utility. Injecting bids cannot increase the user's utility either: it may only decrease the probability that the user gets confirmed. Moreover, dropping out can only give the user zero utility. Therefore, a user cannot increase its utility by deviating.
* **SCP:** By the same reasoning as in the proof of Lemma 4.4, the linear program Equation (15) guarantees that no matter how the coalition chooses the \(d\) bids it controls, the expected total miner revenue remains unchanged. Meanwhile, the coalition cannot increase the colluding user's utility by UIC. Therefore, this mechanism is SCP.
* **MIC:** Follows by the same reasoning as SCP.
It remains to show that the linear program indeed has a feasible solution. We will give a constructive solution. Let \(\widetilde{\mathbf{y}}=(\widetilde{y}_{0},\ldots,\widetilde{y}_{n})\) denote the constructive solution given in the proof of Lemma 4.5 that satisfies
\[\forall 0\leq j\leq d: \sum_{i=0}^{n-d}q_{i}\cdot\widetilde{y}_{i+j}=\frac{m\cdot h}{4}.\]
In the proof of Lemma 4.5, \(\widetilde{\mathbf{y}}\) satisfies that \(0\leq\widetilde{y}_{i}\leq\min(i,h)\cdot m\) for any \(0\leq i\leq n\). There are two possible cases.
* \(h\leq k\). We have \(0\leq\widetilde{y}_{i}\leq\min(i,h)\cdot m\leq\min(i,k)\cdot m\). Thus, \(\widetilde{\mathbf{y}}\) is a feasible solution to the linear program in this case.
* \(h>k\). Let \(\mathbf{y}=(y_{0},\ldots,y_{n})=\frac{k}{h}\cdot\widetilde{\mathbf{y}}\). Then \(y_{i}\) satisfies that \(0\leq y_{i}\leq\frac{k}{h}\min(i,h)\cdot m\leq\min(i,k)\cdot m\). Moreover, for any \(0\leq j\leq d\), \[\sum_{i=0}^{n-d}q_{i}\cdot y_{i+j}=\frac{k}{h}\cdot\sum_{i=0}^{n-d}q_{i}\cdot \widetilde{y}_{i+j}=\frac{m\cdot k}{4}.\] Thus, \(\mathbf{y}=\frac{k}{h}\cdot\widetilde{\mathbf{y}}\) is a feasible solution to the linear program if \(h>k\).
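Alternatively, a feasible \(\mathbf{y}\) for Equations (14) and (15) can be found directly with an off-the-shelf LP solver. The following sketch phrases the constraints for scipy.optimize.linprog; the parameter values are illustrative and were chosen so that \(d\leq\frac{1}{8}\sqrt{\frac{h}{2\log h}}\) holds, the regime in which Theorem 5.1 guarantees feasibility. The function name is ours.

```python
import numpy as np
from scipy.optimize import linprog
from scipy.stats import binom

def feasible_y_finite_block(n, d, h, k, m=1.0):
    """Search for any feasible y = (y_0, ..., y_n) satisfying Eqs. (14)-(15) (sketch)."""
    q = binom.pmf(np.arange(n - d + 1), n - d, 0.5)     # q_i = Pr[i heads in n-d fair flips]
    A_eq = np.zeros((d + 1, n + 1))
    for j in range(d + 1):
        A_eq[j, j: j + n - d + 1] = q                   # encodes sum_i q_i * y_{i+j}
    b_eq = np.full(d + 1, m * min(h, k) / 4)
    bounds = [(0.0, min(i, k) * m) for i in range(n + 1)]
    res = linprog(c=np.zeros(n + 1), A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x if res.success else None

y = feasible_y_finite_block(n=1010, d=1, h=1000, k=8)   # illustrative parameters
```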
#### 5.1.2 Zero Social Welfare for Users When \(c\geq 2\)
Unfortunately, the above MPC-assisted, LP-based mechanism for finite block size only works for \(c=1\). When \(c\geq 2\), although deviating cannot increase the expected total miner revenue, the coalition can increase a colluding user's utility. Imagine that the coalition consists of some colluding miners and two users \(i\) and \(j\), where user \(i\) has true value \(m\) and user \(j\) has a large true value. Then user \(i\) may choose to drop out to increase the probability of user \(j\) getting confirmed. This strictly increases the expected joint utility of the coalition.
Therefore, to construct a Bayesian SCP mechanism in an \((h,\rho,c,d)\)-environment for \(d\geq c\geq 2\), we need to make sure that deviating cannot increase a colluding user's utility. Indeed, for some (contrived) distributions, we can construct a mechanism that generates optimal miner revenue and achieves UIC, MIC, and SCP in an \((h,\rho,c,d)\)-environment for \(d\geq c\geq 2\). However, the total social welfare for all users is \(0\). For example, imagine that users' true values are drawn i.i.d. from \(\text{Bernoulli}(\frac{1}{2})\). Now, if we run the MPC-assisted, LP-based mechanism for finite blocks (see Section 5.1.1) and set \(m=1\), the resulting mechanism achieves ex post UIC, Bayesian MIC, and Bayesian SCP in \((h,*,c,d)\)-environments, even when \(c\geq 2\) (as long as the condition \(d\leq\frac{1}{8}\sqrt{\frac{h}{2\log h}}\) is satisfied). This is because setting \(m=1\) makes sure that every user's utility is always \(0\). Thus, no matter how the coalition deviates, it cannot increase the strategic users' joint utility. Moreover, as long as the linear program Equations (14) and (15) has a feasible solution, the coalition cannot increase the expected total miner revenue either. The mechanism achieves \(\Theta(m)\) expected miner revenue but unfortunately, the total user social welfare is always \(0\). It turns out that this zero user social welfare limitation is intrinsic, as stated below.
**Theorem 5.2** (Restatement of Theorem 1.4).: _Suppose that the block size is finite, and fix any \(h\geq 1\), any \(d\geq c\geq 2\), and any \(\rho\in(0,1)\). Then, any MPC-assisted TFM that simultaneously satisfies Bayesian UIC, MIC and SCP in an \((h,\rho,c,d)\)-environment must suffer from \(0\) social welfare for the users when there actually are more than \(h\) honest bids. Equivalently, for any \(\ell>h\),_
\[\operatorname*{\mathbf{E}}_{\mathbf{b}\sim\mathcal{D}^{\ell}}[\mathsf{USW}( \mathbf{b})]=0. \tag{16}\]
_In the above, \(\mathsf{USW}(\mathbf{b})\) denotes the expected total user social welfare under the bid vector \(\mathbf{b}\) where the expectation is taken over the randomness of the mechanism._
The proof is similar to the proof of Theorem 5.2 of [13]. We will use the following lemma of [13] to prove this theorem. Although the original lemma considers a universal MPC-assisted mechanism, the proof also holds for MPC-assisted TFMs in an \((h,\rho,c,d)\)-environment for \(d\geq c\geq 2\). Henceforth, we use \(\mathsf{util}^{i}(\mathbf{b})\) to denote the utility of identity \(i\) when the input bid vector to the mechanism is \(\mathbf{b}\). In the proof, we use \(v_{\mathsf{id}}\) (resp. \(b_{\mathsf{id}}\)) to denote a bid \(v\) (resp. \(b\)) coming from identity \(\mathsf{id}\).
**Lemma 5.3**.: _Fix any \(h\geq 1\), any \(d\geq c\geq 2\), any \(\rho\in(0,1)\). Given any (possibly random) MPC-assisted mechanism that is Bayesian UIC, MIC and SCP in an \((h,\rho,c,d)\)-environment, for any identity \(i\) and identity \(j\), for any bid \(b_{j}\) and \(b^{\prime}_{j}\), it must be that for any \(\ell\geq h\),_
\[\operatorname*{\mathbf{E}}_{(v_{i},\mathbf{b}_{-i,j})\sim\mathcal{D}^{\ell+1} }[\mathsf{util}^{i}(v_{i},b_{j},\mathbf{b}_{-i,j})]=\operatorname*{\mathbf{E} }_{(v_{i},\mathbf{b}_{-i,j})\sim\mathcal{D}^{\ell+1}}[\mathsf{util}^{i}(v_{i},b^{\prime}_{j},\mathbf{b}_{-i,j})], \tag{17}\]
_where \(\mathbf{b}_{-i,j}\) represents all except identity \(i\) and \(j\)'s bids. Moreover, it must be that_
\[\operatorname*{\mathbf{E}}_{(v_{i},\mathbf{b}_{-i,j})\sim\mathcal{D}^{\ell+1} }[\mathsf{util}^{i}(v_{i},b_{j},\mathbf{b}_{-i,j})]=\operatorname*{\mathbf{E} }_{(v_{i},\mathbf{b}_{-i})\sim\mathcal{D}^{\ell+1}}[\mathsf{util}^{i}(v_{i}, \mathbf{b}_{-i})]. \tag{18}\]
Proof.: The proof of this lemma is the same as that of Lemmas 5.2 and 5.3 of Shi et al. [13], except that now we need to guarantee that at least \(h\) bids are sampled randomly from \(\mathcal{D}\).
**Corollary 5.4**.: _Fix any \(h\geq 1\), any \(d\geq c\geq 2\), any \(\rho\in(0,1)\). Given any (possibly random) MPC-assisted mechanism that is Bayesian UIC, MIC and SCP in an \((h,\rho,c,d)\)-environment, for any two sets \(H\) and \(H^{\prime}\) consisting of at least \(h\) identities, let \(\mathbf{b}_{H}\) (\(\mathbf{b}_{H^{\prime}}\)) denote the bids from identities in \(H\) (\(H^{\prime}\)). For any \(i\notin H\cup H^{\prime}\), it must be that_
\[\underset{(v_{i},\mathbf{b}_{H})\sim\mathcal{D}^{|H|+1}}{\mathbf{E}}[\mathsf{util}^{i}(v_{i},\mathbf{b}_{H})]=\underset{(v_{i},\mathbf{b}_{H^{\prime}})\sim\mathcal{D}^{|H^{\prime}|+1}}{\mathbf{E}}[\mathsf{util}^{i}(v_{i},\mathbf{b}_{H^{\prime}})],\]
_where \(v_{i}\) denotes that identity \(i\) bids \(v\)._
Proof.: Let \(S=H^{\prime}\setminus H\). Without loss of generality, we assume that \(S\) consists of identities \(1,\ldots,|S|\). By the law of total expectation, we have
\[\underset{(v_{i},\mathbf{b}_{S},\mathbf{b}_{H})\sim\mathcal{D}^{|S|+|H|+1}}{\mathbf{E}}[\mathsf{util}^{i}(v_{i},\mathbf{b}_{S},\mathbf{b}_{H})]\] \[=\int_{0}^{\infty}\underset{(v_{i},\mathbf{b}_{S\setminus\{1\}},\mathbf{b}_{H})\sim\mathcal{D}^{|S|+|H|}}{\mathbf{E}}[\mathsf{util}^{i}(v_{i},z_{1},\mathbf{b}_{S\setminus\{1\}},\mathbf{b}_{H})]f(z_{1})dz_{1}\] \[=\underset{(v_{i},\mathbf{b}_{S\setminus\{1\}},\mathbf{b}_{H})\sim\mathcal{D}^{|S|+|H|}}{\mathbf{E}}[\mathsf{util}^{i}(v_{i},b_{1},\mathbf{b}_{S\setminus\{1\}},\mathbf{b}_{H})]\int_{0}^{\infty}f(z_{1})dz_{1}\qquad\text{(by Equation (17))}\] \[=\underset{(v_{i},\mathbf{b}_{S\setminus\{1\}},\mathbf{b}_{H})\sim\mathcal{D}^{|S|+|H|}}{\mathbf{E}}[\mathsf{util}^{i}(v_{i},b_{1},\mathbf{b}_{S\setminus\{1\}},\mathbf{b}_{H})]\] \[=\underset{(v_{i},\mathbf{b}_{S\setminus\{1\}},\mathbf{b}_{H})\sim\mathcal{D}^{|S|+|H|}}{\mathbf{E}}[\mathsf{util}^{i}(v_{i},\mathbf{b}_{S\setminus\{1\}},\mathbf{b}_{H})]\qquad\text{(by Equation (18))}\] \[=\cdots=\underset{(v_{i},\mathbf{b}_{H})\sim\mathcal{D}^{|H|+1}}{\mathbf{E}}[\mathsf{util}^{i}(v_{i},\mathbf{b}_{H})].\]
By the same reasoning, consider \(S^{\prime}=H\setminus H^{\prime}\). Then it must be that
\[\underset{(v_{i},\mathbf{b}_{S^{\prime}},\mathbf{b}_{H^{\prime}})\sim\mathcal{D}^{|S^{\prime}|+|H^{\prime}|+1}}{\mathbf{E}}[\mathsf{util}^{i}(v_{i},\mathbf{b}_{S^{\prime}},\mathbf{b}_{H^{\prime}})]=\underset{(v_{i},\mathbf{b}_{H^{\prime}})\sim\mathcal{D}^{|H^{\prime}|+1}}{\mathbf{E}}[\mathsf{util}^{i}(v_{i},\mathbf{b}_{H^{\prime}})].\]
Note that \(S^{\prime}\cup H^{\prime}=S\cup H=H^{\prime}\cup H\). Hence,
\[\underset{(v_{i},\mathbf{b}_{S^{\prime}},\mathbf{b}_{H^{\prime}})\sim\mathcal{D}^{|S^{\prime}|+|H^{\prime}|+1}}{\mathbf{E}}[\mathsf{util}^{i}(v_{i},\mathbf{b}_{S^{\prime}},\mathbf{b}_{H^{\prime}})]=\underset{(v_{i},\mathbf{b}_{S},\mathbf{b}_{H})\sim\mathcal{D}^{|S|+|H|+1}}{\mathbf{E}}[\mathsf{util}^{i}(v_{i},\mathbf{b}_{S},\mathbf{b}_{H})].\]
Combining the equalities, we have
\[\underset{(v_{i},\mathbf{b}_{H})\sim\mathcal{D}^{|H|+1}}{\mathbf{E}}[\mathsf{util}^{i}(v_{i},\mathbf{b}_{H})]=\underset{(v_{i},\mathbf{b}_{S},\mathbf{b}_{H})\sim\mathcal{D}^{|S|+|H|+1}}{\mathbf{E}}[\mathsf{util}^{i}(v_{i},\mathbf{b}_{S},\mathbf{b}_{H})]=\underset{(v_{i},\mathbf{b}_{H^{\prime}})\sim\mathcal{D}^{|H^{\prime}|+1}}{\mathbf{E}}[\mathsf{util}^{i}(v_{i},\mathbf{b}_{H^{\prime}})].\]
This corollary implies that when identity \(i\)'s bid is sampled from \(\mathcal{D}\) in a world with \(h\) or more random bids, its expected utility only depends on its identity \(i\). Henceforth we will use the following notation to denote this utility (where the notation \(v_{i}\) means identity \(i\) is bidding the value \(v\)):
\[U_{i}:=\underset{(v_{i},\mathbf{b}_{H})\sim\mathcal{D}^{|H|+1}}{\mathbf{E}}[\mathsf{util}^{i}(v_{i},\mathbf{b}_{H})]. \tag{19}\]
**Lemma 5.5**.: _Fix any \(h\geq 1\), any \(d\geq c\geq 2\), any \(\rho\in(0,1)\). Given any (possibly random) MPC-assisted mechanism that is Bayesian UIC, MIC and SCP in an \((h,\rho,c,d)\)-environment, for any user \(i,j\), it must be that_
\[U_{i}=U_{j}. \tag{20}\]
Proof.: Fix any set \(H\) of at least \(h+1\) users. By the weak symmetry assumption, it must be that
\[\mathop{\mathbf{E}}_{\mathbf{b}_{H}\sim\mathcal{D}^{|H|}}[\mathsf{USW}(v_{i}, \mathbf{b}_{H})]=\mathop{\mathbf{E}}_{\mathbf{b}_{H}\sim\mathcal{D}^{|H|}}[ \mathsf{USW}(v_{j},\mathbf{b}_{H})]\,, \tag{21}\]
where \(v_{i}\) (\(v_{j}\)) denotes that identity \(i\) (\(j\)) bids \(v\), and \(\mathsf{USW}(\mathbf{b})\) denotes the expected social welfare for all users when the input bid vector is \(\mathbf{b}\). For any identity \(l\in H\), for any \(v_{i}\) from identity \(i\), let \(H^{\prime}=H\setminus\{l\}\). It must be
\[\mathop{\mathbf{E}}_{(b_{l},\mathbf{b}_{H^{\prime}})\sim\mathcal{D}^{|H|}}[\mathsf{util}^{l}(b_{l},v_{i},\mathbf{b}_{H^{\prime}})]=\mathop{\mathbf{E}}_{(b_{l},\mathbf{b}_{H^{\prime}})\sim\mathcal{D}^{|H|}}[\mathsf{util}^{l}(b_{l},\mathbf{b}_{H^{\prime}})]\qquad\text{(by Equation (18))}\] \[=U_{l}\qquad\text{(by Equation (19))}\]
By the same reasoning, \(\mathop{\mathbf{E}}_{(b_{l},\mathbf{b}_{H^{\prime}})\sim\mathcal{D}^{|H|}}[ \mathsf{util}^{l}(b_{l},v_{j},\mathbf{b}_{H^{\prime}})]=U_{l}\). Thus, for any value \(v\), the sum of the expected utility of every user in \(H\) is
\[\sum_{l\in H}\mathop{\mathbf{E}}_{\mathbf{b}_{H}\sim\mathcal{D}^{|H|}}[ \mathsf{util}^{l}(v_{i},\mathbf{b}_{H})]=\sum_{l\in H}U_{l}=\sum_{l\in H} \mathop{\mathbf{E}}_{\mathbf{b}_{H}\sim\mathcal{D}^{|H|}}[\mathsf{util}^{l}( v_{j},\mathbf{b}_{H})].\]
Combining this with Equation (21), it must be that for any \(v_{i}\) and \(v_{j}\) (which denote that identity \(i\) and \(j\) bid value \(v\), respectively),
\[\mathop{\mathbf{E}}_{\mathbf{b}_{H}\sim\mathcal{D}^{|H|}}[\mathsf{util}^{i}( v_{i},\mathbf{b}_{H})]=\mathop{\mathbf{E}}_{\mathbf{b}_{H}\sim\mathcal{D}^{|H|}}[ \mathsf{util}^{j}(v_{j},\mathbf{b}_{H})].\]
The lemma follows by taking expectations over \(v\) on both sides.
**Lemma 5.6**.: _Fix any \(h\geq 1\), any \(d\geq c\geq 2\), any \(\rho\in(0,1)\), and suppose that the distribution \(\mathcal{D}\) has bounded support. Given any (possibly random) MPC-assisted mechanism that is Bayesian UIC, MIC and SCP in an \((h,\rho,c,d)\)-environment, for any identity \(i\), it must be that_
\[U_{i}=0.\]
Proof.: Consider a crowded world with many users, all of whose bids are sampled independently at random from \(\mathcal{D}\). Let \(K\) be the total number of users. By Corollary 5.4 and Lemma 5.5, every user's expected utility is the same, where the expectation is taken over the random coins for sampling all bids as well as the random coins of the mechanism. On the other hand, since there are \(K\) total bids, there must exist a user whose confirmation probability is at most \(k/K\), and thus its expected utility is at most \(\max(\mathcal{D})\cdot k/K\), where \(k\) is the block size. The lemma follows by taking \(K\) to be arbitrarily large.
Proof of Theorem 5.2. Fix any set \(H\) of size at least \(h+1\). Then
\[\mathop{\mathbf{E}}_{\mathbf{b}\in\mathcal{D}^{|H|}}[\mathsf{USW}(\mathbf{b}) ]=\sum_{i\in H}\mathop{\mathbf{E}}_{\mathbf{b}\in\mathcal{D}^{|H|}}\mathsf{util }^{i}(\mathbf{b}).\]
By Lemma 5.6, for each identity \(i\) in \(H\),
\[\mathop{\mathbf{E}}_{\mathbf{b}\in\mathcal{D}^{|H|}}\mathsf{util}^{i}( \mathbf{b})=U_{i}=0.\]
Therefore, the social welfare is \(0\).
### Feasibility for Approximate IC: Diluted Threshold-Based Mechanism
Although there is no interesting mechanism for strict incentive compatibility when \(c\geq 2\), there are meaningful mechanisms if we allow approximate incentive compatibility. Still, we assume that each user's true value is drawn i.i.d. from a distribution \(\mathcal{D}\), and \(m\) is the median of \(\mathcal{D}\) such that \(\Pr[x\geq m]=\frac{1}{2}\) (see Remark 2.1). In addition, we assume that there is an upper bound \(T\) on users' true values: \(\Pr[x\leq T]=1\). Without loss of generality, we assume \(T\geq\epsilon\).
**MPC-assisted, diluted threshold-based Mechanism**
**Parameters:** the block size \(k\), the environment parameter \((h,*,c,*)\), the approximation parameter \(\epsilon\), the distribution median \(m\), and the upper bound \(T\) of the distribution.
**Input:** a bid vector \(\mathbf{b}=(b_{1},\ldots,b_{n})\).
**Mechanism:**
* _Confirmation rule._ Let \(R:=\max\left(2c\sqrt{\frac{kT}{\epsilon}},k\right)\). Given a bid vector \(\mathbf{b}\), let \(\widetilde{\mathbf{b}}=(\widetilde{b}_{1},\ldots,\widetilde{b}_{s})\) denote the bids that are at least \(m\). If \(s\leq R\), randomly select \(\frac{k}{R}\cdot s\) bids from \(\widetilde{\mathbf{b}}\) to confirm; otherwise, randomly select \(k\) bids from \(\widetilde{\mathbf{b}}\) to confirm.
* _Payment rule._ Every confirmed bid pays \(m\).
* _Miner revenue rule._ If \(s\geq\frac{h}{4}\), the total miner revenue is \(\overline{\mu}:=m\cdot\min\left(\frac{h}{4}\cdot\frac{k}{R},k\right)\). Otherwise, the total miner revenue is \(0\).
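To make the three rules concrete, here is a minimal Python sketch (ours, not part of the mechanism's specification) of a single run of the diluted threshold-based mechanism; the function name, the rounding of \(\frac{k}{R}\cdot s\) down to an integer, and the use of a seeded random generator are assumptions made for illustration.

```python
import math
import random

def diluted_threshold_mechanism(bids, k, h, c, eps, m, T, seed=0):
    """Sketch of one run: returns (confirmed bids, payment per confirmed bid, miner revenue)."""
    rng = random.Random(seed)
    R = max(2 * c * math.sqrt(k * T / eps), k)
    # Confirmation rule: keep only bids >= m (the median of D), then dilute.
    eligible = [b for b in bids if b >= m]
    s = len(eligible)
    num_confirmed = int(k / R * s) if s <= R else k   # assumption: round down
    confirmed = rng.sample(eligible, num_confirmed)
    # Payment rule: every confirmed bid pays the median m.
    payment = m
    # Miner revenue rule.
    miner_revenue = m * min(h / 4 * k / R, k) if s >= h / 4 else 0.0
    return confirmed, payment, miner_revenue
```

For instance, with \(k=10\), \(c=2\), \(T=1\), and \(\epsilon=0.1\), the dilution factor is \(R=\max(2\cdot 2\cdot\sqrt{10\cdot 1/0.1},\,10)=40\), so at most a \(k/R=1/4\) fraction of the above-median bids is confirmed.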
**Theorem 5.7**.: _Suppose the block size is \(k\). For any \(h\geq 1\), \(c\geq 1\), and \(\epsilon\geq m\cdot\frac{h}{2}\cdot e^{-\frac{h}{16}}\), the diluted threshold posted price auction satisfies strict ex post UIC, Bayesian \(\epsilon\)-MIC, and Bayesian \(\epsilon\)-SCP in an \((h,*,c,*)\)-environment. Moreover, the expected total miner revenue is \(m\cdot\min\left(\frac{h\sqrt{k\epsilon}}{8c\sqrt{T}},\frac{h}{4},k\right)\), where \(T\) is the upper bound of users' true values._
Proof.: We first show that the budget feasibility is satisfied. Since the mechanism confirms \(\min\left(s\cdot\frac{k}{R},k\right)\) number of bids that are at least \(m\), the total payment is \(m\cdot\min\left(s\cdot\frac{k}{R},k\right)\). When \(s\geq\frac{h}{4}\), the total miner revenue is at most \(m\cdot\frac{h}{4}\cdot\frac{k}{R}\leq m\cdot s\cdot\frac{k}{R}\), which is no more than the total payment of the users. Next, we prove UIC, MIC, and SCP separately.
Uic. Since the mechanism is a posted price auction from each user's perspective, each user's best response is to follow the protocol honestly, as in the proof of Theorem 5.1.
Mic. By the same reasoning as in Theorem A.2, by injecting fake bids, the miner can only increase its expected miner revenue if the number of bids that are at least \(m\) from honest users is less than \(\frac{h}{4}\). This happens with a probability at most \(e^{-\frac{h}{16}}\). Thus, the expected total miner revenue increases by no more than
\[\overline{\mu}\cdot e^{-\frac{h}{16}}\leq m\cdot e^{-\frac{h}{16}}\cdot\frac {h}{4}\leq\frac{\epsilon}{2}.\]
Scp. By the same reasoning as in MIC, the expected increase of the miner revenue is at most \(\epsilon/2\) by any deviation. Thus, to show that the mechanism is Bayesian \(\epsilon\)-SCP, it suffices to show that the coalition cannot increase the joint utility of the "users" in the coalition by more than \(\frac{\epsilon}{2}\).
Because injecting bids smaller than \(m\) does not change the confirmation probability and the payment of each confirmed bid is fixed, injecting bids smaller than \(m\) does not increase the users' utilities. On the other hand, injecting bids at least \(m\) will only decrease the probability of each colluding user getting confirmed, which does not increase the users' utilities.
Now, it suffices to show that overbidding and underbidding do not increase the coalition's joint utility since dropping out is equivalent to underbidding to some value less than \(m\). Let \(s\) be the number of bids \(\geq m\) when every user bids truthfully. Each bid is confirmed with probability \(\frac{k}{R}\) if \(s\leq R\), and \(\frac{k}{s}\) if \(s>R\). Let \(s^{\prime}\) be the number of bids \(\geq m\) when the colluding users bid strategically. The colluding users can be partitioned into four groups:
* \(S_{1}\): Those whose true values are less than \(m\) but overbid to values larger than or equal to \(m\);
* \(S_{2}\): Those whose true values are less than \(m\) and bid values less than \(m\);
* \(S_{3}\): Those whose true values are at least \(m\) but underbid to values less than \(m\);
* \(S_{4}\): Those whose true values are at least \(m\) and still bid values at least \(m\).
When the coalition bids strategically, only the utilities of the users in \(S_{4}\) increase compared to the honest case. Consider a colluding user in \(S_{4}\) with the true value \(v\geq m\). Its utility increases by at most
\[(v-m)\cdot\frac{k}{\max\{s^{\prime},R\}}-(v-m)\cdot\frac{k}{\max \{s,R\}}. \tag{22}\]
Note that Equation (22) is positive only when \(s^{\prime}<s\) and \(s>R\). In this case, Equation (22) can be upper bounded by
\[\begin{aligned}(v-m)\left[\frac{k}{\max\{s^{\prime},R\}}-\frac{k}{s}\right]&\leq(T-m)\left[\frac{k}{s^{\prime}}-\frac{k}{s}\right]\\ &\leq(T-m)\cdot\frac{ck}{s(s-c)},\end{aligned}\]
where the last inequality uses \(s^{\prime}\geq s-c\). Since by the choice of \(R\), \(R(R-c)\geq\frac{1}{2}R^{2}\), and \(s>R\) in this case, we have
\[\text{Equation (22)}\leq(T-m)\cdot\frac{2ck}{R^{2}}\leq\frac{2ckT}{R^{2}}\leq\frac{\epsilon}{2c}.\]
Since the coalition contains at most \(c\) users, the joint utility of the users in the coalition increases by at most \(\frac{\epsilon}{2}\), which completes the proof.
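As a quick sanity check on the choice of \(R\) (our own verification, not part of the original proof), the snippet below confirms in exact arithmetic that when \(R=2c\sqrt{kT/\epsilon}\) dominates the max, the per-user bound \(2ckT/R^{2}\) equals exactly \(\epsilon/(2c)\), so a coalition of at most \(c\) users gains at most \(\epsilon/2\).

```python
from fractions import Fraction

def per_user_gain_bound(k, T, eps, c):
    # R^2 for R = 2*c*sqrt(k*T/eps); staying with R^2 keeps the arithmetic exact.
    R_squared = 4 * c ** 2 * Fraction(k) * Fraction(T) / eps
    return 2 * c * k * Fraction(T) / R_squared  # the bound 2ckT / R^2

for k, T, eps, c in [(10, 1, Fraction(1, 10), 2), (50, 4, Fraction(1, 5), 3)]:
    bound = per_user_gain_bound(k, T, eps, c)
    assert bound == eps / (2 * c)               # per-user gain is eps/(2c)
    assert c * bound == eps / 2                 # coalition of <= c users gains at most eps/2
```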
**Lemma 6.1** (Lemma 3.3 of [13]).: _Fix any \(h\geq 1\), \(d\geq c\geq 1\) and \(\rho\in(0,1)\). Given any (possibly randomized) MPC-assisted TFM in an \((h,\rho,c,d)\)-environment that is Bayesian UIC and SCP, for any user \(i\) and any value \(v\), for any \(\ell\geq h\) it must be that_
\[\operatorname*{\mathbf{E}}_{\mathbf{b}\sim\mathcal{D}^{\ell}}\left[\mu( \mathbf{b},v)\right]=\operatorname*{\mathbf{E}}_{\mathbf{b}\sim\mathcal{D}^{ \ell}}\left[\mu(\mathbf{b},0)\right], \tag{23}\]
_where \(\mu(\mathbf{b})\) denotes the total miner revenue when the input bid vector is \(\mathbf{b}\)._
Although the original lemma in [13] is stated for universal mechanisms, the same proof holds for MPC-assisted mechanisms in \((h,\rho,c,d)\)-environment.
**Theorem 6.2** (Theorem 1.2 restated).: _Fix any \(h\geq 1\), \(d\geq c\geq 1\), and \(\rho\in(0,1)\). No MPC-assisted TFM that simultaneously satisfies Bayesian UIC, MIC, and SCP in an \((h,\rho,c,d)\)-environment can achieve more than \(h\cdot\operatorname*{\mathbf{E}}(\mathcal{D})\) expected miner revenue where \(\operatorname*{\mathbf{E}}(\mathcal{D})\) denotes the expectation of the bid distribution \(\mathcal{D}\)._
Proof.: Since the mechanism is MIC, it must be that for any \(\ell\geq h\),
\[\operatorname*{\mathbf{E}}_{\mathbf{b}\sim\mathcal{D}^{\ell}}\left[\mu( \mathbf{b},0)\right]\leq\operatorname*{\mathbf{E}}_{\mathbf{b}\sim\mathcal{D}^ {\ell}}\left[\mu(\mathbf{b})\right], \tag{24}\]
Otherwise, the colluding miners can increase their utility by injecting a \(0\) bid.
Let \(f(\cdot)\) be the p.d.f. of \(\mathcal{D}\). By the law of total expectation,
\[\begin{aligned}\operatorname*{\mathbf{E}}_{\mathbf{b}\sim\mathcal{D}^{n}}[\mu(\mathbf{b})]&=\int_{0}^{\infty}\operatorname*{\mathbf{E}}_{\mathbf{b}^{\prime}\sim\mathcal{D}^{n-1}}[\mu(\mathbf{b}^{\prime},r)]f(r)\,dr\\ &=\int_{0}^{\infty}\operatorname*{\mathbf{E}}_{\mathbf{b}^{\prime}\sim\mathcal{D}^{n-1}}[\mu(\mathbf{b}^{\prime},0)]f(r)\,dr&&\text{by Lemma 6.1}\\ &=\operatorname*{\mathbf{E}}_{\mathbf{b}^{\prime}\sim\mathcal{D}^{n-1}}[\mu(\mathbf{b}^{\prime},0)]\leq\operatorname*{\mathbf{E}}_{\mathbf{b}^{\prime}\sim\mathcal{D}^{n-1}}[\mu(\mathbf{b}^{\prime})]&&\text{by Equation (24)}.\end{aligned}\]
Repeating the above argument for \((n-h)\) steps, we have
\[\begin{aligned}\operatorname*{\mathbf{E}}_{\mathbf{b}\sim\mathcal{D}^{n}}[\mu(\mathbf{b})]&\leq\operatorname*{\mathbf{E}}_{\mathbf{b}^{\prime}\sim\mathcal{D}^{h}}[\mu(\mathbf{b}^{\prime})]\\ &\leq\operatorname*{\mathbf{E}}_{(b^{\prime}_{1},\dots,b^{\prime}_{h})\sim\mathcal{D}^{h}}\left[\sum_{i=1}^{h}b^{\prime}_{i}\right]&&\text{by budget feasibility}\\ &=h\cdot\operatorname*{\mathbf{E}}(\mathcal{D}).\end{aligned}\]
### Necessity of Bayesian Incentive Compatibility
If we insist on ex post incentive compatibility, our new model will not help us overcome the previous impossibility result.
**Lemma 6.3**.: _Fix any \(h\geq 1\), \(d\geq c\geq 1\) and \(\rho\in(0,1)\). Given any (possibly randomized) MPC-assisted TFM in an \((h,\rho,c,d)\)-environment that is ex post \(\epsilon_{u}\)-UIC and \(\epsilon_{s}\)-SCP, for any user \(i\), for any value \(r\) and any \(\mathbf{b}_{-i}\) of length at least \(h\), it must be that_
\[\mu(\mathbf{b}_{-i},r)-\mu(\mathbf{b}_{-i},0)\leq\begin{cases}\frac{2}{\rho}( \epsilon_{s}+\epsilon_{u}),&\text{if }r\leq\epsilon_{s}+\epsilon_{u}\\ \frac{2}{\rho}(\sqrt{r(\epsilon_{s}+\epsilon_{u})}),&\text{if }r>\epsilon_{s}+ \epsilon_{u}.\end{cases} \tag{25}\]
Proof.: The proof of this lemma is the same as the proof of Lemma 3.3 in [13].
**Theorem 6.4**.: _Fix any \(h\geq 1\), \(d\geq c\geq 1\) and \(\rho\in(0,1)\). Suppose there are \(n\) users whose true values are drawn i.i.d. from some distribution \(\mathcal{D}\). Given any (possibly randomized) MPC-assisted TFM in an \((h,\rho,c,d)\)-environment that is ex post \(\epsilon_{u}\)-UIC and ex post \(\epsilon_{s}\)-SCP it must be that for any \(\mathbf{b}=(b_{1},\ldots,b_{n})\) of length \(n>h\),_
\[\mu(\mathbf{b})\leq\frac{2n\epsilon}{\rho}+\frac{2\sqrt{\epsilon}}{\rho}\sum_{i=1}^{n}\sqrt{b_{i}}, \tag{26}\]
_where \(\epsilon=\epsilon_{s}+\epsilon_{u}\)._
Proof.: For any \(\mathbf{b}=(b_{1},\ldots,b_{n})\), it must be that
\[\mu(\mathbf{b}) =\mu(b_{1},b_{2},\ldots,b_{n})\] \[\leq\mu(b_{1},\ldots,b_{n-1},0)+\frac{2}{\rho}\epsilon+\frac{2}{ \rho}\sqrt{b_{n}\epsilon}\] By Lemma 6.3 \[\leq\mu(b_{1},\ldots,b_{n-2},0,0)+\frac{4}{\rho}\epsilon+\frac{2} {\rho}\sqrt{b_{n}\epsilon}+\frac{2}{\rho}\sqrt{b_{n-1}\epsilon}\] \[\leq\cdots\leq\mu(0\ldots,0)+\frac{2n\epsilon}{\rho}+\frac{2 \sqrt{\epsilon}}{\rho}\sum_{i=1}^{n}\sqrt{b_{i}}\] \[\leq\frac{2n\epsilon}{\rho}+\frac{2\sqrt{\epsilon}}{\rho}\sum_{i =1}^{n}\sqrt{b_{i}}.\]
## Acknowledgments
This work is in part supported by NSF awards 2212746, 2044679, 1704788, a Packard Fellowship, a generous gift from the late Nikolai Mushegian, a gift from Google, and an ACE center grant from Algorand Foundation. |
2306.00966 | The Hidden Language of Diffusion Models | Text-to-image diffusion models have demonstrated an unparalleled ability to
generate high-quality, diverse images from a textual prompt. However, the
internal representations learned by these models remain an enigma. In this
work, we present Conceptor, a novel method to interpret the internal
representation of a textual concept by a diffusion model. This interpretation
is obtained by decomposing the concept into a small set of human-interpretable
textual elements. Applied over the state-of-the-art Stable Diffusion model,
Conceptor reveals non-trivial structures in the representations of concepts.
For example, we find surprising visual connections between concepts, that
transcend their textual semantics. We additionally discover concepts that rely
on mixtures of exemplars, biases, renowned artistic styles, or a simultaneous
fusion of multiple meanings of the concept. Through a large battery of
experiments, we demonstrate Conceptor's ability to provide meaningful, robust,
and faithful decompositions for a wide variety of abstract, concrete, and
complex textual concepts, while allowing to naturally connect each
decomposition element to its corresponding visual impact on the generated
images. Our code will be available at: https://hila-chefer.github.io/Conceptor/ | Hila Chefer, Oran Lang, Mor Geva, Volodymyr Polosukhin, Assaf Shocher, Michal Irani, Inbar Mosseri, Lior Wolf | 2023-06-01T17:57:08Z | http://arxiv.org/abs/2306.00966v3 | # The Hidden Language of Diffusion Models
###### Abstract
Text-to-image diffusion models have demonstrated an unparalleled ability to generate high-quality, diverse images from a textual concept (_e.g._, _"a doctor"_, _"love"_). However, the internal process of mapping text to a rich visual representation remains an enigma. In this work, we tackle the challenge of understanding concept representations in text-to-image models by decomposing an input text prompt into a small set of interpretable elements. This is achieved by learning a pseudo-token that is a sparse weighted combination of tokens from the model's vocabulary, with the objective of reconstructing the images generated for the given concept. Applied over the state-of-the-art Stable Diffusion model, this decomposition reveals non-trivial and surprising structures in the representations of concepts. For example, we find that some concepts such as _"a president"_ or _"a composer"_ are dominated by specific instances (_e.g._, _"Obama"_, _"Biden"_) and their interpolations. Other concepts, such as _"happiness"_ combine associated terms that can be concrete (_"family"_, _"laughter"_) or abstract (_"friendship"_, _"emotion"_). In addition to peering into the inner workings of Stable Diffusion, our method also enables applications such as single-image decomposition to tokens, bias detection and mitigation, and semantic image manipulation. Our code will be available at: [https://hila-chefer.github.io/Conceptor/](https://hila-chefer.github.io/Conceptor/).
Figure 1: Concept decomposition using Conceptor. (a) Given a concept of interest and a text-to-image model, we generate a set of images to visually represent the concept. Conceptor then learns to decompose the concept into a small set of interpretable tokens, with the objective of reconstructing the generated images. The decomposition reveals interesting behaviors such as reliance on exemplars (_e.g._, _“Obama”_, _“Biden”_). (b) Our method enables various applications such as single-image decomposition to tokens and allows us to naturally visualize each token in the decomposition.
## 1 Introduction
Consider a simple textual concept such as _"summer"_ or _"love"_. What comes to mind? Naturally, we learn to associate concepts with combinations of other related concepts. For example, some may consider _"summer"_ to be a composition of _"sun"_, _"beach"_ and _"ice cream"_, and _"love"_ to be the composition of _"hug"_, _"friendship"_ and _"romance"_. Studies in the fields of cognitive science and natural language processing [9; 27] support the hypothesis that natural language concepts are represented by humans as a set of symbolic non-arbitrary links to other concepts. For example, as pointed out in [27], the human concept of _"cat"_ is intuitively linked to other concepts such as _"ears"_ and _"whiskers"_. However, while concept representations in the human brain and in natural language have been studied extensively [19; 20; 9], the same cannot be said about image generation models.
Recently, generative models have demonstrated unprecedented capabilities to create high-quality, diverse images based on textual descriptions [2; 10; 36; 39; 43]. However, as these models become increasingly expressive, our ability to understand _how_ they map textual inputs into rich visual representations remains limited. In this work, we aim to demystify this internal process by interpreting the model's latent representations of concepts using its textual space. Concretely, given a textual description of a concept, such as _"happiness"_, we propose to decompose the latent representation of the concept into a small set of interpretable tokens from the model's vocabulary (see Fig. 1). To extract this set of features, our method, Conceptor, learns a _pseudo-token_, which is a sparse linear combination of existing token embeddings, with the objective of reconstructing the concept images. Importantly, we show that this process results in a non-trivial, diverse set of learned tokens.
We use Conceptor to analyze how state-of-the-art text-to-image diffusion models represent various concepts, including concrete (_e.g._, _"a secretary"_) and abstract (_e.g._, _"affection"_) concepts, and special cases of concepts with a double meaning (_e.g._, _"a crane"_). Applying our method over the state-of-the-art Stable Diffusion model reveals interesting observations about the model's behavior. First, as demonstrated in Fig. 1(b), our method can decompose every generated image to its own subset of underlying tokens. We find that, similar to the hypothesis above, the generated images are often represented by direct combinations of related concepts that control different semantic aspects of the generated image. Second, we observe that some concepts such as _"a president"_ or _"a composer"_ are represented mostly by well-known instances from the concept (see Fig. 1 for example), such that the generated images are interpolations of those instances. We additionally find that, consistent with previous work [37], the model learns to mix the multiple meanings of homograph concepts, and leverages these meanings simultaneously when generating images from the concept. Finally, we demonstrate our method's effectiveness in bias detection and semantic image manipulation.
To conclude, our work makes the following contributions:
* We develop a method to decompose a textual concept into a small set of interpretable tokens.
* We demonstrate single-image decompositions to determine what features caused the generation.
* We demonstrate interesting observations such as reliance on exemplars and entanglement of multiple meanings of a concept.
* We demonstrate fine-grained concept editing via manipulation of the coefficients in the decomposition. These manipulations assist in linking textual information to visual features.
* We demonstrate the detection of biases that are not easily observable visually. These observations raise important ethical questions regarding the social implications of leveraging these models.
## 2 Related Work
Early works studied text-guided image synthesis in the context of GANs [46; 55; 48; 52; 49]. More recently, impressive results were achieved with large-scale auto-regressive models [36; 50] and diffusion models [35; 31; 39; 43]. In the context of text-to-image diffusion models, a related line of work aims to introduce personalized concepts to a pre-trained text-to-image model by learning to map a set of images to a "token" in the text space of the model [11; 41; 22; 18]. However, these works do not investigate the inner representations of concepts but focus on concepts unfamiliar to the model.
A similar analysis to ours was conducted on concept representations in the context of language models [32; 23; 26; 28], often through projections to the vocabulary space [13; 12; 34]. Additionally, shared text-image representations such as CLIP [33] have been analyzed in the past [6; 5; 51] and have also been used to explain other models [30]. However, none of these works has been generalized to
image generation models. As far as we can ascertain, the closest effort to explaining text-to-image models is a simple visualization of the cross-attention maps per token in the prompt [15; 14; 4].
Finally, some works [37; 3] have attempted to investigate the images produced by text-to-image diffusion models, and have even found evidence of memorization [3]. However, these works rely entirely on the generated images and do not attempt to dissect the model's inner representations. Unlike all the above, our method analyzes the inner workings of the model using its textual space, and our conclusions transcend those that can be obtained by simply examining the output images.
## 3 Method
### Preliminaries: Latent Diffusion Models
We apply our method over the state-of-the-art Stable Diffusion (SD) model [39]. SD employs a denoising diffusion probabilistic model (DDPM) [44; 17] over an input latent vector \(z_{T}\sim\mathcal{N}(0,1)\) and gradually denoises it. Namely, at each timestep \(t=T,\ldots,1\), the DDPM receives a noised latent vector \(z_{t}\) and produces a less noisy vector \(z_{t-1}\), which serves as the input to the next step.
During the denoising process, the model is typically conditioned on a text encoding for an input prompt \(\mathcal{P}\), produced by a frozen CLIP text encoder [33], which we denote by \(\mathcal{C}\). The text encoder converts the textual prompt \(\mathcal{P}\) to a sequence of tokens, which can be words, sub-words, or punctuation marks. Then, the encoder's vocabulary, \(\mathcal{V}\in\mathbb{R}^{N,d}\), is used to map each token in the prompt to an embedding vector \(w\in\mathbb{R}^{d}\), where \(d\) is the embedding dimension of the encoder, and \(N\) is the number of tokens in the vocabulary. The DDPM model is then trained to minimize the loss,
\[\mathcal{L}_{reconstruction}=\mathbb{E}_{z_{t},\mathcal{P},\varepsilon \sim\mathcal{N}(0,1),t}\left[||\varepsilon-\varepsilon_{\theta}(z_{t},t, \mathcal{C}(\mathcal{P}))||_{2}^{2}\right], \tag{1}\]
for,
\[z_{t}=\sqrt{\alpha_{t}}z+\sqrt{1-\alpha_{t}}\varepsilon, \tag{2}\]
where \(\varepsilon_{\theta}\) is a trained UNet [40], and \(0=\alpha_{T}<\alpha_{T-1}<\cdots<\alpha_{0}=1\). In words, during training, the input image \(x\) is encoded to its corresponding latent vector \(z\). A noise vector \(\varepsilon\) and a timestep \(t\) are drawn randomly. The noise vector \(\varepsilon\) is then added to the latent vector \(z\) as specified in Eq. 2, and the UNet is trained to predict the added noise \(\varepsilon\).
### Conceptor
Figure 2: Illustration of the Conceptor method. Given the concept of interest (_e.g., “a president”_), we generate \(100\) concept images. Next, a learned MLP network maps each word embedding \(w_{i}\) to a coefficient \(f(w_{i})\), and the pseudo token \(w_{N}^{*}\) is constructed as a linear combination of the vocabulary. We then add random noises \(\varepsilon^{1},\ldots,\varepsilon^{|\mathcal{T}|}\) to the images, and use the model to predict the noise based on the text “_a photo of a <\(w_{N}^{*}\)>_”. We train the MLP with the objective of reconstructing the images (\(\mathcal{L}_{reconstruction}\)) and add a sparsity loss to encourage sparse coefficients (\(\mathcal{L}_{sparsity}\)).

Our goal is to discover what features are used to encode a given concept \(c\) in a text-to-image diffusion model \(\varepsilon_{\theta}\). Formally, given a prompt \(\mathcal{P}^{c}\) for the concept \(c\) (_e.g., "a photo of a nurse"_), we learn a representation of the concept using the vocabulary \(\mathcal{V}\). This representation is realized as a pseudo-token \(w^{*}\notin\mathcal{V}\) that is constructed as a weighted combination of a subset of tokens from \(\mathcal{V}\), _i.e._,
\[w^{*} =\sum_{i=1}^{n}\alpha_{i}w_{i} \text{s.t. } w_{i}\in\mathcal{V}, \tag{3}\]
where \(n\leq N\) is a hyperparameter that determines the number of tokens to use in the combination.
Learning the set of \(n\) vocabulary elements \(w_{i}\) and their associated coefficients \(\alpha_{i}\) is done separately for each concept \(c\). To learn a meaningful pseudo-token \(w^{*}\), we optimize it to reconstruct the images generated from \(\mathcal{P}^{c}\), _i.e._, with the same objective as Eq. 1. We note that our method was purposefully constructed such that the optimization process mimics the training process of the model. This design is meant to encourage our pseudo-token to imitate the concept's denoising process. In the following, we describe our method in detail, as illustrated in Fig. 2.
We begin by collecting a training set \(\mathcal{T}\) of \(100\) images generated from the concept. These images will be used for our reconstruction objective. Next, we compute the pseudo-token. Our method, Conceptor, assigns a coefficient \(\alpha\) for each word embedding \(w\) using a learned MLP on \(w\). This way, the rich textual embedding space of CLIP is utilized in determining the coefficients. Specifically,
\[\forall w\in\mathcal{V}:\ \alpha=f(w)=W_{2}(\sigma(W_{1}(w))), \tag{4}\]
where \(\sigma\) is the ReLU non-linearity [1], and \(W_{1},W_{2}\) are linear mappings. Based on \(f\), we compute \(w_{N}^{*}=\sum_{i=1}^{N}f(w_{i})w_{i}\). Note that this pseudo-token is not identical to the output token \(w^{*}\) since it contains all the tokens in \(\mathcal{V}\). \(w^{*}\) is obtained by the top tokens from \(w_{N}^{*}\), as described in Eq. 5.
Next, we turn to describe the reconstruction objective. To compute a reconstruction loss in the form of Eq. 1, we draw a random noise vector \(\varepsilon\sim\mathcal{N}(0,1)\), and a random timestep \(t\in\{1,\dots,T\}\) for each of the images in the training batch, and noise the batch images according to Eq. 2. The reconstruction objective \(\mathcal{L}_{\text{reconstruction}}\) is identical to the training objective specified in Eq. 1. However, \(x\) is now a noised version of a training image from \(\mathcal{T}\), the weights of \(\varepsilon_{\theta}\) are frozen, and the pseudo-token \(w_{N}^{*}\), used by the prompt \(\mathcal{P}^{w_{N}^{*}}\)=_"a photo of a <\(w_{N}^{*}\)>"_, is learned. In other words, while the diffusion method trains the UNet for a frozen text prompt, we freeze the UNet and train the text prompt with the same objective. This is similar to personalization methods such as [11].
As mentioned, the pseudo-token \(w_{N}^{*}\) considers all the tokens in the vocabulary. However, for better interpretability, we wish to represent the input concept with a _small_ set of \(n<<N\) tokens, where \(n\) is a hyperparameter that can be selected by the user. Notate by \(w_{1},\dots,w_{n}\in\mathcal{V}\) the tokens with the highest learned coefficients. We add a regularization loss to encourage the pseudo-token \(w_{N}^{*}\) to be dominated by these top \(n\) tokens, _i.e._,
\[\mathcal{L}_{sparsity}=1-cosine\left(w^{*},w_{N}^{*}\right). \tag{5}\]
This encourages the pseudo-token \(w^{*}\), defined by the top \(n\) tokens in \(\mathcal{V}\), to be similar to \(w_{N}^{*}\), which is defined by the entire vocabulary. Our overall objective function is, therefore,
\[\mathcal{L}=\mathcal{L}_{reconstruction}+\lambda_{sparsity}\mathcal{L}_{sparsity}, \tag{6}\]
In our experiments, we set \(\lambda_{sparsity}=0.001,n=50\). At inference time, we employ the per-concept MLP on the vocabulary \(\mathcal{V}\) to obtain the coefficients and consider the top \(n=50\) tokens to compose \(w^{*}\), as specified in Eq. 3. Implementation details can be found in the supplementary materials.
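To make the optimization loop concrete, the following PyTorch-style sketch (ours, not the authors' released code) outlines one training step; the hidden width of the MLP, the stand-in denoiser `eps_theta`, the placeholder noise schedule, and the vocabulary/embedding sizes are all assumptions, and in the real setting the reconstruction gradient flows through the frozen UNet conditioned on the prompt _"a photo of a <\(w_{N}^{*}\)>"_.

```python
import torch
import torch.nn.functional as F

d, N, n = 768, 49408, 50                    # embedding dim, vocabulary size, top-n tokens (assumed values)
V = torch.randn(N, d)                       # stand-in for the frozen CLIP token-embedding matrix
mlp = torch.nn.Sequential(torch.nn.Linear(d, 128), torch.nn.ReLU(), torch.nn.Linear(128, 1))
opt = torch.optim.Adam(mlp.parameters(), lr=1e-3)

def eps_theta(z_t, t, conditioning):        # stand-in for the frozen UNet epsilon_theta
    return torch.zeros_like(z_t)

def training_step(z, lambda_sparsity=1e-3):
    alpha = mlp(V).squeeze(-1)                               # one coefficient per vocabulary token, Eq. (4)
    w_N = (alpha.unsqueeze(-1) * V).sum(dim=0)               # dense pseudo-token over the full vocabulary
    top = torch.topk(alpha, n).indices
    w_star = (alpha[top].unsqueeze(-1) * V[top]).sum(dim=0)  # sparse pseudo-token w* from the top-n tokens
    # Diffusion-style reconstruction loss on a batch of latents z of the concept images (Eqs. 1-2).
    t = torch.randint(0, 1000, (z.shape[0],))
    a_bar = torch.rand(z.shape[0], 1, 1, 1)                  # placeholder for the schedule alpha_bar_t
    noise = torch.randn_like(z)
    z_t = a_bar.sqrt() * z + (1 - a_bar).sqrt() * noise
    loss_rec = F.mse_loss(eps_theta(z_t, t, w_N), noise)
    loss_sparse = 1 - F.cosine_similarity(w_star, w_N, dim=0)
    loss = loss_rec + lambda_sparsity * loss_sparse
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```

At inference, one pass of `mlp(V)` yields the coefficients, and the top-\(n\) tokens with their weights form \(w^{*}\) as in Eq. 3.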
### Single-Image Decomposition
Given an image \(I\) that was generated by SD for a concept \(c\), we wish to determine the subset of the tokens from the decomposition, \(w^{*}\), that were involved in the generation of this specific image (see Fig. 1(b)). This is done via an iterative process over the tokens \(w_{j}\in w^{*}\) as follows; at each step, we attempt to remove a single token from the decomposition, \(w_{j}^{*}=\sum_{i\neq j}\alpha_{i}w_{i}\), and generate the corresponding image \(I_{j}\) with the prompt \(\mathcal{P}^{w_{j}^{*}}\) and the same seed. Next, we use CLIP's image encoder to determine if \(I_{j}\) is semantically identical to \(I\). If the CLIP score of the two images is higher than \(95\), we remove \(w_{j}\) from \(w^{*}\) and continue to the next token. This criterion avoids incorporating tokens whose removal only causes minor non-semantic modifications to the image \(I\) (such as a slight shift in pose). This process is repeated for all tokens in \(w^{*}\) until no tokens are removed.
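The procedure above can be written down in a few lines. In this sketch (ours), `generate(tokens, coeffs, seed)` and `clip_image_score(a, b)` are assumed helper functions standing in for generation with a pruned pseudo-token prompt (same seed as the original image) and for CLIP image-image similarity on a 0-100 scale; the 95 threshold follows the text.

```python
def single_image_decomposition(image, tokens, coeffs, seed, generate, clip_image_score, threshold=95.0):
    """Greedily drop decomposition tokens whose removal leaves the generated image semantically unchanged."""
    kept = list(zip(tokens, coeffs))
    removed_something = True
    while removed_something:                       # repeat full passes until no token is removed
        removed_something = False
        for j in range(len(kept) - 1, -1, -1):     # scan backwards so removals do not shift pending indices
            trial = kept[:j] + kept[j + 1:]
            trial_tokens = [tok for tok, _ in trial]
            trial_coeffs = [a for _, a in trial]
            trial_image = generate(trial_tokens, trial_coeffs, seed)
            if clip_image_score(trial_image, image) > threshold:   # semantically identical -> drop w_j
                kept = trial
                removed_something = True
    return kept
```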
## 4 Experiments
We conduct experiments to show our method's ability to produce meaningful decompositions for various concepts, from basic concepts (_e.g._, _"dog"_, _"cat"_) to rich concepts (_e.g._, _"doctor"_, _"painter"_) and abstract concepts (_e.g._, _"happiness"_, _"fear"_). Throughout this section, we notate by \(w^{c}\) the token(s) corresponding to the concept \(c\) (_e.g._, for _"a nurse"_, \(w^{c}\) is nurse).
Data. We construct a diverse dataset of \(58\) concepts, which are both concrete and abstract. For the concrete concepts, we consider the basic classes from CIFAR-10 [21], and the list of \(28\) professions from the Bias in Bios dataset [8]. For the abstract concepts, we use \(10\) basic emotions and \(10\) basic actions. A full list of all our considered concepts is provided in the supplementary materials.
Baselines. As far as we can ascertain, our work is the first to tackle concept representation in text-to-image diffusion models. We, therefore, compare our method with reasonable and intuitive baselines. First, the most closely related method to ours is Hard Prompts Made Easy (PEZ) [47]. Given a set of input images, PEZ aims to learn a prompt such that when fed into SD, the resulting images will match the input images. This is done by prompt optimization to maximize the CLIP score between the text and the image. Second, we consider two baselines that leverage the state-of-the-art image captioning model BLIP-2 [24]: (ii) _BLIP-2 sentence_ extracts a single caption per concept. This is done by decoding the mean CLIP image embedding of the set of \(100\) training images \(\mathcal{T}\) generated for this concept. (iii) _BLIP-2 combination_ creates one caption per each image \(I\in\mathcal{T}\) and ranks the tokens obtained from all of the training set by their frequency across all such captions. Then, a single token is computed as the combination of the tokens weighted by their frequencies.
### Motivation Experiments
In the following, we provide motivational experiments to better understand the capabilities of our method. We begin by addressing the seemingly unintuitive result that \(w^{*}\neq w^{c}\). One may expect \(w^{c}\), which generated the concept images, to be better than any other token in denoising them. However, this is not necessarily the case, since (1) \(w^{*}\) is optimized over linear combinations of tokens, including \(w^{c}\). Therefore, given a successful optimization process, \(w^{*}\) is expected to perform at least as well as \(w^{c}\), and potentially even better. (2) \(w^{c}\) generates each image using a specific initial random noise but is not guaranteed to be better in denoising the images after applying other random noises. Next, we compare the denoising quality of \(w^{c}\) and \(w^{*}\) quantitatively and qualitatively.
Figure 3: Denoising tests comparing the concept token, \(w^{c}\), and our token, \(w^{*}\). (a) Quantitative test on all \(58\) concepts using \(100\) test images per concept. For each timestep, we draw random noises for all images and compare the reconstruction with our pseudo token \(w^{*}\), the concept token \(w^{c}\), and \(w^{o}\), a continuous token optimized for the same task (Optimized Token). We report the MSE after subtracting the reconstruction score of a random token, to reflect different levels of noise (lower is better). Note that the graph does not reflect a convergence of a reconstruction process, as timesteps are independent. Error bars are marked on each timestep (zoom in for better visibility). (b) Qualitative denoising examples. An image \(I\) is generated from the input concept “_a nurse_”, and different random noises are added \(\varepsilon_{1},\dots,\varepsilon_{5}\) (1st row). Denoising is done with \(w^{c}\) (2nd row) and with \(w^{*}\) (3rd row).

First, we wish to quantitatively verify that the denoising capabilities of \(w^{*}\) generalize to unseen images generated by the concept, beyond those used for training (Sec. 3). We begin by sampling a test set of \(100\) images for each concept, generated by \(w^{c}\). Then, for each denoising step \(t\in\{1,\dots,T\}\) and each test image, we draw a random noise and apply it as in Eq. 2. Finally, we test the reconstruction loss specified in Eq. 1 with the pseudo-token \(w^{*}\) compared to the concept token \(w^{c}\). We additionally compare to \(w^{o}\), a vector optimized with the same reconstruction objective on the entire continuous embedding space \(\mathbb{R}^{d}\) without restrictions, similar to [11]. Note that, unlike \(w^{*}\), \(w^{o}\) does not offer interpretable information, but it provides a lower bound on the obtainable error. We observe that there is a large variance in the MSE score across timesteps. Latents in early steps are very noisy, and therefore obtain a very high loss (\(\sim 0.8\)), while the last steps contain virtually no noise, and the MSE is very low (\(\sim 10^{-3}\)). Therefore, we compute a baseline score to normalize the scale by denoising the same images using a _random token_, which serves as an upper bound for the MSE. The final MSE score for each token \(w\in\{w^{c},w^{*},w^{o}\}\) is obtained by subtracting the MSE score of the random token from the MSE score of \(w\), such that we maintain the convention that a lower score is better. Fig. 3(a) presents the results averaged across all \(58\) concepts, showing that the concept token \(w^{c}\) obtains a score worse than both \(w^{*}\) and the optimized token \(w^{o}\), which obtains the best results. These differences are statistically significant, as shown by the error bars marked on every timestep. Evidently, by optimizing a token in a larger domain, we can outperform the original concept token in the denoising task.
Fig. 3(b) provides a qualitative comparison between \(w^{c}\) and \(w^{*}\). An input image \(I\) generated by _"a photo of a nurse"_ is noised and then denoised back from different denoising steps, using the concept token \(w^{c}\) and our pseudo-token \(w^{*}\). As can be seen, there are cases where, given a different random seed, \(w^{c}\) does not preserve the features in the original image \(I\) (_e.g._, it adds hats, face masks, and black and white effects), while \(w^{*}\) does. Intuitively, this can be attributed to the rich representation learned by \(w^{*}\), which can include both semantic and style features. Both experiments motivate the diversity of the learned decomposition. Since \(w^{c}\) is not necessarily optimal for Eq. 1, \(w^{*}\) learns additional features to improve the denoising quality. Thus, \(w^{*}\) balances two objectives- interpretability and better reconstruction.
Note that since \(N>>d\), there are many linear combinations that yield \(w^{*}\). However, due to the specific MLP-based structure and the sparsity constraints, the decomposition is stable, see supplementary materials for empirical evidence and additional experiments.
Figure 4: Single-image decompositions by Conceptor. Each of the examples depicts an image generated by Stable Diffusion for the concept, and its corresponding decomposition. The top rows present images found to contain two concepts. The last row shows more complex mixtures by adding one token at a time from left to right (original image by SD on the right).
### Qualitative and Quantitative Evaluation
reconstruction. Next, we conduct qualitative comparisons of Conceptor to the baselines. Tab. 2 compares the textual decompositions, showing that Conceptor learns a variety of features for each concept. Some concepts, such as _"a composer"_, _"a dog"_ are dominated by instances of the concept (_e.g_. Beethoven, Schubert), while others, such as _"affection"_, are represented by abstract and concrete tokens associated with the concept (_e.g_., loving, puppies). In contrast, the baselines either produce decompositions that are not interpretable (PEZ) or oversimplistic (BLIP-2). Fig. 5 presents a comparison of the reconstruction by each method given the same \(4\) seeds. As can be observed, Conceptor successfully preserves the image features, even when the concept entails fine features (_e.g_., _"a bird"_). Conversely, the baseline methods either produce results that lack diversity (_e.g_., BLIP-2 only generates black and white birds), or do not accurately embody all features of the concept (PEZ). The corresponding images and decompositions of the examples in Tab. 2 and Fig. 5 can be found in the supplementary materials, alongside experiments using exemplar-based concepts.
Feature Visualization. Our method enables natural visualization for each token in the decomposition by manipulating its corresponding coefficient. By increasing the coefficient, the presence of the token becomes stronger, and vice versa. Fig. 6 presents examples of such visualizations.
First, following [37], we present examples of dual-meaning concepts (first, second row of Fig. 6). As can be seen, our decomposition allows us to visualize the impact of each of the meanings on the generated images. We observe that in some cases, such as _"a big apple"_, both meanings are generated in the original image, while other cases, such as _"crane"_ generate a single object. Even in the latter cases, our method demonstrates that both meanings impact the generated image, implicitly. For example, when reducing the feature _stork_ from _"crane"_ we observe that the structure of the crane changes. Evidently, the dual meaning of the bird influenced the shape of the generated crane.
Next, we present an example of a feature that may appear unintuitive, abstract, for the concept _"sculpture"_. We observe that this feature controls the level of detail in the generated image. Finally, we present an example of an interpolation of notable instances. As can be observed, the token Trump controls the semantic similarity to Donald Trump and adds features that correspond to his identity.

Figure 5: Reconstruction comparison to the baselines. For each concept (column) we generate the images from scratch starting from the same pure random noise with our method and all the baselines, and compare to the original concept images generated by Stable Diffusion (Original Images).

Figure 6: Feature visualizations. For each of the listed concepts, we manipulate a single textual token from the decomposition and observe its visual impact on the generated image.
Bias Detection and Mitigation. One important capability of our method is bias discovery and mitigation. Text-to-image models, and specifically Stable Diffusion, have been shown to represent social biases [7; 29]. The decompositions obtained by Conceptor can be used to discover such biases by analyzing the tokens in the decomposition. Tab. 3 lists some concepts that contain features that may be considered socially insensitive. Our method detects behaviors that are not necessarily observable visually, such as millennials for _"drinking"_. These findings substantiate the need to conduct more research on concept representations in text-to-image models, as biases can impact the generation even if they are hard to detect visually. Using our method, users can also choose to generate debiased versions of these concepts by employing manipulations, as demonstrated in Fig. 6, which exemplifies our method's ability to perform fine-grained concept editing. This manipulation enables the user to gradually decrease the biased tokens until an equal representation is achieved.
### Ablation Study
We conduct an ablation study to examine the impact of each component on our method. First, we ablate the choice of employing an MLP to learn the coefficients and instead learn them directly. Next, we ablate each of our loss functions and the choice of \(n=50\). Last, we ablate our choice of vocabulary \(\mathcal{V}\) and instead extract the top \(50\) tokens by their CLIP similarity to the mean image.
The results are listed in Tab. 4. Replacing the MLP with a vector of weights is detrimental to all metrics except for token diversity. Both loss functions \(\mathcal{L}_{sparsity},\mathcal{L}_{reconstruction}\) are required to achieve good results. Without the reconstruction loss, the images are not linked to the decomposition, which severely damages both LPIPS and FID. Without the sparsity loss, the top \(50\) tokens do not necessarily reflect the learned token \(w^{*}_{N}\), and all metrics except for word diversity deteriorate. Additionally, we observe that the performance decreases when employing \(n=10\), since the decomposition is not rich enough to represent all features. For \(n=100\), the results are similar to the full method, other than the diversity which improves a little. This indicates that Conceptor is relatively stable to this parameter. Finally, when only considering the top words by CLIP similarity to the mean image, the performance decreases substantially, supporting the reliance of our method on a wide variety of tokens from the vocabulary, and not just the ones most correlated with the images.
| Method | CLIP [33] pairwise \(\uparrow\) | LPIPS [53] \(\downarrow\) | FID [16] \(\downarrow\) | Token diversity [38] \(\uparrow\) |
| --- | --- | --- | --- | --- |
| Conceptor | **87.0** \(\pm\) 5.5 | **0.45** \(\pm\) 0.07 | **107.96** \(\pm\) 31.0 | 69.7 \(\pm\) 3.4 |
| w/o MLP | 78.0 \(\pm\) 6.7 | 0.55 \(\pm\) 0.06 | 142.88 \(\pm\) 45.1 | **75.9** \(\pm\) 3.0 |
| w/o \(\mathcal{L}_{sparsity}\) | 80.3 \(\pm\) 11.6 | 0.52 \(\pm\) 0.09 | 146.4 \(\pm\) 63.4 | 73.2 \(\pm\) 2.1 |
| w/o \(\mathcal{L}_{reconstruction}\) | 61.5 \(\pm\) 8.6 | 0.65 \(\pm\) 0.06 | 246.33 \(\pm\) 73.8 | 68.3 \(\pm\) 3.7 |
| \(n=10\) | 82.9 \(\pm\) 7.8 | 0.49 \(\pm\) 0.11 | 129.41 \(\pm\) 55.3 | 54.6 \(\pm\) 9.4 |
| \(n=100\) | 85.6 \(\pm\) 6.9 | 0.47 \(\pm\) 0.07 | 114.36 \(\pm\) 39.7 | 72.8 \(\pm\) 1.8 |
| CLIP [33] top words | 80.1 \(\pm\) 9.9 | 0.513 \(\pm\) 0.1 | 130.9 \(\pm\) 57.2 | 66.3 \(\pm\) 3.9 |

Table 4: Ablation study of our method, conducted on the professions subset [8].

| Concept | Decomposition |
| --- | --- |
| Secretary | clerk, prosecutor, teachers, wife, hostess, actress, womens, girl, ladies... |
| Opera singer | vanity, fat, obese, chiffon, soprano, overweight... |
| Pastor | Nigerian, directors, gospel, worship, tux... |
| Journalist | stranger, refugee, press, paparazzi, jews, tripod, photographing... |
| Drinking | cheating, millennials, liquid, blonde, pitcher, drunk, toast, smiling, booze... |

Table 3: Top tokens obtained by Conceptor that reveal potential social insensitivity.

## 5 Discussion and Limitations

While our method provides faithful concept decompositions, there are several limitations to consider. First, as mentioned in Sec. 4.2, the visual impact of the obtained tokens may not be completely aligned with their lexical meaning. For example, the token suffrage, which refers to a historical movement for women's rights, is highly influential when generating images of nurses. Visually, for the concept _"nurse"_, this token changes the style of the image to match that of a century ago.
Additionally, as demonstrated in Sec. 4.1, our pseudo-token \(w^{*}\) improves the denoising quality of \(w^{c}\), which indicates the information added to \(w^{*}\) beyond the tokens of the concept. We find that for complex tokens with rich representations such as the professions or the abstract concepts, our learned token \(w^{*}\) improves the denoising quality significantly and learns a larger variety of features, while for simpler concepts such as those of CIFAR-10, the improvement over \(w^{c}\) is less significant. We refer the reader to the supplementary materials for further details.
## 6 Conclusions
How does a generative model perceive the world? Focusing on text-to-image models, we investigate the model's internal knowledge of real-world concepts. We ask not _what_ is generated but _why_ these generations came to be. Our method, Conceptor, proposes a decomposition scheme that mimics the model's training process to uncover its inner representations. We find that, in correlation with humans, these models learn to link concepts to other related concepts. Via a per-image decomposition algorithm, we observe that the model leverages these connections in non-trivial ways that transcend the lexical meaning of the tokens. For example, in Fig. 4, _"sweet peppers"_ are linked to _"fingers"_ due to their structural similarity, in Fig. 1 _"snail"_ is linked to _"winding"_ due to the texture of its shell, _etc_. These findings demonstrate a generation process that is based on a semantic image-centered organization of the data, rather than on simple memorization. Furthermore, our method exposes less intuitive behaviors, such as the reliance on exemplars, mixing dual meanings of concepts, or non-trivial biases. In all cases, the novel paradigm allows us to peer into the inner workings of a model that, similarly to other foundation models, can still be considered an enigma.
## 7 Acknowledgements
This work was done during an internship at Google Research. We thank Shiran Zada, Ariel Ephrat, Omer Tov, and Roni Paiss for their early feedback and insightful discussions.
|
2310.13742 | Simulating Scattering of Composite Particles | We develop a non-perturbative approach to simulating scattering on classical
and quantum computers, in which the initial and final states contain a fixed
number of composite particles. The construction is designed to mimic a particle
collision, wherein two composite particles are brought in contact. The initial
states are assembled via consecutive application of operators creating
eigenstates of the interacting theory from vacuum. These operators are defined
with the aid of the M{\o}ller wave operator, which can be constructed using
such methods as adiabatic state preparation or double commutator flow equation.
The approach is well-suited for studying strongly coupled systems in both
relativistic and non-relativistic settings. For relativistic systems, we employ
the language of light-front quantization, which has been previously used for
studying the properties of individual bound states, as well as for simulating
their scattering in external fields, and is now adopted to the studies of
scattering of bound state systems.
For simulations on classical computers, we describe an algorithm for
calculating exact (in the sense of a given discretized theory) scattering
probabilities, which has cost (memory and time) exponential in momentum grid
size. Such calculations may be interesting in their own right and can be used
for benchmarking results of a quantum simulation algorithm, which is the main
application of the developed framework. We illustrate our ideas with an
application to the $\phi^4$ theory in $1+1\rm D$. | Michael Kreshchuk, James P. Vary, Peter J. Love | 2023-10-20T18:00:50Z | http://arxiv.org/abs/2310.13742v2 | # Simulating Scattering of Composite Particles
###### Abstract
We develop a non-perturbative approach to simulating scattering on classical and quantum computers, in which the initial and final states contain a fixed number of composite particles. The construction is designed to mimic a particle collision, wherein two composite particles are brought in contact. The initial states are assembled via consecutive application of operators creating eigenstates of the interacting theory from vacuum. These operators are defined with the aid of the Moller wave operator, which can be constructed using such methods as adiabatic state preparation or double commutator flow equation.
The approach is well-suited for studying strongly coupled systems in both relativistic and non-relativistic settings. For relativistic systems, we employ the language of light-front quantization, which has been previously used for studying the properties of individual bound states, as well as for simulating their scattering in external fields, and is now adopted to the studies of scattering of bound state systems.
For simulations on classical computers, we describe an algorithm for calculating exact (in the sense of a given discretized theory) scattering probabilities, which has cost (memory and time) exponential in momentum grid size. Such calculations may be interesting in their own right and can be used for benchmarking results of a quantum simulation algorithm, which is the main application of the developed framework. We illustrate our ideas with an application to the \(\phi^{4}\) theory in \(1+1\)D.
## I Introduction
Calculation of scattering probabilities is one of the primary tasks of quantum field theory (QFT). Most analytic approaches describe scattering using the S-matrix, a unitary operator connecting the states of incoming and outgoing particles, which are considered asymptotically free. S-matrix theory has proven enormously successful in the studies of weakly coupled quantum field theories, such as quantum electrodynamics or high-energy quantum chromodynamics [1].
Most proposals for _ab initio_ simulation of scattering in QFT rely either on significant reduction of problem complexity (i.e., truncation of many-body Hilbert space to a small number of particles and/or classical description of fields [2; 3; 4; 5]) or on the usage of quantum hardware [6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16], with the latter being the primary application of results of this work.
Since the early days of quantum computation [17; 18], quantum simulation has been recognized as one of its primary applications, and has been considered a way of achieving a long-standing dream of quantum physicists -- solving quantum field theory non-perturbatively. Currently, quantum simulation is the only proposal offering a way of making calculations in general quantum many-body systems and field theories that requires computational resources growing polynomially with the problem size and precision [6; 7; 8; 19; 20; 21; 22; 23; 24; 25]. While certain properties of relativistic bound states can be calculated on near-term devices with limited resources [26; 27; 28], simulation of scattering is likely to require quantum computers functioning in the fault-tolerant regime [29; 30; 31; 32].
Most existing approaches [6; 7; 8; 11; 12; 13; 14] to the quantum simulation of scattering comprise the following steps: (a) non-trivial preparation of the vacuum state; (b) preparation of wave packets;1 (c) adiabatic interaction turn-on and turn-off; (d) measurement. In this work, we propose a paradigm for simulation of scattering, in which: (a) the initial and final scattering states are defined as states of a fixed number of composite particles in the interacting theory; (b) the many-particle states belong to the theory with momentum cut-offs higher than those in the theories describing the states of individual particles. That is, the addition of new particles into
the system requires the addition of higher momentum modes, upon which the Hamiltonian operator can act. Due to the increase of the total momentum of the system, states containing multiple composite particles are not eigenstates of the combined system, and, therefore, undergo non-trivial time evolution.
Our approach combines ideas from non-relativistic many-body physics [33; 34; 35], quantum field theory [36; 37; 38; 39; 40; 41; 42] and quantum simulation [12; 13; 7]. Furthermore, it is based upon the second-quantized formulation, and so allows one to employ appropriate basis sets to efficiently describe systems involving localized bound states [40].
In application to high-energy physics, our work can be considered as an extension of the time-dependent Basis Light Front Quantization (tBLFQ) which has been successfully used for a number of scattering applications to date [2; 3; 4; 5; 15]. In tBLFQ, the initial and final states of the system are typically chosen to be the eigenstates of some interacting Hamiltonian. The non-trivial evolution of such eigenstates in time is then achieved by coupling the original system to an external field. This, for example, allows one to naturally model the scattering of light particles on heavy nuclei [41]. We generalize this construction to include situations in which scattering is caused by the interacting quantum field itself, as in the cases when beams of particles of comparable mass are scattered.
In Section II, we present the main ideas of our approach, while leaving a more general discussion to Appendix A. In Section III, we illustrate these ideas using as an example the formulation of the \(\phi^{4}\) theory in \(1+1\)D obtained within the Discretized Light-Cone Quantization framework [38]. In Section IV, we discuss a detailed implementation of our approach via a purely classical calculation. This will require resources exponential in the momentum grid size, and will give the exact solution to the problem. Here and in what follows, _exact solution_ refers to a solution of the chosen discretized model approximating the original continuous theory. As such, we do not address the effects arising at the stage of discretizing the continuous theory. One way to approach the "exact" solution in the sense of the continuous theory amounts to extrapolating simulation results to infinite values of cutoffs [43]. In Section V we describe an efficient quantum simulation algorithm, which is the main motivation for introducing the new framework.
## II States of multiple composite particles
In this Section, we introduce the main conceptual ideas of our method. For clarity, we will make certain simplifications (e.g. adopt the momentum basis for single-particle states and ignore other quantum numbers) while deferring a more general discussion to Appendix A.
Consider a non-interacting theory described by the Hamiltonian operator \(\mathcal{H}_{\text{free},\,\Lambda}\), provided that \(\mathcal{H}_{\text{free},\,\Lambda}\) can be written in terms of creation and annihilation operators, and modes of momentum up to \(\Lambda\) are included. In such a theory, the single-particle states of momentum \(|\mathbf{p}|\leq\Lambda\) are created from the vacuum state as
\[|\mathbf{p}\rangle^{(\Lambda)}=a^{\dagger}_{\Lambda,\,\mathbf{p}}|\text{vac}\rangle\,. \tag{1}\]
Having in mind applications of our formalism to non-relativistic many-body theory and Light-Front (LF) QFT [36; 37; 38; 39; 40] (see also footnote 4 below), here and in what follows we assume the uniqueness of the vacuum state, which is the same for both free and interacting theories [44; 45]. The notation \(|\mathbf{p}\rangle^{(\Lambda)}\) in eq. (1) was introduced solely for uniformity purposes, as \(a^{\dagger}_{\Lambda_{1},\,\mathbf{p}}\equiv a^{\dagger}_{\Lambda_{2},\,\mathbf{p}} \equiv a^{\dagger}_{\mathbf{p}}\) and \(|\mathbf{p}\rangle^{(\Lambda_{1})}\equiv|\mathbf{p}\rangle^{(\Lambda_{2})}\equiv|\mathbf{ p}\rangle\) (where it is assumed that \(|\mathbf{p}|\leq\min{(\Lambda_{1},\Lambda_{2})}\)).
Note that in order to include states of the form \(a^{\dagger}_{\Lambda_{1},\,\mathbf{p}_{1}}a^{\dagger}_{\Lambda_{2},\,\mathbf{p}_{2}}| \text{vac}\rangle\), one generally has to consider a theory with the cutoff being at least \(\Lambda_{1}+\Lambda_{2}\):
\[|\mathbf{p}_{1},\mathbf{p}_{2}\rangle^{(\Lambda_{1}+\Lambda_{2})}=a^{\dagger}_{\Lambda _{1},\,\mathbf{p}_{1}}a^{\dagger}_{\Lambda_{2},\,\mathbf{p}_{2}}|\text{vac}\rangle\,. \tag{2}\]
In this way one can always define the action of \(a^{\dagger}_{\Lambda,\,\mathbf{p}}\) on any Fock state -- in accord with its action in the theory with \(\Lambda=\infty\), which includes Fock states of arbitrary momentum.
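For a discretized theory, this bookkeeping can be made explicit. The short NumPy sketch below (ours, purely illustrative) represents a few momentum modes with a per-mode occupancy cutoff and builds the free creation operators as matrices on the tensor-product Fock space, so that enlarging the mode list plays the role of raising the cutoff \(\Lambda\); all sizes and mode labels are assumptions.

```python
import numpy as np
from functools import reduce

def single_mode_adag(n_max):
    """Creation operator a^dagger for one mode, truncated to occupancies 0..n_max."""
    a_dag = np.zeros((n_max + 1, n_max + 1))
    for occ in range(n_max):
        a_dag[occ + 1, occ] = np.sqrt(occ + 1)
    return a_dag

def mode_adag(mode, n_modes, n_max):
    """a^dagger acting on the chosen mode of an n_modes-mode Fock space (identity elsewhere)."""
    eye = np.eye(n_max + 1)
    factors = [single_mode_adag(n_max) if m == mode else eye for m in range(n_modes)]
    return reduce(np.kron, factors)

n_modes, n_max = 3, 2                                        # three momentum modes, at most two quanta each
vac = np.zeros((n_max + 1) ** n_modes); vac[0] = 1.0         # |vac>: all occupancies zero
one_particle = mode_adag(0, n_modes, n_max) @ vac            # analogue of Eq. (1)
two_particle = mode_adag(1, n_modes, n_max) @ one_particle   # analogue of Eq. (2)
```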
In analogy with eq. (1), in the interacting theory described by the Hamiltonian \(\mathcal{H}_{\text{full},\,\Lambda}\), we introduce the second-quantized operator \(\mathsf{A}^{\dagger}_{\Lambda,\,\mathbf{p}}\) creating a single-particle state of momentum \(\mathbf{p}\) (in the sense of the interacting theory) from vacuum:
\[|\widetilde{\mathbf{p}}\rangle^{(\Lambda)}=\mathsf{A}^{\dagger}_{\Lambda,\,\mathbf{p} }|\text{vac}\rangle\,. \tag{3}\]
One possible way of constructing \(\mathsf{A}^{\dagger}_{\Lambda,\,\mathbf{p}}\) amounts to writing \(|\widetilde{\mathbf{p}}\rangle^{(\Lambda)}\) as a polynomial \(\mathsf{P}^{\dagger}_{\Lambda,\,\mathbf{p}}\) in the free creation operators \(a^{\dagger}_{\Lambda,\,\mathbf{p}}\) of momenta up to \(\Lambda\), acting on the vacuum state,
\[\begin{split}|\widetilde{\mathbf{p}}\rangle^{(\Lambda)}=\mathsf{P}^{\dagger}_{\Lambda,\,\mathbf{p}}|\text{vac}\rangle\,,\\ \mathsf{P}^{\dagger}_{\Lambda,\,\mathbf{p}}&=\sum_{n}\int^{\Lambda}\mathrm{d}\mathbf{p}_{1}\dots\mathrm{d}\mathbf{p}_{n}\\ &\times f^{(n)}(\mathbf{p}_{1},\dots,\mathbf{p}_{n})\prod_{j=1}^{n}a^{\dagger}_{\Lambda,\,\mathbf{p}_{j}}\,,\end{split} \tag{4}\]
and then defining \(\mathsf{A}^{\dagger}_{\Lambda,\,\mathbf{p}}\equiv\mathsf{P}^{\dagger}_{\Lambda,\, \mathbf{p}}\). Note, however, that since \(\mathsf{P}^{\dagger}_{\Lambda,\,\mathbf{p}}\) only carries information on the state \(|\widetilde{\mathbf{p}}\rangle^{(\Lambda)}\) (in the form of how the latter can be prepared from the vacuum), its action on any other state in the Fock space is not, generally, particularly sensible. Nevertheless, in certain cases it may serve as an approximate version of the creation operator defined below.
Alternatively, one can define \(\mathsf{A}^{\dagger}_{\Lambda,\,\mathbf{p}}\) by unitarily rotating the operator \(a^{\dagger}_{\Lambda,\,\mathbf{p}}\):
\[\mathsf{A}^{\dagger}_{\Lambda,\mathbf{p}}=\mathcal{U}_{\Lambda}a^{\dagger}_{ \Lambda,\,\mathbf{p}}\,\mathcal{U}^{\dagger}_{\Lambda}\,, \tag{5}\]
with \(\mathcal{U}_{\Lambda}\) being a transformation relating the bases of the free and interacting theories. In this case, using the language of spectral theory, we shall refer to \(\mathcal{U}_{\Lambda}\) as the Møller _wave operator_ [46; 47; 48; 49]. Definition (5) is consistent with eq. (3),
\[\begin{split}\mathsf{A}^{\dagger}_{\Lambda,\mathbf{p}}|\text{vac} \rangle&=\mathcal{U}_{\Lambda}a^{\dagger}_{\Lambda,\,\mathbf{p}} \mathcal{U}^{\dagger}_{\Lambda}|\text{vac}\rangle\\ &=\mathcal{U}_{\Lambda}a^{\dagger}_{\Lambda,\,\mathbf{p}}|\text{vac} \rangle=|\widetilde{\mathbf{p}}\rangle^{(\Lambda)}\,,\end{split} \tag{6}\]
given that \(\mathcal{U}_{\Lambda}\) acts on the vacuum and single-particle states as follows:2
Footnote 2: Equations (6) and (7) can be generalized to theories with non-trivial interacting vacua as \(\mathsf{A}^{\dagger}_{\Lambda,\mathbf{p}}|\widetilde{\text{vac}}\rangle^{(\Lambda) }=\mathcal{U}_{\Lambda}a^{\dagger}_{\Lambda,\,\mathbf{p}}\mathcal{U}^{\dagger}_{ \Lambda}|\text{vac}\rangle\) and \(\mathcal{U}_{\Lambda}|\text{vac}\rangle=|\widetilde{\text{vac}}\rangle^{(\Lambda)}\), where \(|\widetilde{\text{vac}}\rangle^{(\Lambda)}\) is the vacuum state in the interacting theory with cutoff \(\Lambda\). In such cases, one would also need to distinguish vacua of interacting theories with different cutoffs.
\[\mathcal{U}_{\Lambda}|\text{vac}\rangle\ =|\text{vac}\rangle\ \, \tag{7}\]
\[\mathcal{U}_{\Lambda}|\mathbf{p}\rangle^{(\Lambda)}=|\widetilde{\mathbf{p}}\rangle^{( \Lambda)}\,. \tag{8}\]
If \(\mathcal{U}_{\Lambda}\) respects the symmetries of the theory, definition (5) implies that \(\mathsf{A}^{\dagger}_{\Lambda,\mathbf{p}}\) and \(a^{\dagger}_{\Lambda,\,\mathbf{p}}\) create states with the same values of charges. For an operator creating a state in the interacting theory whose quantum numbers differ from those of the free particles (such as proton), eq. (5) has to be generalized by replacing \(a^{\dagger}_{\Lambda,\,\mathbf{p}}\) with an operator creating the ground state of the free theory in the sector with corresponding charges, see Appendix A.
Definition (5) is closely related to the notion of _effective particles_ in QFT [50, 51, 52, 53, 54, 55, 56, 57, 58, 59]. Unlike \(\mathsf{P}^{\dagger}_{\Lambda,\,\mathbf{p}}\), the wave operator \(\mathcal{U}_{\Lambda}\) carries information about the entire spectrum of the system, up to momentum cutoff \(\Lambda\). Unless otherwise specified, in the remainder of the paper we shall, therefore, assume the usage of definition (5).
Note that, in contrast with the free theory where \(a^{\dagger}_{\Lambda,\,\mathbf{p}}\) simply increases the occupancy of mode \(\mathbf{p}\) in a Fock state, in the interacting theory \(\mathsf{A}^{\dagger}_{\Lambda,\mathbf{p}}\) may act on all modes of momentum up to \(\Lambda\), which implies that, generally, \(\mathsf{A}^{\dagger}_{\Lambda_{1},\,\mathbf{p}}\neq\mathsf{A}^{\dagger}_{ \Lambda_{2},\,\mathbf{p}}\) and \(|\widetilde{\mathbf{p}}\rangle^{(\Lambda_{1})}\neq|\widetilde{\mathbf{p}}\rangle^{( \Lambda_{2})}\).
Following the analogy with the free theory, we define states with a fixed number of particles in the interacting theory by successive application of creation operators (5) to vacuum:
\[\mathsf{A}^{\dagger}_{\Lambda_{1},\,\mathbf{p}_{1}}\mathsf{A}^{\dagger}_{\Lambda_{2 },\,\mathbf{p}_{2}}\dots|\text{vac}\rangle\,, \tag{9}\]
which mimics the definition (2). Here we set aside the detailed issue of normalization of state (9).
Note that whether the operators \(\mathsf{A}^{\dagger}_{\Lambda_{j},\,\mathbf{p}_{j}}\) are defined as in eq. (4) or as in eq. (5), the states in (9) are not the eigenstates of \(\mathcal{H}_{\text{full},\,\Lambda_{1}+\Lambda_{2}+\dots}\), since neither construction assumes solving \(\mathcal{H}_{\text{full},\,\Lambda_{1}+\Lambda_{2}+\dots}\). Therefore, the time evolution of states of the form (9) under the action of \(\mathcal{H}_{\text{full},\,\Lambda_{1}+\Lambda_{2}+\dots}\) is non-trivial, and involves modes of higher momentum, which are absent in the one-particle states but present in the many-particle state (9). Intuitively, this picture reflects the fact that bringing new particles into the system increases its total momentum and entails inclusion of
higher-momentum modes in the interaction, see Figure 1. The so-defined states are then evolved with time and used for measurement.
Operators creating particles in the interacting system are often chosen to act on disjoint sets of modes. If spatial discretization is used, this can be achieved by imposing a finite spatial extent of particle wavefunctions and assuming large separation in the coordinate space between them [12; 13; 42; 7]. In the momentum basis, a similar situation is encountered if the center-of-mass momenta of composite particles are large compared to relative momenta of their constituents. While definitions (4) and (5) are not equivalent in the sense of their action on arbitrary Fock states, both lead to the two-particle wavefunction \(\mathsf{A}^{\dagger}_{\Lambda_{1},\,\mathbf{p}_{1}}\mathsf{A}^{\dagger}_{\Lambda_ {2},\,\mathbf{p}_{2}}|\text{vac}\rangle\) being a tensor product of \(\mathsf{A}^{\dagger}_{\Lambda_{1},\,\mathbf{p}_{1}}|\text{vac}\rangle\) and \(\mathsf{A}^{\dagger}_{\Lambda_{2},\,\mathbf{p}_{2}}|\text{vac}\rangle\) when \(\mathsf{A}^{\dagger}_{\Lambda_{1},\,\mathbf{p}_{1}}\) and \(\mathsf{A}^{\dagger}_{\Lambda_{2},\,\mathbf{p}_{2}}\) act on disjoint sets of free particle modes [60].
The situation becomes more complicated if the sets of modes, upon which \(\mathsf{A}^{\dagger}_{\Lambda_{1},\,\mathbf{p}_{1}}\) and \(\mathsf{A}^{\dagger}_{\Lambda_{2},\,\mathbf{p}_{2}}\) act, _do_ overlap -- an example of such a scenario is considered below in Sections III and IV, in the context of LF quantization.3 In this case, definitions (4) and (5) may lead to significantly different forms of the state \(\mathsf{A}^{\dagger}_{\Lambda_{1},\,\mathbf{p}_{1}}\mathsf{A}^{\dagger}_{\Lambda _{2},\,\mathbf{p}_{2}}|\text{vac}\rangle\), neither of which would be a tensor product of \(\mathsf{A}^{\dagger}_{\Lambda_{1},\,\mathbf{p}_{1}}|\text{vac}\rangle\) and \(\mathsf{A}^{\dagger}_{\Lambda_{2},\,\mathbf{p}_{2}}|\text{vac}\rangle\); see the discussion at the end of this section in item (e).
Footnote 3: Considering overlapping wavefunction of hadrons at the stage of initial state preparation may be necessary not only in the LF formulation but for equal-time gauge theories as well [61].
Several ways of constructing the wave operator \(\mathcal{U}\) can be considered (for brevity, we suppress the dependence of \(\mathcal{U}\) on \(\Lambda\) in the remainder of the section), see Figure 2:
* Within the unitary coupled cluster (UCC) method, one seeks for \(\mathcal{U}\) in the form of exponential of a Hermitian polynomial \(\mathcal{V}\) in creation and annihilation operators: \[\begin{split}\mathcal{U}=\mathrm{e}^{-i\mathcal{V}}\,,\\ \mathcal{V}=\text{poly}(a_{\mathbf{p}},a^{\dagger}_{\mathbf{p}})\,,\quad \mathcal{V}=\mathcal{V}^{\dagger}\,.\end{split}\] (10) When the UCC construction is used for approximating low-lying states in the spectrum of \(\mathcal{H}_{\text{full}}\), \(\mathcal{V}\) may include only low powers of creation and annihilation operators, while the coefficients in the polynomial are found by means of a heuristic procedure [62; 63; 64]. However, in order for \(\mathcal{U}\) to implement the exact rotation between the eigen-bases of \(\mathcal{H}_{\text{free}}\) and \(\mathcal{H}_{\text{full}}\), the cluster operator \(\mathcal{V}\) has to include a significantly larger number of terms. The procedure for finding those is discussed in Appendix E, and will be referred to as _exact_ UCC.
* Implementing \(\mathcal{U}_{\Lambda}\) presents a new challenge to quantum simulation methods. Adiabatic state preparation uses quantum simulation of a time-dependent Hamiltonian to prepare the ground state of some target Hamiltonian [65; 66; 67; 24; 68; 69; 70; 71]. In our case, we first wish to apply \(\mathcal{U}_{\Lambda}\) to \(a^{\dagger}_{\Lambda,\,\mathbf{p}}|\text{vac}\rangle\). The state \(a^{\dagger}_{\Lambda,\,\mathbf{p}}|\text{vac}\rangle\) is the ground state of the single-particle sector of the free theory. Therefore one application of \(\mathcal{U}_{\Lambda}\) is a map between ground states of different theories. Time evolution under a time-dependent Hamiltonian which interpolates between the initial and final Hamiltonian will obey the adiabatic theorem and hence will drag one ground state into another provided the evolution is slow enough. The maximum speed is set by the gap between the ground and first excited states. This gap may be unknown, but we could proceed using the procedure outlined in [6] in which the gap is estimated as one goes along, and this estimate is used to control the speed of the adiabatic evolution.
However, our goal is to produce multiparticle states of the interacting theory by sequential application of raising operators. The second application of \(\mathsf{A}^{\dagger}_{\Lambda,\,\mathbf{p}}\) involves adiabatically turning on \(\mathcal{U}_{\Lambda}\) but this time with a starting state that is neither a ground state nor a superposition of ground states. This is different from the case of adiabatic state preparation of wavepackets, which in the free theory are superpositions of single-particle states, each being a ground state of the free Hamiltonian in the sector of particular momentum.
In principle one could imagine the adiabatic theorem holding for all eigenvalues along the path, however in general this cannot be the case. We expect that \(\mathcal{U}_{\Lambda}\) will have an exponentially large number of eigenvalues, while its generating Hamiltonian will have a polynomially bounded norm. It is therefore in general impossible to "squeeze" the whole spectrum into the norm of the Hamiltonian without having two or more levels coming exponentially close to one another.
Moreover, the states to which \(\mathcal{U}_{\Lambda}\) is applied no
longer belong to the theory used for defining the action of \(\mathcal{U}_{\Lambda}\) (they, instead, act in some larger Hilbert space, see, e.g., eq. (33) below), which raises the question of what the conditions for implementing \(\mathcal{U}_{\Lambda}\) in such a scenario even are. To the best of our knowledge, such a question has not been discussed in the literature.
The problem posed for quantum simulation therefore calls for new techniques. One possible avenue is discussed in the next section.
* \(\mathcal{U}\) can also be constructed using the Glazek-Wilson-Wegner (GWW) double commutator flow equation [50; 51; 52; 53; 54; 55; 56; 57; 72; 73]. While in the context of LF QFT, the double commutator flow equation technique is known as the Renormalization Group Procedure for Effective Particles (RGPEP) [50; 51; 52; 53; 54; 55; 56; 57; 58; 59], and has been predominantly used for renormalizing continuous theories which were then solved by various numerical techniques [53; 54; 55; 56], recently a number of numerical schemes based on the double commutator flow equation have been suggested [49; 74; 75; 76; 77; 78; 79; 80; 81; 82], including an implementation for quantum computers [83]. Unlike the adiabatic methods, the GWW approach can be used to describe phase transitions [84; 85; 86; 87], which is crucial in the studies of confinement in quantum chromodynamics [57; 55; 59].
In the GWW approach, one defines a family of unitary operators labeled by a continuous parameter \(l\in[0,\infty)\) which perform a _similarity transformation_ of the original Hamiltonian (which includes all the interaction terms) in such a way that increasing the evolution parameter \(l\) entails the decrease of non-diagonal terms, with \(\mathcal{H}_{\infty}\) ultimately becoming diagonal in the limit \(l\to\infty\).
The original \(\mathcal{H}_{0}\) and transformed \(\mathcal{H}_{l}\) Hamiltonians are related via
\[\mathcal{H}_{l}=\mathcal{U}_{l}^{\dagger}\mathcal{H}_{0}\mathcal{U}_{l}\,, \tag{11}\]
where \(\mathcal{H}_{l}\) obeys the flow equation
\[\partial_{l}\mathcal{H}_{l}=[\mathcal{G}_{l},\mathcal{H}_{l}] \tag{12}\]
with the _flow generator_\(\mathcal{G}_{l}=(\partial_{l}\mathcal{U}_{l})\mathcal{U}_{l}^{\dagger}\) defining the form of \(\mathcal{U}_{l}\). In non-relativistic theories, \(\mathcal{G}_{l}\) can be defined as a commutator between the diagonal \(\Delta(\mathcal{H}_{l})\) and off-diagonal \(\sigma(\mathcal{H}_{l})\) parts of the Hamiltonian operator:
\[\mathcal{G}_{l}=[\Delta(\mathcal{H}_{l}),\sigma(\mathcal{H}_{l})]\,. \tag{13}\]
In the LF formulation, this changes to
\[\mathcal{G}_{l}=[\Delta(\mathcal{H}_{l}),\tilde{\sigma}(\mathcal{H}_{l})]\,, \tag{14}\]
where \(\tilde{\sigma}(\mathcal{H}_{l})\) has additional structure ensuring the Lorentz invariance of the flow [52]. Other choices of \(\mathcal{G}_{l}\) are possible as well [52]; a small numerical illustration of the flow equations (12)-(13) is given after this list.
* Preparation of states with multiple composite particles is most easily achieved when the corresponding creation operators act on disjoint sets of degrees of freedom, and the wave function factorizes. As in such cases the action of operators, creating composite particles in the interacting theory, has to be defined only upon the vacuum state, a wider class of state preparation techniques is readily available in addition to those discussed above, including variational [12; 13] and projection-based methods [88; 89; 90].
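To make the GWW construction above concrete, here is a minimal numerical sketch (not taken from any reference) of the double commutator flow, eqs. (11)-(13), for a small Hermitian matrix. The function name `wegner_flow`, the explicit Euler discretization, the step size, and the toy test matrix are all illustrative assumptions.

```python
import numpy as np

def wegner_flow(H0, dl=1e-3, steps=20000):
    """Integrate dH/dl = [G, H] with G = [diag(H), offdiag(H)] (cf. eqs. (12)-(13))
    by simple Euler steps; the off-diagonal part decays as the flow parameter l grows."""
    H = H0.copy()
    for _ in range(steps):
        Hd = np.diag(np.diag(H))
        Hod = H - Hd
        G = Hd @ Hod - Hod @ Hd
        H = H + dl * (G @ H - H @ G)
    return H

# toy check: a 4x4 matrix with known spectrum {0, 1, 2, 4}
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.normal(size=(4, 4)))
H0 = Q @ np.diag([0.0, 1.0, 2.0, 4.0]) @ Q.T
Hinf = wegner_flow(H0)
print(np.sort(np.diag(Hinf)))                            # approximately [0, 1, 2, 4]
print(np.linalg.norm(Hinf - np.diag(np.diag(Hinf))))     # off-diagonal norm, small
```

The flow drives the matrix toward diagonal form while (up to the Euler discretization error) preserving its spectrum, mirroring the \(l\to\infty\) limit described above.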
Several other remarks are in order regarding the definition (9) of scattering states.
1. The suggested formalism is well-suited for situations in which the incoming and outgoing particles are the states of strongly interacting systems,
Figure 2: Approaches to constructing the _wave operator_[47; 48; 49]\(\mathcal{U}\) relating the eigenbases of the free \(\mathcal{H}_{\text{free}}\) and full \(\mathcal{H}_{\text{full}}\) Hamiltonian operators. Following the solid lines renders the exact solution to the problem: \(\mathcal{U}\) can be obtained by diagonalizing the matrix \(H_{\text{full}}\) of \(\mathcal{H}_{\text{full}}\), forming the _modal matrix_\(U\) comprised of the eigenvectors of \(H_{\text{full}}\), and finding such a cluster operator \(\mathcal{V}=i\ln\mathcal{U}\) that its matrix elements are identical to those of \(V=i\ln U\) (see Appendix E for details). Explicitly using the matrix \(H_{\text{full}}\) results in a procedure whose cost is polynomial in the Hilbert space dimension and, therefore, exponential in momentum cutoffs. The dashed line corresponds to approximate solutions to the problem: constructing \(\mathcal{U}\) by employing adiabatic state preparation or solving the double commutator flow equation [50; 51; 72; 73]. Implementing those on a quantum computer [6; 67; 68; 24; 69; 70; 71; 83] may be efficient, i.e., have cost polynomial in momentum cutoffs.
such as hadrons or heavy nuclei, which can be approximated neither by the eigenstates of the free Hamiltonian nor by wavepackets.
2. In the relativistic setting, our approach is most naturally applicable to studying systems described in the language of LF quantization [36; 37; 38; 39; 40] which recasts relativistic many-body problems in a form that is strikingly similar to non-relativistic many-body theory. For example, the unique LF vacuum state of the free theory coincides with that one of the interacting theory.4 Other advantages of the LF formulation include separation of internal and center-of-mass degrees of freedom; form-invariance of the Hamiltonian under Lorentz transformations; linearity of equations of motion, leading to a smaller number of independent field components; simple form of observables, balanced treatment of gauge and matter fields [45]. Footnote 4: Certain light-front field theories, including the \(\phi^{4}\), are known to develop a non-trivial vacuum expectation value at critical coupling, if zero modes are taken into consideration. The effect of their inclusion may result in the improved convergence of numerical results [91] and changes in the value of critical coupling [92; 93; 94].
3. In the treatment of a relativistic QFT, one typically starts with a _canonical_ theory describing point-like interactions and operating with infinite ranges of momenta. In order to obtain a numerically sensible non-perturbative _effective_ theory, one has to first _regulate_ the divergent interactions in the canonical theory and/or impose cutoffs on the Hilbert space dimension. The relation between the values of coupling constants and observables in canonical and effective theories is governed by the renormalization group flow [95; 96; 97; 50; 98; 99; 51; 72]. Renormalization in the LF formalism can follow one of several approaches. The original approach to renormalizing LF QFTs amounted to using the Pauli-Villars regularization scheme [99; 100; 101], in which the canonical Hamiltonian is regulated by introducing additional quantum fields, some of which have negative norm. As a consequence of this, the canonical Hamiltonian turns into a non-Hermitian operator whose spectrum contains a number of unphysical states. In the sector-dependent renormalization approach [111; 112; 113; 114; 115; 116; 117], the canonical Hamiltonian operator is equipped with counterterms whose value depends on the number of particles in a Fock state the Hamiltonian acts upon. Both Hamiltonian operator non-Hermiticity and sector dependence of its coefficients inevitably pose difficulties for quantum simulation. A more systematic approach to renormalization, which is most suitable from the quantum simulation perspective, is pursued within RGPEP [50; 51; 52; 53; 54; 55; 56; 57; 118] mentioned above in the context of the GWW flow. In this case, the effective Hamiltonian operator is Hermitian and its action is defined in Fock sectors with arbitrary number of particles and without adding to the physical degrees of freedom.
4. In practice, constructing the operator \(\mathsf{A}^{\dagger}_{\Lambda,\,\mathbf{p}}\) will require switching between various reference frames. In the non-relativistic setting, the wave functions of individual bound states are typically found in their respective center-of-mass frames, and then boosted into the center-of-mass frame of the combined system. Similarly, the LF formulation of QFT allows one to find the wave functions of bound states using the so-called _intrinsic coordinates_. In Appendix B we discuss construction of composite state wave functions in the LF dynamics.
5. While for operators \(\mathsf{A}^{\dagger}_{\Lambda_{1},\,\mathbf{p}_{1}}\) and \(\mathsf{A}^{\dagger}_{\Lambda_{2},\,\mathbf{p}_{2}}\), acting on disjoint sets of modes, the (anti-)symmetry properties of the state (9) would be satisfied automatically, in general this would not be the case. The sensitivity of eq. (9) to the order of operators is the consequence of finiteness of momentum cutoffs in the theory, as the effective particle operators in RGPEP obey the same commutation relations as the original ones [119]. This issue has to be addressed in the future work.
6. In Sections III and IV we illustrate our ideas within the framework of Discretized Light-Cone Quantization (DLCQ) [36; 37; 38; 39]. In particular, we shall demonstrate how our technique can be used to obtain "exact" results at exponential cost with a classical simulation. Yet it is most naturally implemented by means of a quantum simulation, which we discuss in Section V.
## III Discretized Light-Cone Quantization
In this Section, we apply the construction introduced in Section II to \(\phi^{4}\) theory in \(1+1\)D, formulated within the DLCQ [36; 37; 38; 43; 45] framework.
Our primary objective here is to consider a scattering scenario in which the wavefunction of the combined system does not factorize, and operators creating composite particles act upon overlapping sets of modes. We shall, therefore, ignore the issue of boosting the LF wavefunctions of composite particles (in the notations of Appendix B, the wavefunctions discussed below correspond to \(|\Psi^{(l)}(x_{i},\mathbf{k}_{\perp i})\rangle\)). We adopt eq. (5) for defining operators which create particles of the interacting theory from vacuum, and use the exact unitary coupled cluster procedure, eq. (10), for finding the wave operator \(\mathcal{U}\).
DLCQ is a discretized gauge-fixed (\(A^{+}=0\), if gauge fields are present) Hamiltonian formulation of QFT, in which one quantizes the theory in a box, using the light-cone coordinates \(x^{\pm}=ct\pm x\). The evolution of the system along the light-cone time \(x^{+}\) is governed by operator \(\mathcal{P}^{-}\), which in \(1+1\)D is related to the mass-squared operator as \(\mathcal{M}^{2}=\mathcal{P}^{+}\mathcal{P}^{-}\), where \(\mathcal{P}^{+}\) is the operator of LF momentum. After rescaling the operators as \(\mathcal{P}^{+}=(2\pi/L)\mathcal{K}\) and \(\mathcal{P}^{-}=(2\pi/L)^{-1}\mathcal{H}\), where \(L\) is the box size, the mass-squared operator can be written as \(\mathcal{M}^{2}=\mathcal{K}\mathcal{H}\). The operator of dimensionless discretized LF momentum \(\mathcal{K}\) is termed _harmonic resolution_, while \(\mathcal{H}\) is typically referred to simply as the Hamiltonian, despite having the dimension of squared mass [36; 39].
The normal-ordered Hamiltonian of the DLCQ \(\phi^{4}\) model in \(1+1\)D with periodic boundary condition has the form [38] (see also Appendix D):
\[\mathcal{H}_{\text{full}} =\mathcal{H}_{\text{free}}+\mathcal{H}^{I}\,, \tag{15}\] \[\mathcal{H}_{\text{free}} =\text{m}^{2}\sum_{n=1}^{\infty}\frac{1}{n}a_{n}^{\dagger}a_{n}\,,\] \[\mathcal{H}^{I} =\frac{1}{4}\frac{\lambda}{4\pi}\sum_{\text{klmn}=1}^{\infty} \frac{a_{k}^{\dagger}a_{l}^{\dagger}a_{m}a_{n}}{\sqrt{\text{klmn}}}\delta_{m+n,k+l}\] \[+\frac{1}{6}\frac{\lambda}{4\pi}\sum_{\text{klmn}=1}^{\infty} \frac{a_{k}^{\dagger}a_{l}a_{m}a_{n}}{\sqrt{\text{klmn}}}\,\,\delta_{k,m+n+l}\] \[+\frac{1}{6}\frac{\lambda}{4\pi}\sum_{\text{klmn}=1}^{\infty} \frac{a_{n}^{\dagger}a_{m}^{\dagger}a_{l}^{\dagger}a_{k}}{\sqrt{\text{klmn}}}\, \delta_{k,m+n+l}\,.\] \[\mathcal{K} =\sum_{n=1}^{\infty}\mathfrak{n}(a_{n}^{\dagger}a_{n})\,. \tag{16}\]
The Complete Sets of Commuting Observables (CSCOs) of the free and interacting theories consist of operators \(\{\mathcal{H}_{\text{free}},\,\mathcal{K},\,\mathcal{N}\}\) and \(\{\mathcal{H}_{\text{full}},\,\mathcal{K},\,\mathcal{Z}\}\), correspondingly, where \(\mathcal{N}=\sum_{n=1}^{\infty}a_{n}^{\dagger}a_{n}\) is the number operator, while the operator \(\mathcal{Z}=\mathcal{N}(\text{mod }2)\) marks the sectors of odd and even number of particles. Neither \(\mathcal{N}\) nor \(\mathcal{Z}\) will play a role in the following discussion.
The Hamiltonian (15) is solved in the basis of Fock states of the form
\[|\mathcal{F}\rangle =|n_{1}^{w_{1}},\,n_{2}^{w_{2}},\,n_{3}^{w_{3}},\ldots\rangle\,, \tag{17}\] \[n_{j},\,w_{j}=1,2,3,\ldots\]
where \(n_{j}\) are the momentum quantum numbers and \(w_{j}\) are the occupancies.
The Fock space splits into blocks of fixed harmonic resolution \(\mathcal{K}=K\) and fixed even or odd particle number [38]. These blocks have finite dimension owing to the fact that all the states within each block contain particles whose momenta are positive integers summing up to \(K\):
\[|\mathcal{F}^{\{K\}}\rangle:\qquad\sum_{j}n_{j}w_{j}=K\,. \tag{18}\]
Using more general notation (see Appendix A), one can say that each such block is obtained by restricting the infinite-dimensional Fock space to a subspace comprised of vectors with momentum modes in \(\mathfrak{S}\), with the cutoff on the maximum number of excitations in each momentum mode \(\mathfrak{n}\) given by \(W(\mathfrak{n})\), and with the maximum number of modes in a state being \(I\):
\[\mathfrak{S}: \{1,2,\ldots,K\}\,, \tag{19a}\] \[W(\mathfrak{n}) =\lfloor K/\mathfrak{n}\rfloor\,,\] (19b) \[I =K\,. \tag{19c}\]
The number of Fock states at a fixed value of \(K\) is exactly equal to the number of integer partitions \(p(K)\), which grows as \(p(K)=\Theta(\exp(\sqrt{K}))\)[120; 121].
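As a quick illustration of this counting, the following small Python sketch (purely illustrative; the function name `partitions` is not from the text) computes \(p(K)\) by recursion over the largest allowed part:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def partitions(K, max_part=None):
    """Number of integer partitions of K with parts <= max_part; with max_part=K this
    equals the dimension of the Fock block at harmonic resolution K."""
    if max_part is None:
        max_part = K
    if K == 0:
        return 1
    return sum(partitions(K - n, min(n, K - n)) for n in range(1, min(K, max_part) + 1))

print([partitions(K) for K in range(1, 11)])   # 1, 2, 3, 5, 7, 11, 15, 22, 30, 42
```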
The versions of \(\mathcal{H}_{\text{free},\,K}\) and \(\mathcal{H}_{\text{full},\,K}\), whose action is restricted to momentum modes up to \(K\), are obtained by choosing \(K\) as the cutoff for the sums in eq. (15). Within a sector of fixed \(K\), diagonalizing \(\mathcal{M}^{2}=\mathcal{K}\mathcal{H}_{\text{full}}\) is equivalent to diagonalizing \(\mathcal{H}_{\text{full}}\). Determining the spectrum of \(\mathcal{M}^{2}\) at higher values of \(K\) may be interpreted as studying the system at a higher resolution, which explains the notion of _harmonic resolution_[36; 37].
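For concreteness, below is a rough, self-contained sketch of how the fixed-\(K\) blocks of eq. (15) could be assembled numerically. It is not the code used by the authors; the function names (`fock_basis`, `apply_ops`, `hamiltonian`), the dense-matrix construction, and the final print are illustrative assumptions, with the last line intended only as a consistency check against the even-sector spectrum quoted later in eq. (43).

```python
import itertools
from math import sqrt, pi
import numpy as np

def fock_basis(K):
    """All Fock states of total LF momentum K, as non-increasing tuples of mode momenta."""
    states = []
    def build(rem, max_part, partial):
        if rem == 0:
            states.append(tuple(partial))
            return
        for n in range(min(rem, max_part), 0, -1):
            build(rem - n, n, partial + [n])
    build(K, K, [])
    return states

def occ(state):
    """Occupation-number dictionary {mode: count} for a tuple of mode momenta."""
    d = {}
    for n in state:
        d[n] = d.get(n, 0) + 1
    return d

def key(d):
    return tuple(sorted(d.items()))

def apply_ops(d, creators, annihilators):
    """Apply prod_c a†_c prod_a a_a to the normalized Fock state d; return (amplitude, state) or None."""
    amp, cur = 1.0, dict(d)
    for n in annihilators:
        w = cur.get(n, 0)
        if w == 0:
            return None
        amp *= sqrt(w)
        cur[n] = w - 1
        if cur[n] == 0:
            del cur[n]
    for n in creators:
        w = cur.get(n, 0) + 1
        amp *= sqrt(w)
        cur[n] = w
    return amp, cur

def hamiltonian(K, m2=1.0, lam=30.0):
    """Matrix of H_full (eq. (15)) in the fixed-K Fock basis."""
    basis = [occ(s) for s in fock_basis(K)]
    index = {key(d): i for i, d in enumerate(basis)}
    H = np.zeros((len(basis), len(basis)))
    g = lam / (4.0 * pi)
    for j, d in enumerate(basis):
        H[j, j] += m2 * sum(w / n for n, w in d.items())           # free part
        for k, l, m, n in itertools.product(range(1, K + 1), repeat=4):
            w = g / sqrt(k * l * m * n)
            if k + l == m + n:                                      # a†_k a†_l a_m a_n
                r = apply_ops(d, [k, l], [m, n])
                if r: H[index[key(r[1])], j] += 0.25 * w * r[0]
            if k == l + m + n:
                r = apply_ops(d, [k], [l, m, n])                    # a†_k a_l a_m a_n
                if r: H[index[key(r[1])], j] += w * r[0] / 6.0
                r = apply_ops(d, [n, m, l], [k])                    # a†_n a†_m a†_l a_k
                if r: H[index[key(r[1])], j] += w * r[0] / 6.0
    return H, basis

H, basis = hamiltonian(6)
even = [i for i, d in enumerate(basis) if sum(d.values()) % 2 == 0]
print(np.linalg.eigvalsh(H[np.ix_(even, even)]))   # compare with eq. (43)
```

For \(K=6\) the basis contains \(p(6)=11\) states, of which six carry an even particle number.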
In the DLCQ treatment of \(1+1\)D models, one typically does not renormalize the coupling constant [36; 37; 38], while mass renormalization is performed by adjusting the bare mass \(\mathfrak{m}\) in eq. (15) so that for each value of \(K\) the lowest eigenvalue of \(\mathcal{M}^{2}\) remains unchanged [38]. This, however, implies that the value of bare mass depends on \(K\), which is not compatible with our approach; for the same reason, neither is the sector-dependent renormalization approach discussed in Section II. A better option would
be to first renormalize the original continuous theory using RGPEP, and then to solve it using DLCQ [50; 51; 52; 53; 54; 55; 56; 72]. In this paper we assume the values of both m and \(\lambda\) to be fixed.
Note that in order to reproduce the action of \(\mathcal{H}_{\text{full}}\) on a state \(|\mathcal{F}^{\{K\}}\rangle\), one has to include all the terms acting on momentum modes up to \(K\):
\[\mathcal{H}_{\text{full},\,K^{\prime}}|\mathcal{F}^{\{K\}}\rangle =\mathcal{H}_{\text{full}}|\mathcal{F}^{\{K\}}\rangle\,,\quad K^{ \prime}\geq K\,, \tag{20a}\] \[\mathcal{H}_{\text{full},\,K^{\prime}}|\mathcal{F}^{\{K\}}\rangle \neq\mathcal{H}_{\text{full}}|\mathcal{F}^{\{K\}}\rangle\,,\quad K ^{\prime}<K\,. \tag{20b}\]
We denote the matrices of \(\mathcal{H}_{\text{free}}\) and \(\mathcal{H}_{\text{full}}\) (or, equivalently, of \(\mathcal{H}_{\text{free},\,K}\) and \(\mathcal{H}_{\text{full},\,K}\)) in the basis of \(|\mathcal{F}^{\{K\}}\rangle\) by \(H_{\text{free},K}\) and \(H_{\text{full},K}\). At a fixed value of harmonic resolution, the matrix \(H_{\text{full},K}\) can be further block-diagonalized, owing to the fact the self-interaction in eq. (15) either preserves the particle number in a Fock state, or changes it by two. Thereby, the Hilbert space splits into the _even_ and _odd_ sectors, in which the Fock states contain either even or odd number of particles, correspondingly [38].
We adopt the standard normalization of Fock states [36; 37; 38]
\[\langle\text{vac}|\text{vac}\rangle=\langle\mathcal{F}|\mathcal{F}\rangle=1\,. \tag{21}\]
Similar to non-relativistic many-body theory, the LF vacuum state is simply a state without any excitations, and it is the only state with total LF momentum zero [36; 39].
Let operator \(\mathsf{a}_{[K,n]}^{\dagger}\) create the \(n\)th (in the order of increasing \(\mathcal{H}_{\text{free}}\) eigenvalue) Fock state \(|\mathcal{F}_{n}^{\{K\}}\rangle\). It is a monomial in single-mode creation operators \(a_{\mathfrak{n}}^{\dagger}\), with the constant fixed by eq. (21). For the ground states \(|\mathcal{F}_{0}^{\{K\}}\rangle\) in sectors of even and odd numbers of particles, operators creating those from vacuum will be denoted by \({}^{\mathsf{even}}\mathsf{a}_{[K,0]}^{\dagger}\) and \({}^{\mathsf{odd}}\mathsf{a}_{[K,0]}^{\dagger}\equiv\mathsf{a}_{[K,0]}^{\dagger}\)5. The ground states as well as the corresponding creation operators expressed in terms of \(a_{\mathfrak{n}}^{\dagger}\) are shown in Table 1.
Footnote 5: The convention \({}^{\mathsf{odd}}\mathsf{a}_{[K,0]}^{\dagger}\equiv\mathsf{a}_{[K,0]}^{ \dagger}\) is justified by the fact that for each \(K\), the ground state energy in the odd sector of the free Hamiltonian is always lower than the ground state in the even sector, see eq. (15).
At a fixed value of \(K\), the interacting eigenstates \(|\widetilde{\mathcal{F}}^{\{K\}}\rangle\) are obtained by diagonalizing the matrix \(H_{\text{full},K}\) in the basis of \(|\mathcal{F}^{\{K\}}\rangle\). The two orthonormal bases \(|\mathcal{F}^{\{K\}}\rangle\) and \(|\widetilde{\mathcal{F}}^{\{K\}}\rangle\) are related by a unitary matrix \(U_{K}\), comprised of the eigenvectors of \(H_{\text{full},K}\), written in the basis of Fock states \(|\widetilde{\mathcal{F}}^{\{K\}}\rangle\) (_modal matrix_):6
Footnote 6: Unless stated otherwise, the sets \(\left\{|\mathcal{F}^{\{K\}}\rangle\right\}\) and \(\left\{|\widetilde{\mathcal{F}}^{\{K\}}\rangle\right\}\) are assumed to contain states from both even and odd particle number sectors.
\[|\widetilde{\mathcal{F}}_{n}^{\{K\}}\rangle=\sum_{m}(U_{K})_{n}^{m}|\mathcal{ F}_{m}^{\{K\}}\rangle\,, \tag{22}\]
where \(|\widetilde{\mathcal{F}}_{n}^{\{K\}}\rangle\) stands for the \(n\)th state. We assume the ordering of states \(|\widetilde{\mathcal{F}}_{n}^{\{K\}}\rangle\) to match that one given by the adiabatic interaction turn-on, so that \(|\widetilde{\mathcal{F}}_{n}^{\{K\}}\rangle\) is the adiabatic continuation of \(|\mathcal{F}_{n}^{\{K\}}\rangle\). For a weakly-coupled theory, the free and interacting eigenstates could be matched by sorting both \(|\mathcal{F}_{n}^{\{K\}}\rangle\) and \(|\widetilde{\mathcal{F}}_{n}^{\{K\}}\rangle\) in the order of growing \(\mathcal{H}_{\text{free},\,K}\) and \(\mathcal{H}_{\text{full},\,K}\) eigenvalues. For a strongly-coupled theory, one could match the free and interacting basis states using an approximate implementation of the adiabatic turn-on.
While (22) should be understood as a _matrix equation_, which holds within a block of fixed \(K\), we would now like to find a wave operator \(\mathcal{U}_{K}\), whose action on \(\left\{|\mathcal{F}^{\{K\}}\rangle\right\}\) is given by \(U_{K}\).
By writing \(\mathcal{U}_{K}\) in terms of single-mode creation and annihilation operators, we shall _extend_ the action of \(U_{K}\) to Fock states \(\left\{|\mathcal{F}\rangle\right\}\) from sectors of arbitrary harmonic resolution. While, strictly speaking, there is no unique way of defining such an extension, below we explicitly construct operator \(\mathcal{U}_{K}\), acting as wave operator for sectors of Hilbert space or harmonic resolution up to \(K\).
We begin by writing \(\mathcal{U}_{K}\) as the exponential of a Hermitian operator \(\mathcal{V}_{K}=\mathcal{V}_{K}^{\dagger}\),
\[\mathcal{U}_{K}=\mathrm{e}^{-i\mathcal{V}_{K}}\,. \tag{23}\]
We represent \(\mathcal{V}_{K}\) as a polynomial in single-mode creation operators of the free theory, \(\mathcal{V}_{K}=\text{poly}(a_{\mathfrak{n}}^{\dagger},a_{\mathfrak{n}})\). Then, we postulate the following defining properties of \(\mathcal{U}_{K}\):
1. The operator \(\mathcal{U}_{K}\) is unitary: \[\mathcal{U}_{K}^{\dagger}\mathcal{U}_{K}=\mathds{1}\,.\] (24) 2. The action of \(\mathcal{U}_{K}\) on \(\left\{|\mathcal{F}_{n}^{\{K\}}\rangle\right\}\) is given by (22): \[\mathcal{U}_{K}|\mathcal{F}_{n}^{\{K\}}\rangle=U_{K}|\mathcal{F}_{n}^{\{K\}} \rangle=|\widetilde{\mathcal{F}}_{n}^{\{K\}}\rangle\,,\] (25)
i.e. \(U_{K}\) is the matrix of the \(\mathcal{U}_{K}\) operator in the basis of \(\left\{\left|\mathcal{F}_{n}^{(K)}\right\rangle\right\}\).
3. For the operator \(\mathcal{V}_{K}\), the following holds: \[\mathcal{V}_{K}\supseteq\mathcal{V}_{K-1}\supseteq\ldots\supseteq\mathcal{V}_{2 }\supseteq\mathcal{V}_{1}\,,\] (26) where the notation \(\mathcal{V}_{j}\supseteq\mathcal{V}_{j-1}\) for polynomials \(\mathcal{V}_{j}\) and \(\mathcal{V}_{j-1}\) means that \(\mathcal{V}_{j}\) contains all the terms from \(\mathcal{V}_{j-1}\). With eq. (26) we require that (25) holds for any \(K^{\prime}\leq K\): \[\mathcal{U}_{K}|\mathcal{F}_{n}^{\{K^{\prime}\}}\rangle=U_{K^{\prime}}| \mathcal{F}_{n}^{\{K^{\prime}\}}\rangle\quad\text{for }K^{\prime}\leq K\] (27)
4. Operator \(\mathcal{V}_{K}\) contains a minimal number of terms required to satisfy (25).
With properties 1 and 2 we ensure that \(\mathcal{U}_{K}\) is the wave operator in the sector of momentum \(K\). With properties 3 and 4 we additionally require that \(\mathcal{V}_{K}\) has the simplest possible form, such that \(\mathcal{U}_{K}\) is also the wave operator in sectors of momenta below \(K\). In other words, we require the matrix of \(\mathcal{U}_{K}\) in the basis of all the Fock states \(\left\{\left|\mathcal{F}^{\{\leq K\}}\right\rangle\right\}\) of momenta up to \(K\) to be equal to \(\operatorname{diag}\left\{U_{1},\ldots,U_{K}\right\}\). As follows from the definition above, \(\mathcal{U}_{K}\) can be thought of as an "exact" wave operator for any \(K^{\prime}\leq K\) and as its "reduced" version for \(K^{\prime}>K\). It is interesting to note that results for \(\lambda\phi^{4}\) in \(1+1\)D from multiple values of \(K\) were employed simultaneously to calculate form factors and to search for kink condensation in [122].
In order to explicitly construct \(\mathcal{U}_{K}\), we seek \(\mathcal{V}_{K}\) in the form of
\[\mathcal{V}_{K}=\sum_{r,s=1}^{K}\sum_{\begin{subarray}{c}\mathrm{i}_{1},\mathrm{i}_{2},\ldots,\mathrm{i}_{r}\\ \mathrm{j}_{1},\mathrm{j}_{2},\ldots,\mathrm{j}_{s}\end{subarray}}\!\!\theta_{\mathrm{i}_{1},\ldots,\mathrm{i}_{r},\mathrm{j}_{1},\ldots,\mathrm{j}_{s}}a_{\mathrm{i}_{1}}^{\dagger}\ldots a_{\mathrm{i}_{r}}^{\dagger}a_{\mathrm{j}_{1}}\ldots a_{\mathrm{j}_{s}}\,, \tag{28}\]
whose normal-ordered form automatically guarantees that \(\mathcal{U}_{K}|\mathrm{vac}\rangle=|\mathrm{vac}\rangle\), and which has enough free parameters to ensure that the matrix of \(\mathcal{U}_{K}\) coincides with \(\operatorname{diag}\left\{U_{1},\ldots,U_{K}\right\}\) in the basis of \(\left\{\left|\mathcal{F}^{\{\leq K\}}\right\rangle\right\}\). It is assumed that \(\mathcal{V}_{K}\) commutes with the symmetry operators of the system (such as \(\mathcal{K}\)), which ensures that so does \(\mathcal{U}_{K}\). While eq. (28) contains, in principle, all the monomials whose matrix elements are non-zero for _some_ pair of Fock states, the number of terms in it is excessive. Indeed, physically, we expect \(\mathcal{U}(\mathfrak{S},W,I)\) to depend on \(W\) and \(I\) only due to the cutoff artifacts -- which are absent in \(1+1\)D LF QFTs (meaning that the Hilbert space of the Hamiltonian (15) is finite for a fixed value of \(K\)). We describe the algorithm for finding the coefficients in eq. (28) in Appendix E.
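A rough numerical sketch of the first two steps of this construction (the modal matrix and its logarithm) is given below. It assumes, as a simplification, that each interacting eigenvector is assigned to the free Fock basis state it overlaps most with, standing in for the adiabatic matching discussed around eq. (22); the function names are illustrative, and the cost scales with the full Hilbert-space dimension.

```python
import numpy as np
from scipy.linalg import logm

def modal_matrix(H_full):
    """Columns are eigenvectors of H_full, each assigned to the Fock basis state it
    overlaps most with (a stand-in for adiabatic matching), with the sign fixed to +."""
    evals, evecs = np.linalg.eigh(H_full)
    U = np.zeros_like(evecs)
    taken = set()
    for col in np.argsort(evals):               # from the lowest eigenvalue up
        v = evecs[:, col]
        target = next(i for i in np.argsort(-np.abs(v)) if i not in taken)
        taken.add(target)
        U[:, target] = v if v[target] > 0 else -v
    return U

def cluster_matrix(U):
    """Matrix of the cluster operator, V = i ln U, cf. the exact-UCC construction."""
    return 1j * logm(U)
```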
The states \(|\widetilde{\mathcal{F}}_{n}^{\{K\}}\rangle\) can be created from vacuum upon the application of operators \(\mathsf{A}_{[K,n]}^{\dagger}\), which we define as
\[\mathsf{A}_{[K,n]}^{\dagger}=\left(\mathcal{U}_{K}\right)\bigl{(}\mathsf{a}_{ [K,n]}^{\dagger}\bigr{)}\bigl{(}\mathcal{U}_{K}\bigr{)}^{\dagger}\,. \tag{29}\]
Indeed, due to the uniqueness of the LF vacuum, one can write:
\[\mathsf{A}_{[K,n]}^{\dagger}|\mathrm{vac}\rangle =\left(\mathcal{U}_{K}\right)\bigl{(}\mathsf{a}_{[K,n]}^{\dagger }\bigr{)}\bigl{(}\mathcal{U}_{K}\bigr{)}^{\dagger}|\mathrm{vac}\rangle \tag{30}\] \[=\left(\mathcal{U}_{K}\right)\bigl{(}\mathsf{a}_{[K,n]}^{\dagger }\bigr{)}|\mathrm{vac}\rangle\] \[=\left(\mathcal{U}_{K}\right)\lvert\mathcal{F}_{n}^{\{K\}} \rangle=|\widetilde{\mathcal{F}}_{n}^{\{K\}}\rangle\,.\]
Generally speaking, in order to simulate non-trivial time evolution within a Hamiltonian formulation of QFT, one has to either initialize the system in an eigenstate and then switch on an external field [2; 15; 41] or prepare an initial state, which is a non-stationary superposition of eigenstates [2; 3; 4; 5]. We follow the latter approach by introducing the
\begin{table}
\begin{tabular}{|l|c|c|} \hline Number of particles & Ground states of \(\mathcal{H}_{\text{free}}\) at \(\mathcal{K}=K\), and operators creating those from vacuum \\ \cline{2-3} in Fock states & Even \(K\) & Odd \(K\) \\ \hline \multirow{2}{*}{Even} & \(|(\frac{K}{2})^{2}\rangle=\mathsf{{}^{even}}\mathsf{a}_{[K,0]}^{\dagger}| \mathrm{vac}\rangle\) & \(|\frac{K-1}{2},\frac{K+1}{2}\rangle=\mathsf{{}^{even}}\mathsf{a}_{[K,0]}^{ \dagger}|\mathrm{vac}\rangle\) \\ \cline{2-3} & \(\mathsf{{}^{even}}\mathsf{a}_{[K,0]}^{\dagger}=1/\sqrt{2!}\left(a_{K/2}^{ \dagger}\right)^{2}\) & \(\mathsf{{}^{even}}\mathsf{a}_{[K,0]}^{\dagger}=a_{(K+1)/2}^{\dagger}a_{(K-1)/2} ^{\dagger}\) \\ \hline \multirow{2}{*}{Odd} & \(|K\rangle=\mathsf{{}^{odd}}\mathsf{a}_{[K,0]}^{\dagger}|\mathrm{vac}\rangle\) \\ \cline{2-3} & \(\mathsf{{}^{a}}_{[K,0]}^{\dagger}\equiv\mathsf{{}^{odd}}\mathsf{a}_{[K,0]}^{ \dagger}=a_{K}^{\dagger}\) \\ \hline \end{tabular}
\end{table}
Table 1: Top cells: ground states in the sectors of even and odd numbers of particles of the free \(\phi^{4}_{1+1}\) theory in the DLCQ formulation, for even and odd values of harmonic resolution \(K\). Bottom cells: operators creating these ground states from vacuum, expressed in terms of single-mode creation operators.
notation
\[\begin{split}&\big{|}[K_{1},n_{1}]^{\mathsf{w}_{1}},[K_{2},n_{2}]^{ \mathsf{w}_{2}},\ldots\big{\rangle}\\ &\equiv D_{\{[K_{1},n_{1}]^{\mathsf{w}_{1}},[K_{2},n_{2}]^{\mathsf{ w}_{2}},\ldots\}}\\ &\quad\times\big{(}\mathsf{A}^{\dagger}_{[K_{1},n_{1}]}\big{)}^{ \mathsf{w}_{1}}\big{(}\mathsf{A}^{\dagger}_{[K_{2},n_{2}]}\big{)}^{\mathsf{w }_{2}}\ldots|\text{vac}\big{\rangle}\,,\end{split} \tag{31}\]
for a state in the full theory with "\(\mathsf{w}_{1}\) particles of momentum \(K_{1}\) in the \(n_{1}\)th excited state, \(\mathsf{w}_{2}\) particles of momentum \(K_{2}\) in the \(n_{2}\)th excited state, etc.", carrying the total momentum \(K_{\text{tot}}=\sum_{j}K_{j}\mathsf{w}_{j}\). Unlike the analogous state \(|n_{1}^{w_{1}},n_{2}^{w_{2}},\ldots\rangle\) of the free system, which _is_ an eigenstate of \(H_{\text{free},K_{\text{tot}}}\), the state (31) is generally _not_ an eigenstate of either \(H_{\text{free},K_{\text{tot}}}\) or \(H_{\text{full},K_{\text{tot}}}\).7
Footnote 7: Note that, while the eigenstates of \(H_{\text{full},K_{\text{tot}}}\) are generally the superpositions of Fock states containing modes of LF momentum up to \(K_{\text{tot}}\), in (31) only modes of LF momentum up to \(\text{max}\{K_{j}\}\) are included. However, not containing single-particle momenta above \(\text{max}\{K_{j}\}\) is _not_ by itself a reason for \(\big{|}[K_{1},n_{1}]^{\mathsf{w}_{1}},[K_{2},n_{2}]^{\mathsf{w}_{2}},\ldots\big{\rangle}\) to not be an eigenstate of \(H_{\text{full},K_{\text{tot}}}\). Indeed, while \(|n_{1}^{w_{1}},n_{2}^{w_{2}},\ldots\rangle\) similarly does not carry LF momenta higher than \(\text{max}\{K_{j}\}\), it _is_ an eigenstate of \(H_{\text{free},K_{\text{tot}}}\). What matters is that \(\mathcal{U}_{K_{\text{tot}}}\) has not been used in the definition (31).
A state containing two particles of momenta \(K_{1}\) and \(K_{2}\), each in the ground state of the corresponding Hamiltonian, has the form of
\[\begin{split}&\big{|}i\big{\rangle}=\big{|}[K_{1},0],[K_{2},0]\big{\rangle}\\ &=D_{\{[K_{1},0],[K_{2},0]\}}\big{(}\mathsf{A}^{\dagger}_{[K_{1},0]}\big{)}\big{(}\mathsf{A}^{\dagger}_{[K_{2},0]}\big{)}|\text{vac}\rangle\\ &=D_{\{[K_{1},0],[K_{2},0]\}}\mathcal{U}_{K_{1}}a^{\dagger}_{K_{1}}\mathcal{U}^{\dagger}_{K_{1}}\mathcal{U}_{K_{2}}a^{\dagger}_{K_{2}}\mathcal{U}^{\dagger}_{K_{2}}|\text{vac}\rangle\\ &=D_{\{[K_{1},0],[K_{2},0]\}}\mathcal{U}_{K_{1}}a^{\dagger}_{K_{1}}\mathcal{U}^{\dagger}_{K_{1}}\mathcal{U}_{K_{2}}a^{\dagger}_{K_{2}}|\text{vac}\rangle\,.\end{split} \tag{32}\]
A particularly simple case of (32) is obtained by setting \(K_{1}=K_{2}=K\):
\[\begin{split}\big{|}[K,0]^{2}\big{\rangle}&=D_{\{[K,0]^{2}\}}\big{(}\mathsf{A}^{\dagger}_{[K,0]}\big{)}^{2}|\text{vac}\rangle \\ &=D_{\{[K,0]^{2}\}}\mathcal{U}_{K}\big{(}a^{\dagger}_{K}\big{)}^{ 2}|\text{vac}\rangle\,.\end{split} \tag{33}\]
The so-defined state is not an eigenstate of \(\mathcal{H}_{\text{full}}\), as \(\mathcal{U}_{K}\) would only produce an eigenstate of \(\mathcal{H}_{\text{full}}\)_when acting on states of momentum up to \(K\)_. The state in eq. (33) should not be confused with the even sector ground state of \(H_{\text{full},2K}\):
\[\begin{split}&\big{|}[K,0]^{2}\big{\rangle}\neq\big{|}[2K,0] \big{\rangle}\\ &\quad=D_{\{[2K,0]\}}\big{(}\mathcal{U}_{2K}\big{)}\big{(}a^{ \dagger}_{K}\big{)}^{2}|\text{vac}\rangle\,.\end{split} \tag{34}\]
For elastic and inelastic \(2\!\to\!2\) scattering processes, we define the final states to be
\[|f_{\text{elastic}}\rangle =\big{|}[K^{\prime}_{1},0],[K^{\prime}_{2},0]\big{\rangle}\,, \tag{35a}\] \[|f_{\text{inelastic}}\rangle =\big{|}[K^{\prime}_{1},n_{1}],[K^{\prime}_{2},n_{2}]\big{\rangle}\,, \tag{35b}\]
where \(K_{1}\!+\!K_{2}\!=\!K^{\prime}_{1}\!+\!K^{\prime}_{2}\) and \(n_{1}\!+\!n_{2}\!>\!0\) is assumed in the second line.
Upon reviewing the DLCQ formulation of \(\phi^{4}\) theory in \(1+1\)D, we introduced the operator \(\mathcal{U}_{K}\) relating the bases of the free and interacting theories. Using this operator, in (29) we defined operators \(\mathsf{A}^{\dagger}_{[K,n]}\) creating eigenstates of the interacting theory from vacuum. We then used these operators to define in (31) the multi-particle states in the interacting theory. In the following Section, time evolution of such states will be studied.
## IV Classical simulation of time evolution
In this Section, we calculate the time-dependent transition probability \(\big{|}\langle f|\mathrm{e}^{-i\mathcal{H}_{\text{full}}t}|i\rangle\big{|}^{2}\) for an elastic scattering process, in which the initial state is defined as in eq. (33) with \(K_{1}=K_{2}=3\) and the final states are defined as in eq. (32) with \(K_{1}+K_{2}=6\). We set the constants in the Hamiltonian (15) to be \(\mathrm{m}=1\) and \(\lambda=30\).
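A minimal sketch of how such transition probabilities can be evaluated classically, once the matrix of \(\mathcal{H}_{\text{full},\,K=6}\) and the state vectors are available, is shown below; the function name and interface are illustrative, not taken from the paper.

```python
import numpy as np

def transition_probability(H, psi_i, psi_f, times):
    """|<f| exp(-i H t) |i>|^2 via the eigendecomposition of the Hermitian matrix H."""
    evals, evecs = np.linalg.eigh(H)
    ci = evecs.conj().T @ psi_i          # components of |i> in the eigenbasis
    cf = evecs.conj().T @ psi_f          # components of |f> in the eigenbasis
    amps = np.array([np.vdot(cf, np.exp(-1j * evals * t) * ci) for t in times])
    return np.abs(amps) ** 2
```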
Following eq. (33), we prepare the initial state \(\big{|}i\big{\rangle}=\big{|}[3,0]^{2}\big{\rangle}\) with the aid of the \(\mathcal{U}_{3}\) operator, which is constructed by means of the unitary coupled cluster [62; 63; 64] procedure. First, we list all the modal matrices \(U_{K}\) for \(K\leq 3\), see Table 2, and find such an operator \(\mathcal{V}_{3}\) that for any \(K\leq 3\) its matrix in the basis of Fock states \(\big{\{}|\mathcal{F}^{\{K\}}\rangle\big{\}}\) is \(V_{K}\equiv i\ln\bigl{(}U_{K}\bigr{)}\) (see Appendix E for details):
\[\mathcal{V}_{3}=0.0364i\big{(}(a^{\dagger}_{1})^{3}a_{3}-a^{\dagger}_{3}(a_{1})^ {3}\big{)}. \tag{36}\]
Next, we define \(\mathcal{U}_{3}\!=\!\mathrm{e}^{-i\mathcal{V}_{3}}\) and \(\mathsf{A}^{\dagger}_{[3,0]}=\mathcal{U}_{3}a^{\dagger}_{3}\mathcal{U}^{\dagger}_{3}\), which relate the free and interacting ground states at \(K=3\) as follows:
\[\mathsf{a}^{\dagger}_{[3,0]}|\text{vac}\rangle=a^{\dagger}_{3}|\text{vac}\rangle=|3\rangle\,, \tag{37}\]
\[\begin{split}\mathsf{A}^{\dagger}_{[3,0]}=\mathcal{U}_{3}\mathsf{a}^{\dagger}_{[3,0]}\mathcal{U}^{\dagger}_{3}&=\mathrm{e}^{0.0364\left((a^{\dagger}_{1})^{3}a_{3}-a^{\dagger}_{3}(a_{1})^{3}\right)}\,a^{\dagger}_{3}\\ &\times\mathrm{e}^{-0.0364\left((a^{\dagger}_{1})^{3}a_{3}-a^{\dagger}_{3}(a_{1})^{3}\right)}\,,\end{split} \tag{38}\]
\[\begin{split}\big{|}[3,0]\big{\rangle}&=\mathsf{A}^{\dagger}_{[3,0]}|\text{vac}\rangle\\ &=\big{(}0.996\,a^{\dagger}_{3}-0.089\,(a^{\dagger}_{1})^{3}/\sqrt{6}\big{)}|\text{vac}\rangle\,.\end{split} \tag{39}\]
Note that if, instead of \(\mathsf{A}^{\dagger}_{[3,0]}\) defined in eq. (38), we had used the polynomial \(\mathsf{P}^{\dagger}_{[3,0]}\) to create particles in the interacting theory, the latter, according to eq. (39), would have acquired the form of
\[\mathsf{P}^{\dagger}_{[3,0]}=0.996\,a_{3}^{\dagger}-0.089\,(a_{1}^{\dagger})^{3}/\sqrt{6}\,. \tag{40}\]
While \(\mathsf{P}^{\dagger}_{[3,0]}\) and \(\mathsf{A}^{\dagger}_{[3,0]}\) act identically on \(|\mathrm{vac}\rangle\),
\[\mathsf{P}^{\dagger}_{[3,0]}|\mathrm{vac}\rangle=\mathsf{A}^{\dagger}_{[3,0]} |\mathrm{vac}\rangle\,, \tag{41}\]
they, generally, act differently on arbitrary Fock states, see Table 3. Consider, as an example, the state \(\big{|}[3,0]^{2}\big{\rangle}\), containing two particles of momentum \(3\). Whether it is defined as \(\big{(}\mathsf{A}^{\dagger}_{[3,0]}\big{)}^{2}|\mathrm{vac}\rangle\) or \(\big{(}\mathsf{P}^{\dagger}_{[3,0]}\big{)}^{2}|\mathrm{vac}\rangle\), it will belong to the space \(\mathrm{span}\big{\{}|3^{2}\rangle,|1^{3},3\rangle,|1^{6}\rangle\big{\}}\). However, the amplitudes in the two cases will not be equal. The difference becomes more pronounced at \(K=4\), where, due to the presence of terms such as \(a_{1}^{\dagger}a_{3}^{\dagger}\big{(}a_{2}\big{)}^{2}\) in \(\mathcal{V}_{4}\), the state \(\big{(}\mathsf{A}^{\dagger}_{[4,0]}\big{)}^{2}|\mathrm{vac}\rangle\) belongs to a subspace spanned over a larger number of Fock vectors than \(\big{(}\mathsf{P}^{\dagger}_{[4,0]}\big{)}^{2}|\mathrm{vac}\rangle\):
\[\big{(}\mathsf{A}^{\dagger}_{[4,0]}\big{)}^{2}|\mathrm{vac} \rangle\subset\mathrm{span}\big{\{}|4^{2}\rangle,|1^{2},2,4\rangle,|1^{4},2 \rangle, \tag{42a}\] \[|1^{2},3^{2}\rangle,|1,2^{2},3\rangle,|2^{4}\rangle,|1^{5},3 \rangle,|1^{8}\rangle\big{\}}\,,\] \[\big{(}\mathsf{P}^{\dagger}_{[4,0]}\big{)}^{2}|\mathrm{vac} \rangle\subset\mathrm{span}\big{\{}|4^{2}\rangle,|1^{2},2,4\rangle,|1^{4},2 \rangle\big{\}}\,. \tag{42b}\]
Similarly to \(\mathcal{U}_{3}\), we find the wave operators \(\mathcal{U}_{2}\), \(\mathcal{U}_{4}\), and \(\mathcal{U}_{5}\), and use those to define the initial and final states according to (32). The amplitudes of the resulting states in the basis of \(K=6\) Fock states are shown in Table 3, as well as the states obtained with the aid of operators \(\mathsf{P}^{\dagger}_{[n,0]}\).
Finding the exact \(\mathcal{U}_{K}\) is a costly procedure: calculating the modal matrix requires diagonalizing the Hamiltonian matrix whose dimension is exponential in \(K\), and the number of free parameters in \(\mathcal{V}_{K}\) grows exponentially with \(K\) as well. These parameters can be found via solving a linear system of equations, as described in Appendix E.
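The numbers entering eqs. (36)-(39) can be cross-checked with a small self-contained calculation in the odd \(K=3\) sector, whose Fock basis is \(\{|3\rangle,|1^{3}\rangle\}\). The matrix elements used below follow from eq. (15) with \(\mathrm{m}=1\), \(\lambda=30\); the script is only an illustrative check, not the authors' code.

```python
import numpy as np

m2, lam = 1.0, 30.0
g = lam / (4.0 * np.pi)

# Odd K=3 sector in the basis {|3>, |1^3>} (matrix elements of eq. (15)):
#   <3|H|3>     = m^2 / 3
#   <1^3|H|1^3> = 3 m^2 + (1/4) g <1^3|a†_1 a†_1 a_1 a_1|1^3> = 3 m^2 + 1.5 g
#   <1^3|H|3>   = (1/6) g (1/sqrt(3)) <1^3|(a†_1)^3 a_3|3>    = g sqrt(2) / 6
H = np.array([[m2 / 3.0,               g * np.sqrt(2.0) / 6.0],
              [g * np.sqrt(2.0) / 6.0, 3.0 * m2 + 1.5 * g]])

evals, evecs = np.linalg.eigh(H)
ground = evecs[:, 0]
if ground[0] < 0:
    ground = -ground
print(ground)                                # approx (0.996, -0.089), cf. eq. (39)
theta = np.arctan2(-ground[1], ground[0])    # admixture angle of |1^3> into |3>
print(theta, 0.0364 * np.sqrt(6.0))          # both approx 0.089, cf. eq. (36)
```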
Once the initial and final states are determined, the unitary time evolution is simulated using the full Hamiltonian operator acting on momentum modes up to \(K=6\). The transition probabilities for the chosen initial and final states are shown in Figure 3. In order to make sense of these graphs, we calculate their Fourier transform (shown in Figure 4) and confirm that the plots in the frequency domain have their peaks in the points corresponding to the differences \((E_{\mathrm{full,\,}K=6,\,j}-E_{\mathrm{full,\,}K=6,\,k})\) between the eigenvalues of the exact spectrum:8
Footnote 8: If \(\mathrm{e}^{-iHt}|i\rangle=\sum_{n}c_{n}\mathrm{e}^{-i\omega_{n}t}|n\rangle\) and \(|f\rangle=\sum_{m}d_{m}|m\rangle\), then \(\big{|}\langle f|\exp(-iHt)|i\rangle\big{|}^{2}=\sum_{mn}d_{n}^{\ast}c_{n}d_{m}c_{m}^{\ast}\mathrm{e}^{-i(\omega_{n}-\omega_{m})t}\), where \(\omega_{n}\) and \(|n\rangle\) are the eigenvalues and eigenvectors of \(H\).
\[\mathrm{spec}\,H_{K=6,\,\mathrm{even}} \tag{43}\] \[=\{0.605,0.879,1.769,8.313,10.104,24.329\}\,.\]
To further interpret plots in Figure 4, it is useful to compare the initial and final scattering states with the eigenvectors of the \(K=6\) Hamiltonian corresponding to eigenvalues in (43), see Table 3. While the Hamiltonian operator used for generating time evolution in Figure 3 encodes the entire spectrum of the system, the similarity of initial and final states with the lowest states in the \(K=6\) sector explains why the effects of higher frequencies are barely noticeable on frequency plots in Figure 4. These plots can also be contrasted with the transition probability \(\big{|}\langle\Psi_{\mathrm{eq}}|\mathrm{e}^{-i\mathcal{H}_{\mathrm{full}}t}| \Psi_{\mathrm{eq}}\rangle\big{|}^{2}\) of a trial state \(|\Psi_{\mathrm{eq}}\rangle\), an equal superposition of all the six eigenstates from the \(K=6\) even sector, see Figure 5.
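The expected peak positions can be read off directly from eq. (43); a short illustrative check:

```python
import itertools

spec = [0.605, 0.879, 1.769, 8.313, 10.104, 24.329]   # eq. (43)
# pairwise differences E_j - E_k: the frequencies at which peaks may appear in Figures 4 and 5
print(sorted({round(abs(a - b), 3) for a, b in itertools.combinations(spec, 2)}))
```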
Let us further investigate the effect of higher momentum modes on the evolution of low momentum states. To do so, we consider the time-dependent parton distribution function of the initial state, defined as:
\[\begin{split}\mathrm{PDF}(x=\mathfrak{n}/K,t)&=\langle i(t)|a^{\dagger}_{\mathfrak{n}}a_{\mathfrak{n}}|i(t)\rangle\\ &=\big{\langle}i\big{|}\mathrm{e}^{i\mathcal{H}_{\text{full}}t}a^{\dagger}_{\mathfrak{n}}a_{\mathfrak{n}}\mathrm{e}^{-i\mathcal{H}_{\text{full}}t}\big{|}i\big{\rangle}\,.\end{split} \tag{44}\]
The initial PDF of the state \(|i\rangle=|[3,0]^{2}\rangle\) is shown in Figure 6. As follows from the form of this state (see Table 3), the PDF is dominated by modes of momentum \(1\) and \(3\). The time evolution of this initial PDF is shown in Figure 7. The plot illustrates the effect of higher-momentum modes \(4\) and \(5\) on the evolution of the initial state.9
Footnote 9: The mode of momentum \(6\) is absent in Figure 7 because at harmonic resolution \(K=6\) the only state with this mode, \(|6\rangle\), contains a single particle and thus belongs to the odd sector.
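Since \(a^{\dagger}_{\mathfrak{n}}a_{\mathfrak{n}}\) is diagonal in the Fock basis, eq. (44) reduces to weighting the time-evolved probabilities of the basis states by their mode occupancies. A small illustrative routine (names and interface assumed, not from the paper):

```python
import numpy as np

def pdf(H, psi0, occupations, times):
    """PDF(x = n/K, t) of eq. (44): occupations[n] is the vector of occupancies w_n of
    mode n over the Fock basis states, since a†_n a_n is diagonal in that basis."""
    evals, evecs = np.linalg.eigh(H)
    c = evecs.conj().T @ psi0
    result = {}
    for t in times:
        prob = np.abs(evecs @ (np.exp(-1j * evals * t) * c)) ** 2
        result[t] = {n: float(prob @ w) for n, w in occupations.items()}
    return result
```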
In order to calculate the transition probability between the two-particle states in the interacting theory, we defined those using operators \(\mathsf{A}^{\dagger}_{[n,0]}\) creating the eigenstates of the inter
acting theory from vacuum. We constructed such operators with the aid of the wave operators \(\mathcal{U}_{K}\) which relate the eigenbases of the free and interacting theories at particular values of harmonic resolution \(K\), see eqs. (29) and (30). To find the wave operator \(\mathcal{U}_{K}\), we diagonalized the interacting Hamiltonian matrix in the basis of Fock states, calculated the logarithm of the corresponding modal matrix, and found the simplest cluster operator \(\mathcal{V}_{K}\) of such a form that the unitary transformation \(\mathcal{U}_{K}=\mathrm{e}^{-i\mathcal{V}_{K}}\) obeys the desired properties, see eqs. (24)-(26). We also contrasted the so-defined operators with those obtained via the definition eq. (4), which only involves polynomials in creation operators. The constructed multi-particle states predominantly belong to the low LF energy subspace of the theory (in the sense of both free and interacting Hamiltonians), yet their time evolution is affected by states of higher LF energies. While the considered model was simple enough for the calculations to be performed exactly, the practical usage of the method would rely on approximate techniques and/or quantum computing. In Section V, the latter path is outlined.
Figure 5: Fourier spectrum of the transition probability \(\left|\langle\Psi_{\mathrm{eq}}|\mathrm{e}^{-i\mathcal{H}_{\mathrm{full}}t}|\Psi_{\mathrm{eq}}\rangle\right|^{2}\) for an equal weight superposition of \(\mathcal{H}_{\mathrm{full,\,}K=6}\) even sector eigenstates. The state \(\left|\Psi_{\mathrm{eq}}\right\rangle\) does not carry physical meaning; the plot is shown for contrast with Figure 4, where the frequencies are shifted toward the left of the spectrum.
## V Quantum simulation of scattering
The approach to scattering given in Sections III and IV is well suited to serve as a starting point for designing quantum simulation algorithms. Implementing the action of operators, restricted to a certain subset of modes, is achieved via manipulating the corresponding qubit registers on the quantum computer. If the initial wavefunctions of incoming particles do not overlap, separate state preparation procedures can be used for the disjoint parts of the system. The corresponding quantum circuits will then act on disjoint subsets of qubits, and the usage of variational and projection-based techniques is possible. In a more general scenario, when preparation of the composite particles involves overlapping sets of modes, it is most natural to prepare the initial states with the aid of the wave operator. The action of the latter can be then implemented by means of such techniques as adiabatic state preparation or GWW flow.
Consider a circuit for preparing the state \(\left|\left[K_{1},0\right],\left[K_{2},0\right]\right\rangle\) defined in (32). As shown in Figure 8, adding to the state a particle of momentum \(K_{1}\) (\(K_{2}\)) only involves acting on modes of momentum up to \(K_{1}\) (\(K_{2}\)). The single-mode creation operators \(a^{\dagger}_{K_{1,2}}\) can be implemented using ancillary qubits (see e.g. step 2 in Sec. 3.2 of [6] or Section V. in [123]). The first \(\mathcal{U}^{\dagger}_{K_{2}}\) gate can be omitted if one starts from a vacuum state. Thus, the cost of the state preparation circuit comes from implementing wave operator (twice for each particle), as well as from implementing the action of the free creation operator. In a similar manner one can design circuits for efficient preparation of any states of the form (31).
Following the state preparation step depicted in Figure 8, one evolves the system in time and measures the observables of interest in the final state of the system. The simplest (yet likely not the most efficient) way of calculating the transition probability \(\left|\left\langle f|\mathrm{e}^{-i\mathcal{H}_{\mathrm{full}}t}|i\rangle \right\rangle\right|^{2}\) amounts to doubling the number of qubits and using the SWAP-test [124] circuit shown in Figure 9.
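The SWAP test yields the squared overlap only indirectly: the probability of finding the ancilla in \(|0\rangle\) is \(P(0)=\tfrac{1}{2}\big{(}1+|\langle f|\mathrm{e}^{-i\mathcal{H}_{\text{full}}t}|i\rangle|^{2}\big{)}\), so the transition probability is recovered as \(2P(0)-1\). A classical two-liner expressing this standard relation (illustrative only):

```python
import numpy as np

def swap_test_p0(phi, psi):
    """Ancilla-|0> probability of the SWAP test: P(0) = (1 + |<phi|psi>|^2) / 2."""
    return 0.5 * (1.0 + np.abs(np.vdot(phi, psi)) ** 2)
```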
While in Figures 8 and 9 we implicitly assumed the usage of _direct_ mapping [121], in which a particular qubit register is assigned to each momentum mode, the described approach can be implemented using any available encoding and state preparation algorithm, such as that discussed in [123].
Efficient realization of eq. (4) containing an exponential number of terms on a quantum computer would pose a challenging task. Given a state \(\sum_{\mathcal{F}}c_{\mathcal{F}}|\mathcal{F}\rangle=\sum_{\mathcal{F}}c_{ \mathcal{F}}\mathrm{a}^{\dagger}_{\mathcal{F}}|\mathrm{vac}\rangle\), implementing the action of the operator \(\sum_{\mathcal{F}}c_{\mathcal{F}}\mathrm{a}^{\dagger}_{\mathcal{F}}\) would likely require
Figure 6: \(\mathrm{PDF}(x=n/K,t=0)\), the parton distribution function, as defined in (44), for the initial state \(|i\rangle=\left|[3,0]^{2}\right\rangle\), as defined in eq. (33) and shown in Table 3.
Figure 7: Time-dependent parton distribution function from eq. (44) for the state \(|i\rangle=|[3,0]^{2}\rangle\). The plot illustrates how the modes of higher LF momentum \((\mathsf{n}=4,5)\) participate in the time evolution of the state \(|i\rangle=|[3,0]^{2}\rangle\), in which only the modes \(\mathsf{n}=1,3\) were initially occupied.
access to a subroutine producing the action of \(\mathsf{a}_{\mathcal{F}}^{\dagger}\) controlled on the state \(\left|\mathcal{F}\right\rangle\). This would necessitate the usage of fault-tolerant hardware, in which case using the wave operator approach seems to be a more plausible option. A possible near-term strategy could amount to representing the ground state as a superposition of a polynomial number of basis states using subspace-based methods [125; 126; 127; 128; 129; 130; 131].
## VI Summary and Outlook
In this work, we proposed a framework for simulating scattering processes, which generalizes existing techniques from non-relativistic many-body quantum mechanics [33; 34; 35] and relativistic quantum field theory [7; 12; 13; 40; 41] in several directions. Our approach is well-suited for the studies of strongly
Figure 8: Quantum circuit for preparing a two-particle state (32). Single-mode creation operators \(a_{K_{1,2}}^{\dagger}\) can be implemented using ancillary qubits (see Appendix F). The first \(\mathcal{U}_{K_{2}}^{\dagger}\) circuit can be omitted if one starts from the vacuum state.
interacting systems, in which the initial and final states are comprised of multiple composite particles with overlapping wavefunctions. In analogy with the free theory, these states are constructed with the aid of operators creating eigenstates of the interacting Hamiltonian from vacuum. Such operators are defined in terms of creation operators of the free theory and wave operators [47; 48; 49] relating the eigenbases of the free and interacting theory at particular values of cutoffs, see eq. (5). This definition is related to the notion of effective particles in LF QFT [50; 51; 52; 53; 54; 55; 56; 57; 58; 59] and allows for several ways of implementation. Note, however, that operators defined in eq. (5) are not the _effective particle operators_[50; 51; 52; 53; 54; 55; 56; 57; 58; 59] of the combined system in the sense of RGPEP (at infinite flow parameter), as they are defined in theories with smaller cutoffs. This explains why their successive application does not produce eigenstates of the combined system, and leads to their non-commutativity which has to be further addressed in a future work.
We illustrated the construction of multi-particle states using, as an example, the \(\phi^{4}\) theory in \(1+1\)D formulated in the language of the Discretized Light-Cone Quantization framework [38]. To construct the wave operator exactly, we used the unitary coupled cluster procedure [62; 63; 64]. We also argued that our approach provides a formulation suitable for simulating scattering of composite particles on quantum computers. The most natural implementation of the wave operators is via adiabatic state preparation (ASP). This means that the efficiency of the quantum algorithms is subject to the usual conditions for the efficiency of ASP [7; 24; 67; 68; 69; 70; 71]. Quantum algorithms based on the similarity renormalization group are emerging [83].
While preparation of the wave operator generally requires knowledge of the entire spectrum of the system, of great interest is the possibility of approximating it within the low-energy subspace of the Hamiltonian, which can be achieved on near-term quantum hardware using subspace-based methods [125; 126; 127; 128; 129; 130; 131]. We discussed the construction of circuits for initial and final state preparation, as well as the measurement procedure, and left for further investigation the construction of circuits implementing Galilean boosts and LF momentum transformations.
In the present work, we ignored the issue of renormalization. The LF formulation of QFT provides various ways to perform non-perturbative renormalization of QFT. An important question is, therefore, which of these methods is most suitable for our approach. Of particular interest is the possibility of combining the usage of the double commutator flow equation for both renormalizing the canonical QFT and solving it numerically [74; 75; 76; 77; 78; 79; 80; 81; 82; 83].
Our work further motivates the development of discretized LF QFT in the position representation. When choosing a discretization scheme for performing calculations in LF QFT on classical computers, one typically prioritizes the optimal basis choice [132; 40; 116] allowing for better convergence and/or earlier truncation -- similar to the studies of localized bound states in quantum chemistry [133; 33] and low-energy nuclear physics [134; 135]. Indeed, it is the dimension of Hilbert space and the sparsity of the Hamiltonian matrix that ultimately determine the computational complexity of classical simulation. By contrast, the major parameters determining the complexity of quantum simulation are the number of elementary operator terms in the Hamiltonian and their locality. From this perspective, the optimal choice is given by spatial discretization, which was utilized in the construction of recently developed nearly-optimal algorithms for quantum simulation of quantum chemistry [136; 137; 138], and will be investigated in the context of LF QFT. In addition to these arguments, spatial discretization may provide a way to construct non-overlapping wavefunctions of LF bound states, leading to the factorized form of the initial state, which would enable the usage of variational and projection-based state preparation techniques.
## VII Acknowledgements
This material is based upon work supported by the U.S. Department of Energy, Office of Science, National Quantum Information Science Research Centers, Quantum Systems Accelerator. MK acknowledges additional support from the DOE grant PH-HEP24-QuantISED, B&R KA2401032/34, and is grateful to Stanislaw D. Glazek for fruitful discussions that greatly improved the manuscript. JPV acknowledges support from US Department of Energy grant DE-SC0023692. JPV and PJL acknowledge support from US Department of Energy grant DE-SC0023707. |
2305.17792 | A schematic model for the direct cross-section in reactions induced by exotic and stable projectiles | A geometric model for the direct contribution of the reaction cross section induced by light ions on different targets is presented. The model separates the total reaction cross section into two components, one for total fusion and another for direct reactions. We show that the direct part scales as $2 \pi Ra$, where $R$ is related to the nuclear radius and $a$ is the width of a ring, which is related to the nuclear diffuseness. A simple expression is presented to calculate the radius $R$ and the width parameter $a$ in terms of the masses and charges of the system. The method is applied to experimental data of exotic, weakly bound, and strongly bound projectiles on several targets. Different diffuseness parameters were obtained for different types of projectiles: exotic n-rich, stable weakly bound, stable strongly bound and exotic p-rich projectiles.
| A. Serra, R. Lichtenthäler, O. C. B. Santos, K. C. C. Pires, U. Umbelino | 2023-05-28T18:35:46Z | http://arxiv.org/abs/2305.17792v2 | A schematic model for the direct cross-section in reactions induced by exotic and stable projectiles
###### Abstract
A geometric model for the direct contribution of the reaction cross section induced by light ions on different targets is presented. The model separates the total reaction cross section into two components, one for total fusion and another for direct reactions. We show that the direct part scales as \(2\pi Ra\), where \(R\) is related to the nuclear radius and \(a\) is the width of a ring, which is related to the nuclear diffuseness. A simple expression is presented to calculate the radius \(R\) and the width parameter \(a\) in terms of the masses and charges of the system. The method is applied to experimental data of exotic, weakly bound, and strongly bound projectiles on several targets. Different diffuseness parameters were obtained for different types of projectiles: exotic \(n\)-rich, stable weakly bound, stable strongly bound and exotic \(p\)-rich projectiles.
## I Introduction
Low energy reactions induced by light weakly bound and exotic projectiles have been extensively studied in the last two decades [1; 2; 3; 4; 5; 6; 7; 8; 9; 10]. A larger total reaction cross section has been observed in systems involving exotic \(n\)-rich projectiles, in comparison with stable weakly bound and strongly bound projectiles. Before comparing cross sections of systems with different masses, it is necessary to re-scale the cross sections to remove trivial geometric effects such as different radii and Coulomb barriers. Different methods have been proposed to reduce the total reaction cross sections [4; 11; 12; 13; 14; 15; 16; 17]. Application of these methods to experimental data shows that there are three main classes of reduced cross sections. Exotic neutron-rich projectiles such as \({}^{6}\)He usually present the largest reduced cross sections, followed by the weakly bound, such as \({}^{6,7,8}\)Li and \({}^{7}\)Be, and, finally, the strongly bound projectiles, such as alpha particles, \({}^{12}\)C and \({}^{16}\)O. Although this enhancement has been observed mainly in reactions with heavy targets, it has been reported on light targets as well [3]. The reasons for this enhancement are not yet completely understood. Light nuclei usually exhibit a strong cluster structure which is expected to play an important role in the reaction mechanisms. In particular, at low energies, around and below the Coulomb barrier, coupled channels effects are expected to be more important; however, it is not clear how fusion and direct cross sections are affected by them.
In collisions induced by projectiles with alpha-structure, a large yield of alpha particles has been observed in the spectra as early as 1961 [18] and, later, in other nuclides such as \({}^{6}\)He and \({}^{7}\)Li as well [19]. Investigation of the angular and energy distributions of these fragments indicates that they are produced mainly in direct processes, such as neutron transfer and projectile breakup. Direct reactions should have quite different characteristics compared to non-direct (compound nucleus) processes, mainly regarding the energy and angular distributions of the reaction products. Due to the very different time scales of direct (\(\approx 10^{-22}\) s) and compound nucleus (\(\approx 10^{-19}\) s) processes, the angular and energy distributions of the reaction products are expected to differ markedly. Particles produced by direct processes are expected to have a forward peaked distribution with energies near the energy of the projectile. On the other hand, for processes which occur via compound nucleus formation, a more isotropic angular distribution is expected, with an energy distribution shifted toward lower energies. Nevertheless, in practical terms, the experimental separation of direct and non-direct reactions (fusion) is not trivial at low energies. In particular, below the Coulomb barrier, the experimental separation between fragments from direct processes and those coming from complete fusion becomes difficult. Reactions such as incomplete fusion can contribute in a region of energies where they are difficult to separate from pure direct processes, requiring the measurement of more involved degrees of freedom such as neutrons and gammas in coincidence. At energies above the barrier the situation improves and it seems to be possible to obtain reliable direct cross sections by measuring only the charged-fragment distributions.
In the present paper, we propose a method to estimate the direct part of the total reaction cross section at energies above the Coulomb barrier. In section II, we present the formalism. In section III, the method is applied to analyse experimental data. In section IV, we present the conclusions.
## II The method and its application to data.
At energies above the Coulomb barrier, for weakly deformed nuclei, the total reaction cross section can be calculated using the well-known geometric formula, given below:
\[\sigma_{R}=\pi R_{b}^{2}(1-\frac{V_{b}}{E}) \tag{1}\]
where \(R_{b}\) and \(V_{b}\) are, respectively, the Coulomb radius
and the Coulomb barrier. This equation has been used to reduce the total reaction cross section data [4; 15; 20; 21; 22]. The total reaction cross section tends to \(\pi R_{b}^{2}\) for \(E\gg V_{b}\) and the term inside parentheses gives its energy dependence as a function of the ratio \(E/V_{b}\). For energies below the barrier, one may use the Wong formula [23; 24] - Eq.[2].
\[\sigma_{R}=\left[R_{b}^{2}\hbar\omega/2E\right]\ln\{1+\exp(2\pi(E-V_{b})/\hbar \omega)\} \tag{2}\]
It is easy to show that, for energies slightly above the Coulomb barrier, Eq.[2] reduces exactly to the geometric Eq.[1].
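As a quick numerical illustration of this limiting behaviour (not part of the original analysis), the short Python sketch below evaluates Eqs.[1; 2] for purely illustrative values of \(R_{b}\), \(V_{b}\) and \(\hbar\omega\); the two expressions indeed converge once \(E\) exceeds \(V_{b}\) by a few units of \(\hbar\omega\).

```python
import math

def sigma_geometric(E, Rb, Vb):
    """Eq. (1): geometric reaction cross section (fm^2), meaningful for E > Vb."""
    return math.pi * Rb**2 * (1.0 - Vb / E)

def sigma_wong(E, Rb, Vb, hbar_omega):
    """Eq. (2): Wong formula (fm^2), usable also below the barrier."""
    return (Rb**2 * hbar_omega / (2.0 * E)) * math.log(
        1.0 + math.exp(2.0 * math.pi * (E - Vb) / hbar_omega))

# Illustrative (not fitted) parameters: Rb in fm, Vb and hbar*omega in MeV.
Rb, Vb, hbar_omega = 10.5, 30.0, 4.0
for E in (25.0, 32.0, 40.0, 60.0):
    geo = sigma_geometric(E, Rb, Vb) if E > Vb else float("nan")
    wong = sigma_wong(E, Rb, Vb, hbar_omega)
    print(f"E = {E:5.1f} MeV   geometric = {10*geo:8.1f} mb   Wong = {10*wong:8.1f} mb")
```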
The next step is to write the total reaction cross section as the sum of two contributions, one for total fusion and another for direct reactions:
\[\sigma_{R}=\sigma_{fus}+\sigma_{dir} \tag{3}\]
Direct reactions are expected to be more peripheral processes, taking place in a limited angular momentum range located near the grazing angular momentum with a certain width. At high energies, in \(r\)-space this corresponds to a ring of radius \(R_{x}\) and width \(a\), which leads to the following relation - see Fig.1 and Ref.[25]:
\[\pi R_{b}^{2}=\pi R_{fus}^{2}+2\pi R_{x}a \tag{4}\]
Thus, the simple substitution \(\pi R_{b}^{2}\to 2\pi R_{x}a\) in Eqs.[1; 2] leads to the following expression for the total cross section for direct processes:
\[\sigma_{dir}=2\pi R_{x}a\left(1-\frac{V_{b}}{E}\right) \tag{5}\]
and similarly for Eq.[2]. Indeed, as early as 1947, for high energy deuterons (around 190 MeV), a similar expression was used in Ref.[26]. Following this geometric approach, we can define \(R_{x}=R_{fus}+a/2\) and \(a=R_{b}-R_{fus}\). The parameter \(a\) is the principal quantity in our methodology.
From now on, we apply a model [4; 27] which relates the Coulomb \(R_{b}\) and the nuclear \(R\) radii. This model provides a quite natural connection with the geometric picture of Fig.1. From this model:
\[R_{b}=R+a_{n}{\rm ln}(X) \tag{6}\]
and, in this last expression:
\[R_{fus}\approx R=r_{0}(A_{p}^{1/3}+A_{t}^{1/3}) \tag{7}\]
with \(r_{0}=1.3\) fm standing for the reduced nuclear radius, and \(a_{n}\) is the diffuseness of the nuclear potential, whose standard value for stable ions is \(a_{n}=0.65\) fm. The parameter \(X\) is given by [27]:
\[X=27.1\frac{[A_{p}^{1/3}+A_{t}^{1/3}]^{2}}{Z_{p}Z_{t}} \tag{8}\]
The parameters of the above equation have been obtained in Ref.[27] using a real Woods-Saxon nuclear potential that fits the tail of a double folding potential. It is interesting to note that the parameter \(X\) has a dependence on A/Z, which seems to be present in the data, as we will show later. It has been shown that fusion cross sections scale as in Eq.[7] with normal \(r_{0}\) values around 1.2-1.5 fm [14]. Thus, the simple substitution \(R\approx R_{fus}\) in Eq.[6] provides a formula for the width \(a\) of the direct reaction disk, as given below:
\[a=a_{n}{\rm ln}(X) \tag{9}\]
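For concreteness, Eqs.[6]-[9] can be packaged into a few lines of Python. The sketch below uses \(r_{0}=1.3\) fm and takes \(a_{n}\) as an input; for example, running it for \({}^{8}\)B + \({}^{28}\)Si with the \(p\)-rich diffuseness \(a_{n}=0.55\) fm quoted in Section III returns \(a\approx 1.26\) fm and \(R_{x}\approx 7.2\) fm, close to the values cited there.

```python
import math

def ring_geometry(Ap, Zp, At, Zt, a_n=0.65, r0=1.3):
    """Radii (fm) of the fusion disk and of the direct-reaction ring, Eqs. (6)-(9)."""
    s = Ap**(1 / 3) + At**(1 / 3)
    R = r0 * s                                   # Eq. (7), R ~ R_fus
    X = 27.1 * s**2 / (Zp * Zt)                  # Eq. (8)
    a = a_n * math.log(X)                        # Eq. (9), width of the ring
    Rb = R + a                                   # Eq. (6) with R_fus ~ R
    Rx = R + a / 2.0                             # mid-radius of the ring
    return {"R": R, "X": X, "a": a, "Rb": Rb, "Rx": Rx}

# Example: 8B + 28Si with the p-rich diffuseness of Section III.
print(ring_geometry(Ap=8, Zp=5, At=28, Zt=14, a_n=0.55))
```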
A universal curve for the reduced direct cross section, \(\sigma_{red}\), is obtained simply by dividing Eq.[5] by \(2\pi R_{x}a\):
\[\sigma_{red}=1-\frac{1}{x} \tag{10}\]
with \(x=E/V_{b}\). In the case of the Wong formula (Eq.[2]), a slightly different reduction holds: \(\sigma_{red}^{\prime}=\sigma_{dir}E/(aR_{x}\hbar\omega)\), which leads to:
\[\sigma_{red}^{\prime}=\ln\{1+\exp(2\pi x^{\prime})\} \tag{11}\]
with \(x^{\prime}=(E-V_{b})/\hbar\omega\). Recently, in Ref.[28], some direct transfer reaction channels were normalized using the usual Wong formula with free parameters related to barrier shift and separation energies. Both the reduced cross sections, \(\sigma_{red}\) and \(\sigma_{red}^{\prime}\), and the independent variables, \(x\) and \(x^{\prime}\), are dimensionless quantities and will be used to compare this method with experimental data in the next section.
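The reduction itself is equally simple to implement. The sketch below maps a measured direct cross section onto the dimensionless pair \((x,\sigma_{red})\) of Eq.[10]; since the text does not spell out how \(V_{b}\) is evaluated, the Coulomb energy at \(R_{b}\), \(V_{b}\approx 1.44\,Z_{p}Z_{t}/R_{b}\) MeV, is used here purely as a stand-in, and the input cross section is a hypothetical number.

```python
import math

def reduce_direct(sigma_dir_mb, E_MeV, Ap, Zp, At, Zt, a_n=0.65, r0=1.3):
    """Return (x, sigma_red, universal-curve value) for one data point, Eq. (10)."""
    s = Ap**(1 / 3) + At**(1 / 3)
    R = r0 * s                                    # Eq. (7)
    a = a_n * math.log(27.1 * s**2 / (Zp * Zt))   # Eqs. (8)-(9)
    Rb, Rx = R + a, R + a / 2.0                   # Eq. (6)
    Vb = 1.44 * Zp * Zt / Rb                      # stand-in estimate of the barrier (MeV)
    x = E_MeV / Vb
    sigma_red = (sigma_dir_mb / 10.0) / (2.0 * math.pi * Rx * a)   # 1 fm^2 = 10 mb
    return x, sigma_red, 1.0 - 1.0 / x

# Hypothetical 6Li + 208Pb point: 500 mb of direct cross section at 60 MeV.
print(reduce_direct(sigma_dir_mb=500.0, E_MeV=60.0, Ap=6, Zp=3, At=208, Zt=82))
```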
Figure 1: A geometric picture of the model. \(R_{b}\) and \(R\) determine the total reaction and fusion cross sections disks, respectively. \(R_{x}\) and \(a\) stand for radius and width of the direct reaction ring, respectively.
## III Application to data
This model was applied to analyze experimental data corresponding to the direct part of total reaction cross section. We selected the experimental data by two methods:
1. directly, considering the angle-energy integrated cross section for the detected projectile charged fragments, obtained in the following Refs.[9; 10; 19; 28; 30; 31; 33; 35; 36; 37; 38; 40; 41; 42; 43; 48; 49; 50; 52; 54; 61; 62]. In the \({}^{9}\)Be+\({}^{9}\)Be case Ref. [44], a detailed \(\gamma\)-spectroscopy analysis was performed to identify all the relevant reaction channels.
2. indirectly, from the difference, \(\sigma_{dir}=\sigma_{R}-\sigma_{fus}\), for the cases where total reaction and complete fusion cross sections (CF) are available [29; 32; 39; 40; 46; 47; 49; 55; 56; 57; 58; 59; 60]. In the present study, we considered only CF cross sections.
In Table 1, we present a list of all systems considered and in Fig.2, a plot of the selected experimental data is presented as a function of the variable \(x=E/V_{b}\). In Fig. 2, we see that the data are scattered; however, it is possible to observe four different data groups of cross sections: the exotic _n_-rich projectiles (red), with the largest cross sections, except for the \({}^{11}\)Li case (yellow squares), which will be discussed later; the weakly bound (WB - blue) projectiles, which present a considerably smaller cross section; and, finally, the strongly bound (SB - green), with the smallest cross sections. There are also very few points for the proton rich \({}^{8}\)B and \({}^{17}\)F projectiles (_p_-rich black), with cross sections comparable to the strongly bound. This result is quite surprising considering that these proton rich projectiles are very loosely bound and a large direct cross section for breakup and other reaction channels is expected above the Coulomb barrier. However, there are very few points above the barrier, which prevents any definite conclusion at this point.
To compare the data with our model, all cross sections from Fig.2 were divided by \(\sigma_{dir}=2\pi R_{x}a\). \(R_{x}\) and \(a\) were calculated using Eqs.[6; 8; 9] with a single parameter \(a_{n}\), the nuclear diffuseness, adjusted to best fit the universal curve Eq.[10]. The results are shown in Fig. 3. As one can see by comparing Figs.2 and 3, the division by \(\sigma_{dir}=2\pi R_{x}a\) clearly condenses the data into well-defined groups. In fact, the bi-dimensional reduced variance [67] between the \((x,\sigma_{dir}/\sigma_{dir,max})\) and \((x,\sigma_{dir}/\sigma_{red})\) is decreased by a factor of approximately 5 in the \(x>1.8\) range; however, in the lower \(x\) range, there is no significant change in the bi-dimensional reduced variance.
For energies well above the barrier, \(x\geq 1.8\), all the points fit the universal curve Eq.[10], using different values of \(a_{n}\) for each type of projectile: exotic _n_-rich, stable weakly bound (WB), stable strongly bound (SB) and exotic _p_-rich. The following results were obtained: \(\hat{a}_{n}^{n-rich}=1.32(06)\) fm; \(\hat{a}_{n}^{WB}=0.66^{+0.16}_{-0.12}\) fm and \(\hat{a}_{n}^{SB}=0.68^{+0.16}_{-0.04}\) fm. The above values of \(a_{n}\) and errors were obtained by minimizing the chi-square, separately for each type of projectile. In the _p_-rich case, only two points lie at \(x\geq 1.8\); these were adjusted to the universal curve to obtain \(a_{n}^{p-rich}=0.55\) fm, with no error estimation. This \(a_{n}^{p-rich}\) leads to an estimation of \(R_{x}=7.16\) fm and \(a=1.23\) fm for the \({}^{8}\)B + \({}^{28}\)Si system, which is reasonably close to the values evaluated in Ref.[49], where 7.5 and 1.02 fm were obtained, respectively.
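The chi-square adjustment of \(a_{n}\) described above can be reproduced schematically as follows; the data points below are hypothetical and the scan covers a single system, whereas the values quoted in the text come from fits over all systems of a given projectile class.

```python
import math

def chi2(a_n, data, Ap, Zp, At, Zt, r0=1.3):
    """Chi-square of reduced data against the universal curve 1 - 1/x (Eq. 10)."""
    s = Ap**(1 / 3) + At**(1 / 3)
    R = r0 * s
    a = a_n * math.log(27.1 * s**2 / (Zp * Zt))
    Rx = R + a / 2.0
    return sum((sig_fm2 / (2.0 * math.pi * Rx * a) - (1.0 - 1.0 / x))**2
               for x, sig_fm2 in data if x >= 1.8)      # fit only the x >= 1.8 region

# Hypothetical (x, sigma_dir in fm^2) points for a single n-rich system on 208Pb.
data = [(1.9, 95.0), (2.4, 120.0), (3.1, 140.0)]
grid = [0.40 + 0.01 * k for k in range(121)]            # scan a_n from 0.40 to 1.60 fm
best = min(grid, key=lambda an: chi2(an, data, Ap=6, Zp=2, At=208, Zt=82))
print(f"fitted a_n = {best:.2f} fm")
```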
The results show that a considerably larger value of \(a_{n}\) is necessary to fit the exotic _n_-rich projectiles to the universal curve, in comparison to the weakly bound, strongly bound and exotic _p_-rich. These last three cases gave values of diffuseness near the expected standard value of \(a_{n}\approx 0.65\) fm. _P_-rich exotic projectiles, namely \({}^{8}\)B and \({}^{17}\)F, present the lowest diffuseness, similar to the strongly bound stable systems. This result contrasts with previous analyses where the total reaction cross section for exotic _p_-rich projectiles presents a large value compared to the stable systems [22; 68]. However, no conclusion is possible here due to the small number of experimental points considered.
On the other hand, in the region \(x<1.8\) the situation seems different. A considerable amount of data for exotic _n_-rich and WB projectiles present similar reduced cross sections, both considerably above the universal curve and this behavior persists for energies below the Coulomb barrier. We see two possible explanations for this behavior. Coupled channels effects are expected to be more important at lower energies and could cause this enhancement. Another possible explanation could be the fusion contamination in the direct cross section data, as the experimental separation between these processes becomes more difficult at lower energies.
One may argue that, for the region below the Coulomb barrier, the \((x^{\prime},\sigma^{\prime}_{red})\) reduction scheme of Eq.[11] using the Wong formula would be more appropriate. The result using the Wong formula is presented in Fig.4, where the same \(a_{n}\) values fitted at high energies were used. We see that the enhancement is not explained by the Wong formula either.
Finally, the \({}^{11}\)Li+\({}^{208}\)Pb case is remarkable as it falls well above the universal curve. The data for the \({}^{11}\)Li\(\rightarrow\)\({}^{9}\)Li+\(n+n\) breakup reaction on a \({}^{208}\)Pb target were obtained in a clean experiment and are presented in Ref.[36]. There is no doubt about those experimental cross sections, and the results show how large the \({}^{11}\)Li breakup cross section on heavy targets can be, possibly due to the contribution of the breakup in the Coulomb field of the heavy target. The measured \({}^{11}\)Li breakup cross section of \(\approx 5\) barns probably exhausts the total reaction cross section in this case. In the case of \({}^{11}\)Li there are also measurements on light targets \({}^{12}\)C and \({}^{9}\)Be [35] above the barrier which fit very well with \(\hat{a}_{n}^{n-rich}=1.32(06)\) fm as obtained for _n_-rich projectiles. This indicates that the enormous effect seen in the \({}^{11}\)Li+\({}^{208}\)Pb case is probably caused by the Coulomb breakup contribution. It is interesting to mention that, for the \({}^{7}\)Li projectile, the situation is different and both
\({}^{7}\)Li+\({}^{208}\)Pb and \({}^{7}\)Li+\({}^{9}\)Be cases gave similar reduced cross sections in better agreement with the universal curve.
## IV Conclusions
A model to estimate the contribution of direct reactions to the total reaction cross section of different projectile-target systems is proposed. The method separates the total reaction cross section into two contributions, one from the total fusion cross section, which scales as the disk area \(\pi R^{2}\) and another from the direct processes, which scale as the area of a ring \(2\pi Ra\), \(a\) being the width of the ring in \(r\)-space. The method is applied for reactions induced by stable and exotic projectiles on different mass targets and energies around and above the Coulomb barrier. It was found that the direct part of the cross section scales well with the \(2\pi Ra\) expression for energies above the Coulomb barrier, and for a large range of target masses. The width parameter \(a\) is directly related to the nuclear diffuseness \(a_{n}\) and seems to be dependent basically on the projectile structure. A considerably larger nuclear diffuseness \(a_{n}\) was obtained for the exotic \(n\)-rich \({}^{6,8}\)He, \({}^{11}\)Be and \({}^{11}\)Li projectiles compared to the stable weakly bound and strongly bound projectiles such as \({}^{4}\)He, \({}^{12}\)C and \({}^{16}\)O.
Our results indicate that, well above the Coulomb barrier, the enhancement observed in the direct cross section for the neutron halo projectiles is considerable and can be accounted for by a larger nuclear diffuseness parameter \(a_{n}\) in the model.
On the other hand, for \(x<1.8\), the model does not account for the observed enhancements in neutron halo and weakly bound systems. We believe that, for energies below the Coulomb barrier, coupled channel effects become more important and should be explicitly taken into account in order to reproduce the data, which is not the case for the present geometric model. Moreover, at lower energies, the experimental separation between direct and non-direct processes becomes more difficult and some contamination from fusion in the direct cross section could also explain a part of this enhancement.
The geometric model presented here is not intended to reproduce all the complexity and details of specific direct reaction channels. However, it may be useful to scale the contribution of total direct processes to the reaction cross section, allowing the comparison of different systems with a common framework.
\begin{table}
\begin{tabular}{l l l} \hline \hline
System & Ref. & Note \\ \hline
\multicolumn{3}{c}{Type: exotic \(n\)-rich} \\ \hline
\({}^{6}\)He + \({}^{209}\)Bi & [29] & The raw data are total fusion cross sections evaluated at Ref.[62] by fusion products discriminated on the basis of delayed \(\alpha\) yields. ICF (Incomplete Fusion) of the \({}^{4}\)He core was not considered due to the Q-value for this reaction channel. \\ \hline
\multicolumn{3}{c}{Type: WB} \\ \hline
\({}^{6}\)Li + \({}^{208}\)Pb & [29] & Fusion cross sections were evaluated at Ref.[63] by fusion residuals yields. ICF residuals were not observed. \\ \hline
\({}^{7}\)Li + \({}^{208}\)Pb & [40] & Barrier-penetration model calculations which used its own optical model parameters. This method can show reasonable accuracy [64]. \\ \hline
\({}^{7}\)Li + \({}^{9}\)Be & [39] & Fusion cross-sections are evaluated by \(\alpha\) yields at backwards angles and PACE calculations. \\ \hline
\({}^{9}\)Be + \({}^{208}\)Pb & [46; 47] & CF cross sections were obtained by summing the cross sections for Rn isotopes residues originated from important neutron evaporation channels and the fission one. \\ \hline
\multicolumn{3}{c}{Type: exotic \(p\)-rich} \\ \hline
\({}^{8}\)B + \({}^{58}\)Ni & [49] & Proton, \(\alpha\) yields and PACE calculations were used to evaluate CF fusion cross sections. ICF and CF were discriminated by the proton evaporation. \\ \hline
\multicolumn{3}{c}{Type: SB} \\ \hline
\({}^{16}\)O+\({}^{58}\)Ni & [60] & Barrier penetration model using the best fit optical model potentials. Recent experimental fusion data at Ref.[65] are available for beam energies below those of Ref.[60], where BPM calculations are adnent to these experimental data. \\ \hline
\({}^{12}\)C+\({}^{209}\)Bi & [59] & Evaporation residues and fission fragments measurements. ICF is not considered an issue in this system. \\ \hline
\({}^{12}\)C+\({}^{13}\)C & [56] & Total fusion cross sections evaluated considering the integrated angular distributions of evaporation residues with Z \(\geq 6\). \\ \hline
\({}^{12}\)C+\({}^{16}\)O & [66] & Fusion fragments measured using energy and time-of-flight techniques. ICF is not considered an issue in this system. \\ \hline \hline
\end{tabular}
\end{table}
Table 2: Summary of CF/ICF data (Complete Fusion/Incomplete Fusion). In the present study we are only interested in CF cross sections.
## Acknowledgments
This work has been partially supported by Fundacao de Amparo a Pesquisa do Estado de Sao Paulo, Brazil (FAPESP) - contracts no. 2019/07767-1, 2019/02759-0, and 2021/12254-3; Coordenacao de Aperfeicoamento de Pessoal de Nivel Superior, Brazil (CAPES) - Finance Code 88887.355019/2019 and 88887.620098/2021 and Conselho Nacional de Desenvolvimento Cientifico e Tecnologico, Brazil (CNPq) and by the project INCT-FNA Proc. No. 464898/2014-5. We would like to thank Prof. V. V. Parkar for providing us with experimental data for the \({}^{7}\)Li + \({}^{124}\)Sm system and Prof. Wayne Allan Seale for the text review.
Figure 4: Reduced cross sections \(\sigma^{\prime}_{red}\) as a function of the reduced energy \(x^{\prime}=(E-V_{b})/\hbar\omega\), using the Wong formula. The same symbols and color scheme as in Table 1 and Fig.2 are used. The black continuous line is the function \(\sigma^{\prime}_{red}=\ln(1+\exp(2\pi x^{\prime}))\). See text for more details.
Figure 3: Reduced direct cross sections (dimensionless) as a function of the dimensionless reduced energy \(x=E/V_{b}\). The same symbols and color scheme as in Table 1 and Fig.2 are used. The black continuous line is the function \(\sigma_{red}=1-1/x\). In this plot \(\hat{a}^{n-halo}_{n}=1.32(06)\) fm; \(\hat{a}^{WB}_{n}=0.66^{+0.16}_{-0.12}\) fm; \(\hat{a}^{SB}_{n}=0.68^{+0.16}_{-0.04}\) fm; \(\hat{a}^{p-halo}_{n}=0.55\) fm. As in Fig.2, the vertical scale spans four orders of magnitude. The universal curve for the reduced direct cross sections is meaningless for \(x<1\).
Figure 2: Direct cross section data (mb) as a function of \(x=E/V_{b}\). Different types of projectiles are shown by different colors, exotic \(n\)-rich (red), weakly bound (WB) (blue), strongly bound (SB) (green) and exotic \(p\)-rich (black). The symbols are the same as in Table 1. The yellow squares represent the \({}^{11}\)Li+\({}^{208}\)Pb system - Ref.[36] - see text for details. The dotted magenta line marks \(x=1.8\).
2301.05661 | Choice-free Dualities for Lattice Expansions: Application to Logics with a Negation Operator | Constructive dualities have been recently proposed for some lattice based algebras and a related project has been outlined by Holliday and Bezhanishvili, aiming at obtaining "choice-free spatial dualities for other classes of algebras [$\ldots$], giving rise to choice-free completeness proofs for non-classical logics''. We present in this article a way to complete the Holliday-Bezhanishvili project (uniformly, for any normal lattice expansion) by recasting recent relational representation and duality results in a choice-free manner. These results have some affinity with the Moshier and Jipsen duality for bounded lattices with quasi-operators, except for aiming at representing operators by relations, extending the J\'{o}nsson-Tarski approach for BAOs, and Dunn's follow up approach for distributive gaggles, to contexts where distribution may not be assumed. To illustrate, we apply the framework to lattices (and their logics) with some form or other of a (quasi)complementation operator, obtaining canonical extensions in relational frames and choice-free dualities for lattices with a minimal, or a Galois quasi-complement, or involutive lattices, including De Morgan algebras, as well as Ortholattices and Boolean algebras, as special cases.
| Chrysafis Hartonas | 2023-01-13T16:59:47Z | http://arxiv.org/abs/2301.05661v2 | # Choice-free Dualities for Lattice Expansions: Application to Logics with a Negation Operator
###### Abstract
Constructive dualities have been recently proposed for some lattice based algebras and a related project has been outlined by Holliday and Bezhanishvili, aiming at obtaining "choice-free spatial dualities for other classes of algebras [...], giving rise to choice-free completeness proofs for non-classical logics".
We present in this article a way to complete the Holliday-Bezhanishvili project (uniformly, for any normal lattice expansion) by recasting recent relational representation and duality results in a choice-free manner. These results have some affinity with the Moshier and Jipsen duality for bounded lattices with quasi-operators, except for aiming at representing operators by relations, extending the Jonsson-Tarski approach for BAOs, and Dunn's follow up approach for distributive gaggles, to contexts where distribution may not be assumed. To illustrate, we apply the framework to lattices (and their logics) with some form or other of a (quasi)complementation operator, obtaining canonical extensions in relational frames and choice-free dualities for lattices with a minimal, or a Galois quasi-complement, or involutive lattices, including De Morgan algebras, as well as Ortholattices and Boolean algebras, as special cases.
## 1 Introduction
Choice-free dualities have been lately proposed for Boolean algebras, by Holliday and Bezhanishvili [2], for Ortholattices, by MacDonald and Yamamoto [22], for modal lattices by Bezhanishvili, Dmitrieva, de Groot and Moraschini [1] and for De Vries algebras by Massas [21]. These are part of a project, outlined by Holliday and Bezhanishvili and aiming at obtaining "choice-free spatial dualities for other classes of algebras [...], giving rise to choice-free completeness proofs for non-classical logics" [2, page 45]. The project has its origins in Holliday's 'possibility frames' for modal logic [17], as noted in [2].
A choice-free representation and duality for bounded lattices with quasioperators, by Moshier and Jipsen [23, 24], had already appeared in print, influencing at least Dmitrieva's research, with Bezhanishvili, de Groot and Morachini, on modal lattices. The Moshier-Jipsen duality is related to results by this author [10], and with Dunn [14], some detail on these relations is presented in [13, Remark 4.2, Remark 4.8] and we revisit the issue in Proposition 4.7 in this article. The duality of [23, 24] is not primarily intended to provide logic related, relational semantics applications. This becomes clear by their choice to represent lattice quasi-operators as strongly continuous and meet preserving point operators on the dual topological spaces, whereas for semantic purposes one typically aims for first-order definable classes of relational frames.
In the recent few years, this author has pursued a project of extending Jonsson and Tarski's approach for Boolean algebras with operators (BAOs) [19, 20] and Dunn's follow up approach for distributive generalized Galois logics (gaggles) [4, 5, 6] to the case of general lattices with quasi-operators (normal lattice operators, in our preferred terminology), building on older work by the author (with Dunn) in [14], while working within the framework of canonical extensions [8] of lattice expansions. This project developed in parallel with Gehrke's (with co-workers) generalized Kripke frames approach (RS frames) [7] and the relations between the two approaches have been detailed in [12]. We note that, as far as the objectives of the current article are concerned, Gehrke's approach of RS-frames in [7] builds on Hartung's lattice representation [16], which inherits from Urquhart's lattice representation [25] an essential use of the axiom of choice. Choice was also assumed in this author's [13, 14] (Alexander's subbasis lemma, whose proof uses Zorn's lemma, was used to prove compactness of the space), but we show in this article that the use of choice is inessential and we can easily recast the duality in a choice-free manner, switching to a spectral topology.
In Section 2 we present the algebras of interest, bounded lattices with a quasi-complementation operator. We restrict attention to some distinguished cases, allowing for both a distributive (notably De Morgan and Boolean algebras) and a non-distributive lattice base (such as involutive, or orthocomplement lattices).
Section 3 starts with a review subsection (Section 3.1) for sorted frames with relations and generated operators, drawing on [13], and concludes with Section 3.2 where frames for quasi-complemented lattices are discussed. A first-order axiomatic specification of the classes of frames with respect to which the logics of the algebraic structures of section 2 can be shown to be sound is provided in Table 1.
Section 4 presents choice-free representations of semilattices (Section 4.1) and bounded lattices (Section 4.2) and concludes with Section 4.3 detailing the representation of arbitrary normal lattice operators, drawing on [13].
In Section 5 we apply the representation framework of section 4.3 to the particular case of quasi-complementation operators on bounded lattices. The results of this section establish that the varieties of quasi-complemented lattices we consider are closed under canonical extensions. Thereby, completeness theorems via a canonical model construction can be proven for the logics of the
algebraic structures considered.
Spectral duality theorems are proven in Section 6. The main result in this section is Theorem 3.1, where we detail the duality between the categories \(\mathbf{M}\) of bounded lattices with a minimal quasi-complementation operator and the category \(\mathbf{SRF}^{*}_{\nu\mathrm{M}}\) of sorted residuated frames whose first-order axiomatization is given in Table 2. The remaining dualities are then easily obtained. In particular, Proposition 3.7, relying on Theorem 3.5, provides a first-order frame condition for the lattice of Galois stable sets to be completely distributive, which is then used for the cases of representation and duality for De Morgan algebras and Boolean algebras.
Some concluding remarks are made in Section 7.
## 2 Quasi-complemented Lattices
Let \(\{1,\partial\}\) be a \(2\)-element set, \(\mathbf{L}^{1}=\mathbf{L}\) and \(\mathbf{L}^{\partial}=\mathbf{L}^{\mathrm{op}}\) (the opposite lattice). Extending established terminology [19], a function \(f:\mathbf{L}_{1}\times\cdots\times\mathbf{L}_{n}\longrightarrow\mathbf{L}_{n+1}\) will be called _additive_ and _normal_, or a _normal operator_, if it distributes over finite joins of the lattice \(\mathbf{L}_{i}\), for each \(i=1,\ldots n\), delivering a join in \(\mathbf{L}_{n+1}\).
**Definition 2.1**.: An \(n\)-ary operation \(f\) on a bounded lattice \(\mathcal{L}\) is _a normal lattice operator of distribution type \(\delta(f)=(i_{1},\ldots,i_{n};i_{n+1})\in\{1,\partial\}^{n+1}\)_ if it is a normal additive function \(f:\mathcal{L}^{i_{1}}\times\cdots\times\mathcal{L}^{i_{n}}\longrightarrow \mathcal{L}^{i_{n+1}}\) (distributing over finite joins in each argument place), where each \(i_{j}\), for \(j=1,\ldots,n+1\), is in the set \(\{1,\partial\}\), hence \(\mathcal{L}^{i_{j}}\) is either \(\mathcal{L}\), or \(\mathcal{L}^{\partial}\).
If \(\tau\) is a tuple (sequence) of distribution types, a _normal lattice expansion of (similarity) type \(\tau\)_ is a lattice with a normal lattice operator of distribution type \(\delta\) for each \(\delta\) in \(\tau\).
The _category_\(\mathbf{NLE}_{\tau}\), for a fixed similarity type \(\tau\), has normal lattice expansions of type \(\tau\) as objects. Its morphisms are the usual algebraic homomorphisms.
In this article we focus on the class of lattices \(\mathbf{L}=(L,\leq,\wedge,\lor,0,1,\nu)\) with a quasi-complementation operator \(\nu\), of increasing axiomatization strength, including at least the following:
\begin{tabular}{l l} (antitonicity) & \(a\leq b\longrightarrow\nu b\leq\nu a\) \\ (normality) & \(\nu 0=1\) \\ (\(\lor\wedge\)) & \(\nu(a\lor b)\leq\nu a\wedge\nu b\). \\ \end{tabular}
Given antitonicity, the operation \(\nu\) satisfies the identity \(\nu(a\lor b)=\nu a\wedge\nu b\), hence it is a normal lattice operator of distribution type \(\delta(\nu)=(1;\partial)\). We list some basic facts in Lemma 2.2.
**Lemma 2.2**.: Let \(\mathbf{L}=(L,\leq,\wedge,\lor,0,1,\nu)\) be a bounded lattice with an antitone operation \(\nu\).
1. \(\nu\) forms a Galois connection on \(\mathbf{L}\) (\(a\leq\nu b\) iff \(b\leq\nu a\)) iff it satisfies the inequation \(a\leq\nu\nu a\)
2. if \(a\leq\nu\nu a\) holds in the lattice, then the normality axiom \(\nu 0=1\) and the identity \(\nu(a\lor b)=\nu a\land\nu b\) are derivable
3. if \(\nu\nu a\leq a\) holds in the lattice, then the identity \(\nu(a\wedge b)=\nu a\lor\nu b\) is derivable
4. if either of the De Morgan identities \(\nu(a\lor b)=\nu a\land\nu b\), or \(\nu(a\wedge b)=\nu a\lor\nu b\), holds in the lattice, then antitonicity of \(\nu\) is a derivable property
5. if \(\nu\) is an involution \((a=\nu\nu a)\) and, in addition, the lattice is distributive, then it is a _De Morgan algebra_
6. if \(\nu\) is an involution and, in addition, it satisfies the intuitionistic explosion principle (ex falso quidlibet) \(a\land\nu a\leq 0\), then the lattice is an _Ortholattice_ (orthocomplemented lattice)
7. if \(\nu\) is an involution satisfying the antilogism rule \((a\wedge b\leq c\longrightarrow a\land\nu c\leq\nu b)\), then the lattice is a _Boolean algebra_
Proof.: Each of the claims (1) to (6) has a straightforward proof, left to the interested reader. For (7), the hypothesis implies that \(a\wedge b\leq c\) iff \(a\land\nu c\leq\nu b\). This means that \(\land\) is self-conjugate with respect to the involution \(\nu\). To see that this implies distributivity, define \(a\to c=\nu(a\land\nu c)\) and observe that the conjugacy condition is equivalent to residuation of \(\land\) and \(\rightarrow\), i.e. \(a\wedge b\leq c\) iff \(a\land\nu c\leq\nu b\) iff \(b\leq a\to c\). Distribution then follows from residuation. In addition, by part (2), \(\nu 0=1\) holds and then also \(\nu 1=\nu\nu 0=0\). Hence the intuitionistic principle \(a\land\nu a\leq 0=\nu 1\) follows, since we can infer it from \(a\wedge 1\leq a\) using antilogism. By the hypothesis that \(\nu\) is an involution, the explosion principle \(a\land\nu a\leq 0\) is equivalent to excluded middle \(a\lor\nu a=1\). Hence the lattice is a Boolean algebra.
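For part (7), the passage from residuation to distributivity can be spelled out in one line of standard lattice reasoning, included here only for the reader's convenience: since \(a\wedge b\leq(a\wedge b)\vee(a\wedge c)\) and \(a\wedge c\leq(a\wedge b)\vee(a\wedge c)\), residuation gives
\[b\leq a\to\big((a\wedge b)\vee(a\wedge c)\big)\quad\text{and}\quad c\leq a\to\big((a\wedge b)\vee(a\wedge c)\big),\]
hence \(b\vee c\leq a\to\big((a\wedge b)\vee(a\wedge c)\big)\), and one more application of residuation yields \(a\wedge(b\vee c)\leq(a\wedge b)\vee(a\wedge c)\); the converse inequality holds in any lattice.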
Figure 1 summarizes the above results, where \(\mathbb{DMA},\mathbb{O},\mathbb{INV}\) designate the equational classes (varieties) of De Morgan algebras, Ortholattices and lattices with an involution, respectively, \(\mathbb{BA}\) designates the variety of Boolean algebras, the remaining two labels \(\mathbb{M},\mathbb{G}\) designate the varieties of lattices with a minimal, or a Galois connected quasi-complementation operator, respectively, and the arrow label (dist) indicates addition of the distribution law \(a\land(b\lor c)\leq(a\wedge b)\lor(a\wedge c)\).
## 3 Sorted residuated frames (SRFs)
### Frames, Relations and (Sorted) Image Operators
We review in this section definitions, notational conventions and results from [12, 13], to the extent needed for our current purposes.
Regard \(\{1,\partial\}\) as a set of sorts and let \(Z=(Z_{1},Z_{\partial})\) be a sorted set. Sorted residuated frames \(\mathfrak{F}=(Z_{1},\perp,Z_{\partial})\) are triples consisting of nonempty sets \(Z_{1}=X,Z_{\partial}=Y\) and a binary relation \(\perp\subseteq X\times Y\).
The relation \(\bot\) will be referred to as the _Galois relation_ of the frame. It generates a Galois connection \((\ )^{\bot}:\wp(X)\leftrightarrows\wp(Y)^{\partial}:{}^{\bot}(\ )\ (V\subseteq U^{\bot}\) iff \(U\subseteq{}^{\bot}V)\)
\[\begin{array}{rl}U^{\bot}&=\{y\in Y\mid\forall x\in U\ x\bot y\}\ =\{y\in Y\mid U\bot y\}\\ {}^{\bot}V&=\{x\in X\mid\forall y\in V\ x\bot y\}\ =\{x\in X\mid x\bot V\}. \end{array}\]
We will also have use for the complement \(I\) of the Galois relation \(\bot\) and we will designate frames using either the Galois relation \(\bot\), or its complement \(I\).
A subset \(A\subseteq X\) will be called _stable_ if \(A={}^{\bot}(A^{\bot})\). Similarly, a subset \(B\subseteq Y\) will be called _co-stable_ if \(B=({}^{\bot}B)^{\bot}\). Stable and co-stable sets will be referred to as _Galois sets_, disambiguating to _Galois stable_ or _Galois co-stable_ when needed and as appropriate. The following quasi-seriality condition will be assumed for sorted frames
\[\forall x\in X\,\exists y\in Y\ xIy\qquad\qquad\qquad\forall y\in Y\,\exists x \in X\ xIy \tag{1}\]
Note that assuming (1), the empty set is (co)stable and we have \(\emptyset^{\bot}=Y\), \({}^{\bot}Y=\emptyset\) and similarly \({}^{\bot}\emptyset=X\) and \(X^{\bot}=\emptyset\).
By \(\mathcal{G}(X),\mathcal{G}(Y)\) we designate the complete lattices of stable and co-stable sets, respectively. Note that the Galois connection restricts to a dual isomorphism \((\ )^{\bot}:\mathcal{G}(X)\backsimeq\mathcal{G}(Y)^{\partial}:{}^{\bot}(\ )\).
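As a concrete (and entirely elementary) illustration of these polar maps, not part of the formal development, the following Python sketch computes \(U^{\bot}\), \({}^{\bot}V\) and the stable closure \(U\mapsto{}^{\bot}(U^{\bot})\) for a toy polarity on three and two points.

```python
# Toy polarity (X, PERP, Y); PERP lists the related pairs, its complement I
# is quasi-serial in the sense of condition (1).
X = {"x1", "x2", "x3"}
Y = {"y1", "y2"}
PERP = {("x1", "y1"), ("x2", "y1"), ("x3", "y2")}

def rpolar(U):
    """U^perp, for U a subset of X."""
    return {y for y in Y if all((x, y) in PERP for x in U)}

def lpolar(V):
    """perp V, for V a subset of Y."""
    return {x for x in X if all((x, y) in PERP for y in V)}

def closure(U):
    """Galois-stable closure of U in G(X)."""
    return lpolar(rpolar(U))

U = {"x1"}
print(rpolar(U))                           # {'y1'}
print(closure(U))                          # {'x1', 'x2'}: {x1} is not stable
print(closure(closure(U)) == closure(U))   # closure is idempotent: True
```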
Preorder relations are induced on each of the sorts, by setting for \(x,z\in X\), \(x\preceq z\) iff \(\{x\}^{\bot}\subseteq\{z\}^{\bot}\) and, similarly, for \(y,v\in Y\), \(y\preceq v\) iff \({}^{\bot}\{y\}\subseteq{}^{\bot}\{v\}\). A (sorted) frame is called _separated_ if the preorders \(\preceq\) (on \(X\) and on \(Y\)) are in fact partial orders \(\leq\).
Figure 1: (Quasi)Complemented Lattices
Our notational conventions are these of [13, Remark 3.2]. We repeat them below, for the reader's convenience.
**Remark 3.1** (**Notational conventions)**.: For a sorted relation \(R\subseteq\prod_{j=1}^{j=n+1}Z_{i_{j}}\), where \(i_{j}\in\{1,\partial\}\) for each \(j\) (and thus \(Z_{i_{j}}=X\) if \(i_{j}=1\) and \(Z_{i_{j}}=Y\) when \(i_{j}=\partial\)), we make the convention to regard it as a relation \(R\subseteq Z_{i_{n+1}}\times\prod_{j=1}^{j=n}Z_{i_{j}}\), we agree to write its sort type as \(\sigma(R)=(i_{n+1};i_{1}\cdots i_{n})\) and for a tuple of points of suitable sort we write \(uRu_{1}\cdots u_{n}\) for \((u,u_{1},\ldots,u_{n})\in R\).
We use \(\Gamma\) to designate upper closure \(\Gamma U=\{z\in X\mid\exists x\in U\ x\leq z\}\), for \(U\subseteq X\), and similarly for \(U\subseteq Y\). The set \(U\) is _increasing_ (an upset) iff \(U=\Gamma U\). For a singleton set \(\{x\}\subseteq X\) we write \(\Gamma x\), rather than \(\Gamma(\{x\})\) and similarly for \(\{y\}\subseteq Y\).
We typically use the standard Formal Concept Analysis priming notation for each of the two Galois maps \({}^{\perp}(\ ),(\ )^{\perp}\). This allows for stating and proving results for each of \(\mathcal{G}(X),\mathcal{G}(Y)\) without either repeating definitions and proofs, or making constant appeals to duality. Thus for a Galois set \(G\), \(G^{\prime}=G^{\perp}\), if \(G\in\mathcal{G}(X)\) (\(G\) is a Galois stable set), and otherwise \(G^{\prime}={}^{\perp}\!G\), if \(G\in\mathcal{G}(Y)\) (\(G\) is a Galois co-stable set).
For an element \(u\) in either \(X\) or \(Y\) and a subset \(W\), respectively of \(Y\) or \(X\), we write \(u|W\), under a well-sorting assumption, to stand for either \(u\perp W\) (which stands for \(u\perp w\), for all \(w\in W\)), or \(W\perp u\) (which stands for \(w\perp u\), for all \(w\in W\)), where well-sorting means that either \(u\in X,W\subseteq Y\), or \(W\subseteq X\) and \(u\in Y\), respectively. Similarly for the notation \(u|v\), where \(u,v\) are elements of different sort.
We designate \(n\)-tuples (of sets, or elements) using a vectorial notation, setting \((G_{1},\ldots,G_{n})=\tilde{G}\in\prod_{j=1}^{j=n}\mathcal{G}(Z_{i_{j}})\), \(\tilde{U}\in\prod_{j=1}^{j=n}\wp(Z_{i_{j}})\), \(\tilde{u}\in\prod_{j=1}^{j=n}Z_{i_{j}}\) (where \(i_{j}\in\{1,\partial\}\)). Most of the time we are interested in some particular argument place \(1\leq k\leq n\) and we write \(\tilde{G}[F]_{k}\) for the tuple \(\tilde{G}\) where \(G_{k}=F\) (or \(G_{k}\) is replaced by \(F\)). Similarly \(\tilde{u}[x]_{k}\) is \((u_{1},\ldots,u_{k-1},x,u_{k+1},\ldots,u_{n})\).
For brevity, we write \(\tilde{u}\preceq\tilde{v}\) for the pointwise ordering statements \(u_{1}\preceq v_{1},\ldots,u_{n}\preceq v_{n}\). We also let \(\tilde{u}\in\tilde{W}\) stand for the conjunction of component-wise membership \(u_{j}\in W_{j}\), for all \(j=1,\ldots,n\).
To simplify notation, we write \(\Gamma\tilde{u}\) for the \(n\)-tuple \((\Gamma u_{1},\ldots,\Gamma u_{n})\). For a unary map \(f\) and a tuple \(\tilde{u}\) we write \(f[\tilde{u}]\) for the tuple \((f(u_{1}),\ldots,f(u_{n}))\). Note that the same notation is used for the image \(f[S]=\{f(x)\mid x\in S\}\) of a set under a function \(f\), but context will make it clear what the intended meaning is. The convention can be nested, so that if \(S\) is a set (or sequence) of tuples \(\tilde{u}_{i}\), then \(f[S]\) is the set (or sequence) consisting of the elements \(f[\tilde{u}_{i}]\).
To refer to sections of relations (the sets obtained by leaving one argument place unfilled) we make use of the notation \(\tilde{u}[\_]_{k}\) which stands for the \((n-1)\)-tuple \((u_{1},\ldots,u_{k-1},[\_],u_{k+1},\ldots,u_{n})\) and similarly for tuples of sets, extending the membership convention for tuples to cases such as \(\tilde{u}[\_]_{k}\in\tilde{F}[\_]_{k}\) and similarly for ordering relations \(\tilde{u}[\_]_{k}\preceq\tilde{v}[\_]_{k}\). We also quantify over tuples (with, or without a hole in them), instead of resorting to an iterated quantification over the elements of the tuple, as for example in \(\exists\tilde{u}[\_]_{k}\in\tilde{F}[\_]_{k}\exists v,w\in G\ w\tilde{u}[v]_{k}\).
We extend the vectorial notation to distribution types, summarily writing \(\delta=(\tilde{i_{j}};i_{n+1})\) for \((i_{1},\ldots,i_{n};i_{n+1})\). Then, for example, \(\tilde{i_{j}}[\partial]_{k}\) is the tuple with
\(i_{k}=\partial\). Furthermore, we let \(\overline{i_{j}}=\partial\), if \(i_{j}=1\) and \(\overline{i_{j}}=1\), when \(i_{j}=\partial\).
**Lemma 3.2** ([13, Lemma 3.3]).: Let \(\mathfrak{F}=(X,\perp,Y)\) be a polarity and \(u\) a point in \(Z=X\cup Y\).
1. \(\underline{\pm}\) is increasing in each argument place (and thereby its complement \(I\) is decreasing in each argument place).
2. \((\Gamma u)^{\prime}=\{u\}^{\prime}\) and \(\Gamma u=\{u\}^{\prime\prime}\) is a Galois set.
3. Galois sets are increasing, i.e. \(u\in G\) implies \(\Gamma u\subseteq G\).
4. For a Galois set \(G\), \(G=\bigcup_{u\in G}\Gamma u\).
5. For a Galois set \(G\), \(G=\bigvee_{u\in G}\Gamma u=\bigcap_{v\mid G}\{v\}^{\prime}\).
6. For a Galois set \(G\) and any set \(W\), \(W^{\prime\prime}\subseteq G\) iff \(W\subseteq G\). \(\Box\)
It is typical in the context of canonical extensions of lattices to refer to principal upper sets \(\Gamma x\in\mathcal{G}(X)(x\in X=\operatorname{Filt}(\mathbf{L}))\), as _closed_, or _filter_ elements of \(\mathcal{G}(X)\) and to sets \({}^{\perp}\{y\}\in\mathcal{G}(X)(y\in Y=\operatorname{Idl}(\mathbf{L}))\) as _open_, or _ideal_ elements of \(\mathcal{G}(X)\), and similarly for sets \(\Gamma y,\{x\}^{\perp}\) with \(x\in X,y\in Y\). This creates an unfortunate clash of terminology and we shall have to rely on context to disambiguate. Furthermore, a closed element \(\Gamma x\) is said to be _clopen_ if \(\Gamma x={}^{\perp}\{y\}\) for some \(y\in Y\), which is unique when the frame is separated.
By Lemma 3.2, the closed elements of \(\mathcal{G}(X)\) join-generate \(\mathcal{G}(X)\), while the open elements meet-generate \(\mathcal{G}(X)\) (similarly for \(\mathcal{G}(Y)\)).
**Definition 3.3** (Galois dual relation).: For a relation \(R\), of sort type \(\sigma\), its _Galois dual relation_\(R^{\prime}\) is the relation defined by \(uR^{\prime}\tilde{v}\) iff \(\forall w\;(wR\tilde{v}\longrightarrow w|u)\). In other words, \(R^{\prime}\tilde{v}=(R\tilde{v})^{\prime}\).
**Definition 3.4** (Sections of relations).: For an \((n+1)\)-ary relation \(R^{\sigma}\) (of sort \(\sigma\)) and an \(n\)-tuple \(\tilde{u}\), \(R^{\sigma}\tilde{u}=\{w\mid wR^{\sigma}\tilde{u}\}\) is the _section_ of \(R^{\sigma}\) determined by \(\tilde{u}\). To designate a section of the relation at the \(k\)-th argument place we let \(\tilde{u}[\_]_{k}\) be the tuple with a hole at the \(k\)-th argument place. Then \(wR^{\sigma}\tilde{u}[\_]_{k}=\{v\mid wR^{\sigma}\tilde{u}[v]_{k}\}\subseteq Z _{i_{k}}\) is the \(k\)-th section of \(R^{\sigma}\).
If \(R\) is a relation on a sorted residuated frame \(\mathfrak{F}=(X,I,Y)\), of some sort type \(\sigma=\sigma(R)=(i_{n+1};i_{1}\cdots i_{n})\), then, as in the unsorted case, \(R\) generates a (sorted) _image operator_ \(\alpha_{R}\) of sort \(\sigma(\alpha_{R})=(i_{1},\ldots,i_{n};i_{n+1})\), defined by (2), the obvious generalization of the Jonsson-Tarski image operators [19],
\[\alpha_{R}(\tilde{W})=\;\{w\in Z_{i_{n+1}}\mid\exists\tilde{w}\;(wR\tilde{w} \wedge\bigwedge_{j=1}^{j=n}(w_{j}\in W_{j}))\}\qquad=\bigcup_{\tilde{w}\in W} R\tilde{w}, \tag{2}\]
where for each \(j\), \(W_{j}\subseteq Z_{i_{j}}\) (and recall that \(Z_{i_{j}}=X\) when \(i_{j}=1\) and \(Z_{i_{j}}=Y\), if \(i_{j}=\partial\)). Let \(\overline{\alpha}_{R}\) be the closure of the restriction of \(\alpha_{R}\) to Galois sets \(\tilde{F}\),
\[\overline{\alpha}_{R}(\tilde{F})=(\alpha_{R}(\tilde{F}))^{\prime\prime}=\left( \bigcup_{j=1,\ldots,n}^{w_{j}\in F_{j}}R\tilde{w}\right)^{\prime\prime}= \bigvee_{\tilde{w}\in\tilde{F}}(R\tilde{w})^{\prime\prime}, \tag{3}\]
where \(F_{j}\in\mathcal{G}(Z_{i_{j}})\), for each \(j\in\{1,\ldots,n\}\).
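To make the definitions (2) and (3) tangible, the sketch below computes \(\alpha_{R}\) and \(\overline{\alpha}_{R}\) for a toy binary relation of sort \((1;1)\) on the same kind of small polarity as before; this is only an illustration and does not rely on the frame axioms discussed later.

```python
# Toy frame and a binary relation R of sort (1;1): w R u with w, u in X.
X = {"x1", "x2", "x3"}
Y = {"y1", "y2"}
PERP = {("x1", "y1"), ("x2", "y1"), ("x3", "y2")}
R = {("x1", "x3"), ("x2", "x3")}

def rpolar(U): return {y for y in Y if all((x, y) in PERP for x in U)}
def lpolar(V): return {x for x in X if all((x, y) in PERP for y in V)}
def closure(U): return lpolar(rpolar(U))      # Galois closure in G(X)

def alpha(U):
    """Eq. (2): union of the sections R u over u in U."""
    return {w for (w, u) in R if u in U}

def alpha_bar(A):
    """Eq. (3): Galois closure of the restriction of alpha to Galois sets."""
    return closure(alpha(A))

A = closure({"x3"})          # a Galois-stable set ({x3} happens to be stable here)
print(alpha(A))              # {'x1', 'x2'}
print(alpha_bar(A))          # {'x1', 'x2'} (already stable in this example)
```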
**Theorem 3.5** ([13, Theorem 3.12]).: Let \(\mathfrak{F}=(X,\underline{\perp},Y,R)\) be a frame with an \((n+1)\)-ary sorted relation, of some sort \(\sigma(R)=(i_{n+1};i_{j})\) and assume that for any \(w\in Z_{\overline{i_{n+1}}}\) and any \((n-1)\)-tuple \(\vec{p}[\_]_{k}\) with \(p_{j}\in Z_{i_{j}}\), for each \(j\in\{1,\ldots,n\}\setminus\{k\}\), the sections \(wR^{\prime}\vec{p}[\_]_{k}\) of the Galois dual relation \(R^{\prime}\) of \(R\) are Galois sets. Then \(\overline{\alpha}_{R}\) distributes at the \(k\)-th argument place over arbitrary joins in \(\mathcal{G}(Z_{i_{k}})\).
The Galois set operator \(\overline{\alpha}_{R}\) is sorted. Single-sorted operators \(\overline{\alpha}_{R}^{1}\) on \(\mathcal{G}(X)\) and \(\overline{\alpha}_{R}^{\partial}\) on \(\mathcal{G}(Y)\) are obtained by composition with the Galois connection, which is a duality of \(\mathcal{G}(X)\) and \(\mathcal{G}(Y)\).
**Definition 3.6** (Full complex algebra).: Let \(\mathfrak{F}=(X,\underline{\perp},Y,R)\) be a polarity with a relation \(R\) of some sort \(\sigma(R)=(i_{n+1};i_{1}\cdots i_{n})\). The _full complex algebra of \(\mathfrak{F}\)_ is the structure \(\mathfrak{F}^{+}=(\mathcal{G}(X),\overline{\alpha}_{R}^{1})\) and its _dual full complex algebra_ is the structure \(\mathfrak{F}^{\partial}=(\mathcal{G}(Y),\overline{\alpha}_{R}^{\partial})\). Subalgebras of full complex algebras will be referred to as complex algebras of a frame.
**Proposition 3.7**.: Let \(\mathfrak{F}=(X,\underline{\perp},Y)\) be a sorted frame (a polarity) and \(\mathcal{G}(X)\) the complete lattice of stable sets. Let \(R\) be the ternary upper bound relation on \(X\) defined by \(xRuz\) iff both \(u\preceq x\) and \(z\preceq x\). If all sections of the Galois dual relation \(R^{\prime}\) of \(R\) are Galois sets, then \(\mathcal{G}(X)\) is completely distributive.
Proof.: Let \(\alpha_{R}\) be the image operator generated by \(R\), \(\alpha_{R}(U,W)=\bigcup_{u\in U}^{w\in W}Ruw\). Notice that, for stable sets \(A,C\) (more generally, for increasing sets), \(\alpha_{R}(A,C)=A\cap C\). Hence \(\overline{\alpha}_{R}(A,C)=\alpha_{R}(A,C)=A\cap C\), since Galois sets are closed under intersection. Given the section stability hypothesis for the Galois dual relation \(R^{\prime}\) of \(R\), Theorem 3.5 applies, from which distribution of \(\overline{\alpha}_{R}\) (i.e. of intersection) over arbitrary joins of stable sets is concluded.
The image operator \(\alpha_{R}:\prod_{j=1}^{j=n}\wp(Z_{i_{j}})\longrightarrow\wp(Z_{i_{n+1}})\) is a sorted normal and completely additive function in each argument place, therefore it is residuated, i.e. for each \(k\) there is a set-operator \(\beta_{R}^{k}\) satisfying the condition
\[\alpha_{R}(\tilde{W}[V]_{k})\subseteq U\ \ \text{iff}\ \ V\subseteq\beta_{R}^{k} (\tilde{W}[U]_{k}). \tag{4}\]
Hence \(\beta_{R}^{k}(\tilde{W}[U]_{k})\) is the largest set \(V\) s.t. \(\alpha_{R}(\tilde{W}[V]_{k})\subseteq U\) and it is thereby definable by
\[\beta_{R}^{k}(\tilde{W}[U]_{k})=\bigcup\{V\mid\alpha_{R}(\tilde{W}[V]_{k}) \subseteq U\}. \tag{5}\]
We let \(\beta_{R/}^{k}\) be the restriction of \(\beta_{R}^{k}\) of equation (5) to Galois sets, according to its sort type, explicitly defined by (6)
\[\beta_{R/}^{k}(\tilde{E}[G]_{k})=\bigcup\{F\in\mathcal{G}(Z_{i_{k}})\mid \alpha_{R}(\tilde{E}[F]_{k})\subseteq G\}. \tag{6}\]
**Theorem 3.8** ([13, Theorem 3.14, Lemma 3.15]).: If \(\overline{\alpha}_{R}\) is residuated in the \(k\)-th argument place, then \(\beta_{R/}^{k}\) is its residual and \(\beta_{R/}^{k}(\tilde{E}[G]_{k})\) is a Galois set, i.e. the union in equation (6) is actually a join in \(\mathcal{G}(Z_{i_{k}})\). Furthermore, equations (7) and (8)
\[\beta_{R/}^{k}(\tilde{E}[G]_{k})=\bigcup\{\Gamma u\in\mathcal{G}(Z_{i_{k}}) \mid\alpha_{R}(\tilde{E}[\Gamma u]_{k})\subseteq G\}, \tag{7}\]
\[\beta_{R/}^{k}(\tilde{E}[G]_{k})=\{u\in Z_{i_{k}}\mid\alpha_{R}(\tilde{E}[ \Gamma u]_{k})\subseteq G\} \tag{8}\]
give an equivalent definition of \(\beta_{R/}^{k}\). \(\Box\)
Thus, under the assumption of section stability for the Galois dual relation \(R^{\prime}\), the operation of closure of restriction to Galois sets preserves residuation and we obtain \(\overline{\alpha}_{R}\dashv\beta_{R/}^{k}\).
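A direct finite check of Eq. (8) and of the resulting adjunction, again on a toy frame and purely for illustration, can be carried out as follows.

```python
from itertools import chain, combinations

X = {"x1", "x2", "x3"}
Y = {"y1", "y2"}
PERP = {("x1", "y1"), ("x2", "y1"), ("x3", "y2")}
R = {("x1", "x3"), ("x2", "x3")}                 # toy relation of sort (1;1)

def rpolar(U): return {y for y in Y if all((x, y) in PERP for x in U)}
def lpolar(V): return {x for x in X if all((x, y) in PERP for y in V)}
def closure(U): return lpolar(rpolar(U))
def gamma(u):  return {z for z in X if rpolar({u}) <= rpolar({z})}   # principal upset
def alpha(U):  return {w for (w, u) in R if u in U}
def alpha_bar(A): return closure(alpha(A))
def beta(G):   return {u for u in X if alpha(gamma(u)) <= G}         # Eq. (8), unary case

subsets = (set(s) for s in chain.from_iterable(combinations(sorted(X), k) for k in range(4)))
stable = [s for s in subsets if closure(s) == s]
ok = all((alpha_bar(A) <= G) == (A <= beta(G)) for A in stable for G in stable)
print("alpha_bar(A) <= G  iff  A <= beta(G) on all Galois-stable sets:", ok)
```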
Frame morphisms are the weak bounded morphisms whose definition in [13, Definition 3.20] is repeated below, where we let \(I\) be the complement of the Galois relation \(\perp\) of a frame.
**Definition 3.9**.: If \(\pi=(p,q):(X_{2},I_{2},Y_{2})\longrightarrow(X_{1},I_{1},Y_{1})\) is a pair of maps \(p:X_{2}\longrightarrow X_{1}\), \(q:Y_{2}\longrightarrow Y_{1}\), then \(\pi\) will be called a _(sorted) weak bounded morphism_ iff
1. \(\forall x^{\prime}\in X_{2}\forall y^{\prime}\in Y_{2}\;(x^{\prime}I_{2}y^{ \prime}\longrightarrow\pi(x^{\prime})I_{1}\pi(y^{\prime}))\)
2. \(\forall x\in X_{1}\forall y^{\prime}\in Y_{2}(xI_{1}\pi(y^{\prime}) \longrightarrow\exists x^{\prime}\in X_{2}(x\leq\pi(x^{\prime})\wedge x^{ \prime}I_{2}y^{\prime}))\)
3. \(\forall x^{\prime}\in X_{2}\forall y\in Y_{1}(\pi(x^{\prime})I_{1}y \longrightarrow\exists y^{\prime}\in Y_{2}(y\leq\pi(y^{\prime})\wedge x^{ \prime}I_{2}y^{\prime}))\).
**Definition 3.10**.: If \((p,q):(X_{2},I_{2},Y_{2})\rightarrow(X_{1},I_{1},Y_{1})\), with \(p:X_{2}\to X_{1}\) and \(q:Y_{2}\to Y_{1}\), then we let \(\pi=(p,q)\) and we define \(\pi^{-1}\) by setting
\[\pi^{-1}(W)=\left\{\begin{array}{ll}p^{-1}(W)\in\wp(X_{2})&\mbox{ if }W\subseteq X_{1}\\ q^{-1}(W)\in\wp(Y_{2})&\mbox{ if }W\subseteq Y_{1}.\end{array}\right.\]
Similarly, we let
\[\pi(w)=\left\{\begin{array}{ll}p(w)\in X_{1}&\mbox{ if }w\in X_{2}\\ q(w)\in Y_{1}&\mbox{ if }w\in Y_{2}.\end{array}\right.\]
**Proposition 3.11** ([13, Corollary 3.21]).: The inverse \(\pi^{-1}=(p,q)^{-1}\) of a weak bounded morphism is a complete lattice homomorphism of the lattices of Galois stable sets of sorted residuated frames. \(\Box\)
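Definition 3.9 lends itself to a mechanical check. The sketch below, again a toy illustration rather than anything from the text, encodes the three conditions for a pair of maps between (here, identical) frames and verifies them for the identity morphism.

```python
X = {"x1", "x2"}
Y = {"y1", "y2"}
PERP = {("x1", "y1"), ("x2", "y2")}
I = {(x, y) for x in X for y in Y if (x, y) not in PERP}   # complement of the Galois relation

def leq_X(x, z):   # induced preorder on X: {x}^perp included in {z}^perp
    return all((z, y) in PERP for y in Y if (x, y) in PERP)
def leq_Y(y, v):   # induced preorder on Y
    return all((x, v) in PERP for x in X if (x, y) in PERP)

p = {x: x for x in X}      # identity on X
q = {y: y for y in Y}      # identity on Y

cond1 = all((p[x], q[y]) in I for (x, y) in I)
cond2 = all(any(leq_X(x, p[x2]) and (x2, y2) in I for x2 in X)
            for x in X for y2 in Y if (x, q[y2]) in I)
cond3 = all(any(leq_Y(y, q[y2]) and (x2, y2) in I for y2 in Y)
            for x2 in X for y in Y if (p[x2], y) in I)
print(cond1, cond2, cond3)     # True True True for the identity morphism
```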
For frames with relations, let \(\pi\) be a weak bounded morphism, \(\pi=(p,q):(X_{2},I_{2},Y_{2},(S_{\sigma})_{\sigma\in\tau})\longrightarrow(X_{ 1},I_{1},Y_{1},(R_{\sigma})_{\sigma\in\tau})\), and let \(R_{\sigma},S_{\sigma}\) be corresponding relations in the two frames, of the same sort type. For simplicity, we omit the subscript \(\sigma\) in the sequel. We recall the following from [13].
**Proposition 3.12** ([13, Proposition 3.24, Lemma 3.25]).: If for any \(\vec{u}\) it holds that \(\pi^{-1}\overline{\alpha}_{R}(\Gamma\vec{u})=\overline{\alpha}_{S}(\pi^{-1}[ \Gamma\vec{u}])\), then for any tuple \(\vec{F}\) of Galois sets of the required sort \(\pi^{-1}\overline{\alpha}_{R}(\vec{F})=\overline{\alpha}_{S}(\pi^{-1}[\vec{F}])\). Furthermore, equations
\[\pi^{-1}\alpha_{R}(\Gamma\vec{u}) = \alpha_{S}(\pi^{-1}[\Gamma\vec{u}]), \tag{9}\] \[\pi(v)R\vec{u} \mbox{ iff }\ \exists\vec{w}(\vec{u}\leq\pi[\vec{w}]\, \wedge\,vS\vec{w}). \tag{10}\]
provide equivalents to the above assumption that \(\pi^{-1}\overline{\alpha}_{R}(\Gamma\vec{u})\) be identical to \(\overline{\alpha}_{S}(\pi^{-1}[\Gamma\vec{u}])\). \(\Box\)
We list in Table 1, after [13], the minimal axiomatization we shall assume for a sorted residuated frame with relations \(\mathfrak{F}=(X,I,Y,(R_{\sigma})_{\sigma\in\tau})\). The axiomatization will be strengthened in the sequel imposing, among others, a spectral topology on each of \(X,Y\).
Note that axioms (F1) and (F2) imply that there is a (sorted) function \(\widehat{f}_{R}\) on the points of the frame such that \(\widehat{f}_{R}(\vec{u})=w\) iff \(R\vec{u}=\Gamma w\). The following immediate observation will be useful in the sequel.
**Lemma 3.13** ([13, Lemma 3.16]).: Let \(\mathfrak{F}\) be a frame of similarity type \(\tau\) and assume that axioms (F1)-(F3) in Table 1 hold. Then for a frame relation \(R\) of type \(\sigma\) in \(\tau\), \(\overline{\alpha}_{R}(\Gamma\vec{u})=R\vec{u}=\alpha_{R}(\Gamma\vec{u})= \Gamma(\widehat{f}_{R}(\vec{u}))\). \(\Box\)
### Frames for Quasi-Complemented Lattices
We now consider sorted frames \(\mathfrak{F}=(X,\pm,Y,S_{\nu})\) with \(\sigma(S_{\nu})=(\partial;1)\), i.e. \(S_{\nu}\subseteq Y\times X\), and we assume that axioms (F0)-(F4) of Table 1 hold.
**(F0)**: The complement \(I\) of the Galois relation \(\pm\) of the frame is quasi-serial, in the sense of condition (1)

Table 1: Axioms for Sorted Residuated Frames of similarity type \(\tau\)

Let \(S^{\prime}_{\nu}\) be the Galois dual relation of \(S_{\nu}\), defined by \(S^{\prime}_{\nu}z=(S_{\nu}z)^{\prime}\) for each \(z\in X\), let \(\eta_{S}:\wp(X)\longrightarrow\wp(Y)\) be the sorted image operator generated by \(S_{\nu}\), defined on \(U\subseteq X\) by
\[\eta_{S}(U)=\{y\in Y\mid\exists x\in X(yS_{\nu}x\text{ and }x\in U)\}=\bigcup_{x \in U}S_{\nu}x, \tag{11}\]
and \(\overline{\eta}_{S}:\mathcal{G}(X)\longrightarrow\mathcal{G}(Y)\) be the closure of its restriction to Galois sets. Furthermore, let \(\widetilde{\nu}:X\longrightarrow Y\) be the point operator of Lemma 3.13, so that for a closed element \(\Gamma x\in\mathcal{G}(X)\) we have \(\overline{\eta}_{S}(\Gamma x)=S_{\nu}x=\eta_{S}(\Gamma x)=\Gamma(\widetilde{ \nu}(x))\). By axiom (F2), \(S_{\nu}x\) is a Galois set. Hence for a stable set \(A\in\mathcal{G}(X)\),
\[\overline{\eta}_{S}(A)=\left(\bigcup_{x\in A}S_{\nu}x\right)^{\prime\prime}= \bigvee_{x\in A}S_{\nu}x=\bigvee_{x\in A}\Gamma(\widetilde{\nu}(x))=\bigvee_{ x\in A}\overline{\eta}_{S}(\Gamma x)\in\mathcal{G}(Y). \tag{12}\]
Let also \(\overline{\eta}_{\nu}(A)=(\overline{\eta}_{S}(A))^{\prime}=\bigcap_{z\in A}S _{\nu}^{\prime}z\), so that \(\overline{\eta}_{\nu}\) is a single-sorted operation (on \(\mathcal{G}(X)\)) derived from \(\overline{\eta}_{S}\) by composition with the Galois connection.
**Remark 3.14** (Switching Notation).: Hereafter, we simplify notation, switching to the more familiar \(\bot\) for the incompatibility relation \(S_{\nu}^{\prime}\) and letting \(\overline{\eta}_{\nu}(A)\) be designated by \(A^{*}\).
For subsequent use, we make a note of the fact that
\[x\in A^{*}\ \text{ iff }\ \forall z(z\in A\longrightarrow x\bot z)\ \text{ iff }\ x\bot A. \tag{13}\]
Since \(\overline{\eta}_{S}:\mathcal{G}(X)\longrightarrow\mathcal{G}(Y)\) distributes over arbitrary joins, by the axioms in Table 1 and Theorem 3.5, it is residuated with a map \(\overline{\zeta}_{S}:\mathcal{G}(Y)\longrightarrow\mathcal{G}(X)\), which maps meets of \(\mathcal{G}(Y)\) (hence joins of \(\mathcal{G}(X)\)) to meets of \(\mathcal{G}(X)\), defined on \(B\in\mathcal{G}(Y)\) (using also Theorem 3.8) by
\[\overline{\zeta}_{S}(B)=\bigvee\{A\in\mathcal{G}(X)\mid\overline{\eta}_{S}(A) \subseteq B\}=\bigcup\{A\in\mathcal{G}(X)\mid\overline{\eta}_{S}(A)\subseteq B\}.\]
By [13, Lemma 3.15], \(\overline{\zeta}_{S}(B)\) is equivalently defined by equation (14), specializing equation (8),
\[\overline{\zeta}_{S}(B)=\{x\in X\mid\overline{\eta}_{S}(\Gamma x)\subseteq B\}. \tag{14}\]
By duality of \(\mathcal{G}(X)\) and \(\mathcal{G}(Y)\), every \(B\in\mathcal{G}(Y)\) is \(B=C^{\prime}\) for some \(C\in\mathcal{G}(X)\). Hence we obtain that \(\overline{\eta}_{S}(A)\subseteq C^{\prime}\) iff \(A\subseteq\overline{\zeta}_{S}(C^{\prime})\). From this, setting \(\overline{\zeta}_{\nu}(C)=\overline{\zeta}_{S}(C^{\prime})\), we obtain the Galois connection condition \(C\subseteq\overline{\eta}_{\nu}(A)\) iff \(A\subseteq\overline{\zeta}_{\nu}(C)\). Recalling that we have switched notation to \(A^{*}\) for \(\overline{\eta}_{\nu}(A)\) and setting \({}^{*}A=\overline{\zeta}_{\nu}(A)\), we can rewrite the Galois condition as \(A\subseteq{}^{*}C\) iff \(C\subseteq A^{*}\).
**Lemma 3.15**.: Let \(\mathfrak{F}=(X,\bot,Y,S_{\nu})\) be a frame subject to the axioms of Table 1 and let \(A\in\mathcal{G}(X)\) be any stable set.
1. The following are equivalent:
   (a) \(\bot\) is symmetric
   (b) \(A\subseteq A^{**}\)
   (c) \(A^{*}={}^{*}A\)
2. \(A^{**}=\bigvee_{x\in A}\overline{\zeta}_{S}\overline{\eta}_{S}(\Gamma x)\)
3. The following are equivalent:
   (a) \(A^{**}\subseteq A\), for any \(A\in\mathcal{G}(X)\)
   (b) \(\overline{\zeta}_{S}\overline{\eta}_{S}(\Gamma x)\subseteq\Gamma x\), for any \(x\in X\)
   (c) \(\forall x,z\in X[\forall v\in Y(vS_{\nu}z\longrightarrow vS_{\nu}x)\longrightarrow x\leq z]\)
4. The following are equivalent:
   (a) \(\bot\) is irreflexive
   (b) \(A\cap A^{*}=\emptyset\)
Proof.: For (1) and the case (a)\(\Rightarrow\)(b), suppose, for a contradiction, that \(x\in A\), but \(x\not\in A^{**}\). Let \(z\in A^{*}\) such that \(x\bot z\) fails. But \(z\in A^{*}=\{z\mid\forall u(u\in A\longrightarrow z\bot u)\}\) and \(x\in A\), so \(z\bot x\) holds. By symmetry of \(\bot\) it follows \(x\bot z\), contradiction. Hence \(A\subseteq A^{**}\), for any \(A\in\mathcal{G}(X)\).
For (b)\(\Rightarrow\)(c), by Lemma 2.2, \((\ )^{*}\) forms a Galois connection on \(\mathcal{G}(X)\). By uniqueness of adjoints it then follows that \(A^{*}={}^{*}A\).
For (c)\(\Rightarrow\)(b), by definition \({}^{*}(\ )\) is Galois connected with \((\ )^{*}\) and hence if the two are equal, then the result follows by using Lemma 2.2.
For (b)\(\Rightarrow\)(a), recall that \(\bot=S^{\prime}_{\nu}\), both sections of which are stable sets, which are increasing sets by Lemma 3.2, and that by Lemma 2.2 the hypothesis means that \((\ )^{*}\) is a Galois connection on the lattice of stable sets. Assuming \(x\bot z\), we then have \(x\bot\Gamma z\), which means that \(x\in(\Gamma z)^{*}=\{x\mid\forall u(z\leq u\longrightarrow x\bot u)\}\). If \(x\leq w\) and \(z\leq u\), then by \(x\bot z\) we also have \(w\bot u\) and this means that \(\Gamma x\subseteq(\Gamma z)^{*}\). Since \((\ )^{*}\) is antitone, \((\Gamma z)^{**}\subseteq(\Gamma x)^{*}\). But then \(\Gamma z\subseteq(\Gamma z)^{**}\subseteq(\Gamma x)^{*}\) from which we obtain \(z\in(\Gamma x)^{*}\) and so \(z\bot x\) follows.
For claim (2), using definitions we obtain that
\[(A^{*})^{*}=\overline{\zeta}_{S}\left((A^{*})^{\prime}\right)=\overline{\zeta}_{S}\left(\left(\bigcap_{x\in A}\{\widetilde{\nu}(x)\}^{\prime}\right)^{\prime}\right)=\overline{\zeta}_{S}\left(\bigvee_{x\in A}\Gamma(\widetilde{\nu}(x))\right)=\bigvee_{x\in A}\overline{\zeta}_{S}(\Gamma(\widetilde{\nu}(x)))=\bigvee_{x\in A}\overline{\zeta}_{S}\overline{\eta}_{S}(\Gamma x).\]
For claim (3) and the case (a)\(\Rightarrow\)(b), assuming that \(A^{**}\subseteq A\), for any \(A\in\mathcal{G}(X)\) and choosing \(A=\Gamma x\), for arbitrary \(x\in X\), it is immediate, given the identity proven in claim (2), that \((\Gamma x)^{**}=\bigvee_{x\leq z}\overline{\zeta}_{S}\overline{\eta}_{S}( \Gamma z)=\overline{\zeta}_{S}\overline{\eta}_{S}(\Gamma x)\subseteq\Gamma x\).
The converse, (b)\(\Rightarrow\)(a), is immediate: \(A^{**}=\bigvee_{x\in A}\overline{\zeta}_{S}\overline{\eta}_{S}(\Gamma x) \subseteq\bigvee_{x\in A}\Gamma x=A\).
To prove (b)\(\Leftrightarrow\)(c), we have
\[\begin{array}{llll}\overline{\zeta}_{S}\overline{\eta}_{S}(\Gamma x)&=& \overline{\zeta}_{S}(S_{\nu}x)\\ &=&\bigcup\{z\in X\mid\overline{\eta}_{S}(\Gamma z)\subseteq S_{\nu}x\}\\ &=&\bigcup\{z\in X\mid S_{\nu}z\subseteq S_{\nu}x\}\end{array}\]
Hence, for any \(x\in X\)
\[\begin{array}{llll}\overline{\zeta}_{S}\overline{\eta}_{S}(\Gamma x)\subseteq \Gamma x&\mbox{iff}&\forall z(S_{\nu}z\subseteq S_{\nu}x\longrightarrow x \leq z)\\ &\mbox{iff}&\forall z[\forall v(vS_{\nu}z\longrightarrow vS_{\nu}x) \longrightarrow x\leq z]\end{array}\]
For claim (4), if \(x\in A\cap A^{*}\neq\emptyset\), then it must be that \(x\bot x\) and thus \(\bot\) is not irreflexive. Conversely, assume \(A\cap A^{*}=\emptyset\), for any \(A\in\mathcal{G}(X)\), but suppose that \(x\bot x\) for some \(x\in X\). Then \(x\in(\Gamma x)^{*}\) and thus \(\Gamma x\cap(\Gamma x)^{*}\neq\emptyset\), contradiction.
**Corollary 3.16**.: Let \(\mathfrak{F}=(X,\bot,Y,S_{\nu})\) be a frame satisfying axioms (F0)-(F4) of Table 1, where we set \(\bot=S^{\prime}_{\nu}\), and let \(\mathfrak{F}^{+}\) be its full complex algebra, \(\mathfrak{F}^{+}=(\mathcal{G}(X),\leq,\bigcap,\lor,\emptyset,X,(\ )^{*})\).
1. \(\mathfrak{F}^{+}\) is a complete lattice with a minimal quasi-complementation operator \((\ )^{*}\) on stable sets
2. \(\mathfrak{F}^{+}\) is a complete lattice with a quasi-complementation operator \((\ )^{*}\) which is a Galois connection on stable sets iff \(\bot\) is symmetric
3. \(\mathfrak{F}^{+}\) is a complete lattice with an involution \((\ )^{*}\) iff \(\bot\) is symmetric and the condition \(\forall x,z\in X[\forall v\in Y(vS_{\nu}z\longrightarrow vS_{\nu}x) \longrightarrow x\leq z]\) holds in the frame
4. \(\mathfrak{F}^{+}\) is a complete ortholattice iff \(\bot\) is symmetric and irreflexive and the condition \(\forall x,z\in X[\forall v\in Y(vS_{\nu}z\longrightarrow vS_{\nu}x) \longrightarrow x\leq z]\) holds in the frame
5. \(\mathfrak{F}^{+}\) is a complete De Morgan algebra if (a) \(\bot\) is symmetric, (b) the condition \(\forall x,z\in X[\forall v\in Y(vS_{\nu}z\longrightarrow vS_{\nu}x) \longrightarrow x\leq z]\) holds in the frame and (c) the sections of the Galois dual relation \(R^{\prime}\) of the upper bound relation \(R\) of Proposition 3.7 are stable
6. \(\mathfrak{F}^{+}\) is a complete Boolean algebra if the conditions (a)-(c) of the previous case hold in the frame and, in addition, (d) \(\bot\) is irreflexive.
Proof.: Immediate, given Lemma 3.15 and Proposition 3.7.
## 4 Choice-Free Representation of NLEs
### 4.1 Semilattice Representation
Let \(\mathbf{M}=(M,\leq,\wedge,1)\) be a meet semilattice with a unit (top) element \(1\) and \(X=\operatorname{Filt}(\mathbf{M})\) its set of proper filters (we assume filters are nonempty, i.e. \(1\in x\) for any \(x\in X\)). For each \(a\in M\), let \(X_{a}=\{x\in X\mid a\in x\}\) and \(\mathcal{B}=\{X_{a}\subseteq X\mid a\in M\}\) and notice that \(X_{a}\cap X_{b}=X_{a\wedge b}\in\mathcal{B}\), so that \(\mathcal{B}\) itself is a meet semilattice with unit element \(X=X_{1}\).
Let \(\mathfrak{X}=(X,\mathcal{B})\) be the topological space with carrier set \(X\) and topology \(\Omega\) generated by taking \(\mathcal{B}\) as a basis.
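To fix ideas, here is a small worked example (our illustration, not one taken from the source): take \(\mathbf{M}\) to be the four-element meet semilattice \(\{0,a,b,1\}\) with \(a\wedge b=0\) and \(a,b\) incomparable. Since filters are only required to be nonempty, \(X=\operatorname{Filt}(\mathbf{M})\) consists of \(x_{1}=\{1\}\), \(x_{a}=\{a,1\}\), \(x_{b}=\{b,1\}\) and \(x_{0}=M\), so that \(X_{1}=X\), \(X_{a}=\{x_{a},x_{0}\}\), \(X_{b}=\{x_{b},x_{0}\}\) and \(X_{0}=\{x_{0}\}\). In particular \(X_{a}\cap X_{b}=X_{a\wedge b}=X_{0}\), and the assignment \(a\mapsto X_{a}\) is an isomorphism of \(\mathbf{M}\) onto \(\mathcal{B}\) (cf. Corollary 4.4 below).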
**Remark 4.1** (Notation).: Principal filters of meet semilattices and lattices are typically designated with the notation \(x_{a}\) (\(=a{\uparrow}\)), for an element \(a\) of the (semi)lattice. Join semilattice and lattice principal ideals are similarly designated by \(y_{a}=a{\downarrow}\). We typically use \(x,z\) for filters, \(y,v\) for ideals and \(u,w\) for either case.
**Lemma 4.2**.: Given a filter \(\mathcal{F}\) in the lattice \(\Omega(\mathfrak{X})\) of open sets of \(\mathfrak{X}\), define \(F=\{a\in M\mid X_{a}\in\mathcal{F}\}\) and \(x_{\mathcal{F}}\) to be the filter of \(\mathbf{M}\) generated by the set \(F\). Then
1. for any basic open set \(X_{a}\in\mathcal{B}\), \(x_{\mathcal{F}}\in X_{a}\) iff \(X_{a}\in\mathcal{F}\)
2. for any open set \(U\) of \(\mathfrak{X}\), if \(x_{\mathcal{F}}\in U\), then \(U\in\mathcal{F}\)
3. if \(\mathcal{F}\) is a completely prime filter in the lattice \(\Omega(\mathfrak{X})\) of open sets of \(\mathfrak{X}\), then \(x_{\mathcal{F}}\in U\) iff \(U\in\mathcal{F}\).
Proof.: For (1), if \(x_{\mathcal{F}}\in X_{a}\), then by definition of \(X_{a}=\{x\in X\mid a\in x\}\) we have \(a\in x_{\mathcal{F}}\). By definition of \(x_{\mathcal{F}}\), let \(e_{1},\ldots,e_{n}\in M\) be such that \(e_{1}\wedge\cdots\wedge e_{n}\leq a\) and for each \(i=1,\ldots,n\), \(X_{e_{i}}\in\mathcal{F}\). Then \(\bigcap_{i=1}^{n}X_{e_{i}}\in\mathcal{F}\). Also, \(\bigcap_{i=1}^{n}X_{e_{i}}=X_{e_{1}\wedge\cdots\wedge e_{n}}\subseteq X_{a}\) and so \(X_{a}\in\mathcal{F}\) as well. Conversely, if \(X_{a}\in\mathcal{F}\), then \(a\in x_{\mathcal{F}}\), which is to say that \(x_{\mathcal{F}}\in X_{a}\).
For (2), if \(U\) is open, let \(E\subseteq M\) be such that \(U=\bigcup_{e\in E}X_{e}\). If \(x_{\mathcal{F}}\in U\), then let \(e\in E\) be an element such that \(x_{\mathcal{F}}\in X_{e}\). By (1), \(X_{e}\in\mathcal{F}\) and then since \(X_{e}\subseteq U\) and \(\mathcal{F}\) is a filter, \(U\in\mathcal{F}\) follows.
For (3), it suffices to show, given (2) above, that if \(\mathcal{F}\) is completely prime and \(U\in\mathcal{F}\), then \(x_{\mathcal{F}}\in U\). Now \(U=\bigcup_{e\in E}X_{e}\) for some \(E\subseteq M\) and then by complete primeness of \(\mathcal{F}\) we get \(X_{e}\in\mathcal{F}\), for some \(e\in E\). It then follows by part (1) that \(x_{\mathcal{F}}\in X_{e}\subseteq U\).
Given any space \(\mathfrak{X}\), a _filter_\(F\) of \(\mathfrak{X}\) is a non-empty upper set (with respect to the specialization order \(\subseteq\) on \(\mathfrak{X}\)) such that for any \(x,z\in F\) a lower-bound \(u\in X\) of \(\{x,z\}\) is in \(F\). We let \(\mathtt{KOF}(\mathfrak{X})\) designate the family of compact-open filters of \(\mathfrak{X}\), following the notation of [23].
**Proposition 4.3**.: Let \(\mathbf{M}\) be a meet semilattice, \(\mathfrak{X}=(X,\mathcal{B})\) its dual topological space (where \(X=\operatorname{Filt}(\mathbf{M})\) and \(\mathcal{B}=\{X_{a}\mid a\in M\}\) is a basis for the topology \(\Omega\) on \(X\)). Then
1. the space \(\mathfrak{X}\) is a spectral space
2. \(\mathcal{B}=\{X_{a}\mid a\in M\}=\mathtt{KOF}(\mathfrak{X})\).
Proof.: Recall that a topological space is _spectral_ if it is \(T_{0}\), coherent, compact and sober, which we prove in turn in order to establish part (1).
For the \(T_{0}\) property, if \(x\neq z\) are distinct filters, without loss of generality we may assume that \(a\in x\), but \(a\not\in z\), for some semilattice element \(a\in M\). Then the open set \(X_{a}\) separates \(x,z\), since \(x\in X_{a}\) but \(z\not\in X_{a}\).
For the coherence property we verify that the basis \(\mathcal{B}\) of the topology consists of compact-open sets and that it is closed under finite intersections. For the latter requirement, \(\mathcal{B}\) is easily seen to be closed under finite intersections, since \(X_{a}\cap X_{b}=X_{a\wedge b}\) and the intersection of the empty family of \(X_{a}\)'s is \(X\) itself, which is identical to \(X_{1}=\{x\in X\mid 1\in x\}\). For the first requirement of coherence, the \(X_{a}\)'s are certainly open, by definition of the topology. For compactness, let \(C\subseteq M\) and suppose that \(\{X_{e}\mid e\in C\}\) covers \(X_{a}\), i.e. \(X_{a}\subseteq\bigcup_{e\in C}X_{e}\). Then the principal filter \(x_{a}=a{\uparrow}\in X_{a}\) is in \(X_{e}\), for some \(e\in C\), hence \(e\in x_{a}\), i.e. \(a\leq e\). Then \(e\in x\), for any \(x\in X_{a}\) and this shows that \(X_{a}\subseteq X_{e}=\{z\in X\mid e\in z\}\). Hence \(\{X_{e}\}\), for this \(e\), is the needed finite subcover of \(X_{a}\).
Since \(X=X_{1}=\{x\in X\mid 1\in x\}\), compactness of \(\mathfrak{X}\) follows from the previous argument.
Sobriety of the space is equivalent to the requirement that every completely prime filter \(\mathcal{F}\) in the lattice \(\Omega(\mathfrak{X})\) of open sets of \(\mathfrak{X}\) is generated by a single point \(x_{{}_{\mathcal{F}}}\), in other words that \(\mathcal{F}=\{U\in\Omega(\mathfrak{X})\mid x_{{}_{\mathcal{F}}}\in U\}\). This was shown to hold in Lemma 4.2, part (3).
For part (2), left to right, \(X_{a}\) is compact-open, by the proof of coherence for \(\mathfrak{X}\) in part (1) of this proposition. Furthermore, \(x\in X_{a}\) iff \(a\in x\) iff \(x\in\Gamma x_{a}\). Hence \(X_{a}\) is a (principal) filter.
Conversely, let \(F\subseteq X\) be a compact-open filter of \(X\). Being open, let \(F=\bigcup_{a\in E\subseteq M}X_{a}\), so that by compactness, \(F=X_{a_{1}}\cup\cdots\cup X_{a_{n}}\) for some \(n\). By \(a_{i}{\uparrow}=x_{a_{i}}\in X_{a_{i}}\subseteq F\), all the \(x_{a_{i}}\)'s, for \(i=1,\ldots,n\), are in \(F\), hence so is their meet (intersection), since \(F\) is a filter. Letting \(u=\bigcap_{i=1}^{n}x_{a_{i}}\), we show that \(F=\Gamma u=u{\uparrow}\). Right-to-left is obvious since \(u\in F\), which is a filter, so \(\Gamma u\subseteq F\). For left-to-right, let \(z\in F\), so that \(z\in X_{a_{k}}=\Gamma x_{a_{k}}\) for some \(k\in\{1,\ldots,n\}\), hence \(x_{a_{k}}\subseteq z\). By definition of \(u\) we obtain \(u\subseteq x_{a_{k}}\subseteq z\) and then \(z\in\Gamma u\). Hence \(F\subseteq\Gamma u\) follows, too. Thus \(F=\Gamma u=\bigcup_{i=1}^{n}X_{a_{i}}\) and thus \(u\in X_{a_{i}}=\Gamma x_{a_{i}}\) for some \(i\). But then \(x_{a_{i}}\subseteq u=\bigcap_{i=1}^{n}x_{a_{i}}\subseteq x_{a_{i}}\) so that \(u=x_{a_{i}}\), for this \(i\), and so \(F=\Gamma x_{a_{i}}=X_{a_{i}}\in\mathcal{B}\).
It should be pointed out that, except for the phrasing, notation and detail, the arguments in the proofs of Lemma 4.2 and Proposition 4.3 are the same as these involved in showing that the space of proper filters of a Boolean algebra is spectral [2, Proposition 3.12], or that the space of proper filters of an ortholattice is spectral [22, Proposition 3.4.1]. It is really only the semilattice-structure that is relevant in the argument, which is one of the reasons that we included a proof of Proposition 4.3, the other reason relating to the observation made in [13, 14] that a lattice can be always regarded as a diagram of dually isomorphic meet semilattices. For the case of Boolean algebras and ortholattices, this duality may be taken to be Boolean complementation, or ortho-complementation, respectively. For general bounded lattices the semilattice duality can be taken to be the identity map \(\imath:\mathbf{L}_{\wedge}\leftrightarrow(\mathbf{L}_{\vee})^{\partial}\), where \(\mathbf{L}_{\vee}=(\mathbf{L}_{\wedge})^{\partial}\), as in [13, 14], and as we explain in more detail in the sequel.
Note that the topology \(\Omega\) on \(X\) is the Scott topology, with respect to the specialization order \(\subseteq\) on \(X\) [18, chapter II.1], which is inclusion of filters (of \(\mathbf{M}\)). To see that specialization coincides with filter inclusion note that if \(x\sqsubseteq z\), i.e. \(N^{o}(x)\subseteq N^{o}(z)\) and \(a\in x\), then \(x\in X_{a}\in N^{o}(x)\subseteq N^{o}(z)\), hence also \(z\in X_{a}\), i.e. \(a\in z\) and so \(x\subseteq z\). Conversely, if \(x\subseteq z\) and \(U\) is an open neighborhood of \(x\), let \(X_{a}\) be a basic open such that \(x\in X_{a}\subseteq U\). From \(x\in X_{a}\) we get \(a\in x\subseteq z\), so also \(z\in X_{a}\subseteq U\), hence \(U\in N^{o}(z)\), i.e. \(N^{o}(x)\subseteq N^{o}(z)\) which by definition means that \(x\sqsubseteq z\).
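In the semilattice example given earlier (our illustration), the specialization order is \(x_{1}\sqsubseteq x_{a}\sqsubseteq x_{0}\) and \(x_{1}\sqsubseteq x_{b}\sqsubseteq x_{0}\), matching the filter inclusions \(\{1\}\subseteq\{a,1\}\subseteq M\) and \(\{1\}\subseteq\{b,1\}\subseteq M\).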
It follows from Proposition 4.3 that the space \(\mathfrak{X}\) is an HMS space (named so in [23], in honour of Hofmann, Mislove and Stralka), defined by a set of equivalent conditions in [23, Theorem 2.5].
The following representation result is an immediate consequence of Proposition 4.3.
**Corollary 4.4**.: Given a meet semilattice \(\mathbf{M}\), the map \(a\mapsto X_{a}\) is a semilattice isomorphism \(\mathbf{M}\simeq\mathtt{KOF}(\operatorname{Filt}(\mathbf{M}))\). \(\Box\)
### 4.2 The Canonical Dual Space of a Lattice
If \(\mathbf{N}\) is a join semilattice with unit (bottom) element \(0\), then \(\mathbf{N}^{\partial}\) (the opposite semilattice, order reversed) is a meet semilattice with unit (top) and \(\operatorname{Filt}(\mathbf{N}^{\partial})=Y=\operatorname{Idl}(\mathbf{N})\). The topology generated by the basis of sets \(Y^{a}=\{y\in Y\mid a\in y\}\), for \(a\in N\), is a spectral topology by Proposition 4.3, observing that \(Y^{a}\cap Y^{b}=Y^{a\lor b}\), which ensures that the basis \(\mathcal{C}=\{Y^{a}\mid a\in N\}\) is closed under finite intersections (with the empty intersection being \(Y^{0}=Y\) itself).
For the lattice case, just as orthonegation is represented in [9] by the binary relation \(\bot\subseteq X\times X\) defined by \(x\perp z\) iff \(\exists a(a\in x\wedge a^{\bot}\in z)\), the identity trivial duality \(\imath:\mathbf{L}\simeq(\mathbf{L}^{\partial})^{\partial}\) is similarly represented [13, 14] by the sorted binary relation \(\bot\subseteq X\times Y\) defined by \(x\perp y\) iff \(\exists a(a\in x\wedge\imath(a)\in y)\) iff \(x\cap y\neq\emptyset\).
Note that the quasi-seriality condition (1) holds for the complement of the canonical Galois relation \(\bot\).
As in [14], we represent lattices and normal lattice expansions, more generally, in topologized sorted frames (polarities) \(\mathfrak{F}=(X,\bot,Y,(R_{\sigma})_{\sigma\in\tau})\), where for each normal lattice operator \(f\) of distribution type \(\sigma=(i_{1},\ldots,i_{n};i_{n+1})\) the frame is equipped with a sorted relation \(R_{\sigma}\) of sort \(\sigma=(i_{n+1};i_{1}\cdots i_{n})\), i.e. \(R_{\sigma}\subseteq Z_{i_{n+1}}\times\prod_{j=1}^{n}Z_{i_{j}}\) and where \(Z_{i_{j}}=X\) when \(i_{j}=1\) and \(Z_{i_{j}}=Y\) when \(i_{j}=\partial\).
**Proposition 4.5**.: For a bounded lattice \(\mathbf{L}\), the bases \(\mathcal{B}=\{X_{a}\mid a\in L\}\) and \(\mathcal{C}=\{Y^{a}\mid a\in L\}\) of the spaces \(\mathfrak{X}=(X,\mathcal{B}),\mathfrak{Y}=(Y,\mathcal{C})\), where \(X=\operatorname{Filt}(\mathbf{L})\) and \(Y=\operatorname{Idl}(\mathbf{L})\), are the families of compact-open Galois stable and co-stable, respectively, sets. Furthermore, \(\mathcal{B}\) and \(\mathcal{C}\) are dually isomorphic bounded lattices, with \(\mathcal{B}\) a sublattice of \(\mathcal{G}(X)\) and \(\mathcal{C}\) a sublattice of \(\mathcal{G}(Y)\).
Proof.: The two cases for \(\mathcal{B}\) and \(\mathcal{C}\) are similar. The proof follows from [14, Lemma 2.7]. That lemma is phrased in terms of clopen sets in a Hausdorff space, which are then compact-open and compactness and (co)stability are the only properties needed and used in the argument. Other than that, the claim in [14, Lemma 2.7] is phrased in terms of an arbitrary duality \(\ell:\mathbf{S}\simeq\mathbf{K}^{\partial}:r\) between meet semilattices. The case for lattices follows by specializing the argument to the trivial duality \(\ell=r=\imath:\mathbf{L}_{\wedge}\leftrightarrows(\mathbf{L}_{\vee})^{\partial}\), which we do below.
First, stability of the sets \(X_{a},Y^{a}\) follows by Lemma 3.2, observing that \(X_{a}=\Gamma x_{a}\) and \(Y^{a}=\Gamma y_{a}\). It remains to show that every stable compact-open subset \(A\) of \(X\) is of the form \(X_{a}\), for some lattice element \(a\in L\). Assume \(A=A^{\prime\prime}\) is compact-open and let \(x\not\in A^{\prime\prime}=A\). Let \(y\in A^{\prime}\) such that \(x\not\perp y\), i.e. \(x\cap y=\emptyset\). By \(y\in A^{\prime}\) we have \(A\perp y\), i.e. for any \(z\in A\) we have \(z\perp y\), i.e. \(a_{z}\in z\cap y\), for some lattice element \(a_{z}\). Thus \(A\subseteq\bigcup_{z\in A}X_{a_{z}}\) and, by compactness, it follows that \(A\subseteq X_{a_{z_{1}}}\cup\cdots\cup X_{a_{z_{n}}}\), for some \(n\). Letting \(a_{x}=a_{z_{1}}\vee\cdots\lor a_{z_{n}}\) it follows that for all \(i=1,\ldots,n\), \(X_{a_{z_{i}}}\subseteq X_{a_{x}}\), hence \(A\subseteq X_{a_{x}}\). Notice that \(a_{x}\not\perp x\), since \(a_{x}\in y\). Hence \(x\not\perp X_{a_{x}}\). This shows that \(-A\subseteq-X_{a_{x}}\) and given we also obtained \(A\subseteq X_{a_{x}}\) it follows that \(A=X_{a_{x}}\).
That both \(\mathcal{B},\mathcal{C}\) are (dually isomorphic) lattices follows from the fact that \((X_{a})^{\prime}=Y^{a}\) and \((Y^{a})^{\prime}=X_{a}\). Joins in \(\mathcal{B},\mathcal{C}\) are defined by taking closures of unions: \(A\lor C=(A\cup C)^{\prime\prime}\), as in \(\mathcal{G}(X)\) and \(\mathcal{G}(Y)\).
Let \(\mathsf{KOG}(X),\mathsf{KOG}(Y)\) be the families of Galois compact-open subsets of \(X\) and of \(Y\), respectively. By Propositions 4.3 and 4.5, \(\mathsf{KOF}(\mathfrak{X})=\mathsf{KOG}(X)\) and \(\mathsf{KOF}(\mathfrak{Y})=\mathsf{KOG}(Y)\).
The following choice-free lattice representation theorem is an immediate consequence of our so far results in this article.
**Theorem 4.6** (Choice-free Lattice Representation).: Let \(\mathbf{L}=(L,\leq,\wedge,\vee,0,1)\) be a bounded lattice and \((X,\pm,Y)\) its dual filter-ideal frame \((X=\operatorname{Filt}(\mathbf{L}),Y=\operatorname{Idl}(\mathbf{L}))\), with \(\pm\subseteq X\times Y\) defined by \(x\pm y\) iff \(x\cap y\not=\emptyset\). Let \(\mathfrak{X}=(X,\mathcal{B})\) and \(\mathfrak{Y}=(Y,\mathcal{C})\) be the spectral spaces generated by the bases \(\mathcal{B}=\{X_{a}\mid a\in L\}\) and \(\mathcal{C}=\{Y^{a}\mid a\in L\}\), respectively.
Then the map \(a\mapsto X_{a}\) is a lattice isomorphism \(\mathbf{L}\simeq\mathsf{KOG}(\operatorname{Filt}(\mathbf{L}))\) and the map \(a\mapsto Y^{a}\) is a dual isomorphism \(\mathbf{L}^{\partial}\simeq\mathsf{KOG}(\operatorname{Idl}(\mathbf{L}))\).
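For a concrete instance of the theorem (our illustration, not an example from the source), let \(\mathbf{L}\) be the four-element lattice \(\{0,a,b,1\}\) with \(a\wedge b=0\) and \(a\vee b=1\). Its proper filters (those not containing \(0\)) are \(x_{1},x_{a},x_{b}\) and its proper ideals are \(y_{0},y_{a},y_{b}\), and \(x\pm y\) iff \(x\cap y\neq\emptyset\) holds exactly for the pairs \((x_{a},y_{a})\) and \((x_{b},y_{b})\). The Galois stable sets are then \(\emptyset=X_{0}\), \(\{x_{a}\}=X_{a}\), \(\{x_{b}\}=X_{b}\) and \(X=X_{1}\), and \(a\mapsto X_{a}\) is an isomorphism of \(\mathbf{L}\) onto \(\mathsf{KOG}(X)\): for instance \(X_{a}\vee X_{b}=\{x_{a},x_{b}\}^{\prime\prime}=X=X_{a\vee b}\), while \(X_{a}\cap X_{b}=\emptyset=X_{a\wedge b}\).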
The representation of (semi)lattices detailed above is essentially the same as that in [14], by this author and Dunn, the difference lying in the choice of the topology to be imposed on the filter space. The lattice \(\mathcal{G}(X)\) of Galois stable sets is a canonical extension of the lattice, see [8, Proposition 2.6], which is unique up to an isomorphism that commutes with the lattice embeddings, by [8, Proposition 2.7].
Moshier and Jipsen [23] provide a topological construction of the canonical extension of a lattice. Proposition 4.5, together with uniqueness of canonical extensions up to isomorphism, entails that the filter space of a lattice is what Moshier and Jipsen call a BL-space, defined by a number of equivalent conditions in [23, Theorem 3.2]. It can be easily verified that the canonical extension \(\mathsf{FSat}(X)\) defined in [23] is literally identical to \(\mathcal{G}(X)\). We substantiate this claim below. Recall first that \(\mathsf{OF}(X)\) designates in [23] the family of open filters of \(X=\operatorname{Filt}(\mathbf{L})\).
**Proposition 4.7**.: The following hold
1. A Galois stable set \(A=A^{\prime\prime}\) is a filter of \(X=\operatorname{Filt}(\mathbf{L})\) and, similarly, a Galois co-stable set \(B=B^{\prime\prime}\) is a filter of \(Y=\operatorname{Idl}(\mathbf{L})\).
2. Every open filter \(F\in\mathsf{OF}(X)\) is of the form \({}^{\pm}\{y\}\) for a unique, by separation of the frame (cf Lemma 3.2), ideal \(y\in Y=\operatorname{Idl}(\mathbf{L})\)
3. For any subset \(U\subseteq X\), \(U^{\prime\prime}=\mathsf{fsat}(U)\). Therefore, \(\mathcal{G}(X)=\mathsf{FSat}(X)\).
Proof.: For part (1), the proof is straightforward and we only discuss the case of stable sets. First, Galois sets are upsets, by Lemma 3.2. Now let \(x,z\in A=A^{\prime\prime}\) and suppose that \(x\cap z\not\in A^{\prime\prime}\). Then there exists an ideal \(y\in A^{\prime}\) such that \((x\cap z)\not\perp y\), i.e. \((x\cap z)\cap y=\emptyset\). By \(x,z\in A^{\prime\prime}\), let \(a\in x\cap y\not=\emptyset\) and \(b\in z\cap y\not=\emptyset\). Then both \(a,b\in y\), hence \(a\lor b\in y\). But \(x,z\) are filters, hence \(a\lor b\in x\cap z\), which contradicts the assumption that \(x\cap z\not\perp A^{\prime\prime}\).
For part (2), for any \(y\in Y\), \({}^{\perp}\{y\}\) is Galois stable, hence a filter of \(X\), by part (1). To see that it is an open set, let \(x\in{}^{\perp}\{y\}\), i.e. \(x\perp y\), so that \(a\in x\cap y\neq\emptyset\). Thus \(x\in X_{a}\). Since \(a\in y\), \(X_{a}\perp y\), so that we obtain \(x\in X_{a}\subseteq{}^{\perp}\{y\}\). Thus \({}^{\perp}\{y\}\in\mathtt{OF}(X)\), for any \(y\in Y\).
Conversely, let \(F\in\mathtt{OF}(X)\) and let \(E\subseteq L\) be such that \(F=\bigcup_{a\in E}X_{a}\). Let \(y\) be the ideal generated by \(E\). Thus \(e\in y\) iff there exist \(e_{1},\ldots,e_{n}\in E\), for some \(n\), such that \(e\leq e_{1}\vee\cdots\lor e_{n}\). We show that \(F={}^{\perp}\{y\}\), for this \(y\).
If \(x\in F=\bigcup_{a\in E}X_{a}\), then \(x\in X_{a}\), for some \(a\in E\). By definition of \(y\), we get \(a\in y\), so \(x\perp y\), i.e. \(x\in{}^{\perp}\{y\}\). Hence \(F\subseteq{}^{\perp}\{y\}\). Conversely, let \(x\perp y\) so that \(e\in x\cap y\). Then \(e\leq e_{1}\vee\cdots\lor e_{n}\), where \(\{e_{1},\ldots,e_{n}\}\subseteq E\). It follows that \(x_{e_{1}}\cap\cdots\cap x_{e_{n}}=x_{e_{1}\vee\cdots\lor e_{n}}\subseteq x_{e}\subseteq x\). Since \(e_{i}\in E\), we have \(X_{e_{i}}\subseteq F\). Because \(x_{e_{i}}\in X_{e_{i}}\), all principal filters \(x_{e_{1}},\ldots,x_{e_{n}}\in F\). Since \(F\) is a filter, their intersection is in \(F\) and then also \(x\in F\). Hence \({}^{\perp}\{y\}\subseteq F\).
For part (3), by Lemma 3.2 the set of open elements \({}^{\perp}\{y\}\) is meet-dense in \(\mathcal{G}(X)\). By part (2) above, \(\mathtt{OF}(X)=\{{}^{\perp}\{y\}\mid y\in Y\}\). Using also the definition of \(F\)-saturation in [23] we obtain
\[\mathtt{fsat}(U)=\bigcap\{F\in\mathtt{OF}(X)\mid U\subseteq F\}=\bigcap\{{}^{\perp}\{y\}\mid U\perp y\}=U^{\prime\prime}\]
and so \(\mathtt{FSat}(X)=\{A\subseteq X\mid A=\mathtt{fsat}(A)\}=\{A\subseteq X\mid A =A^{\prime\prime}\}=\mathcal{G}(X)\).
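The construction lends itself to direct computation in small finite cases. The following sketch is ours and not part of the paper: the example lattice, the helper functions (`meet`, `join`, `is_filter`, `is_ideal`) and the properness conventions are illustrative assumptions. It enumerates proper filters and ideals of a finite bounded lattice, builds the relation \(x\pm y\) iff \(x\cap y\neq\emptyset\), computes the Galois stable sets, and checks that \(a\mapsto X_{a}\) is an order isomorphism onto them, as Theorem 4.6 predicts for a finite lattice (where every stable set is compact-open).

```python
# Computational sketch (illustrative, not from the paper): for a finite bounded
# lattice we enumerate proper filters and ideals, build the Galois relation
# "x meets y" (x and y share an element), and compute the Galois stable sets.
# For a finite lattice the map a -> X_a should then be an order isomorphism
# onto the stable sets, as in Theorem 4.6.
from itertools import combinations

# Example lattice: 0 < a, b < 1 with a and b incomparable.
elems = ['0', 'a', 'b', '1']
leq = {(x, y) for x in elems for y in elems if x == '0' or y == '1' or x == y}

def below(z):                      # number of elements below z (used to pick glb/lub)
    return sum((w, z) in leq for w in elems)

def meet(x, y):
    return max((z for z in elems if (z, x) in leq and (z, y) in leq), key=below)

def join(x, y):
    return min((z for z in elems if (x, z) in leq and (y, z) in leq), key=below)

def is_filter(s):                  # proper filter: upward closed, meet closed, no bottom
    return ('1' in s and '0' not in s
            and all(y in s for x in s for y in elems if (x, y) in leq)
            and all(meet(x, y) in s for x in s for y in s))

def is_ideal(s):                   # proper ideal: downward closed, join closed, no top
    return ('0' in s and '1' not in s
            and all(y in s for x in s for y in elems if (y, x) in leq)
            and all(join(x, y) in s for x in s for y in s))

subsets = [frozenset(c) for r in range(1, len(elems) + 1)
           for c in combinations(elems, r)]
filters = [s for s in subsets if is_filter(s)]     # the sort X
ideals = [s for s in subsets if is_ideal(s)]       # the sort Y

def right(A):                      # A subset of X  |->  A' subset of Y
    return frozenset(y for y in ideals if all(x & y for x in A))

def left(B):                       # B subset of Y  |->  B' subset of X
    return frozenset(x for x in filters if all(x & y for y in B))

stable = {left(right(frozenset(A)))                # all sets A'' for A subset of X
          for r in range(len(filters) + 1)
          for A in combinations(filters, r)}

X_ = {a: frozenset(x for x in filters if a in x) for a in elems}
assert all(X_[a] in stable for a in elems)         # each X_a is stable
assert len(stable) == len(elems)                   # finite case: G(X) has |L| elements
assert all((X_[a] <= X_[b]) == ((a, b) in leq) for a in elems for b in elems)
print(sorted(len(s) for s in stable))              # sizes of the stable sets
```

Running it on the four-element example above prints `[0, 1, 1, 3]`, the sizes of the four stable sets.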
### 4.3 Representing Normal Lattice Operators
Let \(\mathbf{L}=(L,\leq,\wedge,\vee,0,1,f)\) be a bounded lattice with a normal operator \(f\). Then \(f\) extends to a completely normal operator \(F\), of the same distribution type as \(f\), on the canonical extension \(\mathcal{G}(\operatorname{Filt}(\mathbf{L}))\) of \(\mathbf{L}\).
For the proof, we refer the reader to [13, Sections 3.1,4.1,4.2]. The representation of the operator is the same as that given in [11], the difference lying in the axiomatization of the dual frame of the lattice expansion, in particular on the axioms for the relations corresponding to the lattice operator. Particular instances of the representation were also given in [15].
To keep this article as self-contained as possible, we sketch the representation steps, drawing on [13, Section 4.1].
The base polarity \(\mathfrak{F}=(\operatorname{Filt}(\mathcal{L}),\pm,\operatorname{Idl}(\mathcal{L}))\) consists of the sets \(X=\operatorname{Filt}(\mathcal{L})\) of filters and \(Y=\operatorname{Idl}(\mathcal{L})\) of ideals of the lattice and the relation \(\pm\subseteq\operatorname{Filt}(\mathcal{L})\times\operatorname{Idl}(\mathcal{L})\), defined by \(x\pm y\) iff \(x\cap y\neq\emptyset\), while the representation map \(\zeta_{1}\) sends a lattice element \(a\in L\) to the set of filters that contain it, \(\zeta_{1}(a)=\{x\in X\mid a\in x\}=\{x\in X\mid x_{a}\subseteq x\}=\Gamma x_{a}\). Similarly, a co-representation map \(\zeta_{\partial}\) is defined by \(\zeta_{\partial}(a)=\{y\in Y\mid a\in y\}=\{y\in Y\mid y_{a}\subseteq y\}=\Gamma y_{a}\).
For each normal lattice operator a relation is defined, such that if \(\delta=(i_{1},\ldots,i_{n};i_{n+1})\) is the distribution type of the operator, then \(\sigma=(i_{n+1};i_{1}\cdots i_{n})\) is the sort type of the relation. Without loss of generality, we may restrict to just two normal operators \(f\), of output type \(1\), and \(h\), of output type \(\partial\). We then define two corresponding relations \(R,S\) of respective sort types \(\sigma(R)=(1;i_{1}\cdots i_{n})\) and \(\sigma(S)=(\partial;t_{1}\cdots t_{n})\), where for each \(j\), \(i_{j}\) and \(t_{j}\) are in \(\{1,\partial\}\). In other words \(R\subseteq X\times\prod_{j=1}^{j=n}Z_{i_{j}}\) and \(S\subseteq Y\times\prod_{j=1}^{j=n}Z_{t_{j}}\).
To define the relations, we use the point operators introduced in [10] (see also [11]). In the generic case we examine, we need to define two sorted operators
\[\widehat{f}\colon\prod_{j=1}^{j=n}Z_{i_{j}}\longrightarrow Z_{1}\qquad\widehat{h }\colon\prod_{j=1}^{j=n}Z_{t_{j}}\longrightarrow Z_{\partial}\qquad\text{ (recall that $Z_{1}=X,Z_{\partial}=Y$)}.\]
Assuming for the moment that the point operators have been defined, the canonical relations \(R,S\) are defined by
\[xR\bar{u}\ \text{ iff }\ \widehat{f}(\bar{u})\subseteq x\quad\text{(for }x\in X\text{ and }\bar{u}\in\prod_{j=1}^{j=n}Z_{i_{j}}\text{)},\qquad yS\bar{v}\ \text{ iff }\ \widehat{h}(\bar{v})\subseteq y\quad\text{(for }y\in Y\text{ and }\bar{v}\in\prod_{j=1}^{j=n}Z_{t_{j}}\text{)}. \tag{15}\]
Returning to the point operators and letting \(x_{e},y_{e}\) be the principal filter and principal ideal, respectively, generated by a lattice element \(e\), these are uniformly defined as follows, for \(\bar{u}\in\prod_{j=1}^{j=n}Z_{i_{j}}\) and \(\bar{v}\in\prod_{j=1}^{j=n}Z_{t_{j}}\)
\[\widehat{f}(\bar{u})\;=\;\bigvee\{x_{f(\bar{a})}\;|\;\bar{a}\in\bar{u}\} \qquad\qquad\widehat{h}(\bar{v})\;=\;\bigvee\{y_{h(\bar{a})}\;|\;\bar{a}\in \bar{v}\}. \tag{16}\]
In other words, \(\widehat{f}(\bar{u})\) is the filter generated by the set \(\{f(\bar{a})\;|\;\bar{a}\in\bar{u}\}\). Similarly \(\widehat{h}(\bar{v})\) is the ideal generated by the set \(\{h(\bar{a})\;|\;\bar{a}\in\bar{v}\}\).
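As a quick illustration (ours, not from the source), take the four-element lattice \(\{0,a,b,1\}\) with \(a\wedge b=0\) and \(a\vee b=1\), and the binary operator \(f=\wedge\), of distribution type \((1,1;1)\) since this lattice is distributive. Then \(\widehat{f}(x_{a},x_{1})\) is the filter generated by \(\{a\wedge 1,1\wedge 1\}=\{a,1\}\), i.e. \(\widehat{f}(x_{a},x_{1})=x_{a}\), so by equations (15) we have \(xR\,x_{a}x_{1}\) exactly when \(x_{a}\subseteq x\).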
**Proposition 4.8**.: In the canonical lattice frame all axioms of Table 1 hold. In particular, all sections of the Galois dual relations \(R^{\prime},S^{\prime}\) of the canonical relations \(R,S\), defined by equations (15), are Galois sets.
Proof.: The proof for axioms (F1)-(F3) is given in [13, Lemma 4.3]. For axiom (F4), the claim was first stated as [12, Lemma 25] and a proof of one of the subcases was detailed, the other one being sufficiently similar. The omitted proof of the other subcase was provided in [13, Lemma 4.6]. Axiom (F0) obviously holds in the canonical frame since every proper filter \(x\) does not intersect the principal ideal \(y_{a}\), for any \(a\not\in x\). Similarly for ideals.
The following results will be useful in the sequel.
**Lemma 4.9** ([12, Lemma 23]).: In the canonical frame, \(xR\bar{u}\) holds iff \(\forall\bar{a}\in L^{n}\) (\(\bar{a}\in\bar{u}\longrightarrow f(\bar{a})\in x\)). Similarly, \(yS\bar{v}\) holds iff \(\forall\bar{a}\in L^{n}\) (\(\bar{a}\in\bar{v}\longrightarrow h(\bar{a})\in y\)).
**Lemma 4.10** ([12, Lemma 24]).: Where \(R^{\prime},S^{\prime}\) are the Galois dual relations of the canonical relations \(R,S\), \(yR^{\prime}\bar{u}\) holds iff \(\widehat{f}(\bar{u})\perp y\) iff \(\exists\bar{b}(\bar{b}\in\bar{u}\wedge f(\bar{b})\in y)\). Similarly, \(xS^{\prime}\bar{v}\) holds iff \(x\perp\widehat{h}(\bar{v})\) iff \(\exists\bar{e}(\bar{e}\in\bar{v}\wedge h(\bar{e})\in x)\).
Each of the relations \(R,S\) generates a classical, though sorted, completely additive image operator \(\alpha_{R},\eta_{S}\), respectively, and we designate by \(\overline{\alpha}_{R},\overline{\eta}_{S}\) the closure of their restriction to Galois sets (stable, or co-stable, according to the distribution types of \(f,h\)). By Theorem 3.5 and Proposition 4.8, \(\overline{\alpha}_{R},\overline{\eta}_{S}\) distribute over arbitrary joins of Galois sets. Composing with the Galois connection, which is a duality of the complete lattices of Galois stable and co-stable
sets, completely normal operators are obtained, \(\overline{\alpha}_{f},\overline{\eta}_{h}:\prod_{i=1}^{n}\mathcal{G}(X) \longrightarrow\mathcal{G}(X)\), of the same distribution type as \(f,h\), respectively, explicitly defined by
\[\overline{\alpha}_{f}(A_{1},\ldots,A_{n})= \overline{\alpha}_{R}(\ldots,\underbrace{A_{j}}_{i_{j}=1},\ldots, \underbrace{A_{r}^{\prime}}_{i_{r}=\partial}) (A_{1},\ldots,A_{n}\in\mathcal{G}(X)), \tag{17}\] \[\overline{\eta}_{h}(B_{1},\ldots,B_{n})= \overline{\eta}_{S}(\ldots,\underbrace{B_{r}}_{i_{r}=\partial}, \ldots,\underbrace{B_{j}^{\prime}}_{i_{j}=1}) (B_{1},\ldots,B_{n}\in\mathcal{G}(Y)). \tag{18}\]
By [13, Proposition 5.2], \(\overline{\alpha}_{f},\overline{\eta}_{h}\) restrict to normal operators of the respective distribution type on the lattice \(\mathsf{KOG}(X)\) (which, in [13], is identified as the lattice of clopen (in the lattice-theoretic sense) elements of \(\mathcal{G}(X)\)). We can then conclude with the representation theorem below.
**Theorem 4.11**.: Given a similarity type \(\tau\), let \(\tau_{1},\tau_{\partial}\) be the subtypes consisting of all distribution types in \(\tau\) of output type 1 and \(\partial\), respectively. If \(\mathbf{L}\) is a normal lattice expansion of type \(\tau\), \(\mathbf{L}=(L,\leq,\wedge,\lor,0,1,(f_{\delta})_{\delta\in\tau_{1}},(h_{\delta^{\prime}})_{\delta^{\prime}\in\tau_{\partial}})\), then the representation map \(\zeta(a)=\{x\in X\mid a\in x\}=X_{a}\), where \(X=\operatorname{Filt}(\mathbf{L})\), is an isomorphism \(\zeta:\mathbf{L}\simeq(\mathsf{KOG}(X),\subseteq,\cap,\vee,\emptyset,X,(\overline{\alpha}_{f})_{\delta(f)\in\tau_{1}},(\overline{\eta}_{h})_{\delta(h)\in\tau_{\partial}})\) of normal lattice expansions. \(\Box\)
The next Proposition identifies the canonical extension of normal operators we have defined as their \(\sigma/\pi\)-extension.
**Proposition 4.12**.: The operations \(\overline{\alpha}_{f}\), \(\overline{\eta}_{h}\) are the \(\sigma\) and \(\pi\) extensions of \(f,h\), respectively, as these are defined in [8].
Proof.: The argument has been detailed in [12, Section 3.3]. Roughly, given Proposition 4.8, Lemma 3.13 applies so that if \(R\) is the relation constructed from a normal lattice operator \(f\), then \(\overline{\alpha}_{R}(\Gamma\bar{u})=R\bar{u}=\alpha_{R}(\Gamma\bar{u})= \Gamma(\widehat{f}_{R}(\bar{u}))\). Assuming \(f\) is of distribution type \(\delta=(\tilde{i_{j}};1)\), \(f^{\sigma}\) as defined in [8] is a sorted map and it is defined on a tuple \(\tilde{F}\) of Galois sets by extending its definition on closed elements, by \(f^{\sigma}(\tilde{F})=\vee_{\bar{u}\in\tilde{F}}\,f_{\sigma}(\Gamma\bar{u})\), where \(f_{\sigma}\) is defined on closed elements. It is shown in [12, Section 3.3] that \(f_{\sigma}(\Gamma\bar{u})\), as defined in [8], satisfies the identity \(f_{\sigma}(\Gamma\bar{u})=\Gamma(\widehat{f}_{R}\bar{u})\in\mathcal{G}(X)\). Thereby, the \(\sigma\)-extension of \(f\) coincides with the operation \(\overline{\alpha}_{R}\) that we defined. For an operator \(h\) of distribution type \((\tilde{t_{j}};\partial)\), with corresponding canonical frame relation \(S\), its dual \(\sigma\)-extension on closed elements satisfies, respectively, the identity \(h_{\sigma}^{\partial}(\Gamma\bar{u})=\Gamma(\widehat{h}_{S}\bar{u})\in \mathcal{G}(Y)\). Extending to Galois sets we similarly have \(h_{\sigma}^{\partial}(\tilde{F})=\overline{\eta}_{S}(\tilde{F})\). The single-sorted \(\sigma\) and \(\pi\)-extensions of \(f,h\), respectively, are then obtained by composing appropriately with the Galois connection, resulting in the maps \(\overline{\alpha}_{f}=f^{\sigma},\overline{\eta}_{h}=h^{\pi}\), as defined in equations (17) and (18), respectively.
## 5 Representing Quasi-Complemented Lattices
In this section we extend the lattice representation of section 4.2 to the case of a lattice with an additional quasi-complementation operator, assuming the
axiomatization of at least the minimal system of Figure 1 and specializing the constructions of Section 4.3.
The canonical dual frame is the structure \((X,\pm,Y,S_{\nu})\), where \(X=\operatorname{Filt}(\mathbf{L})\), \(Y=\operatorname{Idl}(\mathbf{L})\), \(\pm\subseteq X\times Y\) is defined by \(x\pm y\) iff \(x\cap y\neq\emptyset\) and \(S_{\nu}\subseteq Y\times X\) is the canonical relation defined using the point operator \(\widetilde{\nu}:X\longrightarrow Y\), by equations (15) and (16). For the case at hand, the definitions are given by equation (19)
\[\widetilde{\nu}(x)=\bigvee\{y_{\nu a}\mid a\in x\}\qquad\quad yS_{\nu}x\text{ iff }\widetilde{\nu}(x)\subseteq y\qquad\quad(x\in X,\,y\in Y). \tag{19}\]
Observe that \(S_{\nu}x=\Gamma(\widetilde{\nu}(x))\).
By Lemma 4.9, \(S_{\nu}\subseteq Y\times X\) is equivalently defined by the condition
\[yS_{\nu}x\text{ iff }\ \forall a\in L(a\in x\longrightarrow\nu a\in y).\]
By Lemma 4.10 its Galois dual relation \(S^{\prime}_{\nu}\subseteq X\times X\) is defined by
\[zS^{\prime}_{\nu}x\text{ iff }\forall y\in Y(yS_{\nu}x\longrightarrow z\pm y )\text{ iff }z\pm\widetilde{\nu}(x)\text{ iff }\exists a\in L(a\in x\text{ and }\nu a\in z). \tag{20}\]
Observe also that \(S^{\prime}_{\nu}x=\{\widetilde{\nu}(x)\}^{\prime}={}^{\pm}\{\widetilde{\nu}(x)\}\).
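To see these relations concretely (our running illustration, not an example from the source), equip the four-element lattice \(\{0,a,b,1\}\) with the Boolean complementation \(\nu a=b\), \(\nu b=a\), \(\nu 0=1\), \(\nu 1=0\). Then \(\widetilde{\nu}(x_{a})=y_{\nu a}=y_{b}\), so \(yS_{\nu}x_{a}\) iff \(y_{b}\subseteq y\), and by equation (20), \(z\bot x_{a}\) iff \(b\in z\), i.e. iff \(z=x_{b}\). The relation \(\bot\) so obtained is symmetric and irreflexive, and \((X_{a})^{*}=\{x_{b}\}=X_{b}=X_{\nu a}\).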
Note that \(\varnothing^{*}=(\overline{\eta}_{S}(\varnothing))^{\prime}=\varnothing^{\prime}=X\), using normality of the classical, though sorted, image operator \(\eta_{S}\).
If \(\nu\) satisfies the Galois condition \(a\leq\nu\nu a\), then it is immediate that the canonical relation \(\bot=S^{\prime}_{\nu}\), defined by equation (20), is symmetric. By Lemma 3.15, this implies that \(A\subseteq A^{**}\), for any \(A\in\mathcal{G}(X)\).
Assume now that \(\nu\) is an involution. To show that \(A^{**}\subseteq A\), it suffices to verify that the equivalent condition (3)c of Lemma 3.15 holds in the canonical frame. Given that \(S_{\nu}z=\Gamma(\widetilde{\nu}(z))\), where for a filter \(z\), \(\widetilde{\nu}(z)\) is the ideal generated by the set \(\{\nu e\mid e\in z\}\), the inclusion \(S_{\nu}z\subseteq S_{\nu}x\) is equivalent to the inclusion \(\Gamma(\widetilde{\nu}(z))\subseteq\Gamma(\widetilde{\nu}(x))\), hence to \(\widetilde{\nu}(x)\subseteq\widetilde{\nu}(z)\). To see that this implies \(x\subseteq z\), let \(a\in x\). Then \(\nu a\in\widetilde{\nu}(x)\), so \(\nu a\in\widetilde{\nu}(z)\). Let then \(e_{1},\ldots,e_{n}\in z\) such that \(\nu a\leq\nu e_{1}\vee\cdots\lor\nu e_{n}\). This is equivalent to \(\nu(\nu e_{1}\vee\cdots\lor\nu e_{n})\leq\nu\nu a\leq a\), in turn equivalent to \(e_{1}\wedge\cdots\wedge e_{n}\leq a\). Since \(z\) is a filter, this implies \(a\in z\) and this completes the proof that \(x\subseteq z\) under the given assumption.
If the lattice is an ortholattice, then by the argument for the case of lattices with an involution previously given and by Lemma 3.15 it suffices to verify that the canonical relation \(\bot=S^{\prime}_{\nu}\) is irreflexive. By Lemma 4.10, \(x\bot z\) holds iff there exists a lattice element \(e\) such that \(e\in z\) and \(\nu e\in x\). Reflexivity, \(x\bot x\), would then imply that \(e\wedge\nu e=0\in x\), contradicting the fact that \(x\) is a proper filter.
For the case where the lattice is a De Morgan algebra, i.e. a distributive lattice with an involution, it suffices to prove that \(\mathcal{G}(X)\) is distributive. An algebraic proof of this has been given in [8, Lemma 5.1], but we provide here a new proof based on the constructions we have presented.
Note first that both lattice join \(\vee\) and meet \(\wedge\) are trivially normal lattice operators in the sense of Definition 2.1, but meet is an operator (in the Jonsson-Tarski sense) only when it distributes over joins. When this is the case, meet also has the distribution type \((1,1;1)\). Its \(\sigma\)-extension \(\wedge_{\sigma}\), is constructed as outlined in section 4.3. Specifically, letting \(\wedge=f\), the point operator \(\widehat{f}\) on filters is defined by \(\widehat{f}(x,z)=\vee\{x_{a\wedge b}\mid a\in x\text{ and }b\in z\}\) and the canonical relation \(R_{\wedge}\) is then defined by \(xR_{\wedge}uz\) iff \(\forall a,b(a\in u\text{ and }b\in z\longrightarrow a\wedge b\in x)\), using Lemma 4.9. Note that \(R_{\wedge}\) is the upper bound relation of Proposition 3.7. Considering the image operator \(\alpha_{R}:\mathcal{G}(X)\times\mathcal{G}(X)\longrightarrow\mathcal{G}(X)\) defined by \(\alpha_{R}(U,W)=\{x\in X\mid\exists u\in U\exists z\in W\ xR_{\wedge}uz\}\), we obtain that \(\alpha_{R}(A,C)=A\cap C\), for \(A,C\in\mathcal{G}(X)\). By Proposition 4.8, all sections of the Galois dual relation of \(R_{\wedge}\) are stable. It then follows by Proposition 3.7 that intersection distributes over arbitrary joins, in other words, \(\mathcal{G}(X)\) is a completely distributive lattice.
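In the running four-element example (ours), \(xR_{\wedge}x_{a}x_{b}\) holds for no proper filter \(x\), since it would require \(a\wedge b=0\in x\), so \(\alpha_{R}(X_{a},X_{b})=\emptyset=X_{a}\cap X_{b}\); on the other hand \(xR_{\wedge}x_{a}x_{1}\) holds iff \(x_{a}\subseteq x\), so \(\alpha_{R}(X_{a},X_{1})=X_{a}=X_{a}\cap X_{1}\).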
Finally, for the case of Boolean algebras, combine the arguments given for ortholattices and De Morgan algebras.
## 6 Spectral Duality
Let \(\mathbf{M},\mathbf{G},\mathbf{INV},\mathbf{DMA},\mathbf{O}\) and \(\mathbf{BA}\) be the categories of algebras in the respective varieties \(\mathbb{M},\mathbb{G},\mathbb{INV},\mathbb{DMA},\mathbb{O}\) and \(\mathbb{BA}\) of Figure 1 with the usual algebraic homomorphisms.
As in [13], \(\mathbf{SRF}_{\tau}\) designates the category of sorted residuated frames with a relation \(R^{\sigma}\) for each \(\sigma\in\tau\). For our present purposes we only consider frames \(\mathfrak{F}=(X,\pm,Y,S_{\nu})\), with \(\sigma(S_{\nu})=(\partial;1)\), given that the distribution type of the normal lattice operator \(\nu\) under study is \(\delta(\nu)=(1;\partial)\). In particular then we let \(\mathbf{SRF}_{\nu}=\mathbf{SRF}_{\{(1;\partial)\}}\) designate the category with objects the sorted residuated frames with a relation \(S_{\nu}\) as above, subject to the axioms of Table 1. \(\mathbf{SRF}_{\nu}\) is too large a category for duality purposes and we specify full subcategories for each of the cases of interest. In [13], the notation \(\mathbf{SRF}_{\tau}^{*}\) was used to designate the intended subcategory and we keep with this notation, while also subscripting appropriately to distinguish between the different categories of interest in this article. For a frame \(\mathfrak{F}\) in \(\mathbf{SRF}_{\tau}^{*}\), we let \(\mathtt{L}(\mathfrak{F})\) be the full complex algebra \(\mathfrak{F}^{+}\) of stable sets and \(\mathtt{L}^{*}(\mathfrak{F})\) the complex algebra of clopen elements.

**(F0)**: The complement \(I\) of the Galois relation \(\pm\) of the frame is quasi-serial, in other words \(\forall x\in X\,\exists y\in Y\;xIy\) and \(\forall y\in Y\,\exists x\in X\;xIy\)

**(F1)**: The frame is separated

**(F2)**: For each \(z\in X\), \(S_{\nu}z\) is a closed element of \(\mathcal{G}(Y)\) and if \(z\) is a clopen element (i.e. \(\Gamma z={}^{\pm}\{v\}\) for a (unique, by separation) point \(v\) in \(Y\)), then \(S_{\nu}z\) is a clopen element of \(\mathcal{G}(Y)\)

**(F3)**: For each \(y\in Y\) the set \(yS_{\nu}\) is decreasing (a down set)

**(F4)**: Both sections of the Galois dual relation \(S_{\nu}^{\prime}\) of \(S_{\nu}\) are Galois sets

**(F5)**: Clopen elements are closed under finite intersections in each of \(\mathcal{G}(X),\mathcal{G}(Y)\)

**(F6)**: The family of closed elements, for each of \(\mathcal{G}(X),\mathcal{G}(Y)\), is the intersection closure of the respective family of clopens

**(F7)**: Each of \(X,Y\) carries a spectral topology generated by the basis of their respective families of clopen elements

For a sorted map \(\pi=(p,q):(X_{2},I_{2},Y_{2},S_{2\nu})\longrightarrow(X_{1},I_{1},Y_{1},S_{1\nu})\), where \(p:X_{2}\longrightarrow X_{1}\) and \(q:Y_{2}\longrightarrow Y_{1}\):

**(M1)**: \(\forall x^{\prime}\in X_{2}\forall y^{\prime}\in Y_{2}\;(x^{\prime}I_{2}y^{\prime}\longrightarrow p(x^{\prime})I_{1}q(y^{\prime}))\)

**(M2)**: \(\forall x\in X_{1}\forall y^{\prime}\in Y_{2}(xI_{1}q(y^{\prime})\longrightarrow\exists x^{\prime}\in X_{2}(x\leq p(x^{\prime})\wedge x^{\prime}I_{2}y^{\prime}))\)

**(M3)**: \(\forall x^{\prime}\in X_{2}\forall y\in Y_{1}(p(x^{\prime})I_{1}y\longrightarrow\exists y^{\prime}\in Y_{2}(y\leq q(y^{\prime})\wedge x^{\prime}I_{2}y^{\prime}))\)

**(M4)**: \(\forall z\in X_{1}\forall v\in Y_{2}(q(v)S_{1\nu}z\longrightarrow\exists x\in X_{2}(z\leq p(x)\text{ and }vS_{2\nu}x))\)

**(M5)**: for all points \(u\), \(\pi^{-1}(\Gamma u)=\Gamma v\), for some (unique, by separation) \(v\)

Table 2: Axioms for the category \(\mathbf{SRF}^{*}_{\nu\mathrm{M}}\)
**Theorem 6.1**.: Let \(\mathbf{SRF}_{\nu{\rm M}}^{*}\) be the full subcategory of \(\mathbf{SRF}_{\nu}\) axiomatized by the axioms in Table 2. There exist functors \({\mathtt{F}},{\mathtt{L}}^{*}\) forming a categorical duality \({\mathtt{F}}:{\mathbf{M}}\backsimeq(\mathbf{SRF}_{\nu{\rm M}}^{*})^{\rm op}:{ \mathtt{L}}^{*}\).
Proof.: Let \({\mathbf{L}}=(L,\leq,\wedge,\vee,0,1,\nu)\) be a lattice with a minimal quasi complementation operation. Define \({\mathtt{F}}({\mathbf{L}})=(\operatorname{Filt}({\mathbf{L}}),\pm, \operatorname{Idl}({\mathbf{L}}),S_{\nu})\) to be the canonical frame of the lattice constructed in Section 5. Axioms (F0)-(F4) hold for the canonical frame, by Proposition 4.8. Note that axiom (F2) of Table 2 is a strengthening of the corresponding axiom in Table 1.
To verify the stronger version of (F2), suppose \(\Gamma z={}^{\perp}\{v\}\) is a clopen element. Clopen elements of \(\mathcal{G}(X)\) in the canonical frame are precisely the stable compact-open sets \(X_{a}=\Gamma x_{a}=x_{a}\mathord{\uparrow}={}^{\perp}\{y_{a}\}\). By definition of the point operator \(\widetilde{\nu}\) and of the canonical relation \(S_{\nu}\) in equation (19), \(S_{\nu}x_{a}=\Gamma(\widetilde{\nu}(x_{a}))\). It is straightforward to see that \(\widetilde{\nu}(x_{a})=y_{\nu a}\), hence \(S_{\nu}x_{a}=\Gamma y_{\nu a}=Y^{\nu a}\) is a clopen element of \(\mathcal{G}(Y)\), where \(y_{\nu a}=(\nu a)\mathord{\downarrow}\) is the principal ideal generated by the lattice element \(\nu a\).
Axiom (F5) holds, since \(X_{a}\cap X_{b}=X_{a\wedge b}\), while also \(Y^{a}\cap Y^{b}=Y^{a\lor b}\). For (F6), by join-density of principal filters, any filter \(x\) is the join \(x=\vee_{a\in x}x_{a}\), hence every closed element \(\Gamma x\) of \(\mathcal{G}(X)\) is an intersection \(\Gamma x=\bigcap_{a\in x}\Gamma x_{a}=\Gamma\left(\vee_{a\in x}x_{a}\right)\) and similarly for closed elements \(\Gamma y\in\mathcal{G}(Y)\). Finally, axiom (F7) was verified in Proposition 4.3 for meet semilattices and the same proof applies to establish that the topology on each of \(X=\operatorname{Filt}({\mathbf{L}})\) and \(Y=\operatorname{Idl}({\mathbf{L}})\) is a spectral topology.
By the above argument, \({\mathtt{F}}({\mathbf{L}})\) is an object in the category \(\mathbf{SRF}_{\nu{\rm M}}^{*}\). By Proposition 4.5, \(\{X_{a}\mid a\in L\}=\mathtt{KOG}(X)\) is the complex algebra of clopen elements \({\mathtt{L}}^{*}{\mathtt{F}}({\mathbf{L}})\) and we then verified in Theorem 4.6 that the representation map \(a\mapsto X_{a}=\{x\in X\mid a\in x\}\) is a lattice isomorphism \({\mathbf{L}}\backsimeq\mathtt{KOG}(X)\). Since \(X_{\nu a}=\Gamma x_{\nu a}=\left(\Gamma(\widetilde{\nu}(x_{a}))\right)^{\prime}=(\Gamma x_{a})^{*}=(X_{a})^{*}\), the representation map is an isomorphism \({\mathbf{L}}\backsimeq{\mathtt{L}}^{*}{\mathtt{F}}({\mathbf{L}})\) of lattices with a (minimal) quasi-complementation operation \(\nu\).
For morphisms \(h:{\mathbf{L}}_{1}\longrightarrow{\mathbf{L}}_{2}\), the argument that \({\mathtt{F}}(h):{\mathtt{F}}({\mathbf{L}}_{2})\longrightarrow{\mathtt{F}}({ \mathbf{L}}_{1})\) is a frame morphism satisfying axioms (M1)-(M4) is a special instance of the argument given in the proof of [13, Proposition 4.9], handling the general case of arbitrary normal lattice expansions. The proofs regarding axioms (M5) and (M6) were given in [13, Proposition 5.6, Proposition 5.7]. This establishes that \({\mathtt{F}}:{\mathbf{M}}\longrightarrow(\mathbf{SRF}_{\nu{\rm M}}^{*})^{\rm op}\) is a contravariant functor satisfying \({\mathbf{L}}\backsimeq{\mathtt{L}}^{*}{\mathtt{F}}({\mathbf{L}})\).
Now let \(\mathfrak{F}\) be a sorted residuated frame in the category \(\mathbf{SRF}_{\nu{\rm M}}^{*}\). We have let \({\mathtt{L}}(\mathfrak{F})=\mathfrak{F}^{+}=(\mathcal{G}(X),\leq,\sqcap,\wedge, \varnothing,X,(\ )^{*})\) be its full complex algebra (Definition 3.6) and \({\mathtt{L}}^{*}(\mathfrak{F})=(\mathtt{KOG}(X),\leq,\cap,\vee,\varnothing,X,( \ )^{*})\) be its subalgebra of clopen elements. That the operation \((\ )^{*}\) restricts to clopen elements is clear, since \((X_{a})^{*}=X_{\nu a}\). By Corollary 3.16, \({\mathtt{L}}^{*}(\mathfrak{F})\) is an object in the category \({\mathbf{M}}\) of lattices with a minimal quasi-complementation operation.
If \(\pi=(p,q):\mathfrak{F}_{2}\longrightarrow\mathfrak{F}_{1}\) is a frame morphism in \(\mathbf{SRF}^{*}_{\nu\mathrm{M}}\), then it was verified in [13, Corollary 3.21] that \({\mathtt{L}}^{*}(\pi)=\pi^{-1}:\mathcal{G}(X_{1})\longrightarrow\mathcal{G}(X_{2})\) is a complete lattice homomorphism of the complete lattices of stable sets of the frames. Given axiom (M4), it was established in [13, Proposition 3.24, Lemma 3.25] that \(\pi^{-1}:\mathfrak{F}_{1}^{+}\longrightarrow\mathfrak{F}_{2}^{+}\) is in fact a homomorphism of the full complex algebras of the frames.
By axiom (M5), \(\pi^{-1}\) preserves closed elements, hence by [13, Lemma 3.23] it also preserves clopen elements (from which continuity of \(\pi\) follows, since clopen stable elements are precisely the basic open sets in the topology).
The above argument has established that \(\mathtt{L}^{*}\) is a contravariant functor from the category \(\mathbf{SRF}^{*}_{\nu\mathrm{M}}\) to the category \(\mathbf{M}\) of lattices with a minimal quasi-complementation operation. We have already also established that for any object \(\mathbf{L}\) in \(\mathbf{M}\) we have an isomorphism \(\mathbf{L}\backsimeq\mathtt{L}^{*}\mathtt{F}(\mathbf{L})\) and it remains to argue that for any sorted residuated frame \(\mathfrak{F}\) in the category \(\mathbf{SRF}^{*}_{\nu\mathrm{M}}\) we also have that \(\mathfrak{F}\backsimeq\mathtt{FL}^{*}(\mathfrak{F})\). To avoid repetitions, we refer the reader to the proof of the general duality theorem for any normal lattice expansion [13, Theorem 5.8].
**Definition 6.2**.: Categories of frames \(\mathfrak{F}=(X,\pm,Y,S_{\nu})\), where we set \(\bot=S_{\nu}^{\prime}\), corresponding to lattices with a quasi complementation operation are axiomatized by the axioms of Table 2 as well as one or more of the additional axioms below, as specified for each category.
* (G) \(\bot\) is symmetric
* (INV) \(\forall x,z\in X[\forall v\in Y(vS_{\nu}z\longrightarrow vS_{\nu}x)\longrightarrow x\leq z]\)
* (O) \(\bot\) is irreflexive
* (D) All sections of the Galois dual relation \(R^{\prime}\) of the upper bound relation \(R\) of Proposition 3.7 are stable
* \(\mathbf{SRF}^{*}_{\nu\mathrm{M}}\): Table 2 axioms
* \(\mathbf{SRF}^{*}_{\nu\mathrm{G}}\): Table 2 axioms + Axiom (G)
* \(\mathbf{SRF}^{*}_{\nu\mathrm{INV}}\): Table 2 axioms + Axiom (G) + Axiom (INV)
* \(\mathbf{SRF}^{*}_{\nu\mathrm{O}}\): Table 2 axioms + Axiom (G) + Axiom (INV) + Axiom (O)
* \(\mathbf{SRF}^{*}_{\nu\mathrm{DMA}}\): Table 2 axioms + Axiom (G) + Axiom (INV) + Axiom (D)
* \(\mathbf{SRF}^{*}_{\nu\mathrm{BA}}\): Table 2 axioms + Axiom (G) + Axiom (INV) + Axiom (O) + Axiom (D)
**Theorem 6.3**.: The spectral duality of Theorem 6.1 specializes to dualities for each of the frame categories of Definition 6.2 and their respective categories of bounded lattices with a (quasi) complementation operation.
Proof.: That the double dual of a lattice in one of the varieties of Figure 1 is in the variety in question was verified in Theorem 5.2. That the (full) complex algebra of a frame in one of the frame categories which satisfies one or more axioms from the list in Definition 6.2 is an algebra in the respective variety corresponding to the frame category was verified in Corollary 3.16. The rest of the duality argument for each of the cases is the same as in Theorem 6.1.
## 7 Concluding Remarks
In this article, we have provided alternative constructions for the choice-free representation and duality for Boolean algebras and Ortholattices, first given
in [2, 22]. A Stone duality result for De Morgan algebras, using choice, was published by Bimbo in [3] and we have given here a choice-free version of the duality. Our background motivation has been the Jonsson-Tarski [19, 20] approach, constructing set-operators from relations to represent operators on Boolean algebras, a project that has been extended with Dunn's research on generalized Galois logics (gaggles). In [13], we presented a generalization of this project of relational representation to cases where distribution may not be assumed. The framework of [13] was applied in this article to the case of bounded lattices with a quasi-complementation operator, recasting the duality of [13] in a choice-free manner. By treating both De Morgan (and Boolean) algebras, as well as Ortholattices, it has been shown that the presence or absence of distribution does not create any significant obstacle, as distribution in the complete lattice of stable sets of a sorted frame has been shown to be first-order definable. For the distributive case, the semantics resulting from the approach presented appears to have strong affinities with Holliday's possibility semantics [17].
The approach presented can be extended to any normal lattice expansion, including the case of modal lattices studied in [1], based on the framework developed in [13].
|
2303.04306 | Abstract Orientable Incidence Structure and Algorithms for Finite
Bounded Acyclic Categories. I. Incidence Structure | A generalization of incidence relations in abstract polytope has been
explored, and parameterized surfaces are used as primers. The abstract
orientable incidence structure is defined as an algebraic model of incidence
relations, in which some algebraic properties in abstract polytope theory are
generalized. The geometric interpretation of abstract orientable incidence
structure is also discussed. The orientable incidence structure in a
semi-regular normal CW complex is briefly investigated. | Yu-Wei Huang | 2023-03-08T00:56:41Z | http://arxiv.org/abs/2303.04306v1 | Abstract Orientable Incidence Structure and Algorithms for Finite Bounded Acyclic Categories. I. Incidence Structure
###### Abstract
A generalization of incidence relations in abstract polytope has been explored, and parameterized surfaces are used as primers. The abstract orientable incidence structure is defined as an algebraic model of incidence relations, in which some algebraic properties in abstract polytope theory are generalized. The geometric interpretation of the abstract orientable incidence structure is also discussed. The orientable incidence structure in a semi-regular normal CW complex is briefly investigated.
## 1 Introduction
The concept of incidence relations can be traced back to the face lattice of classical polytopes, which is a combinatorial structure among facets [3]. This structure can be studied as an abstract polytope, which is a binary relation describing whether a facet is contained by another facet. However, the abstract polytope is limited in its ability to describe certain types of incidence relations. For instance, the convexity of polytopes ensures that the relation between two facets is unique, leading to a poset structure in the abstract polytope. The flatness of geometric objects also makes the shape uniquely determined by the boundaries. By relaxing these constraints, more fundamental structures of incidence relations can be revealed. Some geometric complexes, such as simplicial complexes and CW complexes, can be seen as generalizations of polytopes [2, 1]. However, these approaches require the introduction of auxiliary geometric objects and are not well-suited for studying incidence structures. Although it is well known that incidence relations can be manifested by posets, there has been little discussion of generalizing this concept to acyclic categories in the same manner. Therefore, this article will re-describe incidence relations between geometric objects, and show that they have the structure of an acyclic category.
## 2 Incidence Structure
### Facet and Incidence Relation
Below we will introduce incidence structure via parameterized \(n\)-dimensional surfaces, which are referred to as **facets**. Roughly speaking, a parameterized \(n\)-dimensional surface is a surface defined by a function of \(n\) parameters. In order to rule out some strange geometries, some constraints are adopted. The points on the surface are faithfully described by the parameters, called **coordinates**, so that the topological properties of this surface are fully described by the coordinate space. The coordinate space is not limited to a Euclidean space, so that it can describe non-trivial topologies like the torus. The values of coordinates are restricted to a given range in the coordinate space, which should be a compact and connected open subset. For example, Figure 1 shows a facet, a fragment of a spherical shell, described by \(\{S(\theta,\phi)\mid 0<\theta<\pi/4,0<\phi<\pi/4\}\), where \(S(\theta,\phi)\equiv\cos\theta\cos\phi\,\hat{x}+\sin\theta\cos\phi\,\hat{y}+ \sin\phi\,\hat{z}\) is a point on the surface with coordinate \((\theta,\phi)\). The topology of
this surface can be described by the coordinate range \((0,\pi/4)\times(0,\pi/4)\subset\mathbb{R}^{2}\). To describe a full sphere, the coordinate space cannot be a Euclidean space because it cannot faithfully refer to all points on the sphere. It is useful for doing actual geometric calculations if the coordinate space has a metric, which is important when it is not a Cartesian coordinate system. A good coordinate space should have small distortion, such that geometric calculations can be performed accurately. A cone can be described by \(\{Y(x,y)|0<x^{2}+y^{2}<1\}\) with \(Y(x,y)\equiv x\,\hat{x}+y\,\hat{y}+\,\sqrt{x^{2}+y^{2}}\hat{z}\), where the origin of the coordinate space \((0,0)\) has been removed since the surface is not differentiable at that point. This coordinate range is valid although it is not homeomorphic to a disk; there is no need to cut it off to make it homeomorphic to the disk. These examples show how the flexibility of coordinates helps us deal with real geometry more easily. As a special case, a point, described by a singleton set \(\{P\}\), has a coordinate space with zero parameters, which is just a coordinate space that has only one valid coordinate.
If the coordinate space is orientable, the orientation of a facet can be defined. For convenience, we assume they are always orientable. The orientation of a facet at the point \(F(u,v,\dots)\), denoted as \(\hat{d}F(u,v,\dots)\), is defined as the normalization of the wedge product of the partial derivatives \(dF(u,v,\dots)=\frac{\partial}{\partial u}F(u,v,\dots)\wedge\frac{\partial}{ \partial v}F(u,v,\dots)\wedge\dots\). In the example of Figure 1, the orientation is \(\hat{d}S(\theta,\phi)=\cos\theta\cos\phi\,\hat{y}\wedge\hat{z}+\sin\theta\cos \phi\,\hat{z}\wedge\hat{x}+\sin\phi\,\hat{x}\wedge\hat{y}\), which can be seen as a radial pseudovector. Each point in an \(n\)-dimensional facet has an orientation as a normalized multivector of degree \(n\) (more precisely, an \(n\)-blade). In the same sense, the orientation of a line is just a multivector of degree 1, which is a unit tangent vector; the orientation of a point is just a multivector of degree 0, which can only be a sign. The orientation of a point cannot be derived naturally, so one should attach a sign to define its orientation, which is called a **signed point**.
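To make the wedge-product definition concrete, the following sketch evaluates the orientation of the spherical-shell fragment of Figure 1 numerically. It is an illustration only: in three dimensions the 2-blade \(\partial_{\theta}S\wedge\partial_{\phi}S\) is identified with the cross product (components ordered as \(\hat{y}\wedge\hat{z}\), \(\hat{z}\wedge\hat{x}\), \(\hat{x}\wedge\hat{y}\)), and the function names and finite-difference step are our own choices, not part of the construction above.

```python
import numpy as np

def S(theta, phi):
    """Point on the spherical-shell fragment of Figure 1."""
    return np.array([np.cos(theta) * np.cos(phi),
                     np.sin(theta) * np.cos(phi),
                     np.sin(phi)])

def orientation(theta, phi, eps=1e-6):
    """Normalized 2-blade dS = (d/dtheta S) ^ (d/dphi S).

    In 3D a 2-blade is identified with a vector via the cross product,
    with components ordered (y^z, z^x, x^y).
    """
    d_theta = (S(theta + eps, phi) - S(theta - eps, phi)) / (2 * eps)
    d_phi = (S(theta, phi + eps) - S(theta, phi - eps)) / (2 * eps)
    blade = np.cross(d_theta, d_phi)        # wedge product as cross product
    return blade / np.linalg.norm(blade)    # normalize to get the orientation

theta, phi = 0.3, 0.2
expected = np.array([np.cos(theta) * np.cos(phi),   # coefficient of y^z
                     np.sin(theta) * np.cos(phi),   # coefficient of z^x
                     np.sin(phi)])                  # coefficient of x^y
assert np.allclose(orientation(theta, phi), expected, atol=1e-5)
```

The assertion reproduces the radial pseudovector stated above for \(\hat{d}S(\theta,\phi)\).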
It should be possible to extend a valid facet by including the boundary of the coordinate range; the extended facet is called a **closed facet**. The image of the open coordinate range is called the **inner** of the facet, and the image of the boundaries of the coordinate range is called the **boundaries** of the facet. The inner and the boundaries of a facet should be disjoint. Unlike the inner of the facet, the boundaries are not faithfully described by coordinates, but are at least locally faithfully described. To describe the boundaries of a given facet, a set of disjoint facets are introduced such that the union of them is equal to the boundaries, and these facets are called **subfacets** of the given facet. The relations between a facet and its subfacets are called **incidence relations**, which are described by an embed function and local connectedness. An **embed function** is a mapping between the coordinate spaces of facets, which shows how one facet is embedded into another. For
Figure 1: A fragment of spherical shell. The arrows on the surface represent its orientation. The arc on its bottom boundary, whose orientation is represented as the arrow on the line, is positively-oriented with respect to the spherical shell.
example, an arc \(\{C(\theta)\mid 0<\theta<\pi/4\}\) with \(C(\theta)\equiv\cos\theta\,\hat{x}+\sin\theta\,\hat{y}\) is covered by the surface of the example of Figure 1, which can be described by the equation \(C(\theta)=S(\theta,0)\). This embed function is denoted as \(\phi:C\to S\), where \(C\) is a subfacet of the facet \(S\). Notice that there can be multiple embed functions between two facets. For example, the circle, described by \(\{C(\theta)|0<\theta<2\pi\}\) with \(C(\theta)\equiv\cos\theta\,\hat{x}+\sin\theta\,\hat{y}\), covers the point \(P\equiv 1\,\hat{x}+0\,\hat{y}\) in two ways: \(P=C(0)\) and \(P=C(2\pi)\). This is different from the conventional polytope, in which there is only one relation between incident facets.
The definition of subfacets also implies two special facets. The null face \(\varnothing\) is defined as an empty set, which has no valid coordinate. The null face is usually considered as a \((-1)\)-dimensional facet. The null face \(\varnothing\) is covered by all facets, because there is a map from an empty set to any set. Considering multiple disjoint facets with shared subfacets, the space containing these facets, called the universe \(\mathbb{U}\), can also be considered as a facet. For example, 3D Euclidean space is defined as \(\mathbb{U}(x,y,z)=x\,\hat{x}+y\,\hat{y}+z\,\hat{z}\). The universe \(\mathbb{U}\) covers all facets because it is the codomain of functions defining facets. The null face \(\varnothing\) and the universe \(\mathbb{U}\) are called improper facets.
Another property of an incidence relation called the **local connectedness** is manifested by the relative position between two incident facets. If their dimensions differ by only 1, the relative position can always be described by a sign (denoted as \(|\phi|\)), determining which side of the subfacet faces the inner of the facet. In the example of Figure 1, the spherical shell is on the positive side of the arc according to the right-hand rule, so we define \(|\phi|=+1\) (see Figure 1). Generalizing to the incidence relation between \(n\)-dimensional and \((n{+}1)\)-dimensional facets, we define the orientation between them as the difference of wedge products: if the \(n\)-dimensional subfacet \(B\) is embedded into the \((n{+}1)\)-dimensional facet \(F\) by the mapping \(\phi:B\to F\), define \(|\phi|\) as a sign such that \(\hat{d}B\wedge\hat{n}=|\phi|\hat{d}F\), where \(\hat{d}F\) and \(\hat{d}B\) are their orientations at a point in the subfacet \(B\), and \(\hat{n}\) is a vector on this point pointing to the inner of the facet \(F\). \(|\phi|=+1\) (\(|\phi|=-1\)) means the subfacet \(B\) is positively- (negatively-) oriented with respect to the facet \(F\) (see Figure 2). Note that all points in the subfacet \(B\) should give the same value of \(|\phi|\). Under this definition, one can say: a line segment has one positively-oriented subfacet and one negatively-oriented subfacet, and both of them are positive points at the endpoints; a circle with counter-clockwise tangent vectors is a positively-oriented subfacet of the enclosed disk; a sphere defined like the above example is negatively-oriented with respect to the enclosed ball. The incidence relation between the null face and a signed point is a special case: a positive (negative) point is always positively- (negatively-) oriented with respect to the null face. A subfacet can be covered by a facet with
Figure 2: The relative orientation of signed points, lines, and planes. Their orientations are represented as a chain of arrows (drawn as simplexes), so that the wedge product of these vectors forms the orientation. The shaded regions represent positive side of facets.
multiple orientations. For example, consider a plane \(S\) and a line segment \(L\) which is in the middle of the plane. Since the plane is on the left and right of this line, there are two choices of the inward pointing vectors. Then the incidence relations between them are described by \(\phi_{+}\) and \(\phi_{-}\), which are the two identical embed functions with different orientations by letting \(|\phi_{+}|=+1\) and \(|\phi_{-}|=-1\).
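The sign \(|\phi|\) of the incidence relation in Figure 1 can be checked in the same numerical style. The sketch below is an illustration only (our own helper names, finite differences, and the 3D identification of \(\hat{d}B\wedge\hat{n}\) with a cross product); it recovers \(|\phi|=+1\) for the bottom arc of the spherical shell.

```python
import numpy as np

def S(theta, phi):
    return np.array([np.cos(theta) * np.cos(phi),
                     np.sin(theta) * np.cos(phi),
                     np.sin(phi)])

def C(theta):
    """Bottom boundary arc of Figure 1: C(theta) = S(theta, 0)."""
    return np.array([np.cos(theta), np.sin(theta), 0.0])

def unit(v):
    return v / np.linalg.norm(v)

def sign_of_incidence(theta, eps=1e-6):
    """Sign |phi| defined by dC ^ n = |phi| dS at a point of the arc."""
    dC = unit((C(theta + eps) - C(theta - eps)) / (2 * eps))   # orientation of the arc
    n = unit((S(theta, eps) - S(theta, 0.0)) / eps)            # points into the shell
    dS = unit(np.cross((S(theta + eps, 0.0) - S(theta - eps, 0.0)) / (2 * eps),
                       (S(theta, eps) - S(theta, -eps)) / (2 * eps)))
    wedge = np.cross(dC, n)                                    # dC ^ n as a 2-blade
    return int(np.sign(np.dot(wedge, dS)))

print(sign_of_incidence(0.3))   # +1: the arc is positively-oriented w.r.t. the shell
```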
It is obvious that incidence relations are composable: if a point can embed into a line, and the line can embed into another surface, then this point can also embed into this surface. The equivalence relations between composed embed functions manifest the local connectedness, which is not just a sign. The incidence relations and their composition rules constitute an **incidence structure**. Figure 3 shows the incidence relations between a square and its vertex. The vertex point embeds into a square by mapping \(\phi\), which is equal to the compositions \(\phi_{L}^{\prime}\circ\phi_{L}\) and \(\phi_{R}^{\prime}\circ\phi_{R}\), where \(\phi_{L}\) (\(\phi_{R}\)) is the incidence relation from this point to the left (right) adjacent edge, and \(\phi_{L}^{\prime}\) (\(\phi_{R}^{\prime}\)) is the incidence relation from the left (right) adjacent edge to the square. Such an equivalence relation is just the diamond property in an abstract polytope, and this property describes the completeness of the vertex. In an abstract polytope, this diamond always exists and is unique for any two incident facets with dimensions differing by 2. If there is a shape that has two vertices sharing the
Figure 4: Multiple incidence relations on a crescent shape. There are two diamonds on the point \(P\), which represents two directions (drawn as two red fan shapes) go from this point to the inner of the crescent shape. The right diagram represents the embed functions between facets, although it is difficult to fill it up to represent their diamond properties.
Figure 3: The incidence relations between a square and its vertex. The incidence relations from vertex to edge and from edge to square can be composed, resulting in the incidence relation from the vertex to the square. These incidence relations form a diamond shape (right diagram), which is filled up to indicate the equivalence of composition. The equivalence of composition represents the completeness of the vertex, drawn as a fan shape on the vertex.
same point, this point is covered by this area in two directions. For example, consider the area between two internally tangent circles, called the crescent shape (see Figure 4); the tangent point is covered by the outer circle in two directions, say \(\phi_{L}\) and \(\phi_{R}\), and the outer circle is also covered by the crescent shape, say \(\phi_{S}\). Then one can go from the tangent point to the inner of the crescent shape in two ways, which are equal to \(\phi_{S}\circ\phi_{L}\) and \(\phi_{S}\circ\phi_{R}\). These two incidence relations are not equivalent, so they don't form a diamond; the angle between these two edges has been separated by the inner circle. Instead, the diamonds are formed by \(\phi_{S}\circ\phi_{L}=\phi_{S}^{\prime}\circ\phi_{L}^{\prime}\) and \(\phi_{S}\circ\phi_{R}=\phi_{S}^{\prime}\circ\phi_{R}^{\prime}\), where \(\phi_{L}^{\prime}\), \(\phi_{R}^{\prime}\) and \(\phi_{S}^{\prime}\) are the same as above but relative to the inner circle. These two diamonds represent two different directions of incidence relations between the tangent point and the crescent shape; there are two separated ranges of angles that can go from this point to the inner. This shows that the difference between diamonds indicates the local connectedness around the point.
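A minimal sketch of how such composition rules can be recorded as plain data is given below for the square of Figure 3. The facet and morphism names are ours; the point is only that the two composite paths from the vertex to the square are declared equal, which is exactly the filled-up diamond, whereas the crescent shape of Figure 4 would keep the corresponding composites distinct.

```python
# Incidence structure of Figure 3: a vertex V, its two adjacent edges EL and ER,
# and the square Q. Each morphism name maps to its (source, target); 'compose'
# records the composite of every composable pair of embed functions.
morphisms = {
    "phi_L":  ("V",  "EL"),   # vertex -> left edge
    "phi_R":  ("V",  "ER"),   # vertex -> right edge
    "phi_L'": ("EL", "Q"),    # left edge -> square
    "phi_R'": ("ER", "Q"),    # right edge -> square
    "phi":    ("V",  "Q"),    # vertex -> square
}
compose = {
    ("phi_L'", "phi_L"): "phi",
    ("phi_R'", "phi_R"): "phi",
}

def composable(outer, inner):
    """outer o inner is defined when the target of inner is the source of outer."""
    return morphisms[inner][1] == morphisms[outer][0]

# The two maximal chains from V to Q compose to the same morphism: a diamond.
assert composable("phi_L'", "phi_L") and composable("phi_R'", "phi_R")
assert compose[("phi_L'", "phi_L")] == compose[("phi_R'", "phi_R")] == "phi"
# For the crescent shape, the two composites through the tangent point would be
# kept as two distinct entries, so no diamond is formed between those chains.
```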
Not all incidence relations can be defined continuously. The Riemann surface \(R(z)\equiv(z,f(z))\) for the function \(f(z)=\sqrt{z}(1-|z|)\) with \(0<|z|<1\) is a 2-dimensional facet in a 4-dimensional space, which doubly covers the unit circle \(C(z)\equiv(z,0)\) with \(|z|=1\). There are two incidence relations: any point on the circle, say \(C(e^{i\theta})\), should be mapped to \(R(e^{i\theta})\) and \(R(e^{i(2\pi+\theta)})\), representing two embed functions, respectively. Depending on the choice of the branch cut, the embed functions are discontinuous at that point. It is impossible to write such embed functions as continuous functions. The breaking of continuity may cause some geometric calculation problems. To solve this problem, the calculation should be limited to a small range, such that within this range the embed functions can be written as continuous functions, depending on where the range is. A facet with a non-orientable subfacet will also encounter similar problems, in which the incidence relations cannot have a continuous sign. These kinds of incidence relations are said to be **non-orientable**. To simplify the discussion, we only focus on the incidence structure with orientable incidence relations, which we call an **orientable incidence structure**.
### Vertex Figure and Local Connectedness
To explain the concept of local connectedness more clearly, we shall introduce vertex figures, which are defined in the same manner as for polytopes. To make a vertex figure on a 0-dimensional facet (a positive point), one puts a ball \(D^{n}\) on it, which is small enough so that only the facets that cover this point intersect this ball. Then the sliced images of facets (the intersection of the boundary of the ball \(\partial D^{n}\) and facets) are defined as the **vertex figure** of the corresponding facets. For example, the vertex figure on a vertex of a cube is a spherical triangle, which is an intersection of the cube and a sphere centered at this vertex (see Figure 5). The orientation of the sliced image \(L\) of the facet \(F\) should obey \(\hat{n}\wedge\hat{d}L=\hat{d}F\), where \(\hat{n}\) is the outward pointing vector from the center of the slicing ball. The incidence relations of this vertex figure can be trivially derived by restricting the domain and codomain of the mapping. It is easy to prove that the signs of the incidence relations are the same. More generally, the vertex figure on any facet can be defined as: pick a point on a given \(m\)-dimensional facet \(V\), and put a ball \(D^{n-m}\) on this point, which is perpendicular to its orientation \(\hat{d}V\) at this point. Then the sliced images of facets (the intersections of the boundary of this ball \(\partial D^{n-m}\) and facets) are defined as the vertex figure of the corresponding facets on the facet \(V\). All points on the facet \(V\) should give the same vertex figure. The orientation of the sliced image \(L\) of the facet \(F\) should obey \(\hat{d}V\wedge\hat{n}\wedge\hat{d}L=\hat{d}F\), where \(\hat{n}\) is the outward pointing vector from the center of the slicing ball. For example, the vertex figure on an edge (i.e., the edge figure) of a cube is a segment of arc, which is the intersection of the cube and a circle with this edge as its axis (see Figure 5). It is obvious that the operations of taking vertex figures are composable. For example, the edge figure of a cube is the vertex figure of the vertex figure, as shown in Figure 5.
Some sliced images on a subfacet have multiple connected components, each of which is defined as an individual facet in this vertex figure. Such facets are said to be **locally separated** by this subfacet. For example, the vertex figure of a plane on a point lying in the middle of the plane is a circle, which is connected, so we say the plane is not locally separated by this point. But the vertex figure of a plane on a line lying in the middle of the plane becomes two points, which have two connected components, so we say the plane is locally separated by this line. The crescent shape (Figure 4) is a non-trivial example, whose vertex figures on the tangent point are two line segments, so we say the crescent shape is locally separated by the tangent point.
Figure 5: A vertex figure (top left) and edge figure (top right) of a cube (bottom). Here \(U\) and \(F\) denote two faces, \(E\) denotes an edge, and \(V\) denotes a vertex. The vertex figure on the vertex \(V\) is formed by sliced images of those facets. Further taking the vertex figure on the vertex \(E\) is the same as taking the vertex figure on the edge \(E\) in the original cube.
This shows that the local connectedness property described by an incidence relation becomes a connectedness property of the vertex figure, and such a correspondence is naturally required if one considers the incidence structure of the vertex figure. Also, the local connectedness property can be propagated via connectivity, so incidence relations can be tested at every point of the connected subfacet. These descriptions manifest the local and global properties of incidence relations.
The orientable incidence structure can be drawn as directed acyclic multigraphs, where nodes are facets, and edges are nondecomposable embed functions. In this diagram, called the **Hasse diagram**, all incident facets can be connected by directed paths, manifesting the poset structure of incidence relations. The Hasse diagram is not enough to describe an orientable incidence structure, since the composition rules for incidence relations are not shown. Because sliced images of connected facets may become disconnected, the incidence structure of a vertex figure is not just a subgraph of the Hasse diagram. For example, the vertex figure on the intersection point of two circles on a torus is much more complicated than the original structure (see Figure 6), but the composition rules of incidence relations are always the same.
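As an illustration (using the networkx package and our own labels), the Hasse diagram of the crescent shape of Figure 4 can be stored as a directed acyclic multigraph; the parallel edges record the two nondecomposable embed functions from the tangent point to each circle, and improper facets are omitted for brevity.

```python
import networkx as nx

# Hasse diagram of the crescent shape of Figure 4 (improper facets omitted):
# nodes are facets, parallel edges are distinct nondecomposable embed functions.
H = nx.MultiDiGraph()
H.add_nodes_from(["P", "C_out", "C_in", "S"])      # tangent point, circles, crescent
H.add_edge("P", "C_out", key="phi_L")              # P is covered by the outer circle
H.add_edge("P", "C_out", key="phi_R")              #   ... in two different ways
H.add_edge("P", "C_in", key="phi_L'")
H.add_edge("P", "C_in", key="phi_R'")
H.add_edge("C_out", "S", key="phi_S")
H.add_edge("C_in", "S", key="phi_S'")

assert nx.is_directed_acyclic_graph(H)             # no non-trivial cycles
print(H.number_of_edges("P", "C_out"))             # 2 parallel edges: a multigraph
# Incident facets are exactly those connected by a directed path:
print(nx.has_path(H, "P", "S"))                    # True
```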
In our theory, the incidence relations are not unique between two incident facets. The difference of two incidence relations represents the local connectedness of the facet, which is the main difference from the conventional incidence structure. The existence of multiple incidence relations forces us to consider the composition rules between them. Unlike the abstract polytope, where the structure can be described by a poset, the orientable incidence structure acts like a category. We will extract this structure in the next section.
## 3 Abstract Orientable Incidence Structure
### Bounded Acyclic Category and Bounded Poset
The abstract orientable incidence structure is an algebraic structure which represents the orientable incidence relations between facets. Unlike the previous section, it only captures the combinatorial nature without specifying geometric properties. An **abstract orientable incidence structure**, denoted as \(\mathcal{F}\), is a **graded bounded
Figure 6: The vertex figure of a torus. The top is the torus (left) and its Hasse diagram (right); the bottom is the vertex figure on point \(P\) (left) and the corresponding Hasse diagram (right), which is much more complicated.
acyclic category** satisfying the **semi-diamond property**. An **acyclic category** is a category without non-trivial cycles. It is impossible to have two non-identity morphisms in opposite directions at the same time, so morphisms can only go in one direction. For a finite acyclic category, there is a rank function that maps an object to an integer, so that all morphisms except identities increase the rank of the object. One can define the rank of an object as the dimension of the corresponding facet. The acyclic category with a rank function is called a **graded acyclic category**. **Bounded** means it has exactly one initial object and terminal object. The initial object, which has the lowest rank, represents the null face, denoted as \(\varnothing\); the terminal object, which has the highest rank, represents the universe, denoted as \(\mathbb{U}\). Morphisms represent incidence relations between the facets. The morphism \(\phi:F\to S\) means that facet \(F\) is covered by facet \(S\). There may be multiple morphisms between two objects, and this property represents the local connectedness of the incidence relation, as we discussed above. Below we mainly focus on the properties of the bounded acyclic category, and compare it with the bounded poset.
Recall the definition of a **bounded poset** (a poset which includes its lower and upper bounds), which is the basis of abstract polytopes:
1. _irreflexivity_: \(a\not<a\).
2. _asymmetry_: if \(a<b\) then \(b\not<a\).
3. _transitivity_: if \(a<b\) and \(b<c\) then \(a<c\).
4. _least_: there exists exactly one \(\varnothing\) such that for all \(a\neq\varnothing\) there is \(\varnothing<a\).
5. _greatest_: there exists exactly one \(\mathbb{U}\) such that for all \(a\neq\mathbb{U}\) there is \(a<\mathbb{U}\).
Now compare to the definition of **bounded acyclic category**:
1. _irreflexivity_: there is no \(\phi:F\to F\) for any \(\phi\neq\mathrm{id}\).
2. _asymmetry_: if there is \(\phi:F\to S\) for some \(\phi\neq\mathrm{id}\), then there is no \(\phi^{\prime}:S\to F\) for any \(\phi^{\prime}\neq\mathrm{id}\).
3. _transitivity_: if there are \(\phi:P\to F\) and \(\phi^{\prime}:F\to S\) for some \(\phi\neq\mathrm{id}\) and \(\phi^{\prime}\neq\mathrm{id}\), then \(\phi^{\prime}\circ\phi:P\to S\).
4. _initial_: there exists exactly one \(\varnothing\), such that for all \(F\neq\varnothing\) there is \(\phi:\varnothing\to F\) for _exactly one_ \(\phi\neq\mathrm{id}\).
5. _terminal_: there exists exactly one \(\mathbb{U}\), such that for all \(F\neq\mathbb{U}\) there is \(\phi:F\to\mathbb{U}\) for _exactly one_ \(\phi\neq\mathrm{id}\).
The key difference is that there can be multiple relations between two objects in the bounded acyclic category, and the transitivity of order relations becomes the composition of morphisms. A bounded poset is just a thin bounded acyclic category. It is natural to define an induced bounded poset for a bounded acyclic category by forgetting differences between multiple relations. In this sense, one says an acyclic category is a generalization of a poset. The abstract orientable incidence structure also loosens three constraints of the abstract polytope, which will be discussed at the end of this section. Notice that the rules _least_ and _greatest_ in a bounded poset become _initial_ and _terminal_, which look stricter in the bounded acyclic category. The initial object and the terminal object are called improper objects.
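Since the next article develops algorithms for finite bounded acyclic categories, it may help to see the axioms above checked on explicit data. The sketch below uses our own toy encoding (a closed line segment with its null face and a universe; identities left implicit), not a prescribed data structure.

```python
# A finite bounded acyclic category as plain data: every non-identity morphism is
# a name with a source and a target, and 'comp' records every composite of a
# composable pair. Toy example: a segment E with endpoints A and B, the null
# face O, and the universe U.
src_tgt = {
    "O->A": ("O", "A"), "O->B": ("O", "B"), "O->E": ("O", "E"), "O->U": ("O", "U"),
    "A->E": ("A", "E"), "B->E": ("B", "E"),
    "A->U": ("A", "U"), "B->U": ("B", "U"), "E->U": ("E", "U"),
}
comp = {
    ("A->E", "O->A"): "O->E", ("B->E", "O->B"): "O->E",
    ("A->U", "O->A"): "O->U", ("B->U", "O->B"): "O->U", ("E->U", "O->E"): "O->U",
    ("E->U", "A->E"): "A->U", ("E->U", "B->E"): "B->U",
}
objects = {o for pair in src_tgt.values() for o in pair}
arrows = list(src_tgt.values())

# irreflexivity and asymmetry: no non-identity loops and no 2-cycles
assert all(s != t for s, t in arrows)
assert all((t, s) not in arrows for s, t in arrows)
# transitivity: every composable pair has a recorded composite with matching ends
for g, (gs, gt) in src_tgt.items():
    for f, (fs, ft) in src_tgt.items():
        if ft == gs:
            assert src_tgt[comp[(g, f)]] == (fs, gt)
# initial and terminal: exactly one morphism out of O and into U for every object
assert all(arrows.count(("O", x)) == 1 for x in objects - {"O"})
assert all(arrows.count((x, "U")) == 1 for x in objects - {"U"})
print("bounded acyclic category axioms hold for the toy example")
```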
### Upper Category and Downward Functor
The upper closure of an element \(x\) in a bounded poset \(\mathcal{P}\) is a subset whose elements are greater than or equal to \(x\). Such a subset is also a bounded poset, where the order relation is inherited from the host poset \(\mathcal{P}\). To emphasize this, it is denoted as \(\mathcal{P}\!\uparrow\!x\). Similarly, one can construct a full subcategory \(\mathcal{F}^{\prime}\) by only including the upper closure of an object \(F_{m}\). There is no non-trivial morphism that points to the object \(F_{m}\), but it is not an initial object since the morphism \(\phi:F_{m}\to S\) may not be unique for the object \(S\), so that it is not a bounded acyclic category. To fix it, one should split objects such that there is only one initial morphism for each object. Construct the **upper category** of an object \(F_{m}\) in a bounded acyclic category \(\mathcal{F}\), denoted as \(\mathcal{F}\!\uparrow\!F_{m}\): \(\mathrm{Obj}(\mathcal{F}\!\uparrow\!F_{m})\) is a set of \(F_{s}|\phi_{sm}\rangle\) for all objects \(F_{s}\) and morphisms \(\phi_{sm}:F_{m}\to F_{s}\). \(\mathrm{Hom}(F_{s}|\phi_{sm}\rangle,F_{t}|\phi_{tm}\rangle)\) is a set of \(\phi_{ts}|\phi_{sm}\rangle:F_{s}|\phi_{sm}\rangle\to F_{t}|\phi_{tm}\rangle\) for all morphisms \(\phi_{ts}:F_{s}\to F_{t}\) and \(\phi_{sm}:F_{m}\to F_{s}\) such that \(\phi_{ts}\circ\phi_{sm}=\phi_{tm}\). The composition rule is \(\phi_{pt}|\phi_{tm}\rangle\circ\phi_{ts}|\phi_{sm}\rangle=\phi_{ps}|\phi_{sm}\rangle\) with \(\phi_{ps}=\phi_{pt}\circ\phi_{ts}\). The object \(F_{t}\) has been marked with the morphism \(\phi_{tm}\), so that it is split into multiple objects corresponding to each initial morphism \(\phi_{tm}\). Although the object splits, the composition rule is inherited from the host category \(\mathcal{F}\). In terms of incidence structure, the upper category \(\mathcal{F}\!\uparrow\!F_{m}\) just corresponds to the vertex figure on an
object \(F_{m}\): in upper category, an object \(F_{n}|\phi_{nm}\rangle\) represents a connected component of sliced images, and a morphism \(\phi_{ts}|\phi_{sm}\rangle\) represents an incidence relation between facets \(F_{s}|\phi_{sm}\rangle\) and \(F_{t}|\phi_{tm}\rangle\).
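The following sketch carries out the upper-category construction on the finite data of Figure 3's square; the data layout is the same hypothetical one used earlier, identities are omitted for brevity (so the marked copy of \(F_{m}\) itself is not listed), and all names are ours.

```python
# Upper category F|V of the vertex V of Figure 3. Objects of F|V are pairs
# (F_s, phi_sm) with phi_sm : V -> F_s; a morphism is phi_ts marked by phi_sm
# whenever phi_ts o phi_sm = phi_tm.
src_tgt = {
    "phi_L": ("V", "EL"), "phi_R": ("V", "ER"),
    "phi_L'": ("EL", "Q"), "phi_R'": ("ER", "Q"),
    "phi": ("V", "Q"),
}
comp = {("phi_L'", "phi_L"): "phi", ("phi_R'", "phi_R"): "phi"}

def upper_category(src_tgt, comp, m):
    from_m = {f: t for f, (s, t) in src_tgt.items() if s == m}     # phi_sm -> F_s
    objects = sorted((t, f) for f, t in from_m.items())
    morphisms = []
    for g, (gs, gt) in src_tgt.items():                            # phi_ts : F_s -> F_t
        for f, ft in from_m.items():                               # phi_sm : m -> F_s
            if ft == gs:                                           # composable pair
                morphisms.append(((gs, f), (gt, comp[(g, f)]), g))
    return objects, morphisms

objs, mors = upper_category(src_tgt, comp, "V")
print(objs)   # [('EL', 'phi_L'), ('ER', 'phi_R'), ('Q', 'phi')]
print(mors)   # the two marked morphisms phi_L'|phi_L> and phi_R'|phi_R>
```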
There is a functor \(F_{m}^{\downarrow}:\mathcal{F}\!\uparrow\!F_{m}\rightarrow\mathcal{F}\) that maps object \(F_{s}|\phi_{sm}\rangle\) to \(F_{s}\), and maps morphism \(\phi_{ts}|\phi_{sm}\rangle:F_{s}|\phi_{sm}\rangle\to F_{t}|\phi_{tm}\rangle\) to \(\phi_{ts}:F_{s}\to F_{t}\). This functor is the reverse operation of taking the upper category, so it is named the **downward functor**. Drawn as Hasse diagrams, the downward functor becomes a graph homomorphism between two graphs. Unlike the abstract polytope, this is not injective on nodes and edges, which manifests the local connectedness of incidence relations.
Since an upper category itself is also a bounded acyclic category, one can further construct an upper category of an object in the upper category. The category \(\mathcal{F}\!\uparrow\!F_{m}\!\uparrow\!F_{n}|\phi_{nm}\rangle\) is an upper category of the object \(F_{n}|\phi_{nm}\rangle\), where \(\mathrm{Obj}(\mathcal{F}\!\uparrow\!F_{m}\!\uparrow\!F_{n}|\phi_{nm}\rangle)\) is a set of objects \(F_{s}|\phi_{sm}\rangle|\phi_{sn}|\phi_{nm}\rangle\rangle\), or abbreviated as \(F_{s}|\phi_{sn},\phi_{nm}\rangle\), for all objects \(F_{s}\) and morphisms \(\phi_{sn}:F_{n}\to F_{s}\); \(\mathrm{Hom}(F_{t}|\phi_{tn},\phi_{nm}\rangle,F_{s}|\phi_{sn},\phi_{nm}\rangle)\) is a set of \(\phi_{ts}|\phi_{sm}\rangle|\phi_{sn}|\phi_{nm}\rangle\rangle:F_{s}|\phi_{sm} \rangle|\phi_{sn}|\phi_{nm}\rangle\rangle\to F_{t}|\phi_{tm} \rangle|\phi_{tn}|\phi_{nm}\rangle\rangle\), or abbreviated as \(\phi_{ts}|\phi_{sn},\phi_{nm}\rangle:F_{s}|\phi_{sn},\phi_{nm}\rangle\to F _{t}|\phi_{tn},\phi_{nm}\rangle\) for all morphisms \(\phi_{ts}:F_{s}\to F_{t}\) and \(\phi_{sn}:F_{n}\to F_{s}\) such that \(\phi_{ts}\circ\phi_{sn}=\phi_{tn}\). Note that we assume \(\phi_{sm}=\phi_{sn}\circ\phi_{nm}\) and \(\phi_{tm}=\phi_{tn}\circ\phi_{nm}\), and the abbreviation works because \(\phi_{sm}\) can be determined uniquely. It is isomorphic to the upper category \(\mathcal{F}\!\uparrow\!F_{n}\), which shows that taking upper categories is composable. This rule is consistent with the above discussion: taking vertex figures is composable. Similarly, downward functors are also composable. The downward functor can be reduced to a functor \(\phi_{mn}^{\downarrow}:\mathcal{F}\!\uparrow\!F_{m}\rightarrow\mathcal{F}\! \uparrow\!F_{n}\) for a morphism \(\phi_{mn}:F_{n}\to F_{m}\), in which the object \(F_{s}|\phi_{sm}\rangle\) is mapped to \(F_{s}|\phi_{sn}\rangle\), and the morphism \(\phi_{ts}|\phi_{sm}\rangle:F_{s}|\phi_{sm}\rangle\to F_{t}|\phi_{tm}\rangle\) is mapped to \(\phi_{ts}|\phi_{sn}\rangle:F_{s}|\phi_{sn}\rangle\to F_{t}|\phi_{tn}\rangle\), where \(\phi_{sn}=\phi_{sm}\circ\phi_{mn}\). Upper closures of a poset form a poset by the inclusion relation; its opposite category is isomorphic to the host poset. Similarly, upper categories \(\{\mathcal{F}\!\uparrow\!F_{m}|F_{m}\in\mathrm{Obj}\}\) and downward functors \(\{\phi_{mn}^{\downarrow}|\phi_{mn}\in\mathrm{Hom}\}\) also form a bounded acyclic category; its opposite category is isomorphic to the host category \(\mathcal{F}\) (see Figure 7).
From the Hasse diagram of a bounded acyclic category, one can easily see which part is splittable. Two proper objects are said to be **linked** if they are connected in the Hasse diagram excluding improper objects. If not all proper objects are linked, this category is said to be **splittable**. It is the generalization of the connectedness condition of an abstract polytope. The linked objects may form multiple clusters, and they can be split into separate bounded acyclic categories. The linked clusters can be determined by linkages between **minimal/maximal objects**, which are defined as minimal/maximal elements of the induced poset of all proper objects. A bounded acyclic category is said to be **strongly unsplittable** iff all sections are unsplittable, which is an analogue of the strong connectedness of posets. The definition of sections in an acyclic category will be introduced later. The words "linked" and "splittable" are used because "disjoint" and "disconnected" are reserved for actual geometric properties.
Figure 7: The category of upper categories (right) of a cone (left). It is drawn as a set of Hasse diagrams with graph homomorphisms between them (purple arrows). This representation is sufficient to describe a bounded acyclic category.
### Section Category and Morphism Chain
There is a dual concept of the upper category called the **lower category**, which is a generalization of the lower closure of a poset. The upper category is constructed by treating one object as the initial object, and splitting objects such that outgoing morphisms are initial morphisms. Conversely, the lower category, denoted as \(\mathcal{F}\!\downarrow\!F_{m}\), is constructed by treating one object as the terminal object, and splitting objects such that incoming morphisms are terminal morphisms: \(\mathrm{Obj}(\mathcal{F}\!\downarrow\!F_{m})\) is a set of \(\langle\phi_{mt}|F_{t}\) for all objects \(F_{t}\) and morphisms \(\phi_{mt}:F_{t}\to F_{m}\); \(\mathrm{Hom}(\langle\phi_{ms}|F_{s},\langle\phi_{mt}|F_{t})\) is a set of \(\langle\phi_{mt}|\phi_{ts}:\langle\phi_{ms}|F_{s}\rightarrow\langle\phi _{mt}|F_{t}\) for all morphisms \(\phi_{ts}:F_{s}\to F_{t}\) and \(\phi_{mt}:F_{t}\to F_{m}\) such that \(\phi_{mt}\circ\phi_{ts}=\phi_{ms}\). The composition rule is \(\langle\phi_{mp}|\phi_{pt}\circ\langle\phi_{mt}|\phi_{ts}=\langle\phi_{mp}| \phi_{ps}\) with \(\phi_{ps}=\phi_{pt}\circ\phi_{ts}\). In geometry, the lower category corresponds to the **face figure**, which is like imagining an ant living in a 2D world. The face figure of a sphere with one meridian is equivalent to a 2D space bounded by two lines (see Figure 8). The line is duplicated because it is covered by the sphere in two ways. As creatures living in 3D space, we know it is one line, but the 2D creatures living on the sphere only see a space rift, and they cannot confirm whether there is a wider space in this rift.
There is also a categorical analogue of the interval of a poset: \([a,b]\equiv\{c\in\mathcal{P}|a\leq c\leq b\}\), which can be seen as a lower closure of an upper closure. Given a poset, by taking non-trivial intervals between all minimal and maximal elements, the given poset can be covered by multiple bounded posets. Similarly, an acyclic category can be turned into direct sum of bounded acyclic categories by splitting objects, and those bounded acyclic categories are called **sections**, whose name comes from abstract polytope. Consider an acyclic category, say \(\mathcal{F}\), define a **section category** of morphism \(\phi_{0}\) on \(\mathcal{F}\), denoted as \(\langle\mathcal{F}\rangle_{\phi_{0}}\): \(\mathrm{Obj}(\langle\mathcal{F}\rangle_{\phi_{0}})\) is a set of \(\langle\phi_{n}^{*}|\phi_{n}\rangle\) for all morphisms \(\phi_{n}^{*}\) and \(\phi_{n}\) such that \(\phi_{n}^{*}\circ\phi_{n}=\phi_{0}\). \(\mathrm{Hom}(\langle\phi_{n}^{*}|\phi_{n}\rangle,\langle\phi_{m}^{*}|\phi_{m}\rangle)\) is a set of \(\langle\phi_{m}^{*}|\phi_{mn}|\phi_{n}\rangle\) for all morphisms \(\phi_{m}^{*}\), \(\phi_{mn}\), \(\phi_{n}\) such that \(\phi_{m}^{*}\circ\phi_{mn}\circ\phi_{n}=\phi_{0}\).
Taking a section category doesn't break the algebra of morphisms; it just renames some objects so that it is bounded. "The algebra of morphisms" is the generalization of "the transitivity of order relation". Define a **local-embedding function** for a poset: an order-preserving function \(f\) is local-embedding iff it is an order isomorphism between \([x,y]\) and \([f(x),f(y)]\) for any pair \(x\leq y\). That is, an element \(f(x)\leq z^{\prime}\leq f(y)\) implies there exists exactly one \(x\leq z\leq y\) such that \(f(z)=z^{\prime}\). Note that it is not a one-to-one function. The generalization to an acyclic category becomes a **local-embedding functor**: for any morphism \(\phi\), any pair of morphisms \(\psi^{\prime}\) and \(\psi^{\prime\prime}\) with constraint \(\psi^{\prime\prime}\circ\psi^{\prime}=\mu(\phi)\) implies there exists exactly one pair of morphisms \(\phi^{\prime}\) and \(\phi^{\prime\prime}\) with constraint \(\phi^{\prime\prime}\circ\phi^{\prime}=\phi\) such that \(\mu(\phi^{\prime})=\psi^{\prime}\) and \(\mu(\phi^{\prime\prime})=\psi^{\prime\prime}\). The inverse of taking section categories is the local-embedding functor. A local-embedding functor describes the correspondence of the algebra of morphisms without mentioning any object. It shows that the identity of objects isn't important in a bounded acyclic category; initial morphisms and terminal morphisms are enough to indicate objects in a bounded acyclic category.
The chain representation provides a clear view of this. Recall the definition of a chain of a poset, which is defined as a totally ordered subset. A subset of a chain is also totally ordered, and is called a subchain. In an acyclic
Figure 8: A 2D ant lives on a sphere with a meridian (left), which is the same as living on its face figure from the ant’s point of view (right).
category, the morphism between two objects is not unique, so the generalization becomes: the **morphism chain** of a morphism \(\phi_{n0}:F_{0}\to F_{n}\) is a non-empty sequence of morphisms \(\langle\phi_{n},\ldots,\phi_{2},\phi_{1}\rangle\) that composes to this morphism. Its **subchain** can be constructed by composing adjacent morphisms into one. In other words, a subchain just skips some intermediate objects except the start and end, which is slightly different from the conventional definition of the morphism chain. A morphism chain containing identity morphisms is said to be degenerate. A morphism chain with \(n\) intermediate objects is called an \(n\)-chain. A 0-chain is just the host morphism itself \(\langle\phi_{n0}\rangle\), so it is said to be a trivial chain. A 1-chain represents an object of the section of the morphism \(\phi_{n0}\), and a 2-chain represents a morphism, whose source and target objects are just its two non-trivial subchains.
An abstract orientable incidence structure can be described by only its subchain relations, called a **nerve**, which is also slightly different from the conventional definition. The nerve of an acyclic category is composed of a collection of \(n\)-chains \(N_{n}=\{\langle\phi_{n},\ldots,\phi_{1},\phi_{0}\rangle\}\) for \(n=0,1,2,\ldots\) and the face maps \(d_{i}:N_{n}\to N_{n-1}\) for \(i=1\sim n\), which skip the \(i\)-th object in the \(n\)-chain, and the degeneracy maps \(s_{i}:N_{n}\to N_{n+1}\) for \(i=1\sim n\), which insert an identity morphism at the \(i\)-th object. The laws for face maps and degeneracy maps are the same as the conventional ones, and will not be repeated here. Note that in our definition, a 1-chain represents an object, not a morphism. A bounded acyclic category has a nerve with only one 0-chain, which goes back to the conventional definition by replacing \(n\) with \(n-1\). The upper category of a 1-chain \(F\) becomes easy to define in this formulation, which is just letting \(N^{\prime}_{n}=d_{1}^{-n}(F)\) and \(d^{\prime}_{i}=d_{i}\), \(s^{\prime}_{i}=s_{i}\), where \(d_{1}^{-n}\) indicates taking the preimage of \(d_{1}\) \(n\) times. The lower category and section category can be defined in a similar way. Two acyclic categories will have the same nerve if they are surjectively section-embedded by the same direct sum of bounded acyclic categories. Because no relabeling is required, nerves are more natural for describing orientable incidence structures.
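As a small illustration of chains and face maps (again on our toy segment data, repeated so the snippet is self-contained), the sketch below enumerates every morphism chain composing to a given morphism and applies the face maps \(d_{i}\) that compose adjacent morphisms. Chains are written innermost-first, so a tuple of \(k\) morphisms corresponds to a \((k{-}1)\)-chain in the counting above.

```python
src_tgt = {
    "O->A": ("O", "A"), "O->B": ("O", "B"), "O->E": ("O", "E"), "O->U": ("O", "U"),
    "A->E": ("A", "E"), "B->E": ("B", "E"),
    "A->U": ("A", "U"), "B->U": ("B", "U"), "E->U": ("E", "U"),
}
comp = {
    ("A->E", "O->A"): "O->E", ("B->E", "O->B"): "O->E",
    ("A->U", "O->A"): "O->U", ("B->U", "O->B"): "O->U", ("E->U", "O->E"): "O->U",
    ("E->U", "A->E"): "A->U", ("E->U", "B->E"): "B->U",
}

def chains(phi0, max_len=5):
    """All non-degenerate morphism chains whose composite is phi0."""
    s0, t0 = src_tgt[phi0]
    found = []
    def extend(at, taken, composite):
        if composite == phi0:
            found.append(tuple(taken))
            return
        if len(taken) >= max_len:
            return
        for f, (s, t) in src_tgt.items():
            if s == at:
                nxt = f if composite is None else comp.get((f, composite))
                if nxt is not None:
                    extend(t, taken + [f], nxt)
    extend(s0, [], None)
    return found

def face(chain, i):
    """d_i: skip the i-th intermediate object by composing its adjacent morphisms."""
    return chain[:i - 1] + (comp[(chain[i], chain[i - 1])],) + chain[i + 1:]

print(sorted(chains("O->U"), key=len))
print(face(("O->A", "A->E", "E->U"), 1))   # ('O->E', 'E->U')
print(face(("O->A", "A->E", "E->U"), 2))   # ('O->A', 'A->U')
```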
A section category can be defined not only on an acyclic category, but also on a **non-recursive category**, which is a category without non-trivial non-degenerate recursive morphism chains; that is, \(\psi\circ\phi\circ\psi^{\prime}=\phi\) implies that \(\psi\) and \(\psi^{\prime}\) are identity morphisms. This property only rules out the infinity of morphisms, and non-trivial cycles of morphisms are still possible, while ensuring that their section categories are bounded acyclic categories. Non-recursiveness and finiteness naturally induce acyclicity.
### Semi-Regular Normal CW Complex
The **diamond property** of an abstract orientable incidence structure is stated as: a 2-rank morphism should be divided into _exactly_ two maximal chains. A 2-rank morphism means the ranks of the source object and target object differ by 2. In other words, a 1-dimensional facet should have exactly 2 0-dimensional subfacets, which means only line segments are valid 1-dimensional facets. Moreover, the products of the signs along the two chains should be different; that is, a valid diamond should obey \(\phi_{+}\circ\psi_{+}=\phi_{-}\circ\psi_{-}\) and \(|\phi_{+}|\times|\psi_{+}|=-|\phi_{-}|\times|\psi_{-}|\). This property captures the topological nature of the Euclidean space. To include infinite lines and rays as valid 1-dimensional facets, this property should be loosened. The **semi-diamond property** can be stated as: a 2-rank morphism can be divided into _at most_ two maximal chains. In other words, a 1-dimensional facet should have at most 2 0-dimensional subfacets.
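A sketch of the counting part of these properties is given below (the sign condition is omitted, and the ranks and names are our own toy choices): a closed segment has exactly two maximal chains for its single 2-rank morphism, while a ray has only one, so the ray satisfies the semi-diamond property but not the diamond property.

```python
def two_rank_factorizations(src_tgt, comp, rank):
    """For each 2-rank morphism, count its factorizations g o f through an
    intermediate object (i.e., its maximal chains)."""
    counts = {}
    for phi, (s, t) in src_tgt.items():
        if rank[t] - rank[s] == 2:
            counts[phi] = sum(1 for pair, c in comp.items() if c == phi)
    return counts

segment = {
    "src_tgt": {"O->A": ("O", "A"), "O->B": ("O", "B"), "O->E": ("O", "E"),
                "A->E": ("A", "E"), "B->E": ("B", "E")},
    "comp": {("A->E", "O->A"): "O->E", ("B->E", "O->B"): "O->E"},
    "rank": {"O": -1, "A": 0, "B": 0, "E": 1},
}
ray = {
    "src_tgt": {"O->A": ("O", "A"), "O->R": ("O", "R"), "A->R": ("A", "R")},
    "comp": {("A->R", "O->A"): "O->R"},
    "rank": {"O": -1, "A": 0, "R": 1},
}

for name, data in [("segment", segment), ("ray", ray)]:
    counts = two_rank_factorizations(data["src_tgt"], data["comp"], data["rank"])
    diamond = all(c == 2 for c in counts.values())
    semi = all(c <= 2 for c in counts.values())
    print(name, counts, "diamond:", diamond, "semi-diamond:", semi)
# segment {'O->E': 2} diamond: True  semi-diamond: True
# ray     {'O->R': 1} diamond: False semi-diamond: True
```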
Besides the diamond property, the conventional abstract polytope has two more constraints: strong connectedness and uniform maximal chains. Strong connectedness means all facets in the polytope are firmly glued together. With this property, it is not valid to glue two cubes together with one edge. Also, a disk with a hole is invalid, since its boundaries are not connected. The uniform maximal chains property states that all maximal chains contain the same number of facets. With this property, the ranks of facets of a maximal chain differ by 1 adjacently, so that a rank function can be derived naturally. That means all facets only have direct subfacets with dimensions differing by 1. They can be generalized as two additional properties for an abstract orientable incidence structure. That is, **strongly decomposable**: all morphisms (except terminal morphisms) can be decomposed down to 1-rank morphisms; **strongly unsplittable**: all section categories of morphisms with rank greater than 2 are unsplittable. A weaker version, called **strongly initial unsplittable**, can be defined as: a bounded acyclic category is said to be initial unsplittable iff all section categories of initial
morphisms with rank greater than 2 are unsplittable. Strongly initial unsplittable further states that section categories of all non-initial morphisms can always be split into strongly initial unsplittable categories. Strongly initial unsplittable relaxes the constraint of shared vertices, but is still limited to connected boundaries.
If only facets homeomorphic to \(n\)-disks are considered, it becomes a semi-regular and normal CW complex. A **regular CW complex** is a CW complex whose gluing maps are homeomorphisms onto images. A CW complex is said to be **normal** if each closed cell is a subcomplex. The incidence structure of a regular and normal CW complex is given by inclusion relations between closed cells, which forms a CW poset [1]. Define a **semi-regular CW complex** as a CW complex whose gluing maps are local homeomorphisms onto images. A semi-regular and normal CW complex possesses an incidence structure manifested by the fibers of gluing maps, which forms a bounded acyclic category. It obeys the diamond property and is strongly decomposable and strongly initial unsplittable: it is initial unsplittable due to the connectivity of cells, and because the gluing maps are local embeddings, the vertex figure of a cell is always a disjoint union of simply-connected components, which leads to the strongly initial unsplittable property. It seems there are more constraints on the incidence structure for a semi-regular and normal CW complex, such that the geometric realization of an abstract orientable incidence structure satisfying such constraints is also a semi-regular and normal CW complex.
Under this constraint, the barycentric subdivision can be defined without geometric properties. Recall that the barycentric subdivision of a conventional polytope is just the geometric realization (more precisely, the order complex) of its face lattice (see Figure 9). The barycentric subdivision of a semi-regular normal CW complex can be defined similarly, and it is just the geometric realization of the nerve of its incidence structure. In our definition of morphism chains, it should be defined as: the geometric realization of a nerve is constructed by making an \((n{-}1)\)-simplex for each non-degenerate \(n\)-chain \(\langle\phi_{n},\dots,\phi_{2},\phi_{1}\rangle\) which is bounded by the corresponding simplexes of its direct subchains, where the orientation of this simplex should be defined such that the first boundary \(\langle\phi_{n}\circ\phi_{n-1},\dots\rangle\) is positively-oriented with respect to it, and the second boundary \(\langle\phi_{n},\phi_{n-1}\circ\phi_{n-2},\dots\rangle\) is negatively-oriented with respect to it, and so on. The \((-1)\)-simplex is the null face. The geometric realization of the nerve of a semi-regular normal CW complex is just its barycentric subdivision (see Figure 10). More interestingly, the incidence structure of a vertex figure is just obtained by taking a vertex figure of the geometric realization of the upper closure of its incidence structure. There is a similar relation to the face figure. It shows **"the incidence structure of a geometric object is related to the geometry of an incidence structure"**. This series of articles will not discuss such a correspondence further, since it only appears in a restricted incidence structure. In the next article, we will focus on finite bounded acyclic categories, and develop algorithms for them.
|
2310.13024 | Towards Anytime Fine-tuning: Continually Pre-trained Language Models
with Hypernetwork Prompt | Continual pre-training has been urgent for adapting a pre-trained model to a
multitude of domains and tasks in the fast-evolving world. In practice, a
continually pre-trained model is expected to demonstrate not only greater
capacity when fine-tuned on pre-trained domains but also a non-decreasing
performance on unseen ones. In this work, we first investigate such anytime
fine-tuning effectiveness of existing continual pre-training approaches,
concluding with unanimously decreased performance on unseen domains. To this
end, we propose a prompt-guided continual pre-training method, where we train a
hypernetwork to generate domain-specific prompts by both agreement and
disagreement losses. The agreement loss maximally preserves the generalization
of a pre-trained model to new domains, and the disagreement one guards the
exclusiveness of the generated hidden states for each domain. Remarkably,
prompts by the hypernetwork alleviate the domain identity when fine-tuning and
promote knowledge transfer across domains. Our method achieved improvements of
3.57% and 3.4% on two real-world datasets (including domain shift and temporal
shift), respectively, demonstrating its efficacy. | Gangwei Jiang, Caigao Jiang, Siqiao Xue, James Y. Zhang, Jun Zhou, Defu Lian, Ying Wei | 2023-10-19T06:34:40Z | http://arxiv.org/abs/2310.13024v1 | # Towards Anytime Fine-tuning: Continually Pre-trained Language Models with Hypernetwork Prompts
###### Abstract
Continual pre-training has been urgent for adapting a pre-trained model to a multitude of domains and tasks in the fast-evolving world. In practice, a continually pre-trained model is expected to demonstrate not only greater capacity when fine-tuned on pre-trained domains but also a non-decreasing performance on unseen ones. In this work, we first investigate such anytime fine-tuning effectiveness of existing continual pre-training approaches, concluding with unanimously decreased performance on unseen domains. To this end, we propose a prompt-guided continual pre-training method, where we train a hypernetwork to generate domain-specific prompts by both agreement and disagreement losses. The agreement loss maximally preserves the generalization of a pre-trained model to new domains, and the disagreement one guards the exclusiveness of the generated hidden states for each domain. Remarkably, prompts by the hypernetwork alleviate the domain identity when fine-tuning and promote knowledge transfer across domains. Our method achieved improvements of 3.57% and 3.4% on two real-world datasets (including domain shift and temporal shift), respectively, demonstrating its efficacy.
## 1 Introduction
Pre-trained language models (LMs), such as GPT-3 Brown et al. (2020) and BERT Devlin et al. (2019), have revolutionized a wide spectrum of downstream natural language processing (NLP) tasks. Being initially pre-trained on a vast unlabeled corpus (e.g., \(C_{0}\) in Fig. 1), unfortunately, they struggle to keep up to date with language evolution (e.g., _emerging internet slang, expanded meaning of "Omicron"_) and domain shift (e.g., _electronic health records for medical diagnosis_).
Continual pre-training methods Jin et al. (2022); Ke et al. (2023) have recently emerged to address it by continually adapting an LM to a sequence of domains (e.g., \(T\) domains in Fig. 1). Two major lines of existing approaches, including knowledge distillation Jin et al. (2022) and parameter isolation Ke et al. (2023, 2022), make strides toward (1) maximizing the _adaptability_, i.e., the performance of an LM (e.g., \(B^{2}\) in Fig. 1) when fine-tuning it onto the domain where it is pre-trained (e.g., \(D_{2}\) in Fig. 1), and (2) avoiding _catastrophic forgetting_ (CF), which is measured by the fine-tuned performance of an LM (e.g., \(B^{2}\) in Fig. 1) on the already pre-trained domains (e.g., \(D_{1}\) in Fig. 1).
Beyond the above two criteria, in practice, a continually pre-trained LM is also anticipated to offer non-decreasing _generalization_ capability on unseen domains. As illustrated in Fig. 1, it is likely that the unlabeled corpus for the domain of interest (e.g., electronic health records as \(D_{T}\)) remains inaccessible to an LM (e.g., \(B^{2}\)) beforehand, while this LM should be superior or at least on par with its preceding models (e.g., \(B^{1}\)) on the \(T\)-th domain. On
Figure 1: Illustration of continual pre-training and the evaluation protocol of anytime fine-tuning, in which \(a_{j}^{i}\) in the accuracy table denotes the fine-tuned accuracy of the LM at any \(i\)-th stage, i.e., \(B^{i}\), on the \(j\)-th _pre-trained_ (blue), _current_ (red), and _unseen_ domains (orange).
this account, we propose the comprehensive evaluation protocol named _anytime fine-tuning_ that subsumes all the three aspects, where a continually pre-trained LM can be fine-tuned and evaluated on either previously pre-trained, current, or unseen domains. The effectiveness of current methods in terms of anytime fine-tuning remains largely unclear.
In this paper, we first conduct an empirical investigation of existing pre-training approaches under anytime fine-tuning (see Fig. 2) and identify the following two prominent unresolved research questions. **(1)** Parameter-efficient pre-training, such as training adapters (Ke et al., 2021) and prompts (Razdaibiedina et al., 2023; Smith et al., 2023) only for each individual domain, does not even contribute greater _adaptability_ than that before pre-training (i.e., evidenced in negative diagonal values of Fig. 2(d)(e)). Likewise, pre-training parts of parameters for each domain, may also diminish adaptability, through comparison of Fig. 2(b)(c)(g) with (a). **(2)** Continual pre-training is likely at the cost of sacrificing _generalization_ to unseen domains, shown by large negative values in the third column of Fig. 2(f)(g).
To address the above issues, we propose a Hypernetwork **Prompt** guided **C**ontinual **P**re-Training method (namely HPrompt-CPT1) that strikes a balance between forgetting, adaptability, and generalization. _First,_ inspired by recent success of prompt engineering paired with full fine-tuning in domain adaptation (Radford et al., 2019; Brown et al., 2020), we introduce the hnet-prompt module consisting of a hypernetwork to automatically generate domain-specific prompts without handcrafted engineering. Different from parameter-efficient pre-training that train prompts only, we optimize both the hypernetwork and the full LM so as to fully adapt to the current domain. An added benefit of hypernetwork prompts is that they eliminate the reliance on the domain identity to pinpoint prompts when fine-tuning. _Second_, we maximally preserve the generalization while mitigating CF of a continually pre-trained LM via the agreement and disagreement losses. We prompt the previous and current LM with a random prompt that simulates generic or learned domains and introduce the agreement loss to enforce consistency between their predictions to avoid forgetting while preserving model plasticity on other prompts. On the other hand, the disagreement loss promotes the exclusiveness of generated hidden states for the current domain, thus minimizing interference to the established knowledge and encouraging generalization during fine-tuning through diverse domain knowledge. Noteworthy, the hypernetwork also favors knowledge generalization, compared to disparate prompts of different domains.
Footnote 1: The code of HPrompt-CPT will be released at [https://github.com/gangwJiang/HPrompt-CPT](https://github.com/gangwJiang/HPrompt-CPT)
**Main Findings and Contributions. (1)** We establish a continual pre-training evaluation protocol, called anytime fine-tuning, and empirically verify that existing parameter-efficient approaches lose their competitive edge in adaptability and almost all methods are at risk of impairing generalization to unseen domains (see Fig. 2). **(2)** We further conquer the two challenges by proposing a hypernetwork prompt guided continual pre-training (HPrompt-CPT) scheme where we train the hypernetwork with both the agreement and disagreement losses. HPrompt-CPT is effective, achieving the state-of-the-art on two real-world datasets.
## 2 Related Work
Continual Learning (CL) focuses on the problem of sequential learning from a stream of data that comes in different distributions. It has achieved great success in computer vision (Wang et al., 2022; Smith et al., 2023; Wang et al., 2022), natural language pro
Figure 2: Evaluation of separate and continual pre-training methods under anytime fine-tuning, where we modify each value \(a_{j}^{i}\) by subtracting \(a_{j}^{0}\) as the fine-tuned accuracy of the initial LM \(B^{0}\). (a)-(e) show the accuracy tables by pre-training each domain separately _w.r.t._ different sets of parameters (e.g., top layers); (f)-(h) are by the naively continual pre-training method (NCL), DAS (Ke et al., 2023), and ours. Detailed settings are available in Sec. 5.2.
Sun et al. (2019); Ke et al. (2023), and data mining Hao et al. (2023); Xue et al. (2023). In this paper, we focus on one of its important aspects, continual pre-training, and present recent progress below. More related works are given in Appendix A.
**Continual Pre-training.** Previous studies Gururangan et al. (2020); Dery et al. (2022) have demonstrated that the fine-tuned performance of an LM on downstream tasks can be enhanced by continued training on a domain-related corpus. Recent works take this concept further by introducing _Continual Pre-training_ (CPT), where the LM continually learns from streaming domain corpora. Jin et al. (2022); Jang et al. (2022) investigate conventional CL methods for CPT using real-world datasets and highlight that the final LM can be fine-tuned to serve any task in the pre-trained domains, leading to improved performance, while Hu et al. (2022) finds CPT to be comparable with joint pre-training. To improve upon this, ELLE Qin et al. (2022) progressively expands LMs with function-preserving initialization to inject knowledge from new corpora, while CPT Ke et al. (2022) designs specific adapters and utilizes hard-masking to avoid CF. Additionally, DGA Ke et al. (2022) and DAS Ke et al. (2023) adopt soft-masking to directly control the update of the entire LM and contrast the previous and current representations.
Though these methods alleviate CF during CPT, they overlook the importance of adaptation to domain knowledge for better fine-tuned performance Gururangan et al. (2020); Dery et al. (2022) and of generalization to unseen domains Wortsman et al. (2022); Andreassen et al. (2022). Our work unlocks the potential of the LM and improves all three aspects.
## 3 Preliminaries
Our language model \(B\) is constructed using the Roberta architecture Liu et al. (2019), which is based on a bi-directional Transformer structure. LM takes a text sentence \(\mathbf{x}_{1:T}=[x_{1},x_{2},...,x_{T}]\) as input and encodes it into a contextual embedding \(\mathbf{h}=[h_{1},h_{2},...,h_{T}]=B(\mathbf{x}_{1:T})\).
### Pre-training and Fine-tuning Tasks
During pre-training, the model is trained to predict missing words in a given text sentence \(\mathbf{x}\) and thus acquires a general understanding of languages, such as syntax, semantics, and context. The pre-training task is called masked language modeling (MLM) Devlin et al. (2019), and the objective is \(\ell_{mlm}(\mathbf{x},\mathcal{W})=-\sum_{\hat{x}\in m(\mathbf{x})}\log p\left(\hat{x}\mid\mathbf{x}_{\backslash m(\mathbf{x})},\mathcal{W}\right)\), where \(\mathcal{W}\) denotes the parameters of the language model \(B\), and \(m(\mathbf{x})\) and \(\mathbf{x}_{\backslash m(\mathbf{x})}\) denote the masked words from \(\mathbf{x}\) and the remaining words, respectively. The conditional probability is calculated by a prediction layer \(g_{mlm}\) as \(p\left(\hat{x}\mid\mathbf{x}_{\backslash m(\mathbf{x})},\mathcal{W}\right)=g_{mlm}\left(B_{\mathcal{W}}(\mathbf{x}_{\backslash m(\mathbf{x})})\right)\).
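As a concrete illustration, the sketch below computes the MLM objective in PyTorch; the HuggingFace-style `model(...).logits` call, the 15% masking rate, and the Roberta `<mask>` token id are illustrative assumptions rather than details taken from this paper.

```python
import torch
import torch.nn.functional as F

def mlm_loss(model, input_ids, mask_prob=0.15, mask_token_id=50264):
    # Randomly select positions m(x) to mask (special-token handling omitted for brevity).
    labels = input_ids.clone()
    masked = torch.rand_like(input_ids, dtype=torch.float) < mask_prob
    labels[~masked] = -100                   # only masked positions contribute to the loss
    corrupted = input_ids.clone()
    corrupted[masked] = mask_token_id        # replace the masked words with <mask>
    logits = model(corrupted).logits         # prediction layer g_mlm over the vocabulary
    # -sum log p(x_hat | x \ m(x)) over the masked positions, averaged
    return F.cross_entropy(logits.view(-1, logits.size(-1)),
                           labels.view(-1), ignore_index=-100)
```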
After pre-training, the model is fine-tuned using a smaller dataset specific to a downstream task, which enables it to learn the intricacies and details of the task. In our study, the downstream task contains labeled samples \((\mathbf{x},y)\) (e.g., in a hash-tag prediction task, \(\mathbf{x}\) is the user's twitter and \(y\) is the selected hashtag). Its objective function is to minimize \(\ell_{down}(\mathbf{x},\mathcal{W})=-\log p\left(y\mid\mathbf{x},\mathcal{W}\right)\).
### Soft Prompt Learning
Prompt tuning Lester et al. (2021) is a lightweight alternative to the full fine-tuning that introduces a trainable prompt \(\mathbf{P}=[p_{1},p_{2},...,p_{L}]\) as a prefix to the input embedding \(\mathbf{E}=[e(x_{1}),e(x_{2}),...,e(x_{T})]\) to replace the update on entire model. The prompt length is \(L\), \(e\) represents the embedding layer in LM, and \(p_{i}\in\mathbb{R}^{d}\) has the same dimension \(d\) as the token embedding. During prompt tuning, the concatenated matrix \([\mathbf{P};\mathbf{E}]\in\mathbb{R}^{(L+T)\times d}\) is used as the input to the LM, expressed as \(B(\mathbf{x},\mathbf{P})\). The downstream task optimization is represented as \(\ell_{down}(\mathbf{x},\mathbf{P})=-\log p\left(y\mid\mathbf{x},\mathbf{P} \right)=-\log g_{down}\left(B(\mathbf{x},\mathbf{P})\right)\), where \(g_{down}\) is the prediction layer for the task and the model \(B\) does not update in conventional soft prompt learning.
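A minimal sketch of soft prompt tuning, assuming a generic Transformer backbone whose token embeddings are available as a tensor; the prompt length, embedding dimension, and initialisation scale are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SoftPrompt(nn.Module):
    """Trainable prompt P = [p_1, ..., p_L] prepended to the token embeddings E."""
    def __init__(self, prompt_len: int = 50, dim: int = 768):
        super().__init__()
        self.prompt = nn.Parameter(torch.randn(prompt_len, dim) * 0.02)

    def forward(self, token_embeds: torch.Tensor) -> torch.Tensor:
        # token_embeds: (batch, T, d)  ->  concatenated input [P; E]: (batch, L+T, d)
        batch = token_embeds.size(0)
        p = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        return torch.cat([p, token_embeds], dim=1)
```

During conventional prompt tuning, only `self.prompt` receives gradients while the backbone LM stays frozen.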
### Continual Pre-training for Anytime Fine-tuning
Continual pre-training Jang et al. (2022); Meng et al. (2023) is a way to efficiently adapt to the new domain while maintaining learned knowledge. The problem formulation is as follows (see Fig. 1): assume a stream of new domains (e.g., _latest news about "Omicron"_) sequentially appears as \(\mathcal{D}_{1},...,\mathcal{D}_{N}\), where \(\mathcal{D}_{i}\) is the distribution of \(i\)-th domain over a finite vocabulary of tokens \(\mathcal{X}\). Initially, we have an LM that has been well pre-trained on the general corpus \(C_{0}\), such as Roberta. Then at each stage \(i\), a collection of new unlabeled corpus \(C_{i}=\{\mathbf{x}\mid\mathbf{x}\in\mathcal{D}_{i}\}\) is obtained. The existing LM continually pre-trains to learn the new knowledge from \(\mathcal{D}_{i}\), with the goal of improving performance for _anytime fine-tuning_, where the LM is expected
to get greater capacity when fine-tuned on tasks from all pre-trained, current, and unseen domains.
Each domain has its labeled dataset \(D_{i}=\{(\mathbf{x},y)\mid y=F^{*}(\mathbf{x}),\mathbf{x}\in\mathcal{D}_{i}\}\), where \(F^{*}\) maps each sample to its ground-truth label in \(\mathcal{Y}\) for classification. During the evaluation, the LM \(B^{i}\), pre-trained up to the \(i\)-th domain, is fine-tuned on a train set \(D_{j}^{tr}\) and then tested on \(D_{j}^{te}\) to measure its domain performance, as illustrated in Fig. 1. The resulting accuracy, denoted as \(Acc_{D_{j}}^{B^{i}}\) (simplified as \(a_{j}^{i}\)), indicates the model capacity on task \(D_{j}\) as well as the degree of knowledge of the \(j\)-th domain maintained by the LM after being sequentially trained up to \(C_{i}\).
Through the integration of results, an accuracy table is generated, allowing for the computation of three crucial metrics in anytime fine-tuning as discussed in Sec. 1: adaptability, generalization, and forgetting. The values used to calculate these metrics are indicated by different colors in Fig. 1. Red cells along the diagonal of the table represent adaptability, indicating the degree to which the LM learns knowledge relevant to current domain. Yellow cells in the upper triangle represent generalization, signifying the ability to perform effectively in future domains. Blue cells in the lower triangle represent forgetting, reflecting a reduction in previously learned knowledge during training.
## 4 Method
A successful algorithm of continual pre-training for anytime fine-tuning should meet the following requirements: (1) effective adaptation to the current domain and capturing more domain knowledge, (2) strong generalization to tasks in unseen domains, and (3) minimal catastrophic forgetting of previously learned knowledge. To achieve this, we propose a framework, dubbed HPrompt-CPT, which consists of two components: the _Hnet-Prompt_ module and _Agreement_ and _Disagreement_ losses. The overview is presented in Fig. 3.
### Hnet-Prompt for Pre-training and Fine-tuning
Previous soft prompt methods Qin and Joty (2022); Zhu et al. (2022); Razdaibiedina et al. (2023) have achieved great success in CL, with almost no catastrophic forgetting. However, these parameter-efficient methods fall short in model adaptation during the pre-training stage and fail to exhibit generalization capabilities when faced with new domains, as shown in Fig. 2. On the other hand, prompt engineering has shown exceptional performance in pre-training language models to better learn domain-specific knowledge Radford et al. (2019); Brown et al. (2020). However, the use of hard-coded prompts makes this approach difficult to apply automatically and less conducive to generalization.
Therefore, inspired by previous meta-learning approaches Qiao et al. (2018); Yao et al. (2019), we propose a prompt module with a meta hypernetwork (Hnet-Prompt) for automatic knowledge adaptation and cross-domain generalization. Specifically, when a batch of data \(\left[\mathbf{x}^{1},...,\mathbf{x}^{n}\right]\) in a specific domain \(\mathcal{D}_{i}\) comes, the hypernetwork generates a prompt \(\mathbf{P}\) for each sample (see Fig. 3(b)), taking into account both domain and sample properties while generalizing knowledge from learned domains. The process is parameterized as:
\[\mathbf{P}^{i}=F(\hat{\mathbf{h}^{i}})=F(E(\mathbf{x}^{i})), \tag{1}\]
Figure 3: An overview of the model structure, with dotted lines indicating trainable modules and solid lines indicating frozen modules. (a) denotes the soft prompt tuning (Sec. 3.2). (b) shows the pre-training on domain 4 with the hnet-prompt module (Sec. 4.1). The hypernetwork takes the contextual embedding \(\hat{h}\) as input and automatically generates a prompt \(\mathbf{P}\) considering domain and sample properties, which clusters \(\mathbf{P}\) for similar domains (\(\mathcal{D}_{2}\),\(\mathcal{D}_{3}\),\(\mathcal{D}_{4}\)) together and facilitates knowledge generalization. (c) computes the agreement and disagreement losses (Sec. 4.2).
where \(E\) refers to a text encoder, \(F\) corresponds to a hypernetwork, and \(\hat{\mathbf{h}}^{i}\) represents the contextual embedding, which captures both the sentence and implicit domain information.
Hypernetwork \(F\) encodes the domain feature of the input samples (we use a 6-layer Transformer) and then projects the pooled feature to obtain the prompt (see Fig. 3(b)). Rather than directly generating the prompt, we set \(M\) prompt components \(\mathbf{V}_{m}\in\mathbb{R}^{L\times d}\) and generate a weight vector \(\alpha\in\mathbb{R}^{M}\) to get the final prompt \(\mathbf{P}=\sum_{m=1}^{M}\alpha_{m}\mathbf{V}_{m}\). The vector \(\alpha\) controls the contribution of each prompt component, which corresponds to a basic domain. This approach reduces the number of parameters in the projection layer and alleviates forgetting by shifting the learning problem from remembering the entire prompt embedding to remembering a weight vector.
Prompt components \(\mathbf{V}\), analogous to a set of basis vectors, are a set of prompt embeddings that are randomly initialized, trainable and optimized through gradient descent. The well-trained prompt components are supposed to offer greater generalization to future domains as long as the prompt components are as mutually exclusive as possible. For example, a prompt embedding directly optimized for the domain of "ACL papers" does not directly apply to the domain of "AI papers" due to the domain difference; however, one of the prompt components learned on "ACL papers", e.g., "deep learning", can be combined with another component of "statistics" to generalize to the domain of "AI papers".
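The following sketch shows one plausible realisation of the hnet-prompt module in PyTorch: a Transformer encoder pools the contextual embedding, a linear head produces the weight vector \(\alpha\), and the prompt is the weighted sum of the \(M\) prompt components. The softmax over \(\alpha\), the mean-pooling, and the head count are assumptions not specified in the text.

```python
import torch
import torch.nn as nn

class HnetPrompt(nn.Module):
    """Generates a prompt P = sum_m alpha_m * V_m from the contextual embedding h."""
    def __init__(self, dim=768, prompt_len=50, n_components=100, n_layers=6, n_heads=8):
        super().__init__()
        enc_layer = nn.TransformerEncoderLayer(d_model=dim, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=n_layers)
        self.to_alpha = nn.Linear(dim, n_components)        # produces the weight vector alpha
        self.components = nn.Parameter(                      # prompt components V_m
            torch.randn(n_components, prompt_len, dim) * 0.02)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: contextual embedding from the frozen text encoder, shape (batch, T, d)
        pooled = self.encoder(h).mean(dim=1)                 # domain/sample feature
        alpha = torch.softmax(self.to_alpha(pooled), dim=-1) # (batch, M)
        return torch.einsum('bm,mld->bld', alpha, self.components)  # prompt (batch, L, d)
```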
During pre-training, the language model is conditioned on the prompt generated by the hypernetwork, which models \(p(output\mid input,domain)\) and injects the domain knowledge into the model in an explicit way. Then, we optimize the language model and hypernetwork in an end-to-end manner by minimizing the following equation:
\[\begin{split}\ell_{mlm}&(\mathbf{x},\mathcal{W}, \Theta)=\\ &-\sum_{\hat{x}\in m(\mathbf{x})}\log p\left(\hat{x}\mid\mathbf{ x}_{\backslash m(\mathbf{x})},\mathcal{W},\Theta\right),\end{split} \tag{2}\]
where \(p(\cdot)=g_{mlm}\left(B_{\mathcal{W}}\left(\mathbf{x}_{\backslash m(\mathbf{x })},F_{\Theta}\left(\mathbf{x}_{\backslash m(\mathbf{x})}\right)\right)\right)\) and \(\Theta\) is the parameter of \(F\). This approach allows for qualified and automatic adaptation to domain knowledge and enables the transfer of this knowledge across domains through hypernetwork.
During downstream task fine-tuning, the domain identity is not required anymore. The hypernetwork automatically maps the input samples to their unique prompt embedding with the knowledge generalized from learned domains. Given a task \(t\), the entire model is fine-tuned on the smaller labeled dataset, using the objective \(\ell_{down}(\mathbf{x},\mathcal{W},\Theta)=-\log p\left(y\mid\mathbf{x},\mathcal{W},\Theta\right)\). Here the hypernetwork \(F\) is also trainable to achieve the best adaptation to downstream tasks. The fine-tuned performance on the task shows the degree of domain knowledge maintained by the LM.
### Agreement and Disagreement Losses for Prompted Language Model
While preventing the forgetting of learned knowledge is always the key challenge in continual pre-training, existing remedies typically come at the cost of adaptability and generalization. To overcome this, we propose a novel approach, named the agreement and disagreement losses.
**Agreement loss**. While knowledge distillation (KD) has been demonstrated to perform well in overcoming CF Chuang et al. (2020); Dong et al. (2021), its alignment on the entire feature space can limit the adaptation to new domains. To alleviate this, we propose to align the output \(p(output\mid input,domain)\) of the prompted language model instead of the \(p(output\mid input)\) used in conventional KD. We term this approach the _agreement loss_. Specifically, we begin with the previously learned LM \(B^{i-1}\). Then, we initialize a random prompt \(\mathbf{P}_{rand}\) and generate prompted hidden states using both the current LM \(B^{i}\) and the previous LM \(B^{i-1}\) (see Fig. 3(c)). We then minimize the distance metric \(\mathcal{M}\) between the outputs of the two models, as shown below:
\[\begin{split}\ell_{a}(\mathbf{x},\mathcal{W})=\mathcal{M}[B^{i-1} (\mathbf{x},\mathbf{P}_{rand}),\\ B^{i}_{\mathcal{W}}(\mathbf{x},\mathbf{P}_{rand})],\end{split} \tag{3}\]
where \(\mathbf{P}_{rand}\) simulates the condition to activate generic or learned domain knowledge. The agreement loss, which operates on \(B(\cdot,\mathbf{P}_{rand})\), effectively prevents forgetting by enforcing consistency on multiple randomized conditions and preserves the plasticity to new domains by maintaining the model capacity conditioned on other prompts, as demonstrated by a comparison to KD. A smaller \(\mathcal{M}\) indicates a closer distance between the two inputs. In this article, we use cosine similarity to calculate \(\mathcal{M}\), which performs better than the KL distance between logits in the experiments in Sec. 5.4.
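A sketch of the agreement loss of Eq. (3), assuming `curr_lm` and `prev_lm` are callables that take the concatenated \([\mathbf{P};\mathbf{E}]\) embeddings and return last-layer hidden states; the token-wise cosine distance follows the choice of \(\mathcal{M}\) stated above, while the shapes are assumptions.

```python
import torch
import torch.nn.functional as F

def agreement_loss(curr_lm, prev_lm, input_embeds, prompt_len=50, dim=768):
    """Eq. (3): enforce consistency between the current and previous LM
    when both are conditioned on the same random prompt P_rand."""
    p_rand = torch.randn(input_embeds.size(0), prompt_len, dim,
                         device=input_embeds.device)
    prompted = torch.cat([p_rand, input_embeds], dim=1)     # [P_rand; E]
    with torch.no_grad():
        h_prev = prev_lm(prompted)                           # B^{i-1}(x, P_rand), frozen
    h_curr = curr_lm(prompted)                               # B^{i}(x, P_rand), trainable
    # distance metric M: 1 - cosine similarity, averaged over tokens and batch
    return (1.0 - F.cosine_similarity(h_curr, h_prev, dim=-1)).mean()
```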
**Disagreement loss**. Besides the consistency achieved by agreement loss, we also expect the exclusiveness of the generated hidden states for the current domain. It brings two advantages: (1) it reduces interference to established knowledge, which
mitigates forgetting (Farajtabar et al., 2020; Wang et al., 2021); (2) it encourages generalization when fine-tuning by incorporating a wider range of domain knowledge (Pagliardini et al., 2023). To achieve this exclusiveness, we add a loss function called the _disagreement loss_. Specifically, when a sample comes, we generate the prompt using the hypernetwork \(F\) and train the prompted LM to maximally disagree with the output of the previous LM, which is also prompted by the same embedding (see Fig. 3(c)). This involves minimizing the agreement metric \(\mathcal{A}(\cdot,\cdot)\) to push apart the two prompted hidden states:
\[\ell_{da}(\mathbf{x},\mathcal{W},\Theta)=\mathcal{A}(B^{i-1}( \mathbf{x},F(\mathbf{x})), \tag{4}\] \[B^{i}_{\mathcal{W}}(\mathbf{x},F_{\Theta}(\mathbf{x}))),\]
thereby increasing the exclusiveness of the output of the LM for the current domain. In Sec. 5.4, we compare various implementations of \(\mathcal{A}\), including the orthogonal constraint (Smith et al., 2023), a softmax variant (Pagliardini et al., 2023), and the negative KL-divergence. Ultimately, we select the orthogonal constraint, which can be calculated using the equation \(\mathcal{A}_{ortho}(\mathbf{X},\mathbf{Y})=||\mathbf{X}\mathbf{Y}^{T}-\mathbf{I}||\).
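One plausible reading of Eq. (4) with the orthogonal constraint is sketched below; whether \(\mathbf{X}\) and \(\mathbf{Y}\) are token-level or pooled hidden states is not fully specified in the text, so the token-level, per-sample form here is an assumption.

```python
import torch

def disagreement_loss(h_curr: torch.Tensor, h_prev: torch.Tensor) -> torch.Tensor:
    """Eq. (4) with the orthogonal constraint A_ortho(X, Y) = ||X Y^T - I||,
    applied to the hidden states of the current LM (trainable) and the previous
    LM (frozen) when both are conditioned on the hypernetwork prompt F(x)."""
    # h_curr, h_prev: (T, d) token-level hidden states of one sample, L2-normalised per token
    x = torch.nn.functional.normalize(h_curr, dim=-1)
    y = torch.nn.functional.normalize(h_prev, dim=-1)
    gram = x @ y.t()                                    # (T, T) cross-correlation matrix
    eye = torch.eye(gram.size(0), device=gram.device)
    return torch.norm(gram - eye, p='fro')              # minimised during pre-training
```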
Finally, the loss function of our HPrompt-CPT during pre-training can be summarized as follows:
\[\mathcal{L}=\sum_{i=1}^{N}\ell_{mlm}+\lambda_{1}\ell_{a}+\lambda_{2}\ell_{da}, \tag{5}\]
where \(N\) is the batch size, and \(\lambda_{1},\lambda_{2}\) are the trade-off hyper-parameters. The loss input \(\mathbf{x}_{i}\) is omitted.
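Putting the pieces together, a hypothetical pre-training step following Eq. (5) might look as follows; `prompted_mlm_loss` and the \(\lambda\) values are placeholders, and the other helpers refer to the sketches above.

```python
import torch

def pretrain_step(batch_embeds, curr_lm, prev_lm, hnet, lambda_1=0.1, lambda_2=0.05):
    prompt = hnet(batch_embeds)                               # P = F(x), Sec. 4.1
    l_mlm = prompted_mlm_loss(curr_lm, batch_embeds, prompt)  # Eq. (2), hypothetical helper
    l_a = agreement_loss(curr_lm, prev_lm, batch_embeds)      # Eq. (3), sketch above
    prompted = torch.cat([prompt, batch_embeds], dim=1)
    with torch.no_grad():
        h_prev = prev_lm(prompted)
    h_curr = curr_lm(prompted)
    l_da = disagreement_loss(h_curr[0], h_prev[0])            # Eq. (4), per-sample sketch
    return l_mlm + lambda_1 * l_a + lambda_2 * l_da           # Eq. (5)
```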
## 5 Experiment
In this section, we conduct experiments on two benchmarks to investigate the adaptability, generalization, and degree of forgetting of HPrompt-CPT.
### Benchmarks
**DAPset.** It is a benchmark for continual domain adaptive pre-training, originally constructed by (Ke et al., 2023). It consists of six domains, each with an unlabeled corpus and a corresponding end-task classification dataset. Each domain contains a corpus size of over 100 million tokens, and we follow the original data construction and task order.
**TWEET.** We develop a new benchmark based on a tweet dataset (Jin et al., 2022) to simulate the distribution shift over time. The dataset includes tweets from 2015 to 2019 and is split into five time periods to form five domain corpora, each with over 50 million tokens. The tweet texts are pre-processed following Nguyen et al. (2020). For the downstream task, we build a single-label hashtag prediction dataset for each domain following Gong and Zhang (2016). TWEET keeps the chronological order of domains to simulate the updating in the real-world system. Please refer to Appendix B for more information about the two benchmarks.
### Metrics and Baselines
**Metrics.** We introduced three attributes of continual pre-training in Sec. 3.3 and explain their evaluation here. Formally, we utilize the adaptation accuracy \(A\_Acc=\frac{1}{T}\sum_{i=1}^{T}a_{i}^{i}\) to measure adaptability, the out-of-domain accuracy \(O\_Acc=\frac{2}{T*(T-1)}\sum_{i=1}^{T}\sum_{j=i+1}^{T}a_{j}^{i}\) to evaluate generalization, and the final accuracy \(F\_Acc=\frac{1}{T}\sum_{i=1}^{T}a_{i}^{T}\) to assess the degree of catastrophic forgetting. Here, \(a_{i}^{j}\) represents the fine-tuned accuracy on the \(i\)-th downstream task after the LM has been sequentially trained up to corpus \(C_{j}\) of the \(j\)-th domain.
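These three metrics can be read directly off the accuracy table; a small sketch, assuming the table is stored as a NumPy array indexed as `a[i, j]` = accuracy on task \(D_{j}\) after pre-training up to domain \(i\) (0-indexed):

```python
import numpy as np

def anytime_finetuning_metrics(a: np.ndarray):
    """a[i, j]: accuracy on task D_j after pre-training up to domain i (a_j^i)."""
    T = a.shape[0]
    A_acc = np.mean([a[i, i] for i in range(T)])                           # adaptability
    O_acc = np.mean([a[i, j] for i in range(T) for j in range(i + 1, T)])  # generalization
    F_acc = np.mean(a[T - 1, :])                                           # final accuracy
    return A_acc, O_acc, F_acc
```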
**Baselines.** We first evaluate algorithms that build a _separate model_ for each domain, including: (1) **Initial** is fine-tuned on the initial
\begin{table}
\begin{tabular}{l|l|c c c|c c c} \hline \hline
\multirow{2}{*}{**Setting**} & \multirow{2}{*}{**Method**} & \multicolumn{3}{c|}{**DAPset**} & \multicolumn{3}{c}{**TWEET**} \\
 & & \(A\_Acc\) & \(O\_Acc\) & \(F\_Acc\) & \(A\_Acc\) & \(O\_Acc\) & \(F\_Acc\) \\ \hline \hline
\multirow{5}{*}{Separate Pre-training} & Initial & 0.8053 \(\pm\) 0.010 & 0.8171 \(\pm\) 0.010 & - & 0.7933 \(\pm\) 0.001 & 0.7935 \(\pm\) 0.001 & - \\
 & Multi-Task & 0.8203 \(\pm\) 0.002 & **0.8299** \(\pm\) 0.005 & - & 0.8014 \(\pm\) 0.002 & 0.8047 \(\pm\) 0.001 & - \\
 & One-Full & 0.8235 \(\pm\) 0.007 & 0.8174 \(\pm\) 0.008 & - & 0.8037 \(\pm\) 0.001 & 0.8064 \(\pm\) 0.001 & - \\
 & One-Adapter & 0.8060 \(\pm\) 0.008 & 0.8172 \(\pm\) 0.003 & - & 0.7913 \(\pm\) 0.002 & 0.7915 \(\pm\) 0.003 & - \\
 & One-Prompt & 0.8101 \(\pm\) 0.012 & 0.8109 \(\pm\) 0.012 & - & 0.7873 \(\pm\) 0.002 & 0.7876 \(\pm\) 0.002 & - \\ \hline
\multirow{7}{*}{Continual Pre-training} & NCL & 0.8298 \(\pm\) 0.005 & 0.8189 \(\pm\) 0.006 & 0.8198 \(\pm\) 0.005 & 0.8108 \(\pm\) 0.002 & 0.8094 \(\pm\) 0.001 & 0.8079 \(\pm\) 0.001 \\
 & EWC & 0.8082 \(\pm\) 0.004 & 0.8109 \(\pm\) 0.003 & 0.8020 \(\pm\) 0.003 & 0.8028 \(\pm\) 0.001 & 0.8048 \(\pm\) 0.001 & 0.8037 \(\pm\) 0.001 \\
 & DERpp & 0.8245 \(\pm\) 0.002 & 0.8174 \(\pm\) 0.004 & 0.8239 \(\pm\) 0.001 & 0.8102 \(\pm\) 0.001 & 0.8087 \(\pm\) 0.001 & 0.8118 \(\pm\) 0.001 \\
 & LwF & 0.8239 \(\pm\) 0.003 & 0.8292 \(\pm\) 0.006 & 0.8179 \(\pm\) 0.006 & 0.8021 \(\pm\) 0.002 & 0.7986 \(\pm\) 0.002 & 0.8082 \(\pm\) 0.001 \\
 & CoDA-Prompt & 0.8141 \(\pm\) 0.002 & 0.8161 \(\pm\) 0.004 & 0.8176 \(\pm\) 0.004 & 0.7931 \(\pm\) 0.001 & 0.7954 \(\pm\) 0.001 & 0.7958 \(\pm\) 0.001 \\
 & DAS & 0.8221 \(\pm\) 0.004 & 0.8164 \(\pm\) 0.001 & 0.8251 \(\pm\) 0.006 & 0.8066 \(\pm\) 0.001 & 0.8078 \(\pm\) 0.001 & 0.8099 \(\pm\) 0.003 \\
 & Ours & **0.8356** \(\pm\) 0.002 & 0.8277 \(\pm\) 0.003 & **0.8341** \(\pm\) 0.003 & **0.8186** \(\pm\) 0.001 & **0.8168** \(\pm\) 0.002 & **0.8203** \(\pm\) 0.001 \\ \hline \hline
\end{tabular}
\end{table}
Table 1: Performance of baseline results on the DAPset/TWEET benchmarks (all results reported in this paper are averaged over 4 random seeds). The symbol “-” indicates that \(F\_Acc\) equals the average accuracy \(A\_Acc\) in the separate pre-training settings. _We also report the results for different domain orders in Appendix D._
pre-trained point. (2) **Multi-Task** is domain-adaptively pre-trained on the mixture of all domains. (3) **One-Full** is domain-adaptively pre-trained with the updates on the full model. (4) **One-Adapter** is domain-adaptively pre-trained with an adapter layer Houlsby et al. (2019). (5) **One-Prompt** is domain-adaptively pre-trained with a new prompt Lester et al. (2021). Additionally, we test 7 _continual pre-training_ methods: (6) **NCL** is sequentially pre-trained without any CL methods. (7) **EWC** Kirkpatrick et al. (2017) is a regularization method that penalizes changes to important neurons. (8) **DERpp**Buzzega et al. (2020) is a replay method in both sample and feature levels. (9) **LwF**Li and Hoiem (2017) uses knowledge distillation to protect previous predictions. (10) **CoDA-Prompt**Smith et al. (2023) uses a set of prompt components to learn domain-specific knowledge. (11) **DAS**Ke et al. (2023) is a parameter-isolation method which adopts soft-masking.
For HPrompt-CPT, we adopt a 6-layer Transformer as our hypernetwork and a frozen Roberta as the text encoder. We set the prompt length to 50 and the number of prompt components to 100. In addition, we apply a replay loss to the hypernetwork with a memory buffer storing 300 samples to obtain the best performance, while removing it results in a minimal drop of 0.24% in \(F\_Acc\) on DAPset. During fine-tuning, we train each task for 15 epochs with an early stopping mechanism using the validation data (30% of the testing data). We include additional _Implementation Details_ in Appendix C.
### Results and Analysis
**Comparison with the state-of-the-art.** Table 1 shows the continual pre-training performance of different methods on three dimensions. From these results, we make the following observations:
_Observation 1:_ HPrompt-CPT outperforms baselines in terms of adaptability, generalization, and avoidance of catastrophic forgetting. Our approach achieves new state-of-the-art results across all three metrics, with increases of 1.38% and 1.09% on the DAPset in terms of generalization and final performance compared to the most recent algorithm, DAS, as depicted in the last row of Table 1. These results highlight the advantages of injecting domain knowledge into the LM with the hnet-prompt module, which aids in adaptation and promotes knowledge transfer.
_Observation 2:_ Naive multi-task learning is sub-optimal for continual pre-training. Our hnet-prompt method achieves a relative improvement in \(F\_Acc\) of 1.69% on DAPset and 2.35% on TWEET, suggesting that it can alleviate negative transfer between conflicting domains and minimize forgetting. It is worth noting that the \(O\_Acc\) metric of multi-task learning cannot be compared fairly with other algorithms since it has already observed all domains. Nevertheless, our algorithm still achieves a 1.50% gain on TWEET, which may result from the generalization of the diverse domain knowledge in HPrompt-CPT.
_Observation 3:_ Full model tuning achieves better results in learning and transferring domain knowledge. Our proposed method and NCL outperform parameter-efficient methods such as One-Adapter, One-Prompt, and CoDA-Prompt. Interestingly, methods that incorporate regularization terms on parts of neurons, such as EWC and DAS, also result in lower \(A\_Acc\). This suggests that injecting a large amount of domain knowledge into the LM requires a sufficient number of trainable parameters. Our prompted LM, with all parameters trainable and no empirical constraints on updates, shows the best adaptation performance.
**Data-efficient pre-training.** Recall that we hypothesize HPrompt-CPT to be especially effective in the setting of anytime fine-tuning. Its performance when pre-trained on only a small subset of the corpus is therefore of interest, since the model may need to be fine-tuned before training on a domain has finished. Fig. 4 illustrates the performance when trained on datasets of different sizes and highlights the effectiveness of our method in low-resource environments, particularly in terms of generalization ability. Our design of the hnet-prompt module successfully promotes knowledge transfer across domains; in addition, we observe
Figure 4: Performances on DAPset with different sizes of the corpus. The implementations of “ours (trans/lin)” refer to utilizing transformer/linear hypernetwork in HPrompt-CPT, respectively.
that the structure of the hypernetwork matters in such settings: Transformer hypernetworks may underfit when facing smaller datasets, resulting in poorer performance than the linear structure.
**Analysis on the distributions of hnet-prompt embeddings and hidden states.** We perform qualitative analyses on the prompts and hidden states generated by HPrompt-CPT to investigate whether the hypernetwork can generalize domain information. As depicted in Fig. 5, we use t-SNE maps (van der Maaten and Hinton, 2008) to visualize the model output before and after training on all six domains in DAPset. For the prompts, we observe that the generated prompt embeddings effectively cluster similar domains together (e.g., overlapping embeddings for corpora \(C_{2}\), \(C_{3}\), and \(C_{5}\) from the same paper dataset) while also differentiating dissimilar domains (e.g., distant embeddings for \(C_{1}\) (restaurant) and \(C_{5}\) (bio-chem)). This is an impressive result, i.e., the module transfers information across domains, making it easier for the LM to effectively adapt and generalize knowledge.
For the hidden states, our model generates distinguishable representations for downstream tasks based on pre-trained domain information, i.e., the initially mixed downstream representations (\(D_{1}\) - \(D_{6}\) in Fig. 5, top right) are successfully separated in Fig. 5, top left. For instance, the model assigns overlapping representations to the similar tasks \(D_{2}\) and \(D_{3}\) (belonging to ACL and AI, respectively), while providing effective differentiation for the unrelated tasks \(D_{1}\) (restaurant) and \(D_{5}\) (biology).
### Ablation Study
Table 2 and 3 present the results of different designs of HPrompt-CPT on DAPset, where hyper-parameters are fixed across all settings.
**Effectiveness of the main components.** To assess the impact of the hypernetwork, we replace the hnet-prompt with ProgPrompt (Razdaibiedina et al., 2023), which generates a new soft prompt for each domain and concatenates it with the previously learned prompts, while requiring the domain-id during fine-tuning. As shown in Table 2 (rows 1 and 3), this results in a significant decrease in performance, particularly in adaptability, with an almost 1.77% drop. It highlights the effectiveness of the hnet-prompt in adapting and generalizing domain knowledge, providing greater capacity for fine-tuning.
To examine the effect of the agreement and disagreement losses, we compare the results of training the progressive prompt and the hnet-prompt with and without them. Incorporating the agreement and disagreement losses leads to a 1.15% and 1.20% improvement in \(F\_Acc\) for the two models, respectively, demonstrating their efficiency in preventing CF. Furthermore, we observe that introducing the disagreement loss results in a 1.33% gain in \(O\_Acc\), which is attributed to the incorporation of a wider range of domain knowledge for adaptation, as discussed in Sec. 4.2.
**Hypernetwork structure.** We further investigate different designs of the hypernetwork and present the results in Table 3 (top). First, we compare the network structure with a Linear layer or a Multilayer Perceptron (MLP) (the top two rows), but both show poor adaptability and a higher level of CF. Interestingly, we find that the linear structure is more stable in a low-resource setting. Besides, we examine the performance of generating the prompt embedding directly, to show the significance of the component-based method introduced in Sec. 4.1. The results reveal that the component-based approach is superior in generalization and in preventing forgetting, benefiting from shifting the learning problem from remembering the full prompt to remembering a weight vector, which is a simpler task.
**Agreement and disagreement loss objective.** We first replace the agreement loss with conventional KD and the results are presented in the first row of Table 3 (middle). It shows that the agreement loss leads to a 1.06% improvement in adaptability while
\begin{table}
\begin{tabular}{c c c|c c c} \hline \hline Hypernetwork & \(\ell_{a}\) & \(\ell_{da}\) & \(A\_Acc\) & \(O\_Acc\) & \(F\_Acc\) \\ \hline \hline ✗ & ✗ & ✗ & 0.8165 & 0.8066 & 0.8114 \\ ✗ & ✓ & ✓ & 0.8223 & 0.8149 & 0.8208 \\ ✓ & ✗ & ✗ & 0.8312 & 0.8176 & 0.8242 \\ ✓ & ✓ & ✗ & 0.8307 & 0.8168 & 0.8297 \\ ✓ & ✗ & ✓ & 0.8335 & 0.8235 & 0.8280 \\ ✓ & ✓ & ✓ & 0.8356 & 0.8277 & 0.8341 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Ablation results on the main components.
Figure 5: The t-sne map about prompt embedding and hidden state of the last layer. \(C_{i}\) and \(D_{i}\) denote the corpus and downstream task in \(i\)-th domain, respectively.
maintaining its ability to avoid forgetting, demonstrating its advantage in striking a balance of stability and plasticity for LM. Then, as it is unclear what kinds of objectives are most suitable to overcome forgetting, we test various objective functions for agreement and disagreement losses in Table 3 (middle). Ultimately, minimizing the KL-divergence of randomly prompted hidden states (agreement loss) and minimizing the orthogonal distance of current hidden states (disagreement loss) yield the best final performance of 83.41%.
## 6 Conclusion
This paper introduces HPrompt-CPT, a novel prompt-guided continual pre-training method towards anytime fine-tuning, which enables better performance when fine-tuned on seen and unseen domains. By training a hypernetwork to generate domain-specific prompts with the agreement and disagreement losses, it achieves (i) greater capacity on pre-trained domains by learning domain knowledge with generated prompts while preserving previous knowledge with random prompts, (ii) improved performance on unseen domains by retaining model plasticity with the agreement loss and transferring knowledge with the hypernetwork, and (iii) no need for a domain-id during fine-tuning. We set a new SOTA on both a well-established benchmark and a temporal-shift benchmark.
## 7 Limitations
While we have evaluated our approach on two continual pre-training benchmarks, it remains unknown how well our method would perform on benchmarks with severe domain conflicts. The domains in the benchmarks used in our paper are mostly transferable to each other; for example, the domains "ACL" and "AI" in DAPset are highly related. We are not sure how our method will perform on a sequence of domains with little to no shared knowledge or even conflicts. In addition, we currently only test our method on classification tasks, while the exploration of more types of downstream tasks is also important. Our future work will extend the benchmark to cover such cases.
Another open problem for HPrompt-CPT is the selection of the hypernetwork. Our experiments in Sec. 5.3 demonstrate that decreasing the size of the unlabeled corpus can cause the Transformer structure to underfit, while the Linear structure cannot capture all the information in a large corpus. In addition, we find that fine-tuning the hypernetwork is sensitive to the learning rate and weight decay. We aim to enhance the capacity and stability of our hypernetwork. Ideally, the hypernetwork would generalize well to downstream tasks without fine-tuning.
|
2304.11974 | A Closed-form Expression for the Gaussian Noise Model in the Presence of
Raman Amplification | A closed-form model for the nonlinear interference (NLI) in Raman amplified
links is presented, the formula accounts for both forward (FW) and backward
(BW) pumping schemes and inter-channel stimulated Raman scattering (ISRS)
effect. The formula also accounts for an arbitrary number of pumps,
wavelength-dependent fibre parameters, launch-power profiles, and is tested
over a distributed Raman-amplified system setup. The formula is suitable for
ultra-wideband (UWB) optical transmission systems and is applied in a signal
with 13~THz optical bandwidth corresponding to transmission over the S-, C-,
and L- band. The accuracy of the closed-form formula is validated through
comparison with numerical integration of the Gaussian noise (GN) model and
split-step Fourier method (SSFM) simulations in a point-to-point transmission
link. | H. Buglia, M. Jarmolovicius, L. Galdino, R. I. Killey, P. Bayvel | 2023-04-24T10:15:03Z | http://arxiv.org/abs/2304.11974v4 | # A Closed-form Expression for the Gaussian Noise Model in the Presence of Raman Amplification
###### Abstract
A closed-form model for the nonlinear interference (NLI) in Raman amplified links is presented; the formula accounts for both forward (FW) and backward (BW) pumping schemes and the inter-channel stimulated Raman scattering (ISRS) effect. The formula also accounts for an arbitrary number of pumps, discrete or distributed Raman amplification setups, wavelength-dependent fibre parameters, and launch power profiles. The formula is suitable for ultra-wideband (UWB) optical transmission systems and is applied to a system with 13 THz optical bandwidth corresponding to transmission over the S-, C-, and L-bands. The accuracy of the closed-form formula is validated through comparison with numerical integration of the Gaussian noise (GN) model and split-step Fourier method (SSFM) simulations in a point-to-point transmission link.
Ultra-wideband transmission, Raman amplification, S+C+ band transmission, closed-form approximation, Gaussian noise model, nonlinear interference, nonlinear distortion, optical fiber communications, inter-channel stimulated Raman scattering
## I Introduction
To cope with the exponential growth of data transmission required by internet services such as high-definition video streaming, cloud computing, artificial intelligence, Big Data and the Internet of Things, new technologies such as UWB transmission and space-division multiplexing (SDM) have been widely explored in recent years [1, 2, 3]. For UWB transmission systems, exploring the low-loss wavelength window of a silica-based optical fibre, as shown in Fig. 1, requires the utilisation of new amplifier technologies in addition to Erbium-doped fibre amplifiers (EDFAs). Among these, we can cite Thulium and Bismuth doped fibre amplifiers (TDFAs and BDFAs), semiconductor optical amplifiers (SOA) and Raman amplifiers.
Recently, a wide range of works has shown the benefits of using Raman amplification (RA) to achieve higher throughputs [4, 5, 6, 7, 8, 9, 10, 11, 12, 13]. RA can be divided into two types, namely distributed RA and discrete RA. For the former, the pumps are injected into the transmission fibre, while for the latter a separate fibre is used as the amplification stage. In both cases, the pumps interact with the signal to provide the desired signal amplification.
Together with new amplifier technologies, the key goals in optical network design are to maximise system throughput and introduce intelligence in the network, delivering capacity when and where it is needed [14, 15]. To that purpose, real-time estimation of the UWB system performance is essential, as it enables efficient and rapid system design, online network optimisation routines and virtualisation of the physical layer.
Such real-time prediction of UWB optical fibre transmission systems can be achieved via closed-form expressions of the GN model and its extensions [16, 17, 18]. This model offers a simple way of estimating the fibre NLI by treating it as additive Gaussian noise. Numerous closed-form expressions have been proposed to date [19]. Of interest for UWB transmission systems are closed-form expressions for the GN model in the presence of the ISRS effect [18], namely the ISRS GN model. Closed-form expressions of this model were derived in [20, 21, 22, 23, 24, 25, 26, 27].
This work focuses on the derivation of a closed-form formula to estimate the NLI in Raman-amplified links. Apart from [24], the remaining aforementioned closed-form expressions are valid for lumped-amplified links only. Despite the
Fig. 1: Attenuation coefficient (a) and Raman gain spectrum (b) of an ITU-T G652.D fibre.
closed-form formula in [24] being valid for Raman amplified links, it is limited to FW pumping schemes and was tested only over C-band systems. A closed-form formula limited to BW pumping schemes can be found in [28], however, it is only valid for C-band systems and limited to 2\({}^{\text{nd}}\) order Raman amplification, i.e, the utilisation of two or fewer pumps.
In this work, we developed a general closed-form expression of the ISRS GN model [18] supporting both FW-RA and BW-RA in the presence of ISRS, valid for arbitrary-order RA, i.e., an arbitrary number of pumps, and for discrete or distributed RA configurations. This was enabled by deriving for the first time a semi-analytical solution to model the signal profile in the presence of RA and ISRS. The proposed closed-form formulation is valid for Gaussian constellations and is tested in this work using a distributed RA setup. Its accuracy is verified against numerical integration of the ISRS GN model and SSFM simulations.
The closed-form expression presented in this work was first published in [29]. In this work, we extensively discuss its validation and present all the mathematical derivations used to obtain it. We also include a complete discussion on the semi-analytical approach used to obtain an accurate estimation of the fibre signal profile evolution along the fibre distance. This work together with [29] represent the first closed-form expression of the GN model supporting both FW-RA and BW-RA in the presence of ISRS.
## II The signal profile evolution
This section presents the derivation of the semi-analytical expression for the signal power evolution along the fibre distance in the presence of RA and ISRS. The second part of the section demonstrates the accuracy of the proposed approach.
### _The derivation of the closed-form expression for signal profile evolution_
For NLI estimation expressions based on regular perturbation analysis, such as the GN model and its extensions [16, 18, 17], the estimation of the NLI interference is dependent on the signal power profile evolution along the optical fibre distance. Because of this, a fundamental step in deriving any closed-form expression for NLI estimation is to first derive a closed-form expression for the signal power profile evolution.
In the case of C-band systems, such an expression is trivial as the signal power evolution is only loss-dependent [16]. The situation is more tricky in the presence of ISRS as the power of each channel interacts with one another and a set of coupled differential equations must be solved. Analytical expressions for this case were derived in [30, 31]. These expressions are used in [21] (Eq. 16 and 17) to derive a semi-analytical solution of the signal power profile evolution. The solution is semi-analytical because it is further optimised to correctly reproduce the solution of the coupled differential equations.
The situation is even more complicated in the case of RA, where, besides the channel-channel interactions, pump-signal and pump-pump interactions must also be considered, not only in the forward direction but also in the backward one. Indeed, in the case of RA and ISRS, the so-called coupled Raman differential equations must be solved; they are given by
\[\begin{split}\pm\frac{\partial P_{i}}{\partial z}=-\sum_{k=i+1}^ {\text{N}_{\text{a}}}\frac{f_{k}}{f_{i}}g(\Delta f)P_{k}P_{i}-\sum_{p:f_{i}>f _{p}}\frac{f_{p}}{f_{i}}g(\Delta f)P_{p}P_{i}+\\ +\sum_{k=1}^{i-1}g(\Delta f)P_{k}P_{i}+\sum_{p:f_{i}<f_{p}}g( \Delta f)P_{p}P_{i}-\alpha_{i}P_{i},\end{split} \tag{1}\]
where \(P_{i}\) and \(f_{i}\) are the power and frequency of the channel of interest (COI), \(P_{k}\) and \(f_{k}\) are the power and frequency of the remaining WDM channels, \(P_{p}\) and \(f_{p}\) are the power and frequency of the pumps, \(g_{r}(\Delta f)\) is the polarization-averaged Raman gain spectrum, normalized by the effective core area \(A_{\text{eff}}\), for a frequency separation \(\Delta f=|f_{i}-f_{j}|\) with \(j=k,p\), and \(\alpha_{i}\) is the frequency-dependent attenuation coefficient. Note that the symbol \(\pm\) represents the pumping scheme under consideration, i.e., \(+\) for FW-pump and \(-\) for BW-pump configurations. The
Fig. 2: Per-channel launch power evolution along the fibre distance for (a) FW-RA (green) and (b) BW-RA (blue).
pump equations are obtained by replacing \(i=p\) in Eq. (1).
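For reference, a minimal sketch of the right-hand side of Eq. (1) for the purely co-propagating (FW) case is given below, following the convention above that a wave gains power from higher-frequency waves and is depleted by lower-frequency ones with the photon-energy factor; BW pumps turn the problem into a boundary-value problem (typically solved by shooting or iteration) and are not shown. Units and the generic `g_r` gain callable are assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

def raman_rhs_forward(z, P, f, alpha, g_r):
    """Right-hand side of Eq. (1) for co-propagating waves (channels and FW pumps).
    P: powers [W], f: frequencies [Hz], alpha: attenuation [1/m],
    g_r(df): normalised Raman gain [1/(W m)] for a frequency separation df > 0."""
    dP = -alpha * P
    for i in range(len(P)):
        for k in range(len(P)):
            if f[k] > f[i]:        # wave i is amplified by the higher-frequency wave k
                dP[i] += g_r(f[k] - f[i]) * P[k] * P[i]
            elif f[k] < f[i]:      # wave i is depleted by the lower-frequency wave k
                dP[i] -= (f[k] / f[i]) * g_r(f[i] - f[k]) * P[k] * P[i]
    return dP

# Example usage over an 80 km span:
# sol = solve_ivp(raman_rhs_forward, (0.0, 80e3), P0,
#                 args=(f, alpha, g_r), dense_output=True)
```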
The first step in deriving the proposed closed-form expression for NLI estimation in this paper is to find a semi-analytical expression for Eq. (1). To carry out this derivation, we follow [30]; the details are shown in Appendix A. Let \(\rho(z,f_{i})\) be the signal profile evolution normalised by the input power profile, i.e., \(\rho(z,f_{i})=\frac{P(z,f_{i})}{P(0,f_{i})}\). A semi-analytical solution of Eq. (1) is then given by
\[\rho(z,f_{i})=e^{-\alpha_{i}z}[1-(C_{f,i}P_{f}L_{\text{eff}}+C_{b,i}P_{b}\tilde {L}_{\text{eff}})(f_{i}-\hat{f})], \tag{2}\]
where
\[L_{\text{eff}}(z) =(1-e^{-\alpha_{f,i}z})/\alpha_{f,i}\quad,\] \[\tilde{L}_{\text{eff}}(z) =(e^{-\alpha_{b,i}(L-z)}-e^{-\alpha_{b,i}L})/\alpha_{b,i}\quad,\]
\(L\) is the span length, \(\alpha_{i}\), \(\alpha_{f,i}\) and \(\alpha_{b,i}\) are fibre attenuation coefficients, \(\hat{f}\) is the average frequency of the FW and BW pumps, \(P_{f}\) and \(P_{b}\) are the total launch powers of, respectively, the WDM channels together with any FW pumps, and the BW pumps, and \(C_{f,i}\) and \(C_{b,i}\) are the slopes of a linear regression of the normalised Raman gain spectrum. The proof of Eq. (2) is given in Appendix A.
The coefficients \(\alpha_{i}\), \(C_{f,i}\), \(C_{b,i}\), \(\alpha_{f,i}\), and \(\alpha_{b,i}\) are channel-dependent parameters and are matched using nonlinear least-squares fitting to correctly reproduce the solution of the Raman differential equations in the presence of RA, which is obtained by numerically solving Eq. (1). Note that three different loss coefficients (\(\alpha_{i}\), \(\alpha_{f,i}\), and \(\alpha_{b,i}\)) and two different slopes of the Raman gain spectrum (\(C_{f,i}\) and \(C_{b,i}\)) are considered - this enlarges the dimension of the optimisation space. The parameters \(\alpha_{i}\), \(C_{f,i}\), \(C_{b,i}\), \(\alpha_{f,i}\), and \(\alpha_{b,i}\) can be interpreted as modelling, respectively, the fibre loss, the gain/loss due to FW-RA and BW-RA together with ISRS, and how fast the channel gain/loss due to FW-RA and BW-RA together with ISRS extinguishes along the fibre. This fitting optimisation overcomes the restrictive assumptions used to derive Eq. (2) and enables its utilisation in any simulation scenario, such as any number of pumps, launch power profiles and bandwidths.
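A sketch of this per-channel fit, assuming the numerical solution of Eq. (1) has been sampled on a grid `z`; the use of `scipy.optimize.curve_fit`, the units, and the initial guesses are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_channel_profile(z, rho_num, P_f, P_b, f_i, f_hat, L):
    """Fit the five per-channel coefficients of Eq. (2) so that it reproduces
    the numerical solution rho_num(z) of Eq. (1) for the channel at f_i."""
    def rho_model(z, a, a_f, a_b, C_f, C_b):
        L_eff = (1.0 - np.exp(-a_f * z)) / a_f
        L_eff_b = (np.exp(-a_b * (L - z)) - np.exp(-a_b * L)) / a_b
        return np.exp(-a * z) * (1.0 - (C_f * P_f * L_eff + C_b * P_b * L_eff_b) * (f_i - f_hat))
    # Rough initial guess: ~0.2 dB/km loss in 1/m for the three attenuation terms.
    p0 = [0.2 / 4.343e3] * 3 + [0.0, 0.0]
    popt, _ = curve_fit(rho_model, z, rho_num, p0=p0, maxfev=20000)
    return popt   # alpha_i, alpha_{f,i}, alpha_{b,i}, C_{f,i}, C_{b,i}
```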
Semi-analytical approaches were used in [32, 21, 24, 28] to model specific transmission setups. However, other types of approaches are also possible, e.g. [33]. In this paper for the first time, we proposed a general semi-analytical solution to account for any RA setup scenario with ISRS effect. A main difference between this semi-analytical approach and the one in [21], is the utilisation of 5 optimisation coefficients, against 3 for the latter. The 2 additional coefficients are essential to model BW-RA. Note that, our approach is valid for arbitrary-order RA, i.e, an arbitrary number of Raman pumps. The approach is also a generalisation of [21] as it is also valid for lumped amplification - if one sets \(C_{b,i}=0\) and \(\hat{f}=0\), the semi-analytical solution for the normalised signal profile shown in [21] is obtained.
### _Results for signal profile evolution estimation_
This section illustrates the utilisation of the semi-analytical solution proposed in Eq. (2) to reproduce the solution of the differential Raman equations in Eq. (1).
The transmission setup consists of a WDM signal with \(N_{\text{ch}}\)=131 channels spaced by 100 GHz and centred at 1550 nm. The signal is amplified using distributed RA. Each channel was modulated at the symbol rate of 96 GBD, resulting in a total bandwidth of 13 THz (105 nm), ranging from 1500 nm to 1605 nm, corresponding to the transmission over the S- (1470 nm - 1530nm), C- (1530 nm - 1565nm) and L- (1565 nm - 1615nm) bands. Gaussian symbols are considered in the transmission. For both scenarios, the span length is 80 km and an ITU-T G652.D fibre is considered with attenuation profile and the Raman gain spectrum shown in Fig. 1.
We consider two different simulation scenarios, one for FW-RA and the other for BW-RA. A spectrally uniform launch power profile is considered, with each channel carrying 0 dBm for BW-RA and -4 dBm for FW-RA. For both scenarios, the number of pumps and their wavelengths and powers are chosen using a "find minimum of constrained nonlinear multivariable" optimisation algorithm implemented in Matlab. In this algorithm, the cost function considered is \(\sum_{p}P_{p}\), such that the total pump power is minimised. A nonlinear constraint is also considered such that the received per-channel launch power is above a given threshold. Over the E- and S-band we place 15 pumps spaced 0.5 THz apart and let the algorithm find the best power allocation. The highest-wavelength pump was chosen to be 2 THz away from the lowest-wavelength channel.
For FW-RA, pumps are optimised such that at least a quarter of the launch power is recovered at the receiver, while for BW-RA, pumps are optimised such that at least half of the launch power is recovered at the receiver. The remaining launch power can be recovered, for instance, with lumped amplification. An example of fully recovered launch power using RA can be found in [29]. For both scenarios, the pumps' allocation with non-zero power found by the described algorithm is shown in Table I.
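A Python analogue of this constrained pump-allocation routine is sketched below; `received_powers` is a hypothetical helper that propagates Eq. (1) with the candidate pump powers and returns the per-channel received powers, and the SLSQP solver, bounds, and initial guess are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical helper (not shown): propagates Eq. (1) with the candidate pump
# powers and returns the received per-channel powers at z = L.
# def received_powers(pump_powers): ...

def optimise_pumps(n_pumps, p_target, p_max=1.0):
    """Minimise the total pump power sum_p P_p subject to every channel
    recovering at least p_target [W] at the receiver."""
    cost = lambda p: np.sum(p)
    constraints = [{'type': 'ineq',
                    'fun': lambda p: received_powers(p) - p_target}]
    p0 = np.full(n_pumps, 0.1)                               # initial guess [W]
    res = minimize(cost, p0, method='SLSQP',
                   bounds=[(0.0, p_max)] * n_pumps, constraints=constraints)
    return res.x
```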
For both scenarios, the per-channel power profiles along the fibre distance, i.e., the solutions of Eq. (1), are shown in Fig. 2 for the (a) FW-RA and (b) BW-RA cases. Note that, for FW-RA, a lower per-channel launch power (-4 dBm) is chosen to limit the per-channel power peak along the distance to less than 4 dBm, as shown in Fig. 2(a).
Our goal is now to reproduce the profiles shown in Fig. 2, obtained from Eq. (1) using the semi-analytical solution shown in Eq. (2) after the fitting optimisation routine described in Sec II-A. For better visualisation, Fig. 3 shows the results for the worst-performing channel in terms of accuracy between Eq. (1) and Eq. (2) for (a) FW-RA and (b) BW-RA. Note that, for the NLI estimation, the effect of the normalised signal profile for each channel is taken into account as an integration over the fibre length (see Eq. (8)); this means that the inaccuracy shown in the BW-RA case (Fig. 3(b)) is negligible as it occurs only for reduced-power levels which do not contribute significantly to the result of the integral in Eq. (8). Because of it, this inaccuracy has a negligible impact on the accuracy of the NLI estimation - this is validated in the next section. Thus, Fig. 3 shows that the fitting strategy enables reproducing Eq. (1) by using Eq. (2) and accurately capturing the most impactful contributions to the integral in Eq. (8).
## III The closed-form expression for the NLI estimation
This section describes the closed-form expression used to estimate the NLI in the presence of RA. The integral expressions used as a baseline to derive the closed-form expression are presented in Sec. III-A. As we will see, these expressions depend on the normalised signal power profile evolution \(\rho(z,f_{i})\), which were derived in Sec. II-A. Thus, Eq. (2) is of fundamental importance to derive the closed-form expressions shown in Sec. III-B. This section ends with the application of the closed-form expression in a transmission system and the verification of its accuracy in Sec. III-C.
Let \(i\) indicate the channel index, the nonlinear signal-to-noise ratio, \(\text{SNR}_{\text{NLI},i}\) is given by
\[\text{SNR}_{\text{NLI},i}=\frac{P_{i}}{\eta_{n}(f_{i})P_{i}^{3}}, \tag{3}\]
where \(P_{i}\) is the launch power of the COI and \(\eta_{n}(f_{i})\) is the nonlinear coefficient obtained at the end of the \(n^{\text{th}}\) span.
### _The Integral Expressions_
The integral expressions used to derive the proposed closed-form expressions are as follows. The nonlinear coefficient \(\eta_{GN,n}(f_{i})\) in Eq. (3) can be rewritten as [21]
\[\eta_{GN,n}(f_{i})\approx\sum_{j=1}^{n}\left[\frac{P_{i,j}}{P_{i}}\right]^{2}\cdot[\eta_{\text{SPM}_{j}}(f_{i})n^{\epsilon}+\eta_{\text{XPM}_{j}}(f_{i})], \tag{4}\]
where \(\eta_{\text{SPM}_{j}}(f_{i})\) is the SPM contribution and \(\eta_{\text{XPM}_{j}}(f_{i})\) is the total XPM contribution to the NLI, both generated in the \(j^{\text{th}}\) span. \(P_{i,j}\) is the power of channel \(i\) launched into the \(j^{\text{th}}\) span, and \(\epsilon\) is the coherence factor [16, Eq. 22]. In Eq. (4), the four-wave mixing (FWM) contributions to the NLI are neglected, the SPM is assumed to accumulate coherently along the fibre spans, while the XPM is assumed to accumulate incoherently - the accuracy of these assumptions was validated in [21]. For notation convenience, the \(j\) dependence of the SPM and XPM contributions is suppressed throughout this paper.
The XPM contribution (\(\eta_{\text{XPM}}(f_{i})\)) in Eq. (4) is obtained by summing over all COI-interfering pairs present in the transmitted signal, i.e,
\[\eta_{\text{XPM}}(f_{i})=\sum_{k=1,k\neq i}^{N_{\text{ch}}}\eta_{\text{XPM}}^{(k)}(f_{i}), \tag{5}\]
where \(N_{\text{ch}}\) is the number of WDM channels and \(\eta_{\text{XPM}}^{(k)}(f_{i})\) is the XPM contribution of a single interfering channel \(k\) on channel \(i\).
The XPM and SPM contributions of a single interfering channel are given respectively by [21, Eq. 8,9]
\[\eta_{\text{XPM}}^{(k)}(f_{i})=\frac{32}{27}\frac{\gamma^{2}}{B_ {k}^{2}}\left(\frac{P_{k}}{P_{i}}\right)^{2}\times\] \[\times\int_{\frac{-B_{i}}{2}}^{\frac{B_{i}}{2}}df_{1}\int_{\frac{ -B_{i}}{2}}^{\frac{B_{i}}{2}}df_{2}\ \Pi\left(\frac{f_{1}+f_{2}}{B_{k}}\right)|\mu(f_{1}+f_{i},f_{2}+f_{k},f_{i})|^ {2}\,, \tag{6}\]
and
\[\eta_{\text{SPM}}(f_{i})=\frac{1}{2}\eta_{\text{XPM}}^{(i)}(f_{i}), \tag{7}\]
where \(\gamma\) is the nonlinear parameter, \(\Pi(x)\) denotes the rectangular function and \(B_{k}\) is the bandwidth of the channel \(k\). \(\mu(f_{1},f_{2},f_{i})\) is the so-called link function or FWM efficiency [16], which is given by [18, Eq. 4]
\[\mu\left(f_{1},f_{2},f_{i}\right)=\] \[=\left|\int_{0}^{L}d\zeta\ \sqrt{\frac{\rho(\zeta,f_{1})\rho(\zeta,f_{2}) \rho(\zeta,f_{1}+f_{2}-f_{i})}{\rho(\zeta,f_{i})}}e^{j\phi(f_{1},f_{2},f_{i}) \zeta}\right|^{2} \tag{8}\]
where \(\phi=-4\pi^{2}\left(f_{1}-f_{i}\right)\left(f_{2}-f_{i}\right)\left[\beta_{2}+ \pi\beta_{3}(f_{1}+f_{2})\right]\), and \(\rho(z,f_{i})\) is the normalized signal power profile (see Sec. II-A). \(\beta_{2}\) is the group velocity dispersion (GVD) parameter, \(\beta_{3}\) is the linear slope of the GVD parameter.
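Eq. (8) can be evaluated numerically for any normalised power profile, e.g. the one of Eq. (2); a minimal sketch, assuming `rho(z, f)` is a vectorised callable:

```python
import numpy as np

def link_function(f1, f2, fi, rho, L, beta2, beta3, n_steps=2000):
    """Numerical evaluation of the FWM efficiency of Eq. (8).
    rho(z, f): normalised signal power profile, e.g. from Eq. (2)."""
    phi = -4 * np.pi**2 * (f1 - fi) * (f2 - fi) * (beta2 + np.pi * beta3 * (f1 + f2))
    z = np.linspace(0.0, L, n_steps)
    integrand = np.sqrt(rho(z, f1) * rho(z, f2) * rho(z, f1 + f2 - fi) / rho(z, fi)) \
                * np.exp(1j * phi * z)
    return np.abs(np.trapz(integrand, z))**2
```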
### _The derivation of the closed-form expression_
This section is devoted to the calculation of \(\eta_{n}(f_{i})\) in closed-form, which is then used to calculate \(\text{SNR}_{\text{NLI},i}\) in Eq. (3). The new closed-form expression supporting RA is presented. The formula is obtained by using the semi-analytical solution of the power evolution, obtained in Eq. (2) to derive a closed-form expression of the NLI.
The first step is to derive a closed-form expression of the link function shown in Eq. (8). Let
\[T_{f,i}=-\frac{P_{f}C_{f,i}(f_{i}-\hat{f})}{\alpha_{f,i}} \qquad\qquad\alpha_{l,i}=\alpha_{i}+l_{1}\alpha_{f,i}-l_{2}\alpha_{b,i}\] \[T_{b,i}=-\frac{P_{b}C_{b,i}(f_{i}-\hat{f})}{\alpha_{b,i}} \qquad\qquad\kappa_{f,i}=e^{-(\alpha_{i}+l_{1}\alpha_{f,i})L}\] \[\qquad\qquad\qquad\kappa_{b,i}=e^{-l_{2}\alpha_{b,i}L}\]
The link function is approximated in closed-form as
\[\mu\left(f_{1}+f_{i},f_{2}+f_{i},f_{i}\right)\approx\\ \approx\sum_{\begin{subarray}{c}0\leq l_{1}+l_{2}\leq 1\\ 0\leq l_{1}^{\prime}+l_{2}\leq 1\\ \end{subarray}}\Upsilon_{i}\Upsilon_{i}^{\prime}\left[\frac{(\kappa_{f,i}\kappa _{f,i}^{\prime}+\kappa_{b,i}\kappa_{b,i}^{\prime})(\alpha_{l,i}\alpha_{l,i}^{ \prime}+\phi^{2})}{(\alpha_{l,i}^{2}+\phi^{2})(\alpha_{l,i}^{\prime 2}+\phi^{2})} -\right.\\ -\left.\frac{(\kappa_{f,i}\kappa_{b,i}^{\prime}+\kappa_{b,i} \kappa_{f,i}^{\prime})(\alpha_{l,i}\alpha_{l,i}^{\prime}+\phi^{2})}{(\alpha_{l,i}^{2}+\phi^{2})(\alpha_{l,i}^{\prime 2}+\phi^{2})}\cos(\phi L)+\\ \left.+\frac{(\kappa_{f,i}\kappa_{b,i}^{\prime}-\kappa_{b,i} \kappa_{f,i}^{\prime})(\alpha_{l,i}-\alpha_{l,i}^{\prime})\phi}{(\alpha_{l,i}^ {2}+\phi^{2})(\alpha_{l,i}^{\prime 2}+\phi^{2})}\sin(\phi L)\right], \tag{9}\]
where \(\Upsilon_{i}\) is given by
\[\Upsilon_{i}=T_{i}\left(\frac{-\bar{T}_{f,i}}{T_{i}}\right)^{l_{1}}\left( \frac{\bar{T}_{b,i}}{T_{i}}\right)^{l_{2}}. \tag{10}\]
The proof of Eq. (9) is given in Appendix B. The coefficient \(\Upsilon_{i}^{\prime}\) is respectively the same as the one in Eq. (10) with the indices \(l_{1}\) and \(l_{2}\) replaced by \(l_{1}^{\prime}\) and \(l_{2}^{\prime}\). The same is valid for the variables \(\alpha_{l,i}^{\prime}\), \(\kappa_{f,i}^{\prime}\) and \(\kappa_{b,i}^{\prime}\).
We now present a closed-form expression for the XPM and SPM NLI contributions shown in Eqs. (6) and (7), respectively. Using Eq. (9) as an analytical solution of the link function, a closed-form expression for the XPM and SPM are given respectively by
\[\eta_{\rm{XPM}}^{(k)}(f_{i})=\frac{32}{27}\frac{\gamma^{2}}{B_{k }}\left(\frac{P_{k}}{P_{i}}\right)^{2}\sum_{\begin{subarray}{c}0\leq l_{1}+l_ {2}\leq 1\\ 0\leq l_{1}^{\prime}+l_{2}\leq 1\\ \end{subarray}}\Upsilon_{k}\Upsilon_{k}^{\prime}\frac{1}{\phi_{i,k}(\alpha_ {l,k}+\alpha_{l,k}^{\prime})}\times\\ \times\left\{2(\kappa_{f,k}\kappa_{f,k}^{\prime}+\kappa_{b,k} \kappa_{b,k}^{\prime})\left[\text{atan}\!\left(\frac{\phi_{i,k}B_{i}}{2\alpha _{l,k}^{\prime}}\right)+\text{atan}\!\left(\frac{\phi_{i,k}B_{i}}{2\alpha_{l, k}^{\prime}}\right)\right]+\right.\\ +\left.\pi\!\left[-(\kappa_{f,k}\kappa_{b,k}^{\prime}+\kappa_{b,k }\kappa_{f,k}^{\prime})\left(\text{sign}\!\left(\frac{\alpha_{l,k}}{\phi_{i,k }}\right)e^{-|\alpha_{l,k}L|}+\right.\right.\right.\\ +\left.\left.\left.\text{sign}\!\left(\frac{\alpha_{l,k}^{\prime}}{ \phi_{i,k}}\right)e^{-|\alpha_{l,k}^{\prime}L|}\right)+(\kappa_{f,k}\kappa_{b, k}^{\prime}-\kappa_{b,k}\kappa_{f,k}^{\prime})\times\right.\\ \times\left.\left.\left.\left.\left(\text{sign}\!\left(-\phi_{i,k} \right)e^{-|\alpha_{l,k}L|}+\text{sign}\!\left(\phi_{i,k}\right)e^{-|\alpha_{ l,k}^{\prime}L|}\right)\right]\right\}\right\} \tag{11}\]
and
\[\eta_{\rm{SPM}}(f_{i})=\frac{16}{27}\frac{\gamma^{2}}{B_{i}^{2}}\sum_{\begin{subarray}{c}0\leq l_{1}+l_{2}\leq 1\\ 0\leq l_{1}^{\prime}+l_{2}^{\prime}\leq 1\end{subarray}}\Upsilon_{i}\Upsilon_{i}^{\prime}\frac{\pi}{\phi_{i}(\alpha_{l,i}+\alpha_{l,i}^{\prime})}\times\\ \times\left\{2(\kappa_{f,i}\kappa_{f,i}^{\prime}+\kappa_{b,i}\kappa_{b,i}^{\prime})\left[\text{asin}\!\left(\!\frac{3\phi_{i}B_{i}^{2}}{8\pi\alpha_{l,i}}\right)+\text{asin}\!\left(\!\frac{3\phi_{i}B_{i}^{2}}{8\pi\alpha_{l,i}^{\prime}}\right)\right]+\right.\\ +\left.4\ln\!\left(\!\sqrt{\frac{\phi_{i}L}{2\pi}}B_{i}\right)\left[-(\kappa_{f,i}\kappa_{b,i}^{\prime}+\kappa_{b,i}\kappa_{f,i}^{\prime})\left(\text{sign}\!\left(\frac{\alpha_{l,i}}{\phi_{i}}\right)e^{-|\alpha_{l,i}L|}+\right.\right.\\ +\left.\left.\text{sign}\!\left(\frac{\alpha_{l,i}^{\prime}}{\phi_{i}}\right)e^{-|\alpha_{l,i}^{\prime}L|}\right)+(\kappa_{f,i}\kappa_{b,i}^{\prime}-\kappa_{b,i}\kappa_{f,i}^{\prime})\times\right.\\ \times\left.\left(\text{sign}\left(-\phi_{i}\right)e^{-|\alpha_{l,i}L|}+\text{sign}\left(\phi_{i}\right)e^{-|\alpha_{l,i}^{\prime}L|}\right)\right]\right\}, \tag{12}\]
where
\[\phi_{i} =-4\pi^{2}\left(\beta_{2}+2\pi\beta_{3}f_{i}\right),\] \[\phi_{i,k} =-4\pi^{2}\left(f_{k}-f_{i}\right)\left[\beta_{2}+\pi\beta_{3} \left(f_{i}+f_{k}\right)\right].\]
The proofs of Eqs. (11) and (12) are given in Appendices C and D, respectively.
Finally, the \({\rm{SNR}}_{\rm{NLI},i}\) can be calculated analytically by inserting Eqs. (4), (5), (11) and (12) in Eq. (3). The final expression accounts for wavelength-dependent fibre parameters and a different launch power per channel. The formula is also valid for links made of different span setups: in that case, all the fibre parameters and the per-channel launch power depend not only on the channel \(i\) but also on the span \(j\).
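Since Eqs. (3)-(7) are not reproduced in this excerpt, the following schematic sketch only illustrates how such per-channel contributions would typically be assembled; it assumes the standard GN-model relation \(\mathrm{SNR}_{\mathrm{NLI},i}\approx 1/(\eta_{i}P_{i}^{2})\), and the callables `eta_spm` and `eta_xpm` are placeholders standing in for implementations of Eqs. (12) and (11).

```python
import numpy as np

def snr_nli(freqs_hz, powers_w, eta_spm, eta_xpm):
    """Assemble per-channel NLI SNR from SPM and XPM coefficients.

    eta_spm(i) and eta_xpm(i, k) are placeholders for Eqs. (12) and (11);
    the final combination assumes the standard GN-model relation
    SNR_NLI,i ~ 1 / (eta_i * P_i^2), since Eq. (3) is not shown here.
    """
    n_ch = len(freqs_hz)
    snr = np.zeros(n_ch)
    for i in range(n_ch):
        # total NLI coefficient of channel i: SPM term plus all XPM terms
        eta_i = eta_spm(i) + sum(eta_xpm(i, k) for k in range(n_ch) if k != i)
        snr[i] = 1.0 / (eta_i * powers_w[i] ** 2)
    return snr
```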
### _Results for the nonlinear interference estimation_
This section shows the validation of Eqs. (11) and (12). To that end, we consider the transmission system described in Sec. II-A, i.e., a distributed RA link consisting of a WDM transmission with \(N_{\rm{ch}}\)=131 channels spaced by 100 GHz and centred at 1550 nm. Each channel was modulated at a symbol rate of 96 GBd, resulting in a total bandwidth of 13 THz (105 nm). Gaussian symbols are considered in the transmission. The span length is 80 km and an ITU-T G652.D fibre is considered, with the Raman gain spectrum and attenuation shown in Fig. 1. The nonlinear coefficient and dispersion parameters are \(\gamma=1.16\) W\({}^{-1}\)km\({}^{-1}\), \(D=16.5\) ps nm\({}^{-1}\)km\({}^{-1}\) and \(S=0.09\) ps nm\({}^{-2}\)km\({}^{-1}\), respectively. A spectrally uniform launch power profile is considered, where each channel carries 0 dBm for BW-RA and \(-4\) dBm for FW-RA (see Sec. II-B). The power profiles along the fibre distance are shown in Fig. 2 and the pump allocation used for each scenario is shown in Table I. Results are obtained for single-span and 3-span transmissions.

Fig. 3: Signal power evolution along the fibre distance obtained using the numerical solution of the Raman differential equations in Eq. (1) and the semi-analytical solution shown in Eq. (2) for (a) FW-RA and (b) BW-RA. In both cases, the results are shown for the worst-performing channel in terms of accuracy between Eq. (1) and Eq. (2).
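For reference, the short sketch below converts the dispersion parameters quoted above (\(D\) and \(S\)) into \(\beta_{2}\) and \(\beta_{3}\) using the standard textbook relations (not taken from this paper) and evaluates the phase-mismatch factors \(\phi_{i}\) and \(\phi_{i,k}\) defined right after Eq. (12); frequencies are taken relative to the reference frequency.

```python
import numpy as np

c = 299_792_458.0              # speed of light [m/s]
lam = 1550e-9                  # reference wavelength [m]
D = 16.5e-6                    # 16.5 ps/(nm km) in SI units [s/m^2]
S = 90.0                       # 0.09 ps/(nm^2 km) in SI units [s/m^3]

# Standard conversion from (D, S) to (beta2, beta3) at the reference wavelength
beta2 = -D * lam**2 / (2 * np.pi * c)                      # [s^2/m]
beta3 = (lam**2 / (2 * np.pi * c))**2 * (S + 2 * D / lam)  # [s^3/m]

def phi_i(f_i):
    """SPM phase-mismatch factor (definition below Eq. (12));
    f_i is the channel frequency relative to the reference frequency [Hz]."""
    return -4 * np.pi**2 * (beta2 + 2 * np.pi * beta3 * f_i)

def phi_ik(f_i, f_k):
    """XPM phase-mismatch factor (definition below Eq. (12))."""
    return -4 * np.pi**2 * (f_k - f_i) * (beta2 + np.pi * beta3 * (f_i + f_k))

print(beta2 * 1e27, "ps^2/km")   # roughly -21 ps^2/km for these parameters
```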
The SNR\({}_{\text{NLI}}\) as a function of wavelength is shown in Fig. 4 for (a) FW-RA and (b) BW-RA, for a single-span and a 3-span transmission. To verify the accuracy of the closed-form expressions shown in Eqs. (11) and (12), the SNR\({}_{\text{NLI}}\) is also computed using the integral ISRS GN model [18] and SSFM simulations. For the former, the results are obtained by inserting the power profiles shown in Fig. 2 in [18, Eq. 4]. For the latter, the same power profiles from Fig. 2 are used and interpolated along the fibre distance for each step of the SSFM simulation. To ensure accurate simulation results, the adaptive step-size method based on local error [34] was used, with a goal local error of \(\delta_{G}=10^{-10}\) and a sequence of \(2^{17}\) Gaussian symbols per channel. Note that, for all the results, the XPM generated by the pumps is neglected; as shown in [35], this is a valid assumption when the WDM spectrum is sufficiently far from the pumps. In our case, as described in Sec. III-C, the highest-wavelength pump was chosen to be 2 THz away from the lowest-wavelength channel, so that these effects can be neglected. Nevertheless, these effects can be included in the model by treating the pumps as additional interfering channels.
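As an illustration of how a pre-computed power profile can be reused inside an SSFM loop, the sketch below interpolates a tabulated profile \(P(z)\) for one channel and derives an effective per-step gain/loss coefficient; the grid, values and step size are placeholders, not the actual simulation data.

```python
import numpy as np

# Placeholder tabulated power profile for one channel (from Eq. (1) or Eq. (2))
z_grid = np.linspace(0.0, 80e3, 801)          # [m]
p_grid = 1e-3 * np.exp(-0.046e-3 * z_grid)    # [W], stand-in for the real profile

def step_gain_coefficient(z0, dz):
    """Effective gain/loss coefficient g over one SSFM step [1/m], defined by
    P(z0 + dz) = P(z0) * exp(g * dz), with P interpolated on the tabulated grid."""
    p0 = np.interp(z0, z_grid, p_grid)
    p1 = np.interp(z0 + dz, z_grid, p_grid)
    return np.log(p1 / p0) / dz

print(step_gain_coefficient(10e3, 100.0))     # ~ -0.046e-3 1/m for this toy profile
```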
Fig. 4 shows the SNR\({}_{\text{NLI}}\) for (a) FW-RA and (b) BW-RA. It is interesting to note the correlation between the SNR\({}_{\text{NLI}}\) profile and the power profiles shown in Fig. 2. Indeed, for the FW-RA case, shown in Fig. 4(a), the high power levels at short wavelengths (see Fig. 2(a)) reduce the SNR\({}_{\text{NLI}}\), degrading the performance of those channels; on the other hand, the performance of the long-wavelength channels is higher, due to their reduced power levels, yielding a tilt in the SNR\({}_{\text{NLI}}\) profile. For the BW-RA case, shown in Fig. 4(b), the interaction between fibre attenuation, dispersion and power profile (see Fig. 2(b)) yields a relatively flat SNR\({}_{\text{NLI}}\) profile; however, a smooth tilt can still be observed, which also correlates with the power profile shown in Fig. 2(b), as higher power levels are observed at the longer wavelengths. Note that, in general, BW-RA performs better than FW-RA in terms of SNR\({}_{\text{NLI}}\) because of the reduced per-channel power along the fibre.
In terms of accuracy, for a single span FW-RA transmission, maximum per-channel errors of 0.52 dB and 0.67 dB were found between the closed-form expression and the integral ISRS GN model, and between the closed-form expression and the SSFM simulation, respectively. For the transmission over 3 spans, these errors are respectively 0.47 dB and 0.69 dB. The same analyses for the BW-RA transmission over a single span yield errors of 1.0 dB and 0.95 dB respectively, while for the transmission over 3 spans, these errors are respectively 1.0 dB and 1.2 dB.
## IV Conclusions
In this work, we presented a closed-form formula of the Gaussian noise (GN) model suitable for ultra-wideband (UWB) transmission systems enabling discrete or distributed Raman amplification. This formula is the first to account for any setup of Raman amplification technologies together with the inter-channel stimulated Raman scattering (ISRS) effect. The formula supports forward (FW) and backward (BW) pumping schemes and accurately predicts the nonlinear interference (NLI) for an arbitrary number of pumps and for wavelength-dependent fibre parameters and launch power profiles. A fundamental step in deriving this closed-form formula was a semi-analytical solution that correctly reproduces the signal power profile evolution along the fibre distance in the presence of Raman amplification and the ISRS effect.
The formula was applied to a 13 THz optical bandwidth corresponding to transmission over the S-, C- and L-bands. In terms of accuracy, among all of the scenarios tested in this work, the formula showed maximum errors of 1 dB and 1.2 dB when compared to the integral model and to split-step Fourier method (SSFM) simulations, respectively. Additionally, the formula is capable of estimating the NLI in only a few seconds, where the majority of the computational time is required to numerically solve the differential Raman equations. Because of this speed of computation, the formula is suitable for real-time estimation of the NLI and can be applied as an enabling tool for future intelligent and dynamic optical fibre networks.
Fig. 4: Nonlinear performance after 1 x 80 km and 3 x 80 km transmission for (a) FW-RA and (b) BW-RA.
## Data Availability Statement
The data that support the figures in this paper are available from the UCL Research Data Repository (DOI:10.5522/04/21696401), hosted by FigShare.
## Appendix A Derivation of the Analytical Solution of the Normalized Signal Power Profile.
This section shows the derivation of Eq. (2). We stress that most of the assumptions made in this section are not exact; however, this is not an issue, as the resulting equation is used as a semi-analytical solution of the Raman equations and its coefficients are fitted and optimised.
We start with Eq. (1); the derivation is analogous to [31]. First, we consider a constant attenuation \(\alpha\) for all the channels and neglect the energy that is lost whenever a high-frequency photon is converted into a low-frequency photon, i.e., \(\frac{f_{k}}{f_{i}}\approx 1\) and \(\frac{f_{p}}{f_{i}}\approx 1\). Also, we assume the triangular approximation of the Raman spectrum, i.e., \(g_{r}(\Delta f)\approx C_{r}\Delta f\), where \(C_{r}\) is the slope of the linear regression (normalized by the effective core area \(A_{\text{eff}}\)) and \(\Delta f\) is the frequency separation between the channels and between the channels and the pumps. Under these assumptions, Eq. (1) can be written as
\[\begin{split}&\frac{\partial P_{i}}{\partial z}=\\ &=\sum_{k=1}^{N_{ch}}C_{r}(f_{k}-f_{i})P_{k}P_{i}+\sum_{p=1}^{N_{p}}C_{r}(f_{p}-f_{i})P_{p}P_{i}-\alpha P_{i}=\\ &=C_{r}P_{i}\left(\sum_{k=1}^{N_{ch}}(f_{k}-f_{i})P_{k}+\sum_{p=1}^{N_{p}}(f_{p}-f_{i})P_{p}\right)-\alpha P_{i}.\end{split} \tag{13}\]
We now combine the coupled differential equations into a single equation by replacing the \(N_{ch}\) signals and \(N_{p}\) pumps with a signal and pump density spectrum. Also, we replace the summations by an integration over the entire frequency spectrum of the signals and the pumps. Thus, Eq. (13) can be written as
\[\begin{split}&\frac{dP(z,f)}{dz}=\\ &=C_{r}P(z,f)\left(\int_{f_{ch,min}}^{f_{ch,max}}(\Lambda_{ch}-f)P (z,\Lambda_{ch})\,d\Lambda_{ch}+\\ &+\int_{f_{p,min}}^{f_{p,max}}(\Lambda_{p}-f)P(z,\Lambda_{p})\,d \Lambda_{p}\right)-\alpha P(z,f).\end{split} \tag{14}\]
Dividing both sides of Eq. (14) by \(P(z,f)\) and taking the derivative with respect to the frequency \(f\), we have
\[\begin{split}&\frac{d}{df}\left(\frac{dP(z,f)/dz}{P(z,f)}\right)=-C_{r}\times\\ &\times\left(\underbrace{\int_{f_{ch,min}}^{f_{ch,max}}P(z,\Lambda_{ch})\,d\Lambda_{ch}}_{P_{total,ch}}+\underbrace{\int_{f_{p,min}}^{f_{p,max}}P(z,\Lambda_{p})\,d\Lambda_{p}}_{P_{total,p}}\right).\end{split} \tag{15}\]
Note that, the integrals represent the total launch power (\(P(z)\)), i.e., a sum of the channel (\(P_{total,ch}\)), the forward pump (\(P_{total,fw}\)) and backward pump (\(P_{total,bw}\)) launch powers. Moreover, \(P_{total,ch}\) and \(P_{total,fw}\) must decay with \(e^{-\alpha z}\), while \(P_{total,bw}\) decays with \(e^{-\alpha(L-z)}\). Thus, Eq. (15) can be written as
\[\begin{split}&\frac{d}{df}\left(\frac{dP(z,f)/dz}{P(z,f)}\right)=-C _{r}P(z)=\\ &=-C_{r}\left(P_{total,ch}e^{-\alpha z}+P_{total,fw}e^{-\alpha z} +P_{total,bw}e^{-\alpha(L-z)}\right).\end{split} \tag{16}\]
In order to apply this equation to more general scenarios, we define separate wavelength-dependent attenuation coefficients to model the channels together with the FW pumps (\(\alpha_{f,i}\)) and together with the BW pumps (\(\alpha_{b,i}\)). These parameters can be interpreted as modelling how fast the channel gain/loss due to FW-RA and BW-RA, together with ISRS, extinguishes along the fibre. We also define a separate wavelength-dependent \(C_{r}\) for each pump configuration, i.e., \(C_{f,i}\) and \(C_{b,i}\), respectively for FW and BW pumps. These two parameters model the gain/loss due to FW-RA and BW-RA, respectively, together with the ISRS effect. Finally, by letting \(P_{f}=P_{total,ch}+P_{total,fw}\) and \(P_{b}=P_{total,bw}\), Eq. (16) is rewritten as
\[\frac{d}{df}\left(\frac{dP(z,f)/dz}{P(z,f)}\right)=\\ =-(C_{f,i}P_{f}e^{-\alpha_{f,i}z}+C_{b,i}P_{b}e^{-\alpha_{b,i}(L- z)}). \tag{17}\]
Now, we integrate with respect to \(z\) and \(f\). For the integration in \(f\), note that, because of the presence of the pumps, the WDM spectrum is no longer centred at \(f=0\). Without loss of generality, let us take the centre of the spectrum to be the average frequency of the pumps, which we denote by \(\hat{f}\). Thus, integrating over \(z\) and \(f\) yields
\[P(z,f)= e^{-[C_{f,i}P_{f}L_{\text{eff}}(f-\hat{f})+C_{b,i}P_{b}\tilde{L}_{ \text{eff}}(f-\hat{f})]+A(z)+B(f)}, \tag{18}\]
where \(L_{\text{eff}}=\frac{1-e^{-\alpha_{f,i}z}}{\alpha_{f,i}}\) and \(\tilde{L}_{\text{eff}}=\frac{e^{-\alpha_{b,i}(L-z)}-e^{-\alpha_{b,i}L}}{\alpha_{b,i}}\), and \(A(z)\), \(B(f)\) are arbitrary functions whose values are determined as follows: requiring that \(P(z=0,f)=P(0,f)\) immediately implies that \(e^{B(f)}=P(0,f)\), and requiring \(\int P(z,f)\,df=P(z)=P_{total}e^{-\alpha_{i}z}\) yields the value of \(e^{A(z)}\), leading to Eq. (19) as
\[\rho(z,f)=\frac{P(z,f)}{P(0,f)}=\\ =\frac{P_{total}e^{-\alpha_{i}z}e^{-(C_{f,i}P_{f}L_{\text{eff}}+C_ {b,i}P_{b}\tilde{L}_{\text{eff}})(f-\hat{f})}}{\int G_{\text{Tx}}(\nu)e^{-(C_{f,i} P_{f}L_{\text{eff}}+C_{b,i}P_{b}\tilde{L}_{\text{eff}})\nu}d\nu}, \tag{19}\]
where \(G_{Tx}(f)\) is the input signal spectrum including the WDM channels and the pumps and \(P_{total}\) is the sum of their launch powers. Moreover, the coefficient \(\alpha\) is also generalised to a wavelength-dependent loss \(\alpha_{i}\). Let \(x_{i}=C_{f,i}P_{f}L_{\text{eff}}+C_{b,i}P_{b}\tilde{L}_{\text{eff}}\). By assuming that the input power \(G_{Tx}(f)\) is uniformly distributed over the optical bandwidth \(B\) with total power \(P_{total}\), we can write
\[\int G_{\text{Tx}}(\nu)e^{-x_{i}\nu}d\nu=\frac{2P_{total}\sinh\left(\frac{x_{ i}B}{2}\right)}{x_{i}B}. \tag{20}\]
Replacing Eq.(20) in Eq.(19) leads to
\[\rho(z,f)=e^{-\alpha_{i}z}\frac{x_{i}Be^{-x_{i}(f-\hat{f})}}{2\sinh\left(\frac{x_{i}B}{2}\right)}. \tag{21}\]
Finally, expanding Eq. (21) using a 1st-order Taylor approximation around the point \(x_{i}=0\) yields

\[\rho(z,f)=e^{-\alpha_{i}z}[1-x_{i}(f-\hat{f})], \tag{22}\]

and setting \(f=f_{i}\), Eq. (2) is obtained, concluding the proof.
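For illustration, Eq. (22) (and hence Eq. (2)) can be evaluated directly once the coefficients are known; the minimal sketch below implements \(\rho(z,f)\) with the effective lengths defined above, keeping in mind that in the paper the coefficients \(\alpha_{f,i}\), \(\alpha_{b,i}\), \(C_{f,i}\) and \(C_{b,i}\) are obtained by fitting to the numerical solution of Eq. (1).

```python
import numpy as np

def rho(z, f, f_hat, alpha, alpha_f, alpha_b, C_f, C_b, P_f, P_b, span_length):
    """Normalized signal power profile of Eq. (22) (and hence Eq. (2)).

    Attenuation coefficients are in 1/m (linear units), frequencies in Hz
    relative to f_hat, powers in W, distances in m. The coefficients are
    meant to be fitted/optimised as described above.
    """
    L_eff = (1.0 - np.exp(-alpha_f * z)) / alpha_f
    L_eff_tilde = (np.exp(-alpha_b * (span_length - z))
                   - np.exp(-alpha_b * span_length)) / alpha_b
    x = C_f * P_f * L_eff + C_b * P_b * L_eff_tilde
    return np.exp(-alpha * z) * (1.0 - x * (f - f_hat))
```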
## Appendix B Derivation of the link function.
This section shows the derivation of Eq. (9). Let \(x_{i}(\zeta)=C_{f,i}P_{f}L_{\text{eff}}(\zeta)+C_{b,i}P_{b}\tilde{L}_{\text{eff}}(\zeta)\), \(\tau_{i}(\zeta)=1-x_{i}(\zeta)(f_{i}-\hat{f})\) with \(L_{\text{eff}}(\zeta)=\frac{1-e^{-\alpha_{f,i}\zeta}}{\alpha_{f,i}}\) and \(\tilde{L}_{\text{eff}}(\zeta)=\frac{e^{-\alpha_{b,i}(L-\zeta)}-e^{-\alpha_{b,i}L}}{\alpha_{b,i}}\). The first step is to insert Eq. (19) in Eq. (8) and use the approximation in Eq. (20), yielding
\[\mu\left(f_{1},f_{2},f_{i}\right)=\\ =\left|\int_{0}^{L}d\zeta\ e^{-\alpha_{i}\zeta}\frac{x_{i}Be^{-x_{i}(f_{1}+f_{2}-f_{i}-\hat{f})}}{2\sinh\left(\frac{x_{i}B}{2}\right)}e^{j\phi(f_{1},f_{2},f_{i})\zeta}\right|^{2}. \tag{23}\]
Now, we consider the link function for the XPM contribution in Eq. (6), i.e., \(\mu\left(f_{1}+f_{i},f_{2}+f_{k},f_{i}\right)\) (the derivation of the link function for SPM is analogous; one simply needs to set \(f_{k}=f_{i}\) and the indices \(k=i\)). Assuming that the frequency separation between channels \(k\) and \(i\) (\(\Delta f=f_{k}-f_{i}\)) is much larger than half of the bandwidth of channel \(k\) (\(|\Delta f|\gg\frac{B_{k}}{2}\)), we can assume that \(f_{2}+\Delta f\approx\Delta f\). Also, we assume that the signal power profile is constant over the channel bandwidth (see the Appendices in [21] for additional details). Then, using the 1st-order Taylor approximation shown in Eq. (22) yields
\[\mu\left(f_{1}+f_{i},f_{2}+f_{k},f_{i}\right)=\\ =\left|\int_{0}^{L}d\zeta\ e^{-\alpha_{k}\zeta}\tau_{k}(\zeta)e^{j\phi(f_{1}+f_{i},f_{2}+f_{k},f_{i})\zeta}\right|^{2}. \tag{24}\]
The term \(\tau_{k}(\zeta)\) can be written as
\[\tau_{k}(\zeta) =1-\left[\left(\frac{C_{f,k}P_{f}}{\alpha_{f,k}}\right)\left(1-e^ {-\alpha_{f,k}\zeta}\right)+\right.\] \[+\left.\left(\frac{C_{b,k}P_{b}}{\alpha_{b,k}}\right)e^{-\alpha_{ b,k}L}\left(e^{\alpha_{b,k}\zeta}-1\right)\right](f_{k}-\hat{f}). \tag{25}\]
Let \(T_{f,k}=\frac{-P_{f}C_{f,k}(f_{k}-\hat{f})}{\alpha_{f,k}}\), \(T_{b,k}=\frac{-P_{b}C_{b,k}(f_{k}-\hat{f})}{\alpha_{b,k}}\), \(T_{k}=1+T_{f,k}-T_{b,k}e^{-\alpha_{b,k}L}\). Thus, the term \(\tau_{k}(\zeta)\) is written as
\[\tau_{k}(\zeta)=T_{k}\left[1-\frac{T_{f,k}}{T_{k}}e^{-\alpha_{f,k}\zeta}+\frac {T_{b,k}}{T_{k}}e^{-\alpha_{b,k}L}e^{\alpha_{b,k}\zeta}\right]. \tag{26}\]
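A quick symbolic check (a sketch, not part of the paper) confirms that Eq. (26), with the definitions of \(T_{f,k}\), \(T_{b,k}\) and \(T_{k}\) above, is identical to Eq. (25):

```python
import sympy as sp

z, L = sp.symbols('zeta L', positive=True)
Cf, Cb, Pf, Pb, af, ab, fk, fhat = sp.symbols(
    'C_f C_b P_f P_b alpha_f alpha_b f_k f_hat', positive=True)

# Eq. (25): tau_k(zeta) written directly
tau_25 = 1 - ((Cf*Pf/af)*(1 - sp.exp(-af*z))
              + (Cb*Pb/ab)*sp.exp(-ab*L)*(sp.exp(ab*z) - 1))*(fk - fhat)

# Definitions preceding Eq. (26)
Tf = -Pf*Cf*(fk - fhat)/af
Tb = -Pb*Cb*(fk - fhat)/ab
Tk = 1 + Tf - Tb*sp.exp(-ab*L)

# Eq. (26)
tau_26 = Tk*(1 - (Tf/Tk)*sp.exp(-af*z) + (Tb/Tk)*sp.exp(-ab*L)*sp.exp(ab*z))

print(sp.simplify(tau_25 - tau_26))  # expected output: 0
```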
Eq. (26) can be conveniently rewritten in terms of a summation using identity (57), which will facilitate all the mathematical derivations,
\[\tau_{k}(\zeta)=T_{k}\sum_{0\leq l_{1}+l_{2}\leq 1} \left(\frac{-T_{f,k}}{T_{k}}\right)^{l_{1}}\left(\frac{T_{b,k}}{T _{k}}\right)^{l_{2}}\times\] \[\times e^{-(l_{1}\alpha_{f,k}\zeta+l_{2}\alpha_{b,k}L-l_{2}\alpha _{b,k}\zeta)}. \tag{27}\]
Now, defining
\[\Upsilon_{k}=T_{k}\left(\frac{-\tilde{T}_{f,k}}{T_{k}}\right)^{l_{1}}\left( \frac{\tilde{T}_{b,k}}{T_{k}}\right)^{l_{2}}, \tag{28}\]
Eq. (27) is written as
\[\tau_{k}(\zeta)=\sum_{0\leq l_{1}+l_{2}\leq 1}\Upsilon_{k}e^{-(l_{1}\alpha_{f,k} \zeta+l_{2}\alpha_{b,k}L-l_{2}\alpha_{b,k}\zeta)}. \tag{29}\]
Note that \(\Upsilon_{k}\) is a variable which depends on the indices of the summation. Now, inserting Eq. (29) in Eq. (24), we obtain
\[\mu\left(f_{1}+f_{i},f_{2}+f_{k},f_{i}\right)=\\ =\left|\int_{0}^{L}d\zeta\sum_{0\leq l_{1}+l_{2}\leq 1}\Upsilon_{k}\,e^{-(\alpha_{k}\zeta+l_{1}\alpha_{f,k}\zeta+l_{2}\alpha_{b,k}L-l_{2}\alpha_{b,k}\zeta)+j\phi\zeta}\right|^{2}, \tag{30}\]
and solving the integral in Eq. (30) yields
\[\mu\left(f_{1}+f_{i},f_{2}+f_{k},f_{i}\right)=\\ =\left|\sum_{0\leq l_{1}+l_{2}\leq 1}\Upsilon_{k}\frac{e^{-( \alpha_{k}+l_{1}\alpha_{f,k})L+j\phi L}-e^{-l_{2}\alpha_{b,k}L}}{-(\alpha_{k}+ l_{1}\alpha_{f,k}-l_{2}\alpha_{b,k})+j\phi}\right|^{2}. \tag{31}\]
Now, let define \(\alpha_{l,k}=\alpha_{k}+l_{1}\alpha_{f,k}-l_{2}\alpha_{b,k}\), \(\kappa_{f,k}=e^{-(\alpha_{k}+l_{1}\alpha_{f,k})L}\) and \(\kappa_{b,k}=e^{-l_{2}\alpha_{b,k}L}\). Eq. (31) can then be written as
\[\mu\left(f_{1}+f_{i},f_{2}+f_{k},f_{i}\right)=\left|\sum_{0\leq l_{1}+l_{2} \leq 1}\Upsilon_{k}\frac{\kappa_{f,k}e^{j\phi L}-\kappa_{b,k}}{-\alpha_{l,k}+ j\phi}\right|^{2}. \tag{32}\]
The last step of the derivation is to calculate the modulus of Eq. (32). Using the identity (58) we can write Eq. (32) as
\[\mu\left(f_{1}+f_{i},f_{2}+f_{k},f_{i}\right)=\left(\sum_{0\leq l _{1}+l_{2}\leq 1}\Upsilon_{k}\frac{\kappa_{f,k}e^{j\phi L}-\kappa_{b,k}}{-\alpha_{l,k}+ j\phi}\right)\times\\ \times\left(\sum_{0\leq l_{1}^{\prime}+l_{2}^{\prime}\leq 1}\Upsilon_{k}^{ \prime}\frac{\kappa_{f,k}^{\prime}e^{-j\phi L}-\kappa_{b,k}^{\prime}}{-\alpha_{l,k} ^{\prime}-j\phi}\right). \tag{33}\]
Finally, performing the multiplication in Eq. (33), using the identity (59), and considering the channel \(f_{k}=f_{i}\), yields Eq. (9), concluding the proof.
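The algebra above can also be sanity-checked numerically by comparing the closed form of Eq. (32) against a direct evaluation of the integral in Eq. (24); the parameter values below are purely illustrative stand-ins and are not calibrated to the system of Sec. IV.

```python
import numpy as np
from scipy.integrate import quad

# Illustrative parameters (stand-ins for T_{f,k}, T_{b,k}, attenuations, phi)
L = 80e3                          # span length [m]
alpha_k = 0.2 / 4.343 / 1e3       # ~0.2 dB/km expressed in 1/m
alpha_f, alpha_b = 0.8 * alpha_k, 1.1 * alpha_k
T_f, T_b = 0.3, -0.2
T_k = 1 + T_f - T_b * np.exp(-alpha_b * L)
phi = 5e-7                        # stand-in for phi(f1+fi, f2+fk, fi) [1/m]

def tau(z):                       # Eq. (26)
    return T_k - T_f * np.exp(-alpha_f * z) + T_b * np.exp(-alpha_b * L) * np.exp(alpha_b * z)

# Direct numerical evaluation of Eq. (24)
re = quad(lambda z: np.exp(-alpha_k * z) * tau(z) * np.cos(phi * z), 0, L)[0]
im = quad(lambda z: np.exp(-alpha_k * z) * tau(z) * np.sin(phi * z), 0, L)[0]
mu_numeric = re**2 + im**2

# Closed form of Eq. (32)
mu_closed = 0.0 + 0.0j
for l1, l2 in [(0, 0), (1, 0), (0, 1)]:
    ups = T_k * (-T_f / T_k)**l1 * (T_b / T_k)**l2          # Eq. (28)
    a_l = alpha_k + l1 * alpha_f - l2 * alpha_b
    kap_f = np.exp(-(alpha_k + l1 * alpha_f) * L)
    kap_b = np.exp(-l2 * alpha_b * L)
    mu_closed += ups * (kap_f * np.exp(1j * phi * L) - kap_b) / (-a_l + 1j * phi)
mu_closed = abs(mu_closed)**2

print(mu_numeric, mu_closed)      # the two values should agree
```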
## Appendix C Derivation of the XPM contribution.
This section shows the derivation of Eq. (11). We start by approximating the phase mismatch term in Eq. (8). For the XPM contribution, let \(\Delta f=f_{k}-f_{i}\) be the frequency separation between channels \(k\) and \(i\) (here the pumps are also included as additional indices \(k\)). Assuming that the frequency separation is much larger than half of the bandwidth of channel \(k\) (\(|\Delta f|\gg\frac{B_{k}}{2}\)), we can make the assumption that \(f_{2}+\Delta f\approx\Delta f\). Also, we assume that the dispersion slope \(\beta_{3}\) is constant over the channel bandwidth. Thus, the phase mismatch term can be approximated as [21, Eq. 15],
\[\begin{split}&\phi(f_{1}+f_{i},f_{2}+f_{k},f_{i})=\\ &=-4\pi^{2}f_{1}\Delta f\left[\beta_{2}+\pi\beta_{3}(f_{1}+f_{2}+ f_{i}+f_{k})\right]\approx\\ &\approx-4\pi^{2}(f_{k}-f_{i})\left[\beta_{2}+\pi\beta_{3}(f_{i}+ f_{k})\right]f_{1}=\\ &=\phi_{i,k}f_{1},\end{split} \tag{34}\]
with \(\phi_{i,k}=-4\pi^{2}(f_{k}-f_{i})\left[\beta_{2}+\pi\beta_{3}(f_{i}+f_{k})\right]\). The channels most impacted by this approximation are the ones near the COI. The error associated with this approximation is given by [21, Eq. 25].
Now, we consider Eq. (6), which gives the XPM contribution. For notational brevity, we will omit the factor \(\frac{32}{27}\frac{\gamma^{2}}{B_{k}^{2}}\left(\frac{P_{k}}{P_{i}}\right)^{2}\). Also, the term \(\Pi\left(\frac{f_{1}+f_{2}}{B_{k}}\right)\) is neglected; this is equivalent to approximating the integration domain of the GN model by a rectangle [16]. Because of the approximation in Eq. (34), \(\phi\) no longer depends on \(f_{2}\), and the double integral in (6) becomes a single integral. Thus, inserting Eq. (9) in Eq. (6), we can identify three terms as follows
\[\begin{split}&\eta_{\text{XPM}}^{(k)}(f_{i})=\sum_{\begin{subarray}{c}0\leq l_{1}+l_{2}\leq 1\\ 0\leq l_{1}^{\prime}+l_{2}^{\prime}\leq 1\end{subarray}}\Upsilon_{k}\Upsilon_{k}^{\prime}\Big[(\kappa_{f,k}\kappa_{f,k}^{\prime}+\kappa_{b,k}\kappa_{b,k}^{\prime})\eta_{\text{XPM,min}}^{(k)}(f_{i})-\\ &-(\kappa_{f,k}\kappa_{b,k}^{\prime}+\kappa_{b,k}\kappa_{f,k}^{\prime})\eta_{\text{XPM,cos}}^{(k)}(f_{i})+(\kappa_{f,k}\kappa_{b,k}^{\prime}-\kappa_{b,k}\kappa_{f,k}^{\prime})\eta_{\text{XPM,sin}}^{(k)}(f_{i})\Big],\end{split} \tag{35}\]
with
\[\eta_{\text{XPM,min}}^{(k)}(f_{i})=2B_{k}\int_{0}^{\frac{B_{i}}{2}}df_{1}\,\frac{\alpha_{l,k}\alpha_{l,k}^{\prime}+\phi_{i,k}^{2}f_{1}^{2}}{(\alpha_{l,k}^{2}+\phi_{i,k}^{2}f_{1}^{2})(\alpha_{l,k}^{\prime 2}+\phi_{i,k}^{2}f_{1}^{2})}, \tag{36}\]

\[\eta_{\text{XPM,cos}}^{(k)}(f_{i})=2B_{k}\int_{0}^{\frac{B_{i}}{2}}df_{1}\,\frac{\alpha_{l,k}\alpha_{l,k}^{\prime}+\phi_{i,k}^{2}f_{1}^{2}}{(\alpha_{l,k}^{2}+\phi_{i,k}^{2}f_{1}^{2})(\alpha_{l,k}^{\prime 2}+\phi_{i,k}^{2}f_{1}^{2})}\cos(\phi_{i,k}Lf_{1}) \tag{37}\]
and
\[\eta_{\text{XPM,sin}}^{(k)}(f_{i})=\\ =2B_{k}\int_{0}^{\frac{B_{i}}{2}}df_{1}\,\frac{(\alpha_{l,k}-\alpha_{l,k}^{\prime})\phi_{i,k}f_{1}}{(\alpha_{l,k}^{2}+\phi_{i,k}^{2}f_{1}^{2})(\alpha_{l,k}^{\prime 2}+\phi_{i,k}^{2}f_{1}^{2})}\sin(\phi_{i,k}Lf_{1}). \tag{38}\]
In the following, the above three integrals are solved. Eq. (36) is solved using identity (60) as
\[\eta_{\text{XPM,min}}^{(k)}(f_{i})=\frac{2B_{k}}{\phi_{i,k}(\alpha_{l,k}+\alpha_{l,k}^{\prime})}\times\\ \times\left[\arctan\left(\frac{\phi_{i,k}B_{i}}{2\alpha_{l,k}}\right)+\arctan\left(\frac{\phi_{i,k}B_{i}}{2\alpha_{l,k}^{\prime}}\right)\right]. \tag{39}\]
Eqs. (37) and (38) do not have analytical solutions in their current form. In order to derive analytical solutions, we extend the channel bandwidth \(B_{i}\rightarrow\infty\) and solve them using identities (63) and (64), yielding
\[\eta_{\text{XPM,cos}}^{(k)}(f_{i})=\frac{\pi B_{k}}{\phi_{i,k}(\alpha_{l,k}+\alpha_{l,k}^{\prime})}\times\\ \times\left[e^{-|\alpha_{l,k}L|}\operatorname{sign}\left(\frac{\phi_{i,k}}{\alpha_{l,k}}\right)+e^{-|\alpha_{l,k}^{\prime}L|}\operatorname{sign}\left(\frac{\phi_{i,k}}{\alpha_{l,k}^{\prime}}\right)\right] \tag{40}\]
and
\[\eta_{\text{XPM,sin}}^{(k)}(f_{i})=\frac{\pi B_{k}}{\phi_{i,k}(\alpha_{l,k}+\alpha_{l,k}^{\prime})}\times\\ \times\left[e^{-|\alpha_{l,k}L|}\operatorname{sign}\left(-\phi_{i,k}\right)+e^{-|\alpha_{l,k}^{\prime}L|}\operatorname{sign}\left(\phi_{i,k}\right)\right] \tag{41}\]
Finally, by inserting Eqs. (39), (40) and (41) in Eq. (35) together with the pre-factor \(\frac{32}{27}\frac{\gamma^{2}}{B_{k}^{2}}\left(\frac{P_{k}}{P_{i}}\right)^{2}\), Eq. (11) is obtained, concluding the proof.
## Appendix D Derivation of the SPM contribution.
This section shows the derivation of Eq. (12). We start by approximating the phase mismatch term. We assume that the dispersion slope \(\beta_{3}\) is constant over the channel bandwidth. Thus, the phase mismatch term can be approximated as
\[\begin{split}&\phi(f_{1}+f_{i},f_{2}+f_{i},f_{i})=\\ &=-4\pi^{2}f_{1}f_{2}\left[\beta_{2}+\pi\beta_{3}(f_{1}+f_{2}+2f_{i})\right]\approx\\ &\approx-4\pi^{2}f_{1}f_{2}(\beta_{2}+2\pi\beta_{3}f_{i})=\\ &=\phi_{i}f_{1}f_{2},\end{split} \tag{42}\]
with \(\phi_{i}=-4\pi^{2}(\beta_{2}+2\pi\beta_{3}f_{i})\).
Now, using Eq. (7) together with Eqs. (6) and (9) with \(k=i\), and omitting the pre-factor of \(\frac{16}{27}\frac{\gamma^{2}}{B_{i}^{2}}\), we can write
\[\eta_{\text{SPM}}(f_{i})=\sum_{\begin{subarray}{c}0\leq l_{1}+l_{2}\leq 1\\ 0\leq l_{1}^{\prime}+l_{2}^{\prime}\leq 1\end{subarray}}\Upsilon_{i}\Upsilon_{i}^{\prime}\Big[(\kappa_{f,i}\kappa_{f,i}^{\prime}+\kappa_{b,i}\kappa_{b,i}^{\prime})\eta_{\text{SPM,min}}(f_{i})-\\ -(\kappa_{f,i}\kappa_{b,i}^{\prime}+\kappa_{b,i}\kappa_{f,i}^{\prime})\eta_{\text{SPM,cos}}(f_{i})+\\ +(\kappa_{f,i}\kappa_{b,i}^{\prime}-\kappa_{b,i}\kappa_{f,i}^{\prime})\eta_{\text{SPM,sin}}(f_{i})\Big], \tag{43}\]
where \(\eta_{\text{SPM,min}}(f_{i})\), \(\eta_{\text{SPM,cos}}(f_{i})\) and \(\eta_{\text{SPM,sin}}(f_{i})\) are given respectively by
\[\eta_{\text{SPM,min}}(f_{i})=\\ =\int_{-\frac{B_{i}}{2}}^{\frac{B_{i}}{2}}df_{1}\int_{-\frac{B_{i}} {2}}^{\frac{B_{i}}{2}}df_{2}\frac{\alpha_{l,i}\alpha_{l,i}^{\prime}+\phi_{i}^{2}f_{ 1}^{2}f_{2}^{2}}{(\alpha_{l,i}^{2}+\phi_{i}^{2}f_{1}^{2}f_{2}^{2})(\alpha_{l,i} ^{\prime 2}+\phi_{i}^{2}f_{1}^{2}f_{2}^{2})}, \tag{44}\]
\[\eta_{\text{SPM,cos}}(f_{i})=\\ =\int_{-\frac{B_{i}}{2}}^{\frac{B_{i}}{2}}df_{1}\int_{-\frac{B_{i}}{2}}^{\frac{B_{i}}{2}}df_{2}\,\frac{\alpha_{l,i}\alpha_{l,i}^{\prime}+\phi_{i}^{2}f_{1}^{2}f_{2}^{2}}{(\alpha_{l,i}^{2}+\phi_{i}^{2}f_{1}^{2}f_{2}^{2})(\alpha_{l,i}^{\prime 2}+\phi_{i}^{2}f_{1}^{2}f_{2}^{2})}\cos(\phi_{i}Lf_{1}f_{2}) \tag{45}\]
and
\[\eta_{\text{SPM,sin}}(f_{i})=\\ =\int_{-\frac{B_{i}}{2}}^{\frac{B_{i}}{2}}df_{1}\int_{-\frac{B_{i}}{2}}^{\frac{B_{i}}{2}}df_{2}\,\frac{(\alpha_{l,i}-\alpha_{l,i}^{\prime})\phi_{i}f_{1}f_{2}}{(\alpha_{l,i}^{2}+\phi_{i}^{2}f_{1}^{2}f_{2}^{2})(\alpha_{l,i}^{\prime 2}+\phi_{i}^{2}f_{1}^{2}f_{2}^{2})}\sin(\phi_{i}Lf_{1}f_{2}). \tag{46}\]
Note that, similarly to Appendix C, the term \(\Pi\left(\frac{f_{1}+f_{2}}{B_{i}}\right)\) is neglected.
In the following the three integrals above are solved. The integral in Eq. (44) is rewritten in polar coordinates \((r,\varphi)\) as
\[\eta_{\text{SPM,min}}(f_{i})\approx 4\int_{0}^{\sqrt{\frac{\pi}{2}}\frac{B_{i}}{2}}dr\int_{0}^{\frac{\pi}{2}}d\varphi\,\times\\ \times\frac{r\left[\alpha_{l,i}\alpha_{l,i}^{\prime}+\frac{\phi_{i}^{2}}{4}r^{4}\sin^{2}(\varphi)\right]}{\left[\alpha_{l,i}^{2}+\frac{\phi_{i}^{2}}{4}r^{4}\sin^{2}(\varphi)\right]\left[\alpha_{l,i}^{\prime 2}+\frac{\phi_{i}^{2}}{4}r^{4}\sin^{2}(\varphi)\right]}, \tag{47}\]
where it was used the relations \(f_{1}=r\cos{(\varphi/2)}\), \(f_{2}=r\sin{(\varphi/2)}\) and \(\sin{(\varphi/2)}\cos{(\varphi/2)}=\frac{\sin{(\varphi)}}{2}\). Also, the integration domain of Eq. (7) was approximated by a circular domain such that the area of both domains are equal [21, Fig. 3]. This yields the variation of the radius in the outer integral as shown in Eq. (47). The inner integral in Eq. (47) can be solved using identity (61), yielding to
\[\eta_{\text{SPM,min}}(f_{i})\approx 4\int_{0}^{\sqrt{\frac{\pi}{2}}\frac{B_{i}}{2}}dr\times\\ \times\frac{r\pi}{\alpha_{l,i}+\alpha_{l,i}^{\prime}}\left[\frac{1}{\sqrt{4\alpha_{l,i}^{2}+\phi_{i}^{2}r^{4}}}+\frac{1}{\sqrt{4\alpha_{l,i}^{\prime 2}+\phi_{i}^{2}r^{4}}}\right]. \tag{48}\]
This integral can be rewritten as:
\[\eta_{\text{SPM,min}}(f_{i})=\frac{2\pi}{\alpha_{l,i}+\alpha_{l,i}^{\prime}}\int_{0}^{\sqrt{\frac{\pi}{2}}\frac{B_{i}}{2}}dr\times\\ \times\left[\frac{r}{\alpha_{l,i}\sqrt{1+\frac{\phi_{i}^{2}r^{4}}{4\alpha_{l,i}^{2}}}}+\frac{r}{\alpha_{l,i}^{\prime}\sqrt{1+\frac{\phi_{i}^{2}r^{4}}{4\alpha_{l,i}^{\prime 2}}}}\right]. \tag{49}\]
The integral in Eq. (49) is solved using identity (62) as
\[\eta_{\text{SPM,min}}(f_{i})=\frac{2\pi}{\phi_{i}(\alpha_{l,i}+\alpha_{l,i}^{\prime})}\times\\ \times\left[\operatorname{asinh}\left(\frac{3\phi_{i}B_{i}^{2}}{8\pi\alpha_{l,i}}\right)+\operatorname{asinh}\left(\frac{3\phi_{i}B_{i}^{2}}{8\pi\alpha_{l,i}^{\prime}}\right)\right]. \tag{50}\]
To solve the integrals in Eqs. (45) and (46), an approach similar to the one used in [36] is adopted. The integrals are converted to hyperbolic coordinates using the relations \(\nu_{1}=\sqrt{f_{1}f_{2}}\), \(\nu_{2}=-\frac{1}{2}\ln\left(\frac{f_{1}}{f_{2}}\right)\), \(f_{1}=\nu_{1}e^{\nu_{2}}\) and \(f_{2}=\nu_{1}e^{-\nu_{2}}\)[16, Sec. VIII-A]; this change of coordinates yields a one-dimensional integral in \(\nu_{1}\). We also use the change of variable \(\nu=\nu_{1}^{2}\)[36] to rewrite Eqs. (45) and (46) as
\[\eta_{\text{SPM,cos}}(f_{i})=\\ =8\int_{0}^{\frac{B_{i}}{2}}d\nu\ln\left(\frac{B_{i}}{2\sqrt{\nu }}\right)\frac{\alpha_{l,i}\alpha^{\prime}_{l,i}+\phi_{i}^{2}\nu^{2}}{(\alpha^ {2}_{l,i}+\phi_{i}^{2}\nu^{2})(\alpha^{\prime 2}_{l,i}+\phi_{i}^{2}\nu^{2})}\cos(\phi_{i}L\nu) \tag{51}\]
and
\[\eta_{\text{SPM,sin}}(f_{i})=\\ =8\int_{0}^{\frac{B_{i}}{2}}d\nu\ln\left(\frac{B_{i}}{2\sqrt{\nu }}\right)\frac{(\alpha_{l,i}-\alpha^{\prime}_{l,i})\phi_{i}\nu}{(\alpha^{2}_{l,i}+\phi_{i}^{2}\nu^{2})(\alpha^{\prime 2}_{l,i}+\phi_{i}^{2}\nu^{2})}\sin(\phi_{i}L\nu). \tag{52}\]
The integrals in Eqs. (51) and (52) do not have analytical solutions in their current form. In order to obtain integrals that admit an analytical solution, we evaluate the logarithm functions at the point \(\nu=\frac{\pi}{2\phi_{i}L}\), chosen such that the cosine function achieves its minimum and the sine function achieves its maximum. This yields
\[\eta_{\text{SPM,cos}}(f_{i})=\\ =8\ln\left(\sqrt{\frac{\phi_{i}L}{2\pi}}B_{i}\right)\int_{0}^{ \frac{B_{i}}{2}}d\nu\frac{\alpha_{l,i}\alpha^{\prime}_{l,i}+\phi_{i}^{2}\nu^{2}}{ (\alpha^{2}_{l,i}+\phi_{i}^{2}\nu^{2})(\alpha^{\prime 2}_{l,i}+\phi_{i}^{2}\nu^{2})}\cos(\phi_{i}L\nu) \tag{53}\]
and
\[\eta_{\text{SPM,sin}}(f_{i})=\\ =8\ln\left(\sqrt{\frac{\phi_{i}L}{2\pi}}B_{i}\right)\int_{0}^{ \frac{B_{i}}{2}}d\nu\frac{(\alpha_{l,i}-\alpha^{\prime}_{l,i})\phi_{i}\nu}{( \alpha^{2}_{l,i}+\phi_{i}^{2}\nu^{2})(\alpha^{\prime 2}_{l,i}+\phi_{i}^{2}\nu^{2})}\sin(\phi_{i}L\nu). \tag{54}\]
The integrals in Eqs. (53) and (54) can now be solved similarly to Appendix C, i.e., by letting \(B_{i}\rightarrow\infty\). This yields
\[\eta_{\text{SPM,cos}}(f_{i})=4\pi\ln\left(\sqrt{\frac{\phi_{i}L}{2 \pi}}B_{i}\right)\times\\ \times\left[e^{-|\alpha_{l,i}L|}\operatorname{sign}\left(\frac{\phi_ {i}}{\alpha_{l,i}}\right)+e^{-|\alpha^{\prime}_{l,i}L|}\operatorname{sign} \left(\frac{\phi_{i}}{\alpha^{\prime}_{l,i}}\right)\right] \tag{55}\]
and
\[\eta_{\text{SPM,sin}}(f_{i})=4\pi\ln\left(\sqrt{\frac{\phi_{i}L}{2 \pi}}B_{i}\right)\times\\ \times\left[e^{-|\alpha_{l,i}L|}\operatorname{sign}\left(-\phi_{i} \right)+e^{-|\alpha^{\prime}_{l,i}L|}\operatorname{sign}\left(\phi_{i}\right) \right]. \tag{56}\]
Finally, by inserting Eqs. (50), (55) and (56) in Eq. (43) together with the pre-factor \(\frac{16}{27}\frac{\gamma^{2}}{B_{i}^{2}}\), Eq. (12) is obtained, concluding the proof.
## Appendix E Mathematical Identities
\[(x+y+z)^{i}=\\ =\sum_{0\leq l_{1}+l_{2}\leq i}\frac{i!}{l_{1}!\,l_{2}!\,(i-l_{1}-l_{2})!}x^{l_{1}}y^{l_{2}}z^{i-l_{1}-l_{2}}. \tag{57}\]
\[|z_{k}|^{2}=\Re(z_{k}\cdot\overline{z}_{k})=z_{k}\cdot\overline{z}_{k}. \tag{58}\]
\[z_{i}\cdot\overline{z}_{j}+z_{j}\cdot\overline{z}_{i}=2\Re(z_{i}\cdot\overline{z}_{j}), \ j<i. \tag{59}\]
\[\begin{split}&\int_{0}^{X}dx\ \frac{ab+c^{2}x^{2}}{(a^{2}+c^{2}x^{2})(b^{2}+c^{2}x^{2})}=\\ &=\frac{1}{c(a+b)}\left[\arctan\left(\frac{cX}{a}\right)+\arctan\left(\frac{cX}{b}\right)\right].\end{split} \tag{60}\]
\[\begin{split}&\int_{0}^{\frac{\pi}{2}}dx\ \frac{ab+c^{2}\sin^{2} \left(x\right)}{[a^{2}+c^{2}\sin^{2}\left(x\right)][b^{2}+c^{2}\sin^{2}\left(x \right)]}=\\ &=\frac{\pi}{2(a+b)}\left(\frac{1}{\sqrt{a^{2}+c^{2}}}+\frac{1}{ \sqrt{b^{2}+c^{2}}}\right).\end{split} \tag{61}\]
\[\begin{split}&\int_{0}^{X}dx\ \frac{x}{\sqrt{1+d^{2}x^{4}}}=\frac{1}{2d} \operatorname{asinh}{(dX^{2})}.\end{split} \tag{62}\]
\[\begin{split}&\int_{0}^{\infty}dx\ \frac{ab+c^{2}x^{2}}{(a^{2}+c^{2}x^{2})(b^{2}+c^{2}x^{2})}\cos(cxL)=\\ &=\frac{\pi}{2}\frac{e^{-|aL|}\operatorname{sign}(c/a)+e^{-|bL|} \operatorname{sign}(c/b)}{c(a+b)}.\end{split} \tag{63}\]
\[\begin{split}&\int_{0}^{\infty}dx\ \frac{(a-b)cx}{(a^{2}+c^{2}x^{2})(b^{2}+c^{2}x^{2})}\sin(cxL)=\\ &=\frac{\pi}{2}\frac{e^{-|aL|}\operatorname{sign}(-c)+e^{-|bL|} \operatorname{sign}(c)}{c(a+b)}.\end{split} \tag{64}\]
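Identities (60) and (62), which drive the closed-form results of Appendices C and D, are easy to check numerically; a minimal sketch with arbitrary positive test values:

```python
import numpy as np
from scipy.integrate import quad

a, b, c, d, X = 0.7, 1.3, 2.0, 0.5, 3.0   # arbitrary positive test values

# Identity (60)
lhs60 = quad(lambda x: (a*b + c**2*x**2) / ((a**2 + c**2*x**2)*(b**2 + c**2*x**2)), 0, X)[0]
rhs60 = (np.arctan(c*X/a) + np.arctan(c*X/b)) / (c*(a + b))

# Identity (62)
lhs62 = quad(lambda x: x / np.sqrt(1 + d**2*x**4), 0, X)[0]
rhs62 = np.arcsinh(d*X**2) / (2*d)

print(lhs60, rhs60)   # should agree
print(lhs62, rhs62)   # should agree
```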
|
2302.04749 | Quantum Advantage from One-Way Functions | We demonstrate quantum advantage with several basic assumptions, specifically
based on only the existence of OWFs. We introduce inefficient-verifier proofs
of quantumness (IV-PoQ), and construct it from classical bit commitments.
IV-PoQ is an interactive protocol between a verifier and a quantum prover
consisting of two phases. In the first phase, the verifier is probabilistic
polynomial-time, and it interacts with the prover. In the second phase, the
verifier becomes inefficient, and makes its decision based on the transcript of
the first phase. If the prover is honest, the inefficient verifier accepts with
high probability, but any classical malicious prover only has a small
probability of being accepted by the inefficient verifier. Our construction
demonstrates the following results: (1)If one-way functions exist, then IV-PoQ
exist. (2)If distributional collision-resistant hash functions exist (which
exist if hard-on-average problems in $\mathbf{SZK}$ exist), then constant-round
IV-PoQ exist. We also demonstrate quantum advantage based on worst-case-hard
assumptions. We define auxiliary-input IV-PoQ (AI-IV-PoQ) that only require
that for any malicious prover, there exist infinitely many auxiliary inputs
under which the prover cannot cheat. We construct AI-IV-PoQ from an
auxiliary-input version of commitments in a similar way, showing that (1)If
auxiliary-input one-way functions exist (which exist if
$\mathbf{CZK}\not\subseteq\mathbf{BPP}$), then AI-IV-PoQ exist. (2)If
auxiliary-input collision-resistant hash functions exist (which is equivalent
to $\mathbf{PWPP}\nsubseteq \mathbf{FBPP}$) or $\mathbf{SZK}\nsubseteq
\mathbf{BPP}$, then constant-round AI-IV-PoQ exist. | Tomoyuki Morimae, Takashi Yamakawa | 2023-02-09T16:31:48Z | http://arxiv.org/abs/2302.04749v2 | # Quantum Advantage from One-Way Functions
###### Abstract
Showing quantum advantage based on weaker and standard classical complexity assumptions is one of the most important goals in quantum information science. In this paper, we demonstrate quantum advantage with several basic assumptions, specifically based on only the existence of classically-secure one-way functions. We introduce _inefficient-verifier proofs of quantumness_ (IV-PoQ), and construct it from statistically-hiding and computationally-binding classical bit commitments. IV-PoQ is an interactive protocol between a verifier and a quantum polynomial-time prover consisting of two phases. In the first phase, the verifier is classical probabilistic polynomial-time, and it interacts with the quantum polynomial-time prover over a classical channel. In the second phase, the verifier becomes inefficient, and makes its decision based on the transcript of the first phase. If the quantum prover is honest, the inefficient verifier accepts with high probability, but any classical probabilistic polynomial-time malicious prover only has a small probability of being accepted by the inefficient verifier. In our construction, the inefficient verifier can be a classical deterministic polynomial-time algorithm that queries an \(\mathbf{NP}\) oracle. Our construction demonstrates the following results based on the known constructions of statistically-hiding and computationally-binding commitments from one-way functions or distributional collision-resistant hash functions:
* If one-way functions exist, then IV-PoQ exist.
* If distributional collision-resistant hash functions exist (which exist if hard-on-average problems in \(\mathbf{SZK}\) exist), then constant-round IV-PoQ exist.
We also demonstrate quantum advantage based on worst-case-hard assumptions. We define _auxiliary-input IV-PoQ_ (AI-IV-PoQ) that only require that for any malicious prover, there exist infinitely many auxiliary inputs under which the prover cannot cheat. We construct AI-IV-PoQ from an auxiliary-input version of commitments in a similar way, showing that
* If auxiliary-input one-way functions exist (which exist if \(\mathbf{CZK}\not\subseteq\mathbf{BPP}\)), then AI-IV-PoQ exist.
* If auxiliary-input collision-resistant hash functions exist (which is equivalent to \(\mathbf{PWPP}\not\subseteq\mathbf{FBPP}\)) or \(\mathbf{SZK}\not\subseteq\mathbf{BPP}\), then constant-round AI-IV-PoQ exist.
Finally, we also show that some variants of PoQ can be constructed from quantum-evaluation one-way functions (QE-OWFs), which are similar to classically-secure classical one-way functions except that the evaluation algorithm is not classical but quantum. QE-OWFs appear to be weaker than classically-secure classical one-way functions.
###### Contents
* 1 Introduction
* 1.1 Our Results
* 1.2 Technical Overview
* 1.3 Related Works
* 2 Preliminaries
* 2.1 Basic Notations
* 2.2 Pairwise-Independent Hash Family
* 2.3 OWFs
* 2.4 Commitments
* 3 Hashing Lemmas
* 4 Inefficient-Verifier Proofs of Quantumness
* 4.1 Definitions
* 4.2 Strong Soundness
* 4.3 Gap Amplification
* 5 Coherent Execution of Classical Bit Commitments
* 6 Construction of IV-PoQ
* 6.1 Completeness
* 6.2 Soundness
* 6.3 Computational Power of the Inefficient Verifier
* 7 Implausibility of Two-Round AI-IV-PoQ
* 7.1 Impossibility of Classical Reduction
* 7.2 Oracle Separation
* 8 Variants of PoQ from QE-OWFs
* A Necessity of Assumptions for (AI-/IO-)IV-PoQ
* B Omitted Contents in Section 2
* B.1 Auxiliary-Input Collision-Resistance and \(\mathbf{PWPP}\not\subseteq\mathbf{FBPP}\)
* B.2 Auxiliary-Input Commitments from \(\mathbf{SZK}\not\subseteq\mathbf{BPP}\)
* C Omitted Proofs for the Completeness
* D Distributionally OWFs
## 1 Introduction
Quantum advantage means that quantum computing outperforms classical one for some computational tasks. Showing quantum advantage based on weaker and standard classical complexity assumptions is one of the most important goals in quantum information science.
One approach to demonstrate quantum advantage is the sampling-based one. In the sampling-based quantum advantage, quantum polynomial-time (QPT) algorithms can sample certain probability distributions but no classical probabilistic polynomial-time (PPT) algorithm can. A great merit of the approach is that relatively simple quantum computing models are enough, such as the Boson Sampling model [1], the IQP model [1], the random circuit model [1], and the one-clean-qubit model [1]. 1 Output probability distributions of these restricted quantum computing models cannot be sampled by any PPT algorithm within a constant multiplicative error2 unless the polynomial-time hierarchy collapses to the third [1, 2] or the second level [1].3 The assumption that the polynomial-time hierarchy does not collapse is a widely-believed assumption in classical complexity theory, but one disadvantage of these results is that the multiplicative-error sampling is unrealistic. The requirement of the multiplicative-error sampling can be relaxed to that of the constant additive-error sampling [1, 1, 1, 2],4 but the trade-off is that the underlying classical complexity assumptions become less standard: some ad-hoc assumptions about average-case \(\boldsymbol{\#}\)P-hardness of some problems, which were not studied before, have to be introduced.
Footnote 1: The Boson Sampling model is a quantum computing model that uses non-interacting bosons, such as photons. The IQP (Instantaneous Quantum Polytime) model is a quantum computing model where only commuting quantum gates are used. The random circuit model is a quantum computing model where each gate is randomly chosen. The one-clean-qubit model is a quantum computing model where the input is \(|0\rangle\langle 0|\otimes\frac{I^{\otimes m}}{2^{m}}\).
Footnote 2: We say that the output probability distribution of a quantum algorithm is sampled by a classical algorithm within a constant multiplicative error \(\epsilon\) if \(|q_{z}-p_{z}|\leq ep_{z}\) is satisfied for all \(z\), where \(q_{z}\) is the probability that the quantum algorithm outputs the bit string \(z\), and \(p_{z}\) is the probability that the classical algorithm outputs the bit string \(z\).
Footnote 3: [1] previously showed that output probability distributions of constant-depth quantum circuits cannot be sampled classically unless \(\mathbf{BQP}\subseteq\mathbf{AM}\). Their assumption can be easily improved to the assumption that the polynomial-time hierarchy does not collapse to the second level.
Footnote 4: We say that the output probability distribution of a quantum algorithm is sampled by a classical algorithm within a constant additive error \(\epsilon\) if \(\sum_{z}|q_{z}-p_{z}|\leq\epsilon\) is satisfied, where \(q_{z}\) is the probability that the quantum algorithm outputs the bit string \(z\), and \(p_{z}\) is the probability that the classical algorithm outputs the bit string \(z\).
Another disadvantage of the sampling-based approach is that it is not known to be verifiable. For the multiplicative-error case, we do not know how to verify quantum advantage even with a computationally-unbounded verifier. Also for the additive-error case, we do not know how to verify the quantum advantage efficiently. (For example, there is a negative result that suggests that exponentially-many samples are necessary to verify the correctness of the sampling [1].) At least, we can say that if there exists a sampling-based quantum advantage in the additive-error case, there exists an inefficiently-verifiable quantum advantage for a certain search problem [1].5
Footnote 5: [1, Theorem 21] showed that if there exists an additive-error sampling problem that is quantumly easy but classically hard, then there exists a search problem that is quantumly easy but classically hard. The relation of the search problem is verified inefficiently. Note that the search problem depends on the time-complexity of the classical adversary, and therefore it is incomparable to our (AI-)IV-PoQ.
Some inefficiently-verifiable search problems that exhibit quantum advantage have been introduced. For example, for the random circuit model, [1, 2] introduced so-called Heavy Output Generation (HOG) and Linear Cross-Entropy Heavy Output Generation (XHOG) where given a quantum circuit \(C\) it is required to output bit strings that satisfy certain relations about \(C\). The relations can be verified inefficiently. The classical hardnesses of these problems are, however, based on new assumptions introduced by the authors. [1] constructed an inefficiently-verifiable search problem (Fourier Fishing), but its quantum advantage is relative to random oracles. [2] constructed another inefficiently-verifiable search problem (Collision Hashing), but its quantum advantage is also relative to random oracles.
There is another approach of demonstrating quantum advantage where the verification is efficient, namely, proofs of quantumness (PoQ) [1]. In PoQ, we have a QPT prover and a PPT verifier. They interact over a classical channel, and the verifier finally makes the decision. If the QPT prover behaves honestly, the verifier accepts with high probability, but for any malicious PPT prover, the verifier accepts with only small probability. The simplest way of realizing PoQ is to let the prover solve an \(\mathbf{NP}\) problem that is quantumly easy but classically hard, such as factoring [12]. Such a simplest way is, however, based on specific assumptions that certain specific problems are hard for PPT algorithms.
The first construction of PoQ based on a general assumption was given in [1] where (noisy) trapdoor claw-free functions with the adaptive-hardcore-bit property6 are assumed. Such functions can be instantiated with the LWE assumption, for example [1]. The adaptive-hardcore-bit property was removed in [16], where only trapdoor 2-to-1 collision-resistant hash functions are assumed. In [15], PoQ was constructed from (full-domain) trapdoor permutations. PoQ can also be constructed from quantum homomorphic encryptions (QHE) [17] for a certain class of quantum operations (such as controlled-Hadamard gates), which can be instantiated with the LWE assumption [18]. These constructions are interactive, i.e., the verifier and the prover have to exchange many rounds of messages. Recently, a non-interactive PoQ has been realized with only random oracles [19]. This result demonstrates efficiently-verifiable quantum advantage with an “unstructured” problem for the first time. However, it is known that hardness relative to a random oracle does not necessarily imply hardness in the unrelativized world where the random oracle is replaced with a real-world hash function [13]. Thus, [19] does not give quantum advantage under a standard assumption in the unrelativized world.
Footnote 6: The adaptive-hardcore-bit property very roughly means that it is hard to find \(x_{b}\) (\(b\in\{0,1\}\)) and \(d\neq\mathbf{0}\) such that \(f_{0}(x_{0})=f_{1}(x_{1})\) and \(d\cdot(x_{0}\oplus x_{1})=0\), given a claw-free pair \((f_{0},f_{1})\).
We therefore have the following open problem.
_Can we construct PoQ from weaker and standard assumptions, such as the existence of one-way functions (OWFs)?_
Note that this open problem is highly non-trivial even if we give up the efficient verification. As we have explained, all previous results on inefficiently-verifiable quantum advantage assume (random) oracles or some ad-hoc assumptions newly introduced by the authors themselves. It is therefore highly non-trivial to answer even the following question.
_Can we demonstrate inefficiently-verifiable quantum advantage with weaker and standard assumptions, such as the existence of OWFs?_
### Our Results
In this paper, we answer the second question affirmatively. We demonstrate inefficiently-verifiable quantum advantage with several basic assumptions, specifically based on only the existence of OWFs. To our knowledge, this is the first time that quantum advantage is shown based only on OWFs. More precisely, we construct what we call _inefficient-verifier proofs of quantumness_ (IV-PoQ) from statistically-hiding and computationally-binding classical bit commitments. IV-PoQ is an interactive protocol between a verifier and a QPT prover, which is divided into two phases. In the first phase, the verifier is PPT, and it interacts with the QPT prover over a classical channel. In the second phase, the verifier becomes inefficient, and makes the decision based on the transcript of the first phase.7 If the QPT prover is honest, the inefficient verifier accepts with high probability, but for any PPT malicious prover, the inefficient verifier accepts with only small probability. The new notion of IV-PoQ captures both the standard PoQ and inefficiently-verifiable quantum advantage (including search problems that exhibit quantum advantage).
Footnote 7: The inefficient verifier could also take the efficient verifier’s secret information as input in addition to the transcript. However, without loss of generality, we can assume that the inefficient verifier takes only the transcript as input, because we can always modify the protocol of the first phase in such a way that the efficient verifier sends its secret information to the prover at the end of the first phase.
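To fix ideas, the following minimal Python sketch encodes only the two-phase message flow of an IV-PoQ as described above; it is purely illustrative and is not the construction given in Section 6.

```python
from typing import List, Tuple

Transcript = List[Tuple[str, bytes]]  # (sender, message) pairs

def run_first_phase(efficient_verifier, prover, rounds: int) -> Transcript:
    """Phase 1: the PPT verifier and the prover exchange classical messages."""
    transcript: Transcript = []
    for _ in range(rounds):
        v_msg = efficient_verifier.next_message(transcript)
        transcript.append(("V", v_msg))
        p_msg = prover.next_message(transcript)
        transcript.append(("P", p_msg))
    return transcript

def run_second_phase(inefficient_verifier, transcript: Transcript) -> bool:
    """Phase 2: the (computationally unbounded) verifier accepts or rejects
    based only on the transcript of the first phase."""
    return inefficient_verifier.decide(transcript)
```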
Our main result is the following:
**Theorem 1.1**.: \((k+6)\)_-round IV-PoQ exist if statistically-hiding and computationally-binding classical bit commitments with \(k\)-round commit phase exist._
A proof of Theorem 1.1 is given in Section 6. Note that we actually need the statistical hiding property only for the honest receiver, because the receiver corresponds to the verifier. Moreover, note that in our construction, it suffices for the inefficient verifier in the second phase to be a classical deterministic polynomial-time algorithm that queries an \(\mathbf{NP}\) oracle. (See Section 6.3.)
Because statistically-hiding and computationally-binding classical bit commitments can be constructed from OWFs [14], we have the following result.
**Theorem 1.2**.: _IV-PoQ exist if OWFs exist._
Moreover, it is known that constant-round statistically-hiding and computationally-binding bit commitments can be constructed from distributional collision-resistant hash functions [1]8, which exist if there is a hard-on-average problem in **SZK**[17]. Therefore, we also have the following result.9
Footnote 8: A distributional collision-resistant hash function [16] is a weaker variant of a collision-resistant hash function that requires the hardness of sampling a collision \((x,y)\) where \(x\) is uniformly random and \(y\) is uniformly random conditioned on colliding with \(x\).
Footnote 9: It is also known that constant-round statistically-hiding and computationally-binding commitments can be constructed from multi-collision resistant hash functions [1, 15], and therefore we have constant-round IV-PoQ from multi-collision resistant hash functions as well.
**Theorem 1.3**.: _Constant-round IV-PoQ exist if there exist distributional collision-resistant hash functions, which exist if there is a hard-on-average problem in **SZK**._
The assumptions in Theorems 1.2 and 1.3 are average-case-hard assumptions. We can further weaken the assumptions to worst-case-hard ones if we require only worst-case soundness for IV-PoQ. Namely, we define _auxiliary-input IV-PoQ_ (AI-IV-PoQ) that only requires that for any malicious prover, there exist infinitely many auxiliary inputs under which the prover cannot cheat. We can show the following:
**Theorem 1.4**.: \((k+6)\)_-round AI-IV-PoQ exist if auxiliary-input statistically-hiding and computationally-binding classical bit commitments with \(k\)-round commit phase exist._
Its proof is omitted because it is similar to that of Theorem 1.1. Although AI-IV-PoQ is weaker than IV-PoQ, we believe that it still demonstrates a meaningful notion of quantum advantage, because it shows "worst-case quantum advantage" in the sense that no PPT algorithm can simulate the QPT honest prover on all auxiliary inputs.
Auxiliary-input OWFs10 exist if \(\mathbf{CZK}\not\subseteq\mathbf{BPP}\)[14].11 Moreover, the construction of statistically-hiding and computationally-binding commitments from OWFs in [15] can be modified for the auxiliary-input setting. We therefore have the following result.
Footnote 10: Roughly speaking, auxiliary-input OWFs are keyed functions such that for each adversary there exist infinitely many keys on which the adversary fails to invert the function.
**Theorem 1.5**.: _AI-IV-PoQ exist if there exist auxiliary-input OWFs, which exist if \(\mathbf{CZK}\not\subseteq\mathbf{BPP}\)._
Furthermore, relying on the known constructions of constant-round (auxiliary-input) statistically-hiding commitments [13, 14], we obtain the following result.
**Theorem 1.6**.: _Constant-round AI-IV-PoQ exist if auxiliary-input collision-resistant hash functions exist (which is equivalent to \(\mathbf{PWPP}\not\subseteq\mathbf{FBPP}\))12 or \(\mathbf{SZK}\not\subseteq\mathbf{BPP}\)._
Footnote 12: See Appendix B.1 for the definitions of \(\mathbf{PWPP}\) and \(\mathbf{FBPP}\).
Finally, we can also define another variant of IV-PoQ that we call _infinitely-often IV-PoQ_ (IO-IV-PoQ) where the soundness is satisfied for infinitely many values of the security parameter. We note that IO-IV-PoQ lie between IV-PoQ and AI-IV-PoQ. It is known that infinitely-often OWFs exist if \(\mathbf{SRE}\not\subseteq\mathbf{BPP}\)[1].13 Therefore we also have the following result.
Footnote 13: **SRE** is the class of problems that admit statistically-private randomized encoding with a polynomial-time client and a computationally-unbounded server.
**Theorem 1.7**.: _IO-IV-PoQ exist if infinitely-often OWFs exist, which exist if \(\mathbf{SRE}\not\subseteq\mathbf{BPP}\)._
A comparison table among existing and our results on quantum advantage can be found in Table 1.
Remarks on completeness-soundness gap. We remark that the above theorems consider (AI-/IO-)IV-PoQ that only have an inverse-polynomial completeness-soundness gap, i.e., the honest QPT prover passes verification with probability at least \(c\) and any PPT cheating prover passes verification with probability at most \(s\) where \(c-s\geq 1/\mathrm{poly}(\lambda)\) for the security parameter \(\lambda\). Due to the inefficiency of verification, it is unclear if we can generically amplify the gap _even by sequential repetition_.14 Fortunately, we find a stronger definition of soundness called strong soundness
which our constructions satisfy and which enables us to amplify the gap by sequential repetition. Roughly speaking, strong soundness requires that soundness holds for almost all fixed choices of the cheating prover's randomness rather than on average. See Definition 4.8 for the formal definition. This enables us to amplify the completeness-soundness gap to be optimal for any of our constructions. However, we remark that this increases the round complexity, and in particular, the schemes of Theorems 1.3 and 1.6 are no longer constant-round if we amplify the completeness-soundness gap. This issue could be resolved if we could prove that parallel repetition amplifies the gap, but we do not know how to prove this. Remark that we cannot use existing parallel repetition theorems for interactive arguments because verification is inefficient. Indeed, it is observed in [10] that parallel repetition may not amplify the gap when verification is inefficient even for two-round arguments. Thus, we believe that it is very challenging or even impossible to prove a general parallel repetition theorem for (AI-/IO-)IV-PoQ. Nonetheless, it may still be possible to prove a parallel repetition theorem for our particular constructions, which we leave as an interesting open problem.
Implausibility of two-round AI-IV-PoQ. It is natural to ask how many rounds of interaction are needed. As already mentioned, it is trivial to construct two-round PoQ if we assume the existence of classically-hard and quantumly-easy problems such as factoring. We show evidence that it is inevitable to rely on such an assumption for constructing two-round (AI-/IO-)IV-PoQ. In the following, we state theorems for AI-IV-PoQ, but they immediately imply similar
\begin{table}
\begin{tabular}{l c c c c} \hline Ref. & Verification & \#Rounds & Assumption & Misc \\ \hline \hline
[13, 1, 1, 1, 1, 1] & No & \(1\) & PH does not collapse & Mult.err. sampling \\ \hline
[1, 1, 1, 1] & No & \(1\) & Ad hoc & Add.err. sampling \\ \hline
[1] & No & \(1\) & Random oracle & Fourier Sampling \\ \hline
[1] & No & \(1\) & seOWFs+\(\mathbf{P/poly}\)-oracle & Fourier Sampling \\ \hline
[1, 1] & Inefficient & \(1\) & Ad hoc & HOG, XHOG \\ \hline
[1] & Inefficient & \(1\) & Random oracle & Fourier Fishing \\ \hline
[1] & Inefficient & \(1\) & seOWFs+\(\mathbf{P/poly}\)-oracle & Fourier Fishing \\ \hline
[1] & Inefficient & \(1\) & Random oracle & Collision Hashing \\ \hline
[1] & Efficient & \(2\) & Factoring/Discrete-log & \\ \hline
[1] & Efficient & \(1\) & Random oracle & \\ \hline
[1, 1] & Efficient & \(O(1)\) & (Noisy) 2-1 TDCRHFs & \\ \hline
[1] & Efficient & \(O(1)\) & QHE & \\ \hline
[1] & Efficient & \(\mathrm{poly}(\lambda)\) & fdTDPs & \\ \hline Theorem 1.2 & Inefficient & \(\mathrm{poly}(\lambda)\) & OWFs & \\ \hline Theorem 1.3 & Inefficient & \(O(1)\) & dCRHFs & \\ \hline Theorem 1.5 & Inefficient & \(\mathrm{poly}(\lambda)\) & \begin{tabular}{c} Auxiliary-input OWFs / \\ **CZK**\(\not\subseteq\)**\(\mathbf{BPP}\)** \\ \end{tabular} & AI-IV-PoQ \\ \hline Theorem 1.6 & Inefficient & \(O(1)\) & \begin{tabular}{c} Auxiliary-input CRHFs / \\ **SZK**\(\not\subseteq\)**\(\mathbf{BPP}\)** \\ \end{tabular} & AI-IV-PoQ \\ \hline Theorem 1.7 & Inefficient & \(\mathrm{poly}(\lambda)\) &
\begin{tabular}{c} Infinitely-often OWFs / \\ **SRE**\(\not\subseteq\)**\(\mathbf{BPP}\)** \\ \end{tabular} & IO-IV-PoQ \\ \hline \end{tabular}
\end{table}
Table 1: Comparison among results on quantum advantage. In column “Verification”, “No” means that the verification is not known to be possible. (Actually, it seems to be impossible.) In column “Assumption”, PH stands for the polynomial-time hierarchy, seOWFs stands for subexponentially secure one-way functions, 2-1 TDCRHFs stands for 2-to-1 trapdoor collision-resistant hash functions, QHE stands for quantum homomorphic encryption, fdTDPs stands for full-domain trapdoor permutations, OWFs stands for one-way functions, dCRHFs stands for distributional collision-resistant hash functions, and CRHFs stands for collision-resistant hash functions. In column “Misc”, Mult.err. and Add.err. stand for multiplicative and additive errors, respectively. In the row of [13], the number of rounds is two, because the verifier sends a composite number to the prover, and the prover returns its factorization. It can be considered as non-interactive if the composite number is given as an auxiliary input.
results for IV-PoQ and IO-IV-PoQ because they are stronger than AI-IV-PoQ.
First, we prove that there is no classical black-box reduction from security of two-round AI-IV-PoQ to standard cryptographic assumptions unless the assumptions do not hold against QPT adversaries.
**Theorem 1.8** (Informal).: _For a two-round AI-IV-PoQ, if its soundness can be reduced to a game-based assumption by a classical black-box reduction, then the assumption does not hold against QPT adversaries._
The formal version of the theorem is given in Theorem 7.5. Here, game-based assumptions are those formalized as a game between the adversary and the challenger that include (but are not limited to) general assumptions such as security of OWFs, public key encryption, digital signatures, oblivious transfers, indistinguishability obfuscation, succinct arguments etc. as well as concrete assumptions such as the hardness of factoring, discrete-logarithm, LWE etc.15 See Definition 7.1 for a formal definition.
Footnote 15: This is similar to falsifiable assumptions [21, 17] but there is an important difference that we do not restrict the challenger to be efficient.
The proof idea is quite simple: Suppose that there is a classical black-box reduction algorithm \(R\) that is given a malicious prover as an oracle and breaks an assumption. Intuitively, the reduction should still work even if it is given the honest quantum prover \(\mathcal{P}\) as an oracle. By considering the combination of \(R\) and \(\mathcal{P}\) as a single quantum adversary, the assumption is broken. We remark that this can be seen as an extension of an informal argument in [1] where they argue that it is unlikely that a two-round PoQ can be constructed from the hardness of the LWE problem.16
Footnote 16: They use one-round PoQ to mean what we call two-round PoQ by counting interaction from the verifier to prover and from the prover to verifier as a single round.
Note that Theorem 1.8 only rules out classical reductions. One may think that the above argument extends to rule out quantum reductions, but there is a technical difficulty. Roughly speaking, the problem is that a coherent execution of the honest quantum prover may generate entanglement between its message register and its internal register, unlike a coherent execution of a classical cheating prover (see Remark 7.6 for more explanations).17 To complement this, we prove another negative result that also captures some class of quantum reductions.
Footnote 17: This observation is due to Mark Zhandry.
**Theorem 1.9** (Informal).: _If a cryptographic primitive \(\mathtt{P}\) has a quantumly-secure construction (possibly relative to a classical oracle), then there is a randomized classical oracle relative to which two-round AI-IV-PoQ do not exist but a quantumly-secure construction of \(\mathtt{P}\) exists._
The formal version of the theorem is given in Theorem 7.13. The above theorem can be interpreted as negative evidence against constructing two-round IV-PoQ from a cryptographic primitive for which we believe that quantumly-secure constructions exist (e.g., OWFs, public key encryption, indistinguishability obfuscation etc.) In particular, the above theorem rules out any constructions that work relative to randomized classical oracles.18 Theorem 1.9 is incomparable to Theorem 1.8 since Theorem 1.9 does not require the reduction to be classical unlike Theorem 1.8, but requires that the construction and reduction work relative to randomized classical oracles.
Footnote 18: Note that reductions that work relative to _deterministic_ classical oracles do not necessarily work relative to _randomized_ classical oracles [1, Section 5].
Again, the proof idea is simple. Suppose that a quantumly-secure construction \(f\) of a primitive \(\mathtt{P}\) exists relative to an oracle \(O\). Then we introduce an additional oracle \(Q^{O}\) that takes a description of a quantum circuit \(C^{O}\) with \(O\)-gates and its input \(x\) as input and outputs a classical string according to the distribution of \(C^{O}(x)\). Relative to oracles \((O,Q^{O})\), there do not exist AI-IV-PoQ since a classical malicious prover can query the description of the honest quantum prover to \(Q^{O}\) to get a response that passes the verification with high probability. On the other hand, \(f\) is quantumly-secure relative to \((O,Q^{O})\) since we assume that it is quantumly-secure relative to \(O\) and the additional oracle \(Q^{O}\) is useless for quantum adversaries since they can simulate it by themselves.
We remark that the above theorems do not completely rule out black-box constructions of two-round AI-IV-PoQ from quantumly-hard assumptions. For example, consider a quantum black-box reduction that queries a cheating prover with a fixed randomness multiple times. Such a reduction is not captured by Theorem 1.8 because it is quantum. Moreover, it is not captured by Theorem 1.9 because it does not work relative to randomized classical oracles since we cannot fix the randomness of the randomized classical oracle. It is a very interesting open problem to study if such a reduction is possible.
Quantum advantage based on quantum primitives weaker than OWFs. The existence of OWFs is the most fundamental assumption in classical cryptography. Interestingly, it has been realized recently that this is not necessarily the case in quantum cryptography [14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24]. Many quantum cryptographic tasks can be realized with new quantum primitives, which seem to be weaker than OWFs, such as pseudorandom states generators [14], one-way states generators [15], and EFI [2]. Can we construct PoQ (or its variants) from quantum primitives that seem to be weaker than OWFs? We show that variants of PoQ can be constructed from (classically-secure) quantum-evaluation OWFs (QE-OWFs). QE-OWFs are the same as standard classically-secure classical OWFs except that the function evaluation algorithm is not deterministic classical polynomial-time but quantum polynomial-time. (Its definition is given in Section 8.) QE-OWFs seem to be weaker than classically-secure classical OWFs. (For example, consider the function \(f\) that on input \((x,y)\) outputs \(\Pi_{L}(x)\|g(y)\), where \(L\) is any language in \(\mathbf{BQP}\setminus\mathbf{BPP}\), \(\Pi_{L}\) is a function such that \(\Pi_{L}(x)=1\) if \(x\in L\) and \(\Pi_{L}(x)=0\) if \(x\notin L\), and \(g\) is any classically-secure classical OWF. \(f\) is a QE-OWF, and \(f\) cannot be evaluated in classical polynomial-time if \(\mathbf{BQP}\neq\mathbf{BPP}\). For details, see Section 8.) We show the following result.
**Theorem 1.10**.: _If QE-OWFs exist, then quantum-verifier PoQ (QV-PoQ) exist or infinitely-often classically-secure classical OWFs exist._
A proof of the theorem is given in Section 8. QV-PoQ is the same as PoQ except that the verifier is a QPT algorithm. Such a new notion of PoQ will be useful, for example, when many local quantum computers are connected over the classical internet: a local quantum machine may want to check, over a classical channel, whether it is interacting with a quantum computer.
The proof idea of Theorem 1.10 is as follows. Let \(f\) be a QE-OWF. We construct QV-PoQ as follows: The verifier first chooses \(x\leftarrow\{0,1\}^{n}\) and sends it to the prover. The prover then returns \(y\). The verifier finally evaluates \(f(x)\) by himself, and accepts if it is equal to \(y\). If the soundness holds, we have QV-PoQ. On the other hand, if the soundness does not hold, then it means that \(f\) can be evaluated in PPT, which means that \(f\) is a classical OWF. It is an interesting open problem whether PoQ or its variants can be constructed from pseudorandom quantum states generators, one-way states generators, or EFI.
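To fix ideas, here is a minimal classical mock-up of this two-round protocol in Python. The name `quantum_eval_f` is a hypothetical placeholder for the QPT evaluation of the QE-OWF \(f\); it is instantiated below with an ordinary classical hash purely so that the script runs, whereas a genuine QE-OWF evaluation would be a quantum algorithm.

```python
import hashlib
import secrets

def quantum_eval_f(x: bytes) -> bytes:
    # Hypothetical stand-in for the QPT evaluation of a QE-OWF f.
    # A real QE-OWF need not be classically computable; this toy
    # function only exercises the protocol's message flow.
    return hashlib.sha256(x).digest()

def verifier_round_1(n: int = 16) -> bytes:
    # The verifier samples x uniformly at random and sends it to the prover.
    return secrets.token_bytes(n)

def honest_prover(x: bytes) -> bytes:
    # The honest (quantum) prover returns y = f(x).
    return quantum_eval_f(x)

def verifier_round_2(x: bytes, y: bytes) -> bool:
    # The (quantum) verifier recomputes f(x) by itself and accepts iff it matches.
    return quantum_eval_f(x) == y

x = verifier_round_1()
y = honest_prover(x)
assert verifier_round_2(x, y)
print("accepted")
```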
Infinitely-often classically-secure OWFs imply IO-IV-PoQ (Theorem 1.7), and therefore Theorem 1.10 shows that the existence of QE-OWFs anyway implies quantum advantage (i.e., QV-PoQ or IO-IV-PoQ). Moreover, QV-PoQ in Theorem 1.10 implies IV-PoQ (and therefore IO-IV-PoQ). (In general, QV-PoQ does not necessarily imply IV-PoQ, but in our case, it does because our construction of QV-PoQ is a two-round protocol with the verifier's first message being a uniformly-randomly-chosen classical bit string.) Hence we have the result that QE-OWFs implies IO-IV-PoQ in either case.
### Technical Overview
In this subsection, we provide technical overview of our main result, Theorem 1.1, namely, the construction of IV-PoQ from statistically-hiding commitments. (The construction of AI-IV-PoQ is similar.) Our construction is based on PoQ of [13]. Let us first review their protocol. Their protocol can be divided into two phases. In the first phase, the verifier first generates a pair of a trapdoor and a trapdoor 2-to-1 collision resistant hash function \(F\). The verifier sends \(F\) to the prover. The prover generates the quantum state \(\sum_{x\in\{0,1\}^{\ell}}\left|x\right\rangle\left|F(x)\right\rangle\), and measures the second register in the computational basis to obtain the measurement result \(y\). The post-measurement state is \(\left|x_{0}\right\rangle+\left|x_{1}\right\rangle\), where \(F(x_{0})=F(x_{1})=y\). This is the end of the first phase.
In the second phase, the verifier chooses a challenge bit \(c\in\{0,1\}\) uniformly at random. If \(c=0\), the verifier asks the prover to measure the state in the computational basis. The verifier accepts and halts if the prover's measurement result is \(x_{0}\) or \(x_{1}\). (The verifier can compute \(x_{0}\) and \(x_{1}\) from \(y\), because it has the trapdoor.) The verifier rejects and halts if the prover's measurement result is not correct. If \(c=1\), the verifier sends the prover a bit string \(\xi\in\{0,1\}^{\ell}\) which is chosen uniformly at random. The prover changes the state \(\left|x_{0}\right\rangle+\left|x_{1}\right\rangle\) into the state \(\left|\xi\cdot x_{0}\right\rangle\left|x_{0}\right\rangle+\left|\xi\cdot x_{1 }\right\rangle\left|x_{1}\right\rangle\), and measures the second register in the Hadamard basis. If the measurement result is \(d\in\{0,1\}^{\ell}\), the post-measurement state is \(\left|\xi\cdot x_{0}\right\rangle+(-1)^{d\cdot(x_{0}\oplus x_{1})}\left|\xi \cdot x_{1}\right\rangle\), which is one of the BB84 states \(\{\left|0\right\rangle,\left|1\right\rangle,\left|+\right\rangle,\left|- \right\rangle\}\). The verifier then asks the prover to measure this single-qubit state in a certain basis, and accepts if the measurement result is appropriate. This is the end of the second phase. Intuitively, the soundness comes from the collision resistance of \(F\): If a malicious PPT
prover is accepted by the verifier with some high probability for both challenges, \(c=0\) and \(c=1\), we can construct a PPT adversary that can find both \(x_{0}\) and \(x_{1}\) with non-negligible probability, which contradicts the collision resistance.
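As a sanity check of this bookkeeping (our own illustration, not code from [13]), the following sketch classically tracks the state \(\ket{\xi\cdot x_{0}}\ket{x_{0}}+\ket{\xi\cdot x_{1}}\ket{x_{1}}\), samples a Hadamard-basis outcome \(d\) for the second register with the correct distribution, and reports which BB84 state is left in the first register.

```python
import random

def dot(a, b):
    # Inner product over GF(2) of two equal-length bit tuples.
    return sum(x & y for x, y in zip(a, b)) % 2

def second_phase_state(x0, x1, xi, ell):
    # Start from |xi.x0>|x0> + |xi.x1>|x1| and measure the second
    # register in the Hadamard basis; return (d, residual BB84 state).
    b0, b1 = dot(xi, x0), dot(xi, x1)
    delta = tuple(a ^ b for a, b in zip(x0, x1))  # x0 XOR x1, nonzero
    if b0 != b1:
        # First-register components are orthogonal, so d is uniform.
        d = tuple(random.randint(0, 1) for _ in range(ell))
    else:
        # Pr[d] is proportional to |1 + (-1)^{d.delta}|^2, so only d with
        # d.delta = 0 can occur (uniformly); rejection-sample such a d.
        while True:
            d = tuple(random.randint(0, 1) for _ in range(ell))
            if dot(d, delta) == 0:
                break
    phase = (-1) ** dot(d, delta)
    if b0 == b1:
        return d, f"|{b0}>"                        # computational-basis state
    return d, ("|+>" if phase == 1 else "|->")     # Hadamard-basis state

random.seed(0)
ell = 8
x0 = tuple(random.randint(0, 1) for _ in range(ell))
x1 = x0
while x1 == x0:                                    # ensure two distinct preimages
    x1 = tuple(random.randint(0, 1) for _ in range(ell))
xi = tuple(random.randint(0, 1) for _ in range(ell))
print(second_phase_state(x0, x1, xi, ell))
```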
Therefore, once we can construct an interactive protocol where a verifier can let a prover generate \(\ket{x_{0}}+\ket{x_{1}}\) in such a way that no malicious PPT prover can learn both \(x_{0}\) and \(x_{1}\), we can construct PoQ by running the second phase of [13] on it. Can we do that with only OWFs? Our key idea is to _coherently_ execute statistically-hiding classical bit commitments, which can be constructed from OWFs [14]. (A similar idea was also used in [15].) The prover plays the role of the sender of the commitment scheme, and the verifier plays the role of the receiver of the commitment scheme. The prover first generates the state \(\sum_{b\in\{0,1\}}\sum_{x\in\{0,1\}^{\ell}}\ket{b}\ket{x}\), which is the superposition of the bit \(b\in\{0,1\}\) to commit and sender's random seed \(x\in\{0,1\}^{\ell}\). The prover and the verifier then run the interactive commitment phase. When the prover computes its message, it coherently computes the message on its state, and measures a register to obtain the measurement result.9 The prover sends the measurement result as the sender's message to the verifier. The verifier runs classical receiver's algorithm, and sends classical message to the prover. At the end of the commit phase, the honest prover possesses the state
Footnote 9: For example, in the prover’s \(j\)th round, if the prover possesses a state \(\sum_{b\in\{0,1\}}\sum_{x\in X_{b}}\ket{b}\ket{x}\), where \(X_{b}\) is a certain set, it changes the state into \(\sum_{b\in\{0,1\}}\sum_{x\in X_{b}}\ket{b}\ket{x}\ket{f_{j}(b,x,t_{j})}\), and measures the third register to obtain the measurement result \(\alpha_{j}\), where \(f_{j}\) is the function that computes sender’s \(j\)th message, and \(t_{j}\) is the transcript obtained before the \(j\)th round. The prover sends \(\alpha_{j}\) to the verifier as the sender’s \(j\)th message.
Footnote 20: Strictly speaking, \(\ket{0}\ket{x_{0}}+\ket{1}\ket{x_{1}}\) is not equal to \(\ket{x_{0}}+\ket{x_{1}}\), but the protocol can be easily modified. Given \(\xi\), the prover has only to change \(\ket{0}\ket{x_{0}}+\ket{1}\ket{x_{1}}\) to \(\ket{\xi\cdot x_{0}}\ket{x_{0}}+\ket{1\oplus\left(\xi\cdot x_{1}\right)}\ket{x_{1}}\).
\[\ket{0}\sum_{x\in X_{0,t}}\ket{x}+\ket{1}\sum_{x\in X_{1,t}}\ket{x}, \tag{1}\]
where \(X_{b,t}\) is the set of sender's random seeds that are consistent with the committed bit \(b\) and the transcript \(t\), which is the sequence of all classical messages exchanged between the prover and the verifier.
If \(|X_{0,t}|=|X_{1,t}|=1\), Equation (1) is \(\ket{0}\ket{x_{0}}+\ket{1}\ket{x_{1}}\), where \(x_{b}\) is the unique element of \(X_{b,t}\) for each \(b\in\{0,1\}\). In that case, we can run the second phase of [13] on it.20 However, in general, \(|X_{0,t}|=|X_{1,t}|=1\) is not always satisfied, and if it is not satisfied, we do not know how to realize PoQ from the state of Equation (1). This is our first problem. Moreover, even if \(|X_{0,t}|=|X_{1,t}|=1\) is satisfied, we have the second problem: The efficient verifier cannot compute \((x_{0},x_{1})\), because there is no trapdoor. The efficient verifier therefore cannot check whether the prover passes the tests or not.21
Footnote 21: In [15], they resolve the first problem by using a specific commitment scheme of [12] and resolve the second problem by simply assuming the existence of a trapdoor. However, since the commitment scheme of [12] relies on one-way _permutations_, their idea does not work based on OWFs even if we give up efficient verification.
Unfortunately, we do not know how to solve the second problem, and therefore we have to give up the efficient verification. On the other hand, we can solve the first problem by introducing a new hashing technique, which may have further applications. First, we notice that \(|X_{0,t}|\simeq|X_{1,t}|\) with overwhelming probability, because otherwise the statistical-hiding of the classical bit commitment scheme is broken. Next, let \(\mathcal{H}\coloneqq\{h:\mathcal{X}\rightarrow\mathcal{Y}\}\) be a pairwise-independent hash family with \(\mathcal{X}=\{0,1\}^{\ell}\). The verifier chooses \(h_{0},h_{1}\in\mathcal{H}\) uniformly at random, and sends \((h_{0},h_{1})\) to the prover. The prover changes the state of Equation (1) into
\[\ket{0}\sum_{x\in X_{0,t}}\ket{x}\ket{h_{0}(x)}+\ket{1}\sum_{x\in X_{1,t}} \ket{x}\ket{h_{1}(x)}, \tag{2}\]
and measures the third register in the computational basis to obtain the measurement result \(y\). We show that if \(|\mathcal{Y}|\) is chosen so that \(|\mathcal{Y}|\simeq 2|X_{b,t}|\), the state collapses by the measurement to \(\ket{0}\ket{x_{0}}+\ket{1}\ket{x_{1}}\) with constant probability, where \(x_{b}\in X_{b,t}\cap h_{b}^{-1}(y)\) for \(b\in\{0,1\}\). The remaining problem is that the efficient verifier cannot compute \(|X_{b,t}|\), and therefore it cannot find the appropriate \(|\mathcal{Y}|\). This problem is solved by noticing that even if the verifier chooses \(|\mathcal{Y}|\) randomly, it is \(\simeq 2|X_{b,t}|\) with non-negligible probability. More precisely, let \(m\) be an integer such that \((1+\epsilon)^{m}\geq 2^{\ell+1}\), where \(0<\epsilon<1\) is a small constant (which we take \(\epsilon=1/100\), for example). Then, we show that there exists a \(j^{*}\in\{0,1,...,m-1\}\) such that \(\lceil(1+\epsilon)^{j^{*}}\rceil\simeq 2|X_{b,t}|\). Therefore, if the efficient verifier chooses \(j\in\{0,1,...,m-1\}\) uniformly at random, and sets \(|\mathcal{Y}|\coloneqq\lceil(1+\epsilon)^{j}\rceil\), then \(|\mathcal{Y}|\simeq 2|X_{b,t}|\) is satisfied with probability \(1/m=1/\mathrm{poly}(\lambda)\).
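The following Monte Carlo sketch (our own illustration, with all concrete sizes chosen arbitrarily) mimics this hashing step: it takes two disjoint sets \(X_{0},X_{1}\) of equal size, draws \(h_{0},h_{1}\) from the random-affine-map-over-GF(2) pairwise-independent family, samples the measured value \(y\) with probability proportional to the number of surviving preimages, and estimates how often exactly one preimage remains on each side when \(|\mathcal{Y}|\approx 2|X_{b}|\).

```python
import random

def random_affine_hash(ell_in, ell_out):
    # h(x) = A x + c over GF(2) with uniform A, c: a standard
    # pairwise-independent family from {0,1}^ell_in to {0,1}^ell_out.
    A = [[random.randint(0, 1) for _ in range(ell_in)] for _ in range(ell_out)]
    c = [random.randint(0, 1) for _ in range(ell_out)]
    return lambda x: tuple((sum(a & xb for a, xb in zip(row, x)) + ci) % 2
                           for row, ci in zip(A, c))

def trial(X0, X1, ell_in, ell_out):
    h0 = random_affine_hash(ell_in, ell_out)
    h1 = random_affine_hash(ell_in, ell_out)
    # Measuring the third register of Equation (2): outcome y appears with
    # probability proportional to |X0 cap h0^-1(y)| + |X1 cap h1^-1(y)|.
    y = random.choice([h0(x) for x in X0] + [h1(x) for x in X1])
    n0 = sum(1 for x in X0 if h0(x) == y)
    n1 = sum(1 for x in X1 if h1(x) == y)
    return n0 == 1 and n1 == 1        # state collapsed to |0>|x_0> + |1>|x_1>

random.seed(1)
ell_in, size, ell_out = 12, 32, 6     # |X_b| = 32 and |Y| = 2^6 = 64 ~ 2|X_b|
to_bits = lambda v: tuple((v >> i) & 1 for i in range(ell_in))
values = random.sample(range(2 ** ell_in), 2 * size)
X0 = [to_bits(v) for v in values[:size]]
X1 = [to_bits(v) for v in values[size:]]
runs = 2000
print(sum(trial(X0, X1, ell_in, ell_out) for _ in range(runs)) / runs)
```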
In summary, the efficient verifier can let the honest prover generate \(\ket{0}\ket{x_{0}}+\ket{1}\ket{x_{1}}\) with non-negligible probability. Fortunately, the second phase of [13] is a public coin one, which means that all messages from the verifier
are uniformly-chosen random bit strings, and therefore our efficient verifier can send all its messages without doing any inefficient computation (such as finding an element of \(X_{b,t}\cap h_{b}^{-1}(y)\), etc.). All verifications are later done by the inefficient verifier.
The soundness of our construction is shown from the computational-binding of the classical bit commitment scheme. In the soundness proof of [13], they use the fact that no PPT malicious prover can find both \(x_{0}\) and \(x_{1}\), which comes from the collision resistance. In our case, we have that property from the computational-binding of the classical bit commitment scheme. In a similar way as the soundness proof of [13], we can construct a PPT adversary \(\mathcal{A}\) that can find both \(x_{0}\) and \(x_{1}\) from a PPT malicious prover that passes both challenges with some high probability. We can then construct a PPT adversary \(\mathcal{B}\) that breaks computational-binding of the classical bit commitment scheme from \(\mathcal{A}\).
There is, however, a large difference in our case from that of [13]. In the protocol of [13], the honest prover's state is always \(\left|x_{0}\right\rangle+\left|x_{1}\right\rangle\), but in our case \(\left|X_{0,t}\cap h_{0}^{-1}(y)\right|=\left|X_{1,t}\cap h_{1}^{-1}(y)\right|=1\) is not always satisfied. In order to keep the \(1/\mathrm{poly}\) completeness-soundness gap in our protocol, we need a trick for the algorithm of the inefficient verifier. The inefficient verifier first checks whether \(\left|X_{0,t}\cap h_{0}^{-1}(y)\right|=\left|X_{1,t}\cap h_{1}^{-1}(y)\right|=1\) is satisfied or not. If it is satisfied, the inefficient verifier computes the unique element \(x_{b}\in X_{b,t}\cap h_{b}^{-1}(y)\) for each \(b\in\{0,1\}\), and checks whether the transcript passes the second phase of the protocol of [13] or not. On the other hand, if \(\left|X_{0,t}\cap h_{0}^{-1}(y)\right|=\left|X_{1,t}\cap h_{1}^{-1}(y)\right|=1\) is not satisfied, we need some trick. A naive attempt would be to always accept in such a case. Intuitively, this would give a \(1/\mathrm{poly}\) completeness-soundness gap because we have a constant completeness-soundness gap conditioned on \(\left|X_{0,t}\cap h_{0}^{-1}(y)\right|=\left|X_{1,t}\cap h_{1}^{-1}(y)\right|=1\) by [13] and such an event occurs with probability \(1/\mathrm{poly}\) as explained above. However, there is a flaw in the argument because a malicious prover may change the probability that \(\left|X_{0,t}\cap h_{0}^{-1}(y)\right|=\left|X_{1,t}\cap h_{1}^{-1}(y)\right|=1\) holds. For example, if it can control the probability to be \(1\), then it passes the verification with probability \(1\), which is even higher than the honest quantum prover's success probability! Due to a similar reason, an attempt to let the inefficient verifier always reject when \(\left|X_{0,t}\cap h_{0}^{-1}(y)\right|=\left|X_{1,t}\cap h_{1}^{-1}(y)\right|=1\) is not satisfied also does not work. Our idea is to take the middle of the two attempts: If \(\left|X_{0,t}\cap h_{0}^{-1}(y)\right|=\left|X_{1,t}\cap h_{1}^{-1}(y)\right|=1\) is not satisfied, the inefficient verifier accepts with probability \(s\) and rejects with probability \(1-s\), where \(s\) is the soundness parameter of the PoQ protocol of [13], i.e., for any malicious prover, the verifier accepts with probability at most \(s+\mathsf{negl}(\lambda)\). Let \(p_{\mathsf{good}}\) be the probability that \(\left|X_{0,t}\cap h_{0}^{-1}(y)\right|=\left|X_{1,t}\cap h_{1}^{-1}(y)\right|=1\) is satisfied in the interaction between the honest prover and the verifier. Then, the probability that the inefficient verifier accepts the honest prover is at least \(p_{\mathsf{good}}c+(1-p_{\mathsf{good}})s\), where \(c\) is the completeness parameter of the PoQ protocol of [13], i.e., the verifier accepts the honest prover with probability at least \(c\). On the other hand, we show that the soundness parameter of our protocol is also \(s\). (Intuitively, this is because if \(\left|X_{0,t}\cap h_{0}^{-1}(y)\right|=\left|X_{1,t}\cap h_{1}^{-1}(y)\right|=1\) is satisfied, then a malicious prover can pass the verification with probability at most \(s+\mathsf{negl}(\lambda)\) by the soundness of the PoQ protocol of [13], and if \(\left|X_{0,t}\cap h_{0}^{-1}(y)\right|=\left|X_{1,t}\cap h_{1}^{-1}(y)\right|=1\) is not satisfied, the verifier accepts with probability \(s\) regardless of the prover's behavior.) 
Therefore, we have \(p_{\mathsf{good}}c+(1-p_{\mathsf{good}})s-s=p_{\mathsf{good}}(c-s)\geq 1/ \mathrm{poly}\), because \(p_{\mathsf{good}}\geq 1/\mathrm{poly}\) as we have explained. In this way, we can achieve the \(1/\mathrm{poly}\) completeness-soundness gap.
Finally, in our construction, the inefficient verifier is enough to be a classical deterministic polynomial-time algorithm that queries the \(\mathbf{NP}\) oracle, because as we have explained above, inefficient computations that the inefficient verifier has to do are verifying \(\left|X_{0,t}\cap h_{0}^{-1}(y)\right|=\left|X_{1,t}\cap h_{1}^{-1}(y)\right|=1\) and finding the single element \(x_{b}\in X_{b,t}\cap h_{b}^{-1}(y)\) for each \(b\in\{0,1\}\).
### Related Works
IV-PoQ from random oracles was constructed in [1], which they call Collision Hashing. Their construction is based on the observation that if the state \(\sum_{x}\left|x\right\rangle\left|g(x)\right\rangle\) is generated, where \(g\) is a random oracle, and the second register is measured in the computational basis, the post-measurement state \(\sum_{x\in g^{-1}(y)}\left|x\right\rangle\) corresponding to the measurement result \(y\) is a superposition of two computational-basis states with some probability on which the second phase of [13] can be run. (Actually, because they assume random oracles, the non-interactive protocol of [1] can be run instead of [13].) This idea seems to be somehow related to our idea.
[1] studied a sampling problem, Fourier Sampling, where given an oracle \(f:\{0,1\}^{n}\rightarrow\{+1,-1\}\), it is required to sample from the distribution \(\{p_{y}\}_{y}\), where \(p_{y}\coloneqq 2^{-n}\tilde{f}(y)^{2}=(\frac{1}{2^{n}}\sum_{x\in\{0,1\}^{n}}f(x)(-1 )^{x\cdot y})^{2}\) within an
additive error. It needs exponentially-many queries to classically solve it relative to a random oracle. [1] also introduced a search problem, Fourier Fishing, where given an oracle \(f:\{0,1\}^{n}\rightarrow\{+1,-1\}\), find \(z\in\{0,1\}^{n}\) such that \(|\hat{f}(z)|\geq 1\). It needs exponentially-many queries to classically solve it relative to a random oracle. The verification of Fourier Fishing can be done inefficiently. [1] also introduced a decision problem, Fourier Checking, and showed that it requires exponentially-many queries to solve it classically relative to a certain oracle. Whether \(\mathbf{BQP}\neq\mathbf{BPP}\) relative to a random oracle is an open problem, and given the Aaronson-Ambainis conjecture [1], showing it seems to be difficult.
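For concreteness, the target distribution \(\{p_{y}\}_{y}\) of Fourier Sampling can be computed by brute force for small \(n\); the sketch below (our own illustration, not taken from [1]) does this for a random \(f\) and checks that the probabilities sum to \(1\), which follows from Parseval's identity.

```python
import itertools
import random

def fourier_sampling_distribution(f, n):
    # p_y = ( (1/2^n) * sum_x f(x) * (-1)^{x.y} )^2, as in the text.
    p = {}
    for y in itertools.product([0, 1], repeat=n):
        s = sum(f[x] * (-1) ** sum(a & b for a, b in zip(x, y)) for x in f)
        p[y] = (s / 2 ** n) ** 2
    return p

random.seed(0)
n = 4
f = {x: random.choice([+1, -1]) for x in itertools.product([0, 1], repeat=n)}
p = fourier_sampling_distribution(f, n)
print(sum(p.values()))   # 1.0 (up to floating point), by Parseval's identity
```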
[1] showed that if OWFs exist, then there are oracles \(A\in\mathbf{P}/\mathbf{poly}\) such that \(\mathbf{BPP}^{A}\neq\mathbf{BQP}^{A}\) (and even \(\mathbf{BQP}^{A}\not\subset\mathbf{SZK}^{A}\)). The paper also showed that if there exist subexponentially-secure OWFs, then Fourier Sampling and Fourier Fishing are classically hard relative to oracles in \(\mathbf{P}/\mathbf{poly}\). Regarding the possibility of removing the oracles, the authors say that _"... in the unrelativized world, there seems to be no hope at present of proving \(\mathbf{BPP}\neq\mathbf{BQP}\) under any hypothesis nearly as weak as the existence of one-way functions"_, which suggests the difficulty of demonstrating quantum advantage based only on one-way functions. We bypass the difficulty by considering interactive protocols.
It was pointed out in [1] that the complexity assumption of \(\mathbf{PP}\neq\mathbf{BPP}\) is necessary for the existence of PoQ. A similar idea can be applied to show that \(\mathbf{PP}\neq\mathbf{BPP}\) is necessary for the existence of (AI-/IO-)IV-PoQ. (For the convenience of readers, we provide a proof in Appendix A.) We remark that the proof holds even if we allow the honest prover to perform post-selection. Moreover, it holds even if the verifier in the first phase is unbounded-time.
Unconditional quantum advantage over restricted classical computing was also studied [1, 1, 2, 3]. Unconditional separations between quantum and classical computing are appealing, but in this paper we do not focus on settings that restrict classical computing. Note that showing unconditional quantum advantage without restricting classical computing is at least as hard as proving \(\mathbf{PP}\neq\mathbf{BPP}\) ([1] and Appendix A), which is a major open problem in complexity theory.
The idea of coherently running statistically-hiding commitments was first introduced in [13]. However, they could apply the idea only to the specific commitment scheme of [2] whereas we can apply it to _any_ statistically-hiding commitments. This is made possible by our new hashing technique as explained in Section 1.2.
## 2 Preliminaries
### Basic Notations
We use the standard notations of quantum computing and cryptography. We use \(\lambda\) as the security parameter. \([n]\) means the set \(\{1,2,...,n\}\). For any set \(S\), \(x\gets S\) means that an element \(x\) is sampled uniformly at random from the set \(S\). For a set \(S\), \(|S|\) means the cardinality of \(S\). We write \(\operatorname{negl}\) to mean a negligible function and \(\operatorname{poly}\) to mean a polynomial. PPT stands for (classical) probabilistic polynomial-time and QPT stands for quantum polynomial-time. For an algorithm \(A\), \(y\gets A(x)\) means that the algorithm \(A\) outputs \(y\) on input \(x\). For two bit strings \(x\) and \(y\), \(x\|y\) means the concatenation of them. For simplicity, we sometimes omit the normalization factor of a quantum state. (For example, we write \(\frac{1}{\sqrt{2}}(|x_{0}\rangle+|x_{1}\rangle)\) just as \(|x_{0}\rangle+|x_{1}\rangle\).) \(I\coloneqq|0\rangle\langle 0|+|1\rangle\langle 1|\) is the two-dimensional identity operator. For the notational simplicity, we sometimes write \(I^{\otimes n}\) just as \(I\) when the dimension is clear from the context.
### Pairwise-Independent Hash Family
**Definition 2.1**.: _A family of hash functions \(\mathcal{H}\coloneqq\{h:\mathcal{X}\rightarrow\mathcal{Y}\}\) is pairwise-independent if for any two \(x\neq x^{\prime}\in\mathcal{X}\) and any two \(y,y^{\prime}\in\mathcal{Y}\),_
\[\Pr_{h\leftarrow\mathcal{H}}[h(x)=y\wedge h(x^{\prime})=y^{\prime}]=\frac{1}{| \mathcal{Y}|^{2}}. \tag{3}\]
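A classical example of such a family (not specific to this paper) is the set of affine maps \(h_{a,b}(x)=ax+b\bmod p\) over \(\mathcal{X}=\mathcal{Y}=\mathbb{Z}_{p}\) with \(a,b\) uniform over \(\mathbb{Z}_{p}\); the sketch below enumerates the whole family and verifies Equation (3) exactly for one choice of \(x\neq x^{\prime}\) and \((y,y^{\prime})\).

```python
from collections import Counter

# Pairwise-independent family: X = Y = Z_p (p prime) and
# h_{a,b}(x) = (a*x + b) mod p, with a and b uniform over Z_p.
p = 101
x, x_prime = 3, 58            # any two distinct points of X
y, y_prime = 7, 42            # any two target values in Y

counts = Counter()
for a in range(p):            # enumerate the whole family
    for b in range(p):
        counts[((a * x + b) % p, (a * x_prime + b) % p)] += 1

# Definition 2.1 requires this probability to equal 1/|Y|^2 = 1/p^2.
print(counts[(y, y_prime)] / p ** 2, 1 / p ** 2)
```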
### OWFs
**Definition 2.2** (OWFs).: _A function \(f:\{0,1\}^{*}\rightarrow\{0,1\}^{*}\) is a (classically-secure) OWF if it is computable in classical deterministic polynomial-time, and for any PPT adversary \(\mathcal{A}\), there exists a negligible function \(\operatorname{negl}\) such that for any
\(\lambda\)_
\[\Pr[f(x^{\prime})=f(x):x^{\prime}\leftarrow\mathcal{A}(1^{\lambda},f(x)),x \leftarrow\{0,1\}^{\lambda}]\leq\mathsf{negl}(\lambda). \tag{4}\]
**Definition 2.3** (Infinitely-often OWFs).: _A function \(f:\{0,1\}^{*}\rightarrow\{0,1\}^{*}\) is a (classically-secure) infinitely-often OWF if it is computable in classical deterministic polynomial-time, and there exists an infinite set \(\Lambda\subseteq\mathbb{N}\) such that for any PPT adversary \(\mathcal{A}\),_
\[\Pr[f(x^{\prime})=f(x):x^{\prime}\leftarrow\mathcal{A}(1^{\lambda},f(x)),x \leftarrow\{0,1\}^{\lambda}]\leq\mathsf{negl}(\lambda) \tag{5}\]
_for all \(\lambda\in\Lambda\)._
**Definition 2.4** (Auxiliary-input function ensemble).: _An auxiliary-input function ensemble is a collection of functions \(\mathcal{F}\coloneqq\{f_{\sigma}:\{0,1\}^{p(|\sigma|)}\rightarrow\{0,1\}^{q( |\sigma|)}\}_{\sigma\in\{0,1\}^{*}}\), where \(p\) and \(q\) are polynomials. We call \(\mathcal{F}\) polynomial-time computable if there is a classical deterministic polynomial-time algorithm \(F\) such that for every \(\sigma\in\{0,1\}^{*}\) and \(x\in\{0,1\}^{p(|\sigma|)}\), we have \(F(\sigma,x)=f_{\sigma}(x)\)._
**Definition 2.5** (Auxiliary-input OWFs).: _A (classically-secure) auxiliary-input OWF is a polynomial-time computable auxiliary-input function ensemble \(\mathcal{F}\coloneqq\{f_{\sigma}:\{0,1\}^{p(|\sigma|)}\rightarrow\{0,1\}^{q(|\sigma|)}\}_{\sigma\in\{0,1\}^{*}}\) such that for every uniform PPT adversary \(\mathcal{A}\) and every polynomial \(\mathrm{poly}\), there exists an infinite set \(\Lambda\subseteq\{0,1\}^{*}\) such that,_
\[\Pr[f_{\sigma}(x^{\prime})=f_{\sigma}(x):x^{\prime}\leftarrow\mathcal{A}( \sigma,f_{\sigma}(x)),x\leftarrow\{0,1\}^{p(|\sigma|)}]\leq\frac{1}{\mathrm{ poly}(|\sigma|)} \tag{6}\]
_for all \(\sigma\in\Lambda\)._
_Remark 2.6_.: It is easy to see that OWFs imply infinitely-often OWFs, and infinitely-often OWFs imply auxiliary-input OWFs.
**Theorem 2.7** ([10]).: _Auxiliary-input OWFs exist if \(\mathbf{CZK}\not\subseteq\mathbf{BPP}\)._
_Remark 2.8_.: As is pointed out in [10], auxiliary-input OWFs secure against non-uniform PPT adversaries exist if \(\mathbf{CZK}\not\subseteq\mathbf{P}/\mathsf{poly}\).
### Commitments
**Definition 2.9** (Statistically-hiding and computationally-binding classical bit commitments).: _A statistically-hiding and computationally-binding classical bit commitment scheme is an interactive protocol \(\langle\mathcal{S},\mathcal{R}\rangle\) between two PPT algorithms \(\mathcal{S}\) (the sender) and \(\mathcal{R}\) (the receiver) such that_
* _In the commit phase,_ \(\mathcal{S}\) _takes_ \(b\in\{0,1\}\) _and_ \(1^{\lambda}\) _as input and_ \(\mathcal{R}\) _takes_ \(1^{\lambda}\) _as input._ \(\mathcal{S}\) _and_ \(\mathcal{R}\) _exchange classical messages. The transcript_ \(t\)_, i.e., the sequence of all classical messages exchanged between_ \(\mathcal{S}\) _and_ \(\mathcal{R}\)_, is called a commitment. At the end of the commit phase,_ \(\mathcal{S}\) _privately outputs a decommitment_ \(\mathsf{decom}\)_._
* _In the open phase,_ \(\mathcal{S}\) _sends_ \((b,\mathsf{decom})\) _to_ \(\mathcal{R}\)_._ \(\mathcal{R}\) _on input_ \((t,b,\mathsf{decom})\) _outputs_ \(\top\) _or_ \(\bot\)_._
_We require the following three properties._
Perfect Correctness:_For all \(\lambda\in\mathbb{N}\) and \(b\in\{0,1\}\), if \(\mathcal{S}(1^{\lambda},b)\) and \(\mathcal{R}(1^{\lambda})\) behave honestly, \(\Pr[\top\leftarrow\mathcal{R}]=1\)._
Statistical Hiding:_Let us consider the following security game between the honest sender \(\mathcal{S}\) and a malicious receiver \(\mathcal{R}^{*}\):_
1. \(\mathcal{S}(b,1^{\lambda})\) _and_ \(\mathcal{R}^{*}(1^{\lambda})\) _run the commit phase._
2. \(\mathcal{R}^{*}\) _outputs_ \(b^{\prime}\in\{0,1\}\)_._
_We say that the scheme is statistically hiding if for any computationally unbounded adversary \(\mathcal{R}^{*}\),_
\[|\Pr[0\leftarrow\mathcal{R}^{*}|b=0]-\Pr[0\leftarrow\mathcal{R}^{*}|b=1]|\leq \mathsf{negl}(\lambda). \tag{7}\]
Computational Binding:_Let us consider the following security game between a malicious sender \(\mathcal{S}^{*}\) and the honest receiver \(\mathcal{R}\):_
1. \(\mathcal{S}^{*}(1^{\lambda})\) _and_ \(\mathcal{R}(1^{\lambda})\) _run the commit phase to generate a commitment_ \(t\)_._
2. \(\mathcal{S}^{*}\) _sends_ \((0,\mathsf{decom}_{0})\) _and_ \((1,\mathsf{decom}_{1})\) _to_ \(\mathcal{R}\)_._
_We say that the scheme is computationally binding if for any PPT malicious \(\mathcal{S}^{*}\),_
\[\Pr[\top\leftarrow\mathcal{R}(t,0,\mathsf{decom}_{0})\wedge\top\leftarrow\mathcal{R}(t,1,\mathsf{decom}_{1})]\leq\mathsf{negl}(\lambda). \tag{8}\]
Statistically-hiding and computationally-binding bit commitments can be constructed from OWFs.
**Theorem 2.10** ([10]).: _If OWFs exist, then statistically-hiding and computationally-binding bit commitments exist._
Moreover, constant-round schemes are known from collision-resistant hash functions [11]. The assumption can be further weakened to the existence of distributional collision-resistant hash functions, which exist if there is a hard-on-average problem in **SZK**.
**Theorem 2.11** ([16, 17]).: _If distributional collision-resistant hash functions exist, which exist if there is a hard-on-average problem in **SZK**, then constant-round statistically-hiding and computationally-binding bit commitments exist._
We define an infinitely-often variant of statistically-hiding and computationally-binding commitments as follows.
**Definition 2.12** (Infinitely-often statistically-hiding and computationally-binding commitments).: _Infinitely-often statistically-hiding and computationally-binding commitments are defined similarly to Definition 2.9 except that we require the existence of an infinite set \(\Lambda\subseteq\mathbb{N}\) such that statistical hiding and computational binding hold for all \(\lambda\in\Lambda\) instead of for all \(\lambda\in\mathbb{N}\)._
By using infinitely-often OWFs instead of OWFs in the commitment scheme of [10], we obtain the following theorem. Since the construction and proof are almost identical to those of [10], we omit the details.
**Theorem 2.13** (Infinitely-often variant of [10]).: _If infinitely-often OWFs exist, then infinitely-often statistically-hiding and computationally-binding bit commitments exist._
We also define an auxiliary-input variant of statistically-hiding and computationally-binding commitments. Intuitively, it is a family of commitment schemes indexed by an auxiliary input where correctness and statistical hiding hold for all auxiliary inputs and an "auxiliary-input" version of computational binding holds, i.e., for any PPT cheating sender \(\mathcal{S}^{*}\), there is an infinite set of auxiliary inputs under which computational binding holds.
**Definition 2.14** (Auxiliary-input statistically-hiding and computationally-binding classical bit commitments).: _An auxiliary-input statistically-hiding and computationally-binding classical bit commitment scheme is an interactive protocol \(\langle\mathcal{S},\mathcal{R}\rangle\) between two PPT algorithms \(\mathcal{S}\) (the sender) and \(\mathcal{R}\) (the receiver) associated with an infinite subset \(\Sigma\subseteq\{0,1\}^{*}\) such that_
* _In the commit phase,_ \(\mathcal{S}\) _takes_ \(b\in\{0,1\}\) _and the auxiliary input_ \(\sigma\in\Sigma\) _as input and_ \(\mathcal{R}\) _takes the auxiliary input_ \(\sigma\) _as input._ \(\mathcal{S}\) _and_ \(\mathcal{R}\) _exchange classical messages. The transcript_ \(t\)_, i.e., the sequence of all classical messages exchanged between_ \(\mathcal{S}\) _and_ \(\mathcal{R}\)_, is called a commitment. At the end of the commit phase,_ \(\mathcal{S}\) _privately outputs a decommitment_ \(\mathsf{decom}\)_._
* _In the open phase,_ \(\mathcal{S}\) _sends_ \((b,\mathsf{decom})\) _to_ \(\mathcal{R}\)_._ \(\mathcal{R}\) _on input_ \((t,b,\mathsf{decom})\) _outputs_ \(\top\) _or_ \(\bot\)_._
_We require the following properties:_
Perfect Correctness:_For all \(\sigma\in\Sigma\) and \(b\in\{0,1\}\), if \(\mathcal{S}(b,\sigma)\) and \(\mathcal{R}(\sigma)\) behave honestly, \(\Pr[\top\leftarrow\mathcal{R}]=1\)._
Statistical Hiding:_Let us consider the following security game between the honest sender \(\mathcal{S}\) and a malicious receiver \(\mathcal{R}^{*}\):_
1. \(\mathcal{S}(b,\sigma)\) _and_ \(\mathcal{R}^{*}(\sigma)\) _run the commit phase._
2. \(\mathcal{R}^{*}\) _outputs_ \(b^{\prime}\in\{0,1\}\)_._
_We say that the scheme is statistically hiding if for all \(\sigma\in\Sigma\) and any computationally unbounded adversary \(\mathcal{R}^{*}\),_
\[|\Pr[0\leftarrow\mathcal{R}^{*}|b=0]-\Pr[0\leftarrow\mathcal{R}^{*}|b=1]|\leq \mathsf{negl}(|\sigma|). \tag{9}\]
Computational Binding:_Let us consider the following security game between a malicious sender \(\mathcal{S}^{*}\) and the honest receiver \(\mathcal{R}\):_
1. \(\mathcal{S}^{*}(\sigma)\) _and_ \(\mathcal{R}(\sigma)\) _run the commit phase to generate a commitment_ \(t\)_._
2. \(\mathcal{S}^{*}\) _sends_ \((0,\mathsf{decom}_{0})\) _and_ \((1,\mathsf{decom}_{1})\) _to_ \(\mathcal{R}\)_._
_We say that the scheme is computationally binding if for any PPT malicious sender \(\mathcal{S}^{*}\) and every polynomial \(\mathrm{poly}\), there exists an infinite subset \(\Lambda\subseteq\Sigma\) such that for any \(\sigma\in\Lambda\),_
\[\Pr[\top\leftarrow\mathcal{R}(t,0,\mathsf{decom}_{0})\wedge\top\leftarrow \mathcal{R}(t,1,\mathsf{decom}_{1})]\leq\frac{1}{\mathrm{poly}(|\sigma|)}. \tag{10}\]
By using auxiliary-input OWFs instead of OWFs in the commitment scheme of [10], we obtain the following theorem. Since the construction and proof are almost identical to those of [10], we omit the details.
**Theorem 2.15** (Auxiliary-input variant of [10]).: _If auxiliary-input OWFs exist, then auxiliary-input statistically-hiding and computationally-binding bit commitments exist._
Similarly, by using auxiliary-input collision-resistant hash functions instead of collision-resistant hash functions in the commitment scheme of [10], we obtain 2-round auxiliary-input statistically-hiding and computationally-binding bit commitments. As shown in Appendix B.1, auxiliary-input collision-resistant hash functions exist if and only if \(\mathbf{PWPP}\nsubseteq\mathbf{FBPP}\). Thus, we obtain the following theorem.
**Theorem 2.16** (Auxiliary-input variant of [10]).: _If auxiliary-input collision-resistant hash functions exist, which exist if and only if \(\mathbf{PWPP}\nsubseteq\mathbf{FBPP}\), then 2-round auxiliary-input statistically-hiding and computationally-binding bit commitments exist._
In addition, we observe in Appendix B.2 that the instance-dependent commitments for \(\mathbf{SZK}\) of [11] directly give constant-round auxiliary-input statistically-hiding and computationally-binding bit commitments under the assumption that \(\mathbf{SZK}\nsubseteq\mathbf{BPP}\).
**Theorem 2.17** (Auxiliary-input variant of [11]).: _If \(\mathbf{SZK}\nsubseteq\mathbf{BPP}\), then constant-round auxiliary-input statistically-hiding and computationally-binding bit commitments exist._
_Remark 2.18_.: In the constructions for Theorems 2.15 and 2.16, we can set \(\Sigma\coloneqq\{0,1\}^{*}\). However, we do not know if this is possible for the construction for Theorem 2.17 given in Appendix B.2. This is why we introduce the subset \(\Sigma\) in Definition 2.14.
## 3 Hashing Lemmas
In this section, we show two useful lemmas, Lemma 3.1 and Lemma 3.2. Lemma 3.1 is used to show Lemma 3.2, and Lemma 3.2 is used in the proof of our main result.
**Lemma 3.1**.: _Let \(\mathcal{H}\coloneqq\{h:\mathcal{X}\to\mathcal{Y}\}\) be a pairwise-independent hash family such that \(|\mathcal{X}|\geq 2\). Let \(S\subseteq\mathcal{X}\) be a subset of \(\mathcal{X}\). For any \(y\in\mathcal{Y}\),_
\[\Pr_{h\leftarrow\mathcal{H}}[|S\cap h^{-1}(y)|\geq 1]\geq\frac{|S|}{| \mathcal{Y}|}-\frac{|S|^{2}}{2|\mathcal{Y}|^{2}}. \tag{11}\]
Proof of Lemma 3.1.: First, if \(|S|=0\), Equation (11) trivially holds. Second, let us consider the case when \(|S|=1\). In that case,
\[\Pr_{h\leftarrow\mathcal{H}}[|S\cap h^{-1}(y)|\geq 1] =\frac{1}{|\mathcal{Y}|} \tag{12}\] \[\geq\frac{1}{|\mathcal{Y}|}-\frac{1}{2|\mathcal{Y}|^{2}}\] (13) \[=\frac{|S|}{|\mathcal{Y}|}-\frac{|S|^{2}}{2|\mathcal{Y}|^{2}}, \tag{14}\]
and therefore Equation (11) is satisfied. Here, the first equality comes from the fact that the probability that the unique element of \(S\) is mapped to \(y\) is \(1/|\mathcal{Y}|\).
Finally, let us consider the case when \(|S|\geq 2\). The following argument is based on [14]. First, for each \(y\in\mathcal{Y}\),
\[\sum_{j=1}^{|S|}j\Pr_{h\leftarrow\mathcal{H}}[|S\cap h^{-1}(y)|=j] =\mathop{\mathbb{E}}_{h\leftarrow\mathcal{H}}[|S\cap h^{-1}(y)|] \tag{15}\] \[=\mathop{\mathbb{E}}_{h\leftarrow\mathcal{H}}[|\{x\in S:h(x)=y\}|]\] (16) \[=\sum_{x\in S}\Pr_{h\leftarrow\mathcal{H}}[h(x)=y]\] (17) \[=\frac{|S|}{|\mathcal{Y}|}. \tag{18}\]
Second, for each \(y\in\mathcal{Y}\),
\[\sum_{j=1}^{|S|}(j-1)\Pr_{h\leftarrow\mathcal{H}}[|S\cap h^{-1}( y)|=j] \leq\sum_{j=1}^{|S|}\binom{j}{2}\Pr_{h\leftarrow\mathcal{H}}[|S \cap h^{-1}(y)|=j] \tag{19}\] \[=\mathop{\mathbb{E}}_{h\leftarrow\mathcal{H}}\left[\binom{|S\cap h ^{-1}(y)|}{2}\right]\] (20) \[=\mathop{\mathbb{E}}_{h\leftarrow\mathcal{H}}[|\{\{x,x^{\prime} \}\subseteq S:x\neq x^{\prime},h(x)=h(x^{\prime})=y\}|]\] (21) \[=\sum_{\{x,x^{\prime}\}\subseteq S,x\neq x^{\prime}}\Pr_{h \leftarrow\mathcal{H}}[h(x)=h(x^{\prime})=y]\] (22) \[=\frac{1}{|\mathcal{Y}|^{2}}\binom{|S|}{2}\] (23) \[=\frac{|S|(|S|-1)}{2|\mathcal{Y}|^{2}}\] (24) \[\leq\frac{|S|^{2}}{2|\mathcal{Y}|^{2}}. \tag{25}\]
(Note that \(\binom{n}{m}=0\) for any \(n<m\).) By subtracting Equation (25) from Equation (18), we have
\[\sum_{j=1}^{|S|}\Pr_{h\leftarrow\mathcal{H}}[|S\cap h^{-1}(y)|=j] \geq \frac{|S|}{|\mathcal{Y}|}-\frac{|S|^{2}}{2|\mathcal{Y}|^{2}}, \tag{26}\]
which shows Lemma 3.1.
**Lemma 3.2**.: _Let \(\mathcal{H}\coloneqq\{h:\mathcal{X}\rightarrow\mathcal{Y}\}\) be a pairwise-independent hash family such that \(|\mathcal{X}|\geq 2\). Let \(S\subseteq\mathcal{X}\) be a subset of \(\mathcal{X}\). For any \(y\in\mathcal{Y}\),_
\[\Pr_{h\leftarrow\mathcal{H}}[|S\cap h^{-1}(y)|=1]\geq\frac{|S|}{| \mathcal{Y}|}-\frac{|S|^{2}}{|\mathcal{Y}|^{2}}. \tag{27}\]
Proof of Lemma 3.2.: For any \(y\in\mathcal{Y}\),
\[\frac{|S|}{|\mathcal{Y}|} =\sum_{j=1}^{|S|}j\Pr_{h\leftarrow\mathcal{H}}[|S\cap h^{-1}(y)| =j] \tag{28}\] \[\geq\Pr_{h\leftarrow\mathcal{H}}[|S\cap h^{-1}(y)|=1]+2\Pr_{h \leftarrow\mathcal{H}}[|S\cap h^{-1}(y)|\geq 2]\] (29) \[=2\Pr_{h\leftarrow\mathcal{H}}[|S\cap h^{-1}(y)|\geq 1]-\Pr_{h \leftarrow\mathcal{H}}[|S\cap h^{-1}(y)|=1]\] (30) \[\geq\frac{2|S|}{|\mathcal{Y}|}-\frac{|S|^{2}}{|\mathcal{Y}|^{2} }-\Pr_{h\leftarrow\mathcal{H}}[|S\cap h^{-1}(y)|=1]. \tag{31}\]
Here, the first equality is from Equation (18), and in the last inequality we have used Lemma 3.1. Therefore,
\[\Pr_{h\leftarrow\mathcal{H}}[|S\cap h^{-1}(y)|=1] \geq\frac{2|S|}{|\mathcal{Y}|}-\frac{|S|^{2}}{|\mathcal{Y}|^{2}}- \frac{|S|}{|\mathcal{Y}|} \tag{32}\] \[=\frac{|S|}{|\mathcal{Y}|}-\frac{|S|^{2}}{|\mathcal{Y}|^{2}}, \tag{33}\]
which shows Lemma 3.2.
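As a quick numerical sanity check of Lemma 3.2 (our own illustration), the following sketch uses the modular affine family \(h_{a,b}(x)=ax+b\bmod p\), which is pairwise independent for \(\mathcal{X}=\mathcal{Y}=\mathbb{Z}_{p}\), enumerates all \((a,b)\), and compares the exact probability of a unique preimage of \(y\) inside \(S\) with the bound \(|S|/|\mathcal{Y}|-|S|^{2}/|\mathcal{Y}|^{2}\).

```python
import random

p = 101                                    # X = Y = Z_p, so |Y| = p
random.seed(0)
S = random.sample(range(p), 30)            # a subset S with |S| = 30
y = 5                                      # an arbitrary fixed target value

hits = 0
for a in range(p):                         # enumerate the whole hash family
    for b in range(p):
        if sum(1 for x in S if (a * x + b) % p == y) == 1:
            hits += 1

exact = hits / p ** 2
bound = len(S) / p - len(S) ** 2 / p ** 2
print(f"Pr[exactly one preimage in S] = {exact:.4f} >= {bound:.4f}")
```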
## 4 Inefficient-Verifier Proofs of Quantumness
In this section, we define inefficient-verifier proofs of quantumness (IV-PoQ) and its variants. Then we show that sequential repetition amplifies the completeness-soundness gap assuming a special property of soundness for the base scheme, which we call strong soundness.
### Definitions
We define IV-PoQ. It is identical to the definition of PoQ, which is implicitly defined in [1], except that we allow the verifier to be unbounded-time after completing interaction with the prover.
**Definition 4.1** (Inefficient-verifier proofs of quantumness (IV-PoQ)).: _An inefficient-verifier proof of quantumness (IV-PoQ) is an interactive protocol \((\mathcal{P},\mathcal{V})\) between a QPT algorithm \(\mathcal{P}\) (the prover) and an algorithm \(\mathcal{V}=(\mathcal{V}_{1},\mathcal{V}_{2})\) (the verifier) where \(\mathcal{V}_{1}\) is PPT and \(\mathcal{V}_{2}\) is unbounded-time. The protocol is divided into two phases. In the first phase, \(\mathcal{P}\) and \(\mathcal{V}_{1}\) take the security parameter \(1^{\lambda}\) as input and interact with each other over a classical channel. Let \(I\) be the transcript, i.e., the sequence of all classical messages exchanged between \(\mathcal{P}\) and \(\mathcal{V}_{1}\). In the second phase, \(\mathcal{V}_{2}\) takes \(I\) as input and outputs \(\top\) or \(\bot\). We require the following two properties for some functions \(c\) and \(s\) such that \(c(\lambda)-s(\lambda)\geq 1/{\rm poly}(\lambda)\)._
\(c\)**-completeness:**__
\[\Pr[\top\leftarrow\mathcal{V}_{2}(I):I\leftarrow\langle\mathcal{P}(1^{ \lambda}),\mathcal{V}_{1}(1^{\lambda})\rangle]\geq c(\lambda)-\mathsf{negl}( \lambda). \tag{34}\]
\(s\)**-soundness:** _For any PPT malicious prover \(\mathcal{P}^{*}\),_
\[\Pr[\top\leftarrow\mathcal{V}_{2}(I):I\leftarrow\langle\mathcal{P}^{*}(1^{ \lambda}),\mathcal{V}_{1}(1^{\lambda})\rangle]\leq s(\lambda)+\mathsf{negl}( \lambda). \tag{35}\]
_Remark 4.2_.: \(\mathcal{V}_{2}\) could take \(\mathcal{V}_{1}\)'s secret information as input in addition to \(I\), but without loss of generality, we can assume that \(\mathcal{V}_{2}\) takes only \(I\), because we can always modify the protocol of the first phase in such a way that \(\mathcal{V}_{1}\) sends its secret information to \(\mathcal{P}\) at the end of the first phase.
_Remark 4.3_.: In our constructions, \(\mathcal{V}_{2}\) is actually enough to be a classical deterministic polynomial-time algorithm that queries the \(\mathbf{NP}\) oracle. (See Section 6.3.)
We define an infinitely-often version of IV-PoQ as follows.
**Definition 4.4** (Infinitely-often inefficient-verifier proofs of quantumness (IO-IV-PoQ)).: _An infinitely-often inefficient-verifier proof of quantumness (IO-IV-PoQ) is defined similarly to IV-PoQ (Definition 4.1) except that we require the existence of an infinite set \(\Lambda\subseteq\mathbb{N}\) such that \(c\)-completeness and \(s\)-soundness hold for all \(\lambda\in\Lambda\) instead of for all \(\lambda\in\mathbb{N}\)._
We also define an auxiliary-input variant of IV-PoQ as follows. It is defined similarly to IV-PoQ except that the prover and verifier take an auxiliary input instead of the security parameter and completeness should hold for all auxiliary inputs, whereas soundness is replaced with auxiliary-input soundness, i.e., for any PPT cheating prover \(\mathcal{P}^{*}\), there exists an infinite set of auxiliary inputs under which soundness holds.
**Definition 4.5** (Auxiliary-input inefficient-verifier proofs of quantumness (AI-IV-PoQ)).: _An auxiliary-input inefficient-verifier proof of quantumness (AI-IV-PoQ) is an interactive protocol \((\mathcal{P},\mathcal{V})\) between a QPT algorithm \(\mathcal{P}\) (the prover) and an algorithm \(\mathcal{V}=(\mathcal{V}_{1},\mathcal{V}_{2})\) (the verifier) where \(\mathcal{V}_{1}\) is PPT and \(\mathcal{V}_{2}\) is unbounded-time, associated with an infinite set \(\Sigma\subseteq\{0,1\}^{*}\). The protocol is divided into two phases. In the first phase, \(\mathcal{P}\) and \(\mathcal{V}_{1}\) take an auxiliary input \(\sigma\in\Sigma\) as input and interact with each other over a classical channel. Let \(I\) be the transcript, i.e., the sequence of all classical messages exchanged between \(\mathcal{P}\) and \(\mathcal{V}_{1}\). In the second phase, \(\mathcal{V}_{2}\) takes \(I\) as input and outputs \(\top\) or \(\bot\). We require the following two properties for some functions \(c\) and \(s\) such that \(c(|\sigma|)-s(|\sigma|)\geq 1/\mathrm{poly}(|\sigma|)\)._
\(c\)-completeness:_For any \(\sigma\in\Sigma\),_
\[\Pr[\top\leftarrow\mathcal{V}_{2}(I):I\leftarrow\langle\mathcal{P}(\sigma), \mathcal{V}_{1}(\sigma)\rangle]\geq c(|\sigma|)-\mathsf{negl}(|\sigma|). \tag{36}\]
\(s\)-soundness:_For any PPT malicious prover \(\mathcal{P}^{*}\) and polynomial \(p\), there exists an infinite set \(\Lambda\subseteq\Sigma\) such that_
\[\Pr[\top\leftarrow\mathcal{V}_{2}(I):I\leftarrow\langle\mathcal{P}^{*}(\sigma ),\mathcal{V}_{1}(\sigma)\rangle]\leq s(|\sigma|)+\frac{1}{p(|\sigma|)} \tag{37}\]
_for all \(\sigma\in\Lambda\)._
_Remark 4.6_.: We can set \(\Sigma\coloneqq\{0,1\}^{*}\) for all our constructions of AI-IV-PoQ except for the one based on \(\mathbf{SZK}\neq\mathbf{BPP}\). See also Remark 2.18.
_Remark 4.7_.: It is easy to see that IV-PoQ imply IO-IV-PoQ, and IO-IV-PoQ imply AI-IV-PoQ.
Even though AI-IV-PoQ is weaker than IV-PoQ, we believe that it still demonstrates a meaningful notion of quantum advantage, because it shows "worst-case quantum advantage" in the sense that no PPT algorithm can simulate the QPT honest prover on all auxiliary inputs \(\sigma\in\Sigma\).
### Strong Soundness
Unfortunately, we do not know if parallel or even sequential repetition amplifies the completeness-soundness gap for general (AI-/IO-)IV-PoQ. Here, we define a stronger notion of soundness which we call strong soundness. In Section 4.3, we show that sequential repetition amplifies the completeness-soundness gap if the base scheme satisfies strong soundness. In Section 6, we show that our (AI-/IO-)IV-PoQ satisfies strong soundness. Thus, gap amplification by sequential repetition works for our particular constructions of (AI-/IO-)IV-PoQ.
Roughly, the \(s\)-strong-soundness requires that a PPT cheating prover can pass verification with probability at most \(\approx s\) for _almost all fixed randomness_. The formal definition is given below.
**Definition 4.8** (Strong soundness for IV-PoQ).: _We say that an IV-PoQ \((\mathcal{P},\mathcal{V}=(\mathcal{V}_{1},\mathcal{V}_{2}))\) satisfies \(s\)-strong-soundness if the following holds:_
\(s\)-strong-soundness:_For any PPT malicious prover \(\mathcal{P}^{*}\) and any polynomial \(p\),_
\[\Pr_{r\leftarrow\mathcal{R}}\left[\Pr[\top\leftarrow\mathcal{V}_{2}(I):I\leftarrow \langle\mathcal{P}^{*}_{r}(1^{\lambda}),\mathcal{V}_{1}(1^{\lambda})\rangle] \geq s(\lambda)+\frac{1}{p(\lambda)}\right]\leq\frac{1}{p(\lambda)} \tag{38}\]
_for all sufficiently large \(\lambda\) where \(\mathcal{R}\) is the randomness space for \(\mathcal{P}^{*}\) and \(\mathcal{P}^{*}_{r}\) is \(\mathcal{P}^{*}\) with the fixed randomness \(r\)._
It is defined similarly for (AI-/IO-)IV-PoQ.
It is easy to see that \(s\)-strong-soundness implies \(s\)-soundness.
**Lemma 4.9**.: _For any \(s\), \(s\)-strong-soundness implies \(s\)-soundness for (AI-/IO-)IV-PoQ._
Proof.: We focus on the case of IV-PoQ since the cases of (AI-/IO-)IV-PoQ are similar. If there is a PPT malicious prover \(\mathcal{P}^{*}\) that breaks \(s\)-soundness of IV-PoQ, then there exists a polynomial \(p\) such that
\[\Pr[\top\leftarrow\mathcal{V}_{2}(I):I\leftarrow\langle\mathcal{P}^{*}(1^{ \lambda}),\mathcal{V}_{1}(1^{\lambda})\rangle]\geq s(\lambda)+\frac{3}{p( \lambda)} \tag{39}\]
for infinitely many \(\lambda\). By a standard averaging argument, this implies
\[\Pr_{r\leftarrow\mathcal{R}}\left[\Pr[\top\leftarrow\mathcal{V}_{2}(I):I \leftarrow\langle\mathcal{P}^{*}_{r}(1^{\lambda}),\mathcal{V}_{1}(1^{\lambda} )\rangle]\geq s(\lambda)+\frac{1}{p(\lambda)}\right]\geq\frac{2}{p(\lambda)} \tag{40}\]
for infinitely many \(\lambda\). This contradicts \(s\)-strong-soundness. Thus, Lemma 4.9 holds.
We remark that the other direction does not seem to hold. For example, suppose that a PPT malicious prover \(\mathcal{P}^{*}\) passes the verification with probability \(1\) for a \(0.99\)-fraction of the randomness and with probability \(0\) for the rest of the randomness. In this case, \(\mathcal{P}^{*}\) does not break \(0.99\)-soundness, because its overall acceptance probability is \(0.99\). On the other hand, it breaks \(s\)-strong-soundness for any constant \(s<1\), since for a \(0.99\)-fraction of the randomness its acceptance probability is \(1\).
### Gap Amplification
We prove that sequential repetition amplifies the completeness-soundness gap if the base scheme satisfies strong soundness.
**Theorem 4.10** (**Gap amplification theorem**).: _Let \(\Pi=(\mathcal{P},\mathcal{V}=(\mathcal{V}_{1},\mathcal{V}_{2}))\) be an (AI-/IO-)IV-PoQ that satisfies \(c\)-completeness and \(s\)-strong-soundness where \(c(\lambda)-s(\lambda)\geq 1/{\rm poly}(\lambda)\) and \(c\) and \(s\) are computable in polynomial-time. Let \(\Pi^{N\text{-}\mathsf{seq}}=(\mathcal{P}^{N\text{-}\mathsf{seq}},\mathcal{V}^ {N\text{-}\mathsf{seq}}=(\mathcal{V}_{1}^{N\text{-}\mathsf{seq}},\mathcal{V} _{2}^{N\text{-}\mathsf{seq}}))\) be its \(N\)-sequential-repetition version as described in Algorithm 1. If \(N\geq\frac{\lambda}{(c(\lambda)-s(\lambda))^{2}}\), then \(\Pi^{N\text{-}\mathsf{seq}}\) satisfies \(1\)-completeness and \(0\)-soundness._
_Remark 4.11_.: Note that the meaning of "sequential repetition" is slightly different from that for usual interactive arguments: we defer the second phases of each execution to the end of the protocol so that inefficient computations are only needed after completing the interaction.
_Remark 4.12_.: If we assume \(s\)-soundness against _non-uniform_ PPT adversaries, we can easily prove a similar amplification theorem without introducing strong soundness. However, for proving soundness against non-uniform PPT adversaries, we would need non-uniform hardness assumptions such as non-uniformly secure OWFs. Since our motivation is to demonstrate quantum advantage from the standard notion of _uniformly_ secure OWFs, we do not take the above approach.
Proof of Theorem 4.10.: We focus on the case of IV-PoQ since it is almost identical for (AI-/IO-)IV-PoQ. First, \(1\)-completeness of \(\Pi^{N\text{-seq}}\) immediately follows from Hoeffding's inequality. (Recall that \(1\)-completeness means that the honest prover's acceptance probability is at least \(1-\mathsf{negl}(\lambda)\).) In the following, we prove that \(\Pi^{N\text{-seq}}\) satisfies \(0\)-soundness. Suppose that it does not satisfy \(0\)-soundness. Then, there is a PPT malicious prover \(\mathcal{P}^{N\text{-seq}^{*}}\) against \(\Pi^{N\text{-seq}}\) and a polynomial \(p\) such that
\[\Pr[\top\leftarrow\mathcal{V}_{2}(I_{1},\ldots,I_{N}):(I_{1}, \ldots,I_{N})\leftarrow\langle\mathcal{P}^{N\text{-seq}^{*}}(1^{\lambda}), \mathcal{V}_{1}^{N\text{-seq}}(1^{\lambda})\rangle]\geq\frac{1}{p(\lambda)} \tag{41}\]
for infinitely many \(\lambda\). We define random variables \(X_{1},\ldots,X_{N}\) as in Algorithm 1, i.e., \(X_{i}=1\) if \(\mathcal{V}_{2}\) accepts the \(i\)-th transcript \(I_{i}\) and otherwise \(X_{i}=0\). Using this notation, the above inequality can be rewritten as
\[\Pr\left[\frac{\sum_{i\in[N]}X_{i}}{N}\geq\frac{c(\lambda)+s( \lambda)}{2}\right]\geq\frac{1}{p(\lambda)} \tag{42}\]
for infinitely many \(\lambda\). For \(i\in[N]\), let \(X^{\prime}_{i}\) be an independent random variable over \(\{0,1\}\) such that \(\Pr[X^{\prime}_{i}=1]=s(\lambda)+\frac{1}{2Np(\lambda)}\). Noting that \(\frac{c(\lambda)+s(\lambda)}{2}-(s(\lambda)+\frac{1}{2Np(\lambda)})\geq\frac{c( \lambda)-s(\lambda)}{4}\) for sufficiently large \(\lambda\),22 by Hoeffding's inequality, we have
Footnote 22: This can be seen as follows: \(\frac{c(\lambda)+s(\lambda)}{2}-(s(\lambda)+\frac{1}{2Np(\lambda)})=\frac{c(\lambda)-s(\lambda)}{2}-\frac{1}{2Np(\lambda)}\geq\frac{c(\lambda)-s(\lambda)}{2}-\frac{1}{2N}\geq\frac{c(\lambda)-s(\lambda)}{2}-\frac{(c(\lambda)-s(\lambda))^{2}}{2\lambda}\geq\frac{c(\lambda)-s(\lambda)}{2}-\frac{c(\lambda)-s(\lambda)}{4}=\frac{c(\lambda)-s(\lambda)}{4}\) for sufficiently large \(\lambda\).
\[\Pr\left[\frac{\sum_{i\in[N]}X^{\prime}_{i}}{N}\geq\frac{c( \lambda)+s(\lambda)}{2}\right]\leq\mathsf{negl}(\lambda). \tag{43}\]
Moreover, we prove below that for any \(k\in[N]\), we have
\[\Pr\left[\frac{\sum_{i=1}^{k}X_{i}+\sum_{i=k+1}^{N}X^{\prime}_{i} }{N}\geq\frac{c(\lambda)+s(\lambda)}{2}\right]-\Pr\left[\frac{\sum_{i=1}^{k-1} X_{i}+\sum_{i=k}^{N}X^{\prime}_{i}}{N}\geq\frac{c(\lambda)+s(\lambda)}{2}\right]\leq \frac{1}{2Np(\lambda)} \tag{44}\]
for sufficiently large \(\lambda\). By a standard hybrid argument, Equations (42) and (44) imply
\[\Pr\left[\frac{\sum_{i\in[N]}X^{\prime}_{i}}{N}\geq\frac{c(\lambda )+s(\lambda)}{2}\right]\geq\frac{1}{2p(\lambda)}. \tag{45}\]
for infinitely many \(\lambda\). This contradicts Equation (43). Thus, we only have to prove Equation (44) holds for all \(k\in[N]\) and sufficiently large \(\lambda\).
Proof of Equation (44).: Let \(\mathcal{P}^{*}_{k}\) be a malicious prover against \(\Pi\) that works as follows: \(\mathcal{P}^{*}_{k}\) first simulates the interaction between \(\mathcal{P}^{N\text{-seq}^{*}}\) and \(\mathcal{V}_{1}^{N\text{-seq}}\) for the first \(k-1\) executions of \(\Pi\) where it also simulates \(\mathcal{V}_{1}^{N\text{-seq}}\) by itself in this phase. Then \(\mathcal{P}^{*}_{k}\) starts interaction with the external verifier \(\mathcal{V}_{1}\) of \(\Pi\) where it works similarly to \(\mathcal{P}^{N\text{-seq}^{*}}\) in the \(k\)th execution of \(\Pi\). Note that the randomness of \(\mathcal{P}^{*}_{k}\) consists of the randomness \(r_{P}\) for \(\mathcal{P}^{N\text{-seq}^{*}}\) and randomness \(r_{V}^{k-1}\) for \(\mathcal{V}_{1}^{N\text{-seq}}\) for the first \(k-1\) executions of \(\Pi\). Therefore, by applying \(s\)-strong-soundness of \(\Pi\) for the above
malicious prover \(\mathcal{P}_{k}^{*}\), there is a set of \(\left(1-\frac{1}{2Np(\lambda)}\right)\)-fraction of \((r_{P},r_{V}^{k-1})\), which we denote by \(\mathcal{G}_{k-1}\), such that for all \((r_{P},r_{V}^{k-1})\in\mathcal{G}_{k-1}\),
\[\Pr[X_{k}=1|r_{P},r_{V}^{k-1}]\leq s(\lambda)+\frac{1}{2Np(\lambda)} \tag{46}\]
for sufficiently large \(\lambda\), where \(\Pr[X_{k}=1|r_{P},r_{V}^{k-1}]\) means the conditional probability that \(X_{k}=1\) occurs conditioned on the fixed values of \((r_{P},r_{V}^{k-1})\). On the other hand, because \(X_{k}^{\prime}\) is independent of \((r_{P},r_{V}^{k-1})\),
\[\Pr[X_{k}^{\prime}=1|r_{P},r_{V}^{k-1}]=\Pr[X_{k}^{\prime}=1]=s( \lambda)+\frac{1}{2Np(\lambda)} \tag{47}\]
for any fixed \((r_{P},r_{V}^{k-1})\).
For notational simplicity, we denote the events that \(\frac{\sum_{i=1}^{k}X_{i}+\sum_{i=k+1}^{N}X_{i}^{\prime}}{N}\geq\frac{c( \lambda)+s(\lambda)}{2}\) and \(\frac{\sum_{i=1}^{k-1}X_{i}+\sum_{i=k}^{N}X_{i}^{\prime}}{N}\geq\frac{c( \lambda)+s(\lambda)}{2}\) by \(E_{k}\) and \(E_{k-1}\), respectively. Then for any \((r_{P},r_{V}^{k-1})\in\mathcal{G}_{k-1}\),
\[\Pr\left[E_{k}\middle|(r_{P},r_{V}^{k-1})\right] \tag{48}\] \[= \Pr\left[\frac{X_{k}}{N}\geq\frac{c(\lambda)+s(\lambda)}{2}- \frac{\sum_{i=1}^{k-1}X_{i}+\sum_{i=k+1}^{N}X_{i}^{\prime}}{N}\middle|(r_{P},r _{V}^{k-1})\right]\] (49) \[\leq \Pr\left[\frac{X_{k}^{\prime}}{N}\geq\frac{c(\lambda)+s(\lambda)} {2}-\frac{\sum_{i=1}^{k-1}X_{i}+\sum_{i=k+1}^{N}X_{i}^{\prime}}{N}\middle|(r_{P},r_{V}^{k-1})\right]\] (50) \[= \Pr\left[E_{k-1}\middle|(r_{P},r_{V}^{k-1})\right] \tag{51}\]
for sufficiently large \(\lambda\) where Equation (50) follows from Equations (46) and (47) and the observations that \(X_{1},\ldots,X_{k-1}\) are determined by \((r_{P},r_{V}^{k-1})\) and \(X_{k+1}^{\prime},\ldots,X_{N}^{\prime}\) are independent of \(X_{k}\) or \(X_{k}^{\prime}\).
Then, for sufficiently large \(\lambda\), we have
\[\Pr\left[E_{k}\right]-\Pr\left[E_{k-1}\right] \tag{52}\] \[= \left(\Pr\left[E_{k}\;\wedge\;(r_{P},r_{V}^{k-1})\in\mathcal{G}_{k-1}\right]-\Pr\left[E_{k-1}\;\wedge\;(r_{P},r_{V}^{k-1})\in\mathcal{G}_{k-1}\right]\right)\] (53) \[+\left(\Pr\left[E_{k}\;\wedge\;(r_{P},r_{V}^{k-1})\notin\mathcal{G}_{k-1}\right]-\Pr\left[E_{k-1}\;\wedge\;(r_{P},r_{V}^{k-1})\notin\mathcal{G}_{k-1}\right]\right)\] (54) \[\leq \Pr\left[(r_{P},r_{V}^{k-1})\in\mathcal{G}_{k-1}\right]\cdot\left(\Pr\left[E_{k}\middle|(r_{P},r_{V}^{k-1})\in\mathcal{G}_{k-1}\right]-\Pr\left[E_{k-1}\middle|(r_{P},r_{V}^{k-1})\in\mathcal{G}_{k-1}\right]\right)\] (55) \[+\Pr\left[(r_{P},r_{V}^{k-1})\notin\mathcal{G}_{k-1}\right]\] (56) \[\leq \frac{1}{2Np(\lambda)}, \tag{57}\]
where Equation (57) follows from Equations (48)-(51) and the fact that \(\mathcal{G}_{k-1}\) consists of \(\left(1-\frac{1}{2Np(\lambda)}\right)\)-fraction of \((r_{P},r_{V}^{k-1})\). This implies Equation (44) and completes the proof of Theorem 4.10.
## 5 Coherent Execution of Classical Bit Commitments
In this section, we explain our key concept, namely, executing classical bit commitments coherently.
Let \((\mathcal{S},\mathcal{R})\) be a classical bit commitment scheme. If we explicitly consider the randomness, the commit phase can be described as in Algorithm 2.
Now let us consider the coherent execution of Algorithm 2, which is shown in Algorithm 3. Let \(t\coloneqq(\alpha_{1},\beta_{1},\alpha_{2},\beta_{2},...,\alpha_{L},\beta_{L})\) be the transcript obtained in the execution of Algorithm 3. At the end of the execution of Algorithm 3, \(\mathcal{S}\) possesses the state
\[\frac{1}{\sqrt{\left|X_{0,t}\right|+\left|X_{1,t}\right|}}\sum_{b\in\{0,1\}} \sum_{x\in X_{b,t}}\left|b\right\rangle\left|x\right\rangle, \tag{63}\]
where
\[X_{b,t}\coloneqq\bigcap_{j=0}^{L}X_{b}^{j}=\Big{\{}x\in\{0,1\}^{\ell}:\bigwedge_{j= 1}^{L}f_{j}(b,x,\alpha_{1},\beta_{1},...,\alpha_{j-1},\beta_{j-1})=\alpha_{j} \Big{\}}. \tag{64}\]
The probability \(\Pr[t]\) that the transcript \(t\) is obtained in the execution of Algorithm 3 is
\[\Pr[t]=\frac{|R_{t}|}{2^{\ell}}\frac{|X_{0,t}|+|X_{1,t}|}{2^{\ell+1}}, \tag{65}\]
where
\[R_{t}\coloneqq\Big{\{}r\in\{0,1\}^{\ell}:\bigwedge_{j=1}^{L}g_{j}(r,\alpha_{1 },\beta_{1},...,\alpha_{j})=\beta_{j}\Big{\}}. \tag{66}\]
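As a quick sanity check of Equation (65), the following minimal Python sketch (purely illustrative, and not part of the construction) enumerates a toy protocol with \(L=2\) in which the message functions f1, f2, g1, g2 are arbitrary placeholders we made up for this example, computes \(X_{b,t}\) and \(R_{t}\) directly from their definitions, and compares \(\frac{|R_{t}|}{2^{\ell}}\frac{|X_{0,t}|+|X_{1,t}|}{2^{\ell+1}}\) with the exact transcript distribution. It relies on the (easily verified) observation that the transcript of Algorithm 3 is distributed exactly as in a classical execution in which \(b\), \(x\), and \(r\) are all sampled uniformly, since the transcript is a deterministic function of \((b,x,r)\).

```python
# Sanity check of Eq. (65) on a toy, HYPOTHETICAL protocol with two rounds.
# f1, f2 (sender side) and g1, g2 (receiver side) are arbitrary placeholder
# functions; they are NOT the commitment scheme considered in this paper.
from itertools import product
from collections import Counter
from fractions import Fraction

ELL = 3  # bit-length of the sender randomness x and receiver randomness r

def f1(b, x):              return (b ^ x[0] ^ (x[1] & x[2]),)
def g1(r, a1):             return (r[0] ^ a1[0], r[1])
def f2(b, x, a1, b1):      return (x[1] ^ b1[0], b ^ x[2] ^ b1[1])
def g2(r, a1, b1, a2):     return (r[2] ^ a2[0],)

def transcript(b, x, r):
    a1 = f1(b, x); b1 = g1(r, a1)
    a2 = f2(b, x, a1, b1); b2 = g2(r, a1, b1, a2)
    return (a1, b1, a2, b2)

def in_X(b, x, t):         # x is consistent with t on the sender's side
    a1, b1, a2, _ = t
    return f1(b, x) == a1 and f2(b, x, a1, b1) == a2

def in_R(r, t):            # r is consistent with t on the receiver's side
    a1, b1, a2, b2 = t
    return g1(r, a1) == b1 and g2(r, a1, b1, a2) == b2

strings = list(product((0, 1), repeat=ELL))
# Exact transcript distribution for uniform (b, x, r); this coincides with the
# distribution of Algorithm 3 because t is a deterministic function of (b, x, r).
dist = Counter()
for b, x, r in product((0, 1), strings, strings):
    dist[transcript(b, x, r)] += Fraction(1, 2 * 4 ** ELL)

for t, p in dist.items():
    X0 = sum(in_X(0, x, t) for x in strings)
    X1 = sum(in_X(1, x, t) for x in strings)
    Rt = sum(in_R(r, t) for r in strings)
    assert p == Fraction(Rt, 2 ** ELL) * Fraction(X0 + X1, 2 ** (ELL + 1))
print("Equation (65) verified on all", len(dist), "transcripts")
```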
In the remainder of this section, we show two lemmas, Lemma 5.1 and Lemma 5.2, that will be used in the proofs of our main results.
The following Lemma 5.1 roughly claims that \(|X_{0,t}|\) and \(|X_{1,t}|\) are almost equal with overwhelming probability.
**Lemma 5.1**.: _Let \(0<\epsilon<1\) be a constant. Define the set_
\[T\coloneqq\{t:(1-\epsilon)|X_{1,t}|<|X_{0,t}|<(1+\epsilon)|X_{1,t}|\}. \tag{67}\]
_Then,_
\[\sum_{t\in T}\Pr[t]\geq 1-\mathsf{negl}(\lambda). \tag{68}\]
_Here, \(\Pr[t]\) is the probability that the transcript \(t\) is obtained in the execution of Algorithm 3 as is given in Equation (65)._
Proof of Lemma 5.1.: Intuitively, this follows from the statistical hiding property since whenever \(t\notin T\), an unbounded adversary can guess the committed bit from the transcript \(t\) with probability \(1/2+\Omega(\epsilon)\). Below, we provide a formal proof.
Define
\[T^{+} \coloneqq\{t:(1+\epsilon)|X_{1,t}|\leq|X_{0,t}|\}, \tag{69}\] \[T^{-} \coloneqq\{t:|X_{0,t}|\leq(1-\epsilon)|X_{1,t}|\}. \tag{70}\]
In order to show the lemma, we want to show that
\[\sum_{t\in T^{+}\cup T^{-}}\Pr[t]\leq\mathsf{negl}(\lambda). \tag{71}\]
To show this, assume that
\[\sum_{t\in T^{+}\cup T^{-}}\Pr[t]\geq\frac{1}{\mathrm{poly}(\lambda)} \tag{72}\]
for infinitely many \(\lambda\). Then the following computationally-unbounded malicious receiver \(\mathcal{R}^{*}\) can break the statistical hiding of the classical bit commitment scheme in Algorithm 2.
1. \(\mathcal{R}^{*}\) honestly executes the commit phase with \(\mathcal{S}\). Let \(t\) be the transcript obtained in the execution.
2. If \(t\in T^{+}\), \(\mathcal{R}^{*}\) outputs 0. If \(t\in T^{-}\), \(\mathcal{R}^{*}\) outputs 1. If \(t\in T\), \(\mathcal{R}^{*}\) outputs 0 with probability \(1/2\) and outputs 1 with probability \(1/2\).
The probability that \(\mathcal{R}^{*}\) outputs 0 when \(\mathcal{S}\) commits \(b\in\{0,1\}\) is
\[\Pr[0\leftarrow\mathcal{R}^{*}|b] =\sum_{t\in T^{+}}\Pr[t|b]+\frac{1}{2}\sum_{t\in T}\Pr[t|b] \tag{73}\] \[=\sum_{t\in T^{+}}\frac{|R_{t}|}{2^{\ell}}\frac{|X_{b,t}|}{2^{ \ell}}+\frac{1}{2}\sum_{t\in T}\frac{|R_{t}|}{2^{\ell}}\frac{|X_{b,t}|}{2^{ \ell}}. \tag{74}\]
Therefore
\[\Pr[0\leftarrow\mathcal{R}^{*}|b=0]-\Pr[0\leftarrow\mathcal{R} ^{*}|b=1] \tag{75}\] \[=\sum_{t\in T^{+}}\Pr[t|b=0]+\frac{1}{2}\sum_{t\in T}\Pr[t|b=0]- \sum_{t\in T^{+}}\Pr[t|b=1]-\frac{1}{2}\sum_{t\in T}\Pr[t|b=1]\] (76) \[=\sum_{t\in T^{+}}\Pr[t|b=0]+\frac{1}{2}\Big{(}1-\sum_{t\in T^{+ }}\Pr[t|b=0]-\sum_{t\in T^{-}}\Pr[t|b=0]\Big{)}\] (77) \[\quad-\sum_{t\in T^{+}}\Pr[t|b=1]-\frac{1}{2}\Big{(}1-\sum_{t\in T ^{+}}\Pr[t|b=1]-\sum_{t\in T^{-}}\Pr[t|b=1]\Big{)}\] (78) \[=\frac{1}{2}\sum_{t\in T^{+}}(\Pr[t|b=0]-\Pr[t|b=1])+\frac{1}{2} \sum_{t\in T^{-}}(\Pr[t|b=1]-\Pr[t|b=0])\] (79) \[=\frac{1}{2}\sum_{t\in T^{+}}\frac{|R_{t}|}{2^{\ell}}\frac{|X_{0, t}|-|X_{1,t}|}{2^{\ell}}+\frac{1}{2}\sum_{t\in T^{-}}\frac{|R_{t}|}{2^{\ell}} \frac{|X_{1,t}|-|X_{0,t}|}{2^{\ell}}\] (80) \[\geq\frac{1}{2}\sum_{t\in T^{+}}\frac{|R_{t}|}{2^{\ell}}\frac{|X_ {0,t}|-\frac{|X_{0,t}|}{1+\epsilon}}{2^{\ell}}+\frac{1}{2}\sum_{t\in T^{-}} \frac{|R_{t}|}{2^{\ell}}\frac{|X_{1,t}|-(1-\epsilon)|X_{1,t}|}{2^{\ell}}\] (81) \[=\frac{1}{2}\Big{(}1-\frac{1}{1+\epsilon}\Big{)}\sum_{t\in T^{+} }\frac{|R_{t}|}{2^{\ell}}\frac{|X_{0,t}|}{2^{\ell}}+\frac{\epsilon}{2}\sum_{t \in T^{-}}\frac{|R_{t}|}{2^{\ell}}\frac{|X_{1,t}|}{2^{\ell}}\] (82) \[=\frac{1}{2}\frac{\epsilon}{1+\epsilon}\sum_{t\in T^{+}}\frac{|R_ {t}|}{2^{\ell}}\frac{|2X_{0,t}|}{2^{\ell+1}}+\frac{\epsilon}{2}\sum_{t\in T^{ -}}\frac{|R_{t}|}{2^{\ell}}\frac{2|X_{1,t}|}{2^{\ell+1}}\] (83) \[\geq\frac{1}{2}\frac{\epsilon}{1+\epsilon}\sum_{t\in T^{+}}\frac {|R_{t}|}{2^{\ell}}\frac{|X_{0,t}|+|X_{1,t}|}{2^{\ell+1}}+\frac{\epsilon}{2} \sum_{t\in T^{-}}\frac{|R_{t}|}{2^{\ell}}\frac{|X_{0,t}|+|X_{1,t}|}{2^{\ell+1}}\] (84) \[\geq\frac{1}{2}\frac{\epsilon}{1+\epsilon}\sum_{t\in T^{+}}\frac {|R_{t}|}{2^{\ell}}\frac{|X_{0,t}|+|X_{1,t}|}{2^{\ell+1}}+\frac{\epsilon}{2(1+ \epsilon)}\sum_{t\in T^{-}}\frac{|R_{t}|}{2^{\ell}}\frac{|X_{0,t}|+|X_{1,t}|}{ 2^{\ell+1}}\] (85) \[=\frac{1}{2}\frac{\epsilon}{1+\epsilon}\sum_{t\in T^{+}\cup T^{-} }\frac{|R_{t}|}{2^{\ell}}\frac{|X_{0,t}|+|X_{1,t}|}{2^{\ell+1}}\] (86) \[=\frac{1}{2}\frac{\epsilon}{1+\epsilon}\sum_{t\in T^{+}\cup T^{-} }\Pr[t]\] (87) \[\geq\frac{1}{2}\frac{\epsilon}{1+\epsilon}\frac{1}{\mathrm{poly }(\lambda)}\] (88) \[=\frac{1}{\mathrm{poly}(\lambda)} \tag{89}\]
for infinitely many \(\lambda\), which breaks the statistical hiding.
The following Lemma 5.2 roughly claims that, whenever \(t\in T\), a good approximation \(k\) of \(2|X_{0,t}|\) and \(2|X_{1,t}|\), up to a small constant multiplicative error, can be chosen with probability \(1/m=1/\mathrm{poly}(\lambda)\).
**Lemma 5.2**.: _Let \(0<\epsilon<1\) be a constant. Let_
\[T\coloneqq\{t:(1-\epsilon)|X_{1,t}|<|X_{0,t}|<(1+\epsilon)|X_{1,t}|\}. \tag{90}\]
_Let \(m\) be an integer such that \((1+\epsilon)^{m}\geq 2^{\ell+1}\). For any \(t\in T\), there exists an integer \(j\in\{0,1,2,...,m-1\}\) such that_
\[k\leq 2|X_{0,t}|\leq(1+\epsilon)k \tag{91}\]
_and_
\[\frac{k}{1+\epsilon}\leq 2|X_{1,t}|\leq\frac{1+\epsilon}{1-\epsilon}k. \tag{92}\]
_Here, \(k\coloneqq\lceil(1+\epsilon)^{j}\rceil\)._
Proof of Lemma 5.2.: Let \(t\in T\). Because \(0\leq(1-\epsilon)|X_{1,t}|<|X_{0,t}|\), we have \(|X_{0,t}|\geq 1\). Because \(2|X_{0,t}|\leq 2^{\ell+1}\), there exists an integer \(j\in\{0,1,2,...,m-1\}\) such that \((1+\epsilon)^{j}<2|X_{0,t}|\leq(1+\epsilon)^{j+1}\). Let us take \(k=\lceil(1+\epsilon)^{j}\rceil\). Because \(2|X_{0,t}|\) is an integer, \(k\leq 2|X_{0,t}|\). Moreover, we have
\[2|X_{0,t}| \leq(1+\epsilon)^{j+1} \tag{93}\] \[=(1+\epsilon)\times(1+\epsilon)^{j}\] (94) \[\leq(1+\epsilon)\times\lceil(1+\epsilon)^{j}\rceil\] (95) \[=(1+\epsilon)\times k. \tag{96}\]
In summary, we have \(k\leq 2|X_{0,t}|\leq(1+\epsilon)k\). We also have \(2|X_{1,t}|<\frac{2|X_{0,t}|}{1-\epsilon}\leq\frac{1+\epsilon}{1-\epsilon}k\), and \(2|X_{1,t}|>\frac{2|X_{0,t}|}{1+\epsilon}\geq\frac{k}{1+\epsilon}\).
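The bracketing in Lemma 5.2 is also easy to check numerically. The following throwaway Python snippet does so for arbitrarily chosen illustration parameters (\(\epsilon=1/100\) and \(\ell=10\)); it is only a sanity check and plays no role in the proofs.

```python
# Numerical check of the bracketing in Lemma 5.2 (EPS = 1/100 and ELL = 10 are
# arbitrary illustration parameters).
from fractions import Fraction
import math

EPS, ELL = Fraction(1, 100), 10
m = math.ceil((ELL + 1) / math.log2(float(1 + EPS)))  # smallest m with (1+EPS)^m >= 2^(ELL+1)

# Precompute k_j = ceil((1+EPS)^j) together with the upper ends (1+EPS)*k_j, exactly.
pow_j, brackets = Fraction(1), []
for _ in range(m):
    k = math.ceil(pow_j)
    brackets.append((k, (1 + EPS) * k))
    pow_j *= (1 + EPS)

for v in range(2, 2 ** (ELL + 1) + 1, 2):   # v plays the role of 2|X_{0,t}|
    assert any(k <= v <= hi for k, hi in brackets), v
print("every possible value of 2|X_{0,t}| is bracketed by some k = ceil((1+eps)^j)")
```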
## 6 Construction of IV-PoQ
In this section, we prove Theorem 1.1. That is, we construct IV-PoQ from statistically-hiding and computationally-binding classical bit commitments.
Let \((\mathcal{S},\mathcal{R})\) be a statistically-hiding and computationally-binding classical bit commitment scheme. The first phase where the PPT verifier \(\mathcal{V}_{1}\) and the QPT prover \(\mathcal{P}\) interact is given in Algorithm 4. The second phase where the inefficient verifier \(\mathcal{V}_{2}\) runs is given in Algorithm 5.
We can show the completeness and soundness of our IV-PoQ as follows.
**Theorem 6.1** (Completeness).: _Our IV-PoQ satisfies \((\frac{7}{8}+\frac{1}{\mathrm{poly}(\lambda)})\)-completeness._
**Theorem 6.2** (Soundness).: _Our IV-PoQ satisfies \(\frac{7}{8}\)-strong-soundness, which in particular implies \(\frac{7}{8}\)-soundness._
The above only gives an inverse-polynomial completeness-soundness gap, but we can amplify the gap to \(1\) by sequential repetition by Theorem 4.10. Theorem 6.1 is shown in Section 6.1. Theorem 6.2 is shown in Section 6.2. By combining Theorem 6.1 and Theorem 6.2, we obtain Theorem 1.1.
### Completeness
In this subsection, we show \((\frac{7}{8}+\frac{1}{\mathrm{poly}(\lambda)})\)-completeness.
Proof of Theorem 6.1.: Let \(p_{\mathsf{good}}\) be the probability that Equation (97) in Item 4 of Algorithm 4 is of the form \(\ket{0}\ket{x_{0}}+\ket{1}\ket{x_{1}}\). Then
\[p_{\mathsf{good}}\geq(1-\mathsf{negl}(\lambda))\frac{0.1}{m} \tag{102}\]
because of the following reasons. (For readability, we first restate the steps of Algorithm 4 and then list the reasons.)
1. The PPT verifier \(\mathcal{V}_{1}\) and the QPT prover \(\mathcal{P}\) coherently execute the commit phase of the classical bit commitment scheme \((\mathcal{S},\mathcal{R})\). (See Algorithm 3.) \(\mathcal{V}_{1}\) plays the role of the receiver \(\mathcal{R}\). \(\mathcal{P}\) plays the role of the sender \(\mathcal{S}\). Let \(t\) be the transcript obtained in the execution.
2. \(\mathcal{P}\) has the state \(\sum_{b\in\{0,1\}}\sum_{x\in X_{b,t}}\left|b\right\rangle\left|x\right\rangle\).
3. Let \(0<\epsilon<1\) be a small constant. (We set \(\epsilon\coloneqq\frac{1}{100}\) for clarity.) Let \(m\) be an integer such that \((1+\epsilon)^{m}\geq 2^{\ell+1}\). (Such an \(m\) can be computed in \(\operatorname{poly}(\lambda)\) time. In fact, we have only to take the minimum integer \(m\) such that \(m\geq\frac{\ell+1}{\log_{2}(1+\epsilon)}\).) \(\mathcal{V}_{1}\) chooses \(j\leftarrow\{0,1,2,...,m-1\}\). Define \(k\coloneqq\lceil(1+\epsilon)^{j}\rceil\). Let \(\mathcal{H}\coloneqq\{h:\mathcal{X}\rightarrow\mathcal{Y}\}\) be a pairwise-independent hash family with \(\mathcal{X}\coloneqq\{0,1\}^{\ell}\) and \(\mathcal{Y}\coloneqq[k]\). \(\mathcal{V}_{1}\) chooses \(h_{0},h_{1}\leftarrow\mathcal{H}\), and sends \((h_{0},h_{1})\) to \(\mathcal{P}\).
4. \(\mathcal{P}\) changes its state into \(\sum_{b\in\{0,1\}}\sum_{x\in X_{b,t}}\left|b\right\rangle\left|x\right\rangle \left|h_{b}(x)\right\rangle\), and measures the third register in the computational basis to obtain the result \(y\in[k]\). \(\mathcal{P}\) sends \(y\) to \(\mathcal{V}_{1}\). The post-measurement state is \[\sum_{b\in\{0,1\}}\sum_{x\in X_{b,t}\cap h_{b}^{-1}(y)}\left|b\right\rangle \left|x\right\rangle.\] (97) (If there is only a single \(x_{b}\) such that \(x_{b}\in X_{b,t}\cap h_{b}^{-1}(y)\) for each \(b\in\{0,1\}\), Equation (97) is \(\left|0\right\rangle\left|x_{0}\right\rangle+\left|1\right\rangle\left|x_{1}\right\rangle\). We will show later that it occurs with a non-negligible probability.)
5. From now on, \(\mathcal{V}_{1}\) and \(\mathcal{P}\) run the protocol of [10]. \(\mathcal{V}_{1}\) chooses \(v_{1}\leftarrow\{0,1\}\). \(\mathcal{V}_{1}\) chooses \(\xi\leftarrow\{0,1\}^{\ell}\). \(\mathcal{V}_{1}\) sends \(v_{1}\) and \(\xi\) to \(\mathcal{P}\).
6. * If \(v_{1}=0\): \(\mathcal{P}\) measures all qubits of the state of Equation (97) in the computational basis, and sends the measurement result \((b^{\prime},x^{\prime})\in\{0,1\}\times\{0,1\}^{\ell}\) to \(\mathcal{V}_{1}\). \(\mathcal{V}_{1}\) halts. * If \(v_{1}=1\): \(\mathcal{P}\) changes the state of Equation (97) into \[\sum_{b\in\{0,1\}}\sum_{x\in X_{b,t}\cap h_{b}^{-1}(y)}\left|b\oplus(\xi\cdot x )\right\rangle\left|x\right\rangle,\] (98) measures its second register in the Hadamard basis to obtain the measurement result \(d\in\{0,1\}^{\ell}\), and sends \(d\) to \(\mathcal{V}_{1}\). The post-measurement state is \[\sum_{b\in\{0,1\}}\sum_{x\in X_{b,t}\cap h_{b}^{-1}(y)}(-1)^{d\cdot x}\left|b \oplus(\xi\cdot x)\right\rangle.\] (99) (If there is only a single \(x_{b}\) such that \(x_{b}\in X_{b,t}\cap h_{b}^{-1}(y)\) for each \(b\in\{0,1\}\), Equation (98) is \(\left|\xi\cdot x_{0}\right\rangle\left|x_{0}\right\rangle+\left|1\oplus(\xi \cdot x_{1})\right\rangle\left|x_{1}\right\rangle\), and Equation (99) is \(\left|\xi\cdot x_{0}\right\rangle+(-1)^{d\cdot(x_{0}\oplus x_{1})}\left|1\oplus (\xi\cdot x_{1})\right\rangle\).)
7. \(\mathcal{V}_{1}\) chooses \(v_{2}\leftarrow\{0,1\}\). \(\mathcal{V}_{1}\) sends \(v_{2}\) to \(\mathcal{P}\).
8. If \(v_{2}=0\), \(\mathcal{P}\) measures Equation (99) in the basis \(\left\{\cos\frac{\pi}{8}|0\rangle+\sin\frac{\pi}{8}|1\rangle,\sin\frac{\pi}{8}| 0\rangle-\cos\frac{\pi}{8}|1\rangle\right\}\). If \(v_{2}=1\), \(\mathcal{P}\) measures Equation (99) in the basis \(\left\{\cos\frac{\pi}{8}|0\rangle-\sin\frac{\pi}{8}|1\rangle,\sin\frac{\pi}{8}| 0\rangle+\cos\frac{\pi}{8}|1\rangle\right\}\). Let \(\eta\in\{0,1\}\) be the measurement result. (For the measurement in the basis \(\{|\phi\rangle,|\phi^{\perp}\rangle\}\), the result 0 corresponds to \(|\phi\rangle\) and the result 1 corresponds to \(|\phi^{\perp}\rangle\).) \(\mathcal{P}\) sends \(\eta\) to \(\mathcal{V}_{1}\).
* In Step 1 of Algorithm 4, the probability that the transcript \(t\) such that \(t\in T\) is obtained is at least \(1-\mathsf{negl}(\lambda)\) from Lemma 5.1 where \(T\) is defined in Lemma 5.1.
* Given \(t\in T\), in Step 3 of Algorithm 4, the probability that \(\mathcal{V}_{1}\) chooses \(j\) such that \(k\) satisfies Equation (91) and Equation (92) of Lemma 5.2 is \(\frac{1}{m}\).
* Given that \(k\) satisfies Equation (91) and Equation (92) of Lemma 5.2, in Step 4 of Algorithm 4, the probability that \(y\) such that \(|X_{0,t}\cap h_{0}^{-1}(y)|=|X_{1,t}\cap h_{1}^{-1}(y)|=1\) is obtained is at least 0.1 from Lemma 6.3 shown below.
Moreover, if Equation (97) in Step 4 of Algorithm 4 is of the form \(|0\rangle\,|x_{0}\rangle+|1\rangle\,|x_{1}\rangle\), the probability that \(\mathcal{V}_{2}\) outputs \(\top\) in Algorithm 5 is \(\frac{1}{2}+\frac{1}{2}\cos^{2}\frac{\pi}{8}\geq 0.9\), as is shown in Appendix C.
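For the record, the numerical value of this constant is easily verified:
\[\frac{1}{2}+\frac{1}{2}\cos^{2}\frac{\pi}{8}=\frac{1}{2}+\frac{1}{2}\cdot\frac{1+\cos\frac{\pi}{4}}{2}=\frac{3}{4}+\frac{\sqrt{2}}{8}\approx 0.927,\]
which is at least \(0.9\) and, in particular, strictly larger than \(\frac{7}{8}\), so the coefficient of \(p_{\mathsf{good}}\) in the computation below is a positive constant.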
Therefore, the probability that \(\mathcal{V}_{2}\) outputs \(\top\) is
\[\Big{(}\frac{1}{2}+\frac{1}{2}\cos^{2}\frac{\pi}{8}\Big{)}p_{ \mathsf{good}}+\frac{7}{8}(1-p_{\mathsf{good}}) \tag{103}\] \[=\frac{7}{8}+\Big{(}\frac{1}{2}+\frac{1}{2}\cos^{2}\frac{\pi}{8}- \frac{7}{8}\Big{)}p_{\mathsf{good}}\] (104) \[\geq\frac{7}{8}+\frac{1}{\mathrm{poly}(\lambda)}, \tag{105}\]
which shows the completeness.
**Lemma 6.3**.: _Assume that \(k\) satisfies Equation (91) and Equation (92) of Lemma 5.2. In Step 4 of Algorithm 4, the probability that \(y\) such that \(|X_{0,t}\cap h_{0}^{-1}(y)|=|X_{1,t}\cap h_{1}^{-1}(y)|=1\) is obtained is at least 0.1._
Proof of Lemma 6.3.: By using Lemma 3.2 with \(S=X_{b,t}\), \(h=h_{b}\), and \(\mathcal{Y}=[k]\), we have, for any \(b\in\{0,1\}\), \(t\), and \(y\in\mathcal{Y}\),
\[\Pr_{h_{b}\leftarrow\mathcal{H}}[|X_{b,t}\cap h_{b}^{-1}(y)|=1] \geq\frac{|X_{b,t}|}{k}-\frac{|X_{b,t}|^{2}}{k^{2}} \tag{106}\] \[\geq\frac{1}{2(1+\epsilon)}-\frac{(1+\epsilon)^{2}}{4(1-\epsilon )^{2}}. \tag{107}\]
Here, in the last inequality, we have used Lemma 5.2.
In Step 4 of Algorithm 4, the probability that \(y\) is obtained is
\[\frac{|X_{0,t}\cap h_{0}^{-1}(y)|+|X_{1,t}\cap h_{1}^{-1}(y)|}{|X_{0,t}|+|X_{1,t}|}. \tag{108}\]
Let us define
\[G_{b,t,h_{b}}\coloneqq\{y\in[k]:|X_{b,t}\cap h_{b}^{-1}(y)|=1\}. \tag{109}\]
Then, the probability that we obtain \(y\) such that \(|X_{0,t}\cap h_{0}^{-1}(y)|=|X_{1,t}\cap h_{1}^{-1}(y)|=1\) is
\[\mathop{\mathbb{E}}_{h_{0},h_{1}\leftarrow\mathcal{H}}\left[ \sum_{y\in G_{0,t,h_{0}}\cap G_{1,t,h_{1}}}\frac{2}{|X_{0,t}|+|X_{1,t}|}\right] \tag{110}\] \[=\mathop{\mathbb{E}}_{h_{0},h_{1}\leftarrow\mathcal{H}}\left[ \frac{2|G_{0,t,h_{0}}\cap G_{1,t,h_{1}}|}{|X_{0,t}|+|X_{1,t}|}\right]\] (111) \[\geq\mathop{\mathbb{E}}_{h_{0},h_{1}\leftarrow\mathcal{H}}\left[ \frac{2|G_{0,t,h_{0}}\cap G_{1,t,h_{1}}|}{\frac{(1+\epsilon)k}{1-\epsilon}}\right]\] (112) \[=\frac{2(1-\epsilon)}{1+\epsilon}\mathop{\mathbb{E}}_{h_{0},h_{1 }\leftarrow\mathcal{H}}\left[\frac{|G_{0,t,h_{0}}\cap G_{1,t,h_{1}}|}{k}\right]\] (113) \[=\frac{2(1-\epsilon)}{1+\epsilon}\frac{1}{|\mathcal{H}|^{2}}\sum_ {h_{0},h_{1}\in\mathcal{H}}\frac{1}{k}\sum_{y\in[k]}\delta_{y\in G_{0,t,h_{0}}} \delta_{y\in G_{1,t,h_{1}}}\] (114) \[=\frac{2(1-\epsilon)}{1+\epsilon}\frac{1}{k}\sum_{y\in[k]}\Big{(} \frac{1}{|\mathcal{H}|}\sum_{h_{0}\in\mathcal{H}}\delta_{y\in G_{0,t,h_{0}}} \Big{)}\Big{(}\frac{1}{|\mathcal{H}|}\sum_{h_{1}\in\mathcal{H}}\delta_{y\in G_ {1,t,h_{1}}}\Big{)}\] (115) \[=\frac{2(1-\epsilon)}{1+\epsilon}\frac{1}{k}\sum_{y\in[k]}\Big{(} \mathop{\mathrm{Pr}}_{h_{0}\leftarrow\mathcal{H}}[y\in G_{0,t,h_{0}}]\Big{)} \Big{(}\mathop{\mathrm{Pr}}_{h_{1}\leftarrow\mathcal{H}}[y\in G_{1,t,h_{1}} ]\Big{)}\] (116) \[\geq\frac{2(1-\epsilon)}{1+\epsilon}\frac{1}{k}\sum_{y\in[k]} \Big{[}\frac{1}{2(1+\epsilon)}-\frac{(1+\epsilon)^{2}}{4(1-\epsilon)^{2}} \Big{]}^{2}\] (117) \[=\frac{2(1-\epsilon)}{1+\epsilon}\Big{[}\frac{1}{2(1+\epsilon)}- \frac{(1+\epsilon)^{2}}{4(1-\epsilon)^{2}}\Big{]}^{2}\] (118) \[>0.1. \tag{119}\]
Here, \(\delta_{\alpha}\) is 1 if the statement \(\alpha\) is true, and is 0 if not. In Equation (112), we have used Lemma 5.2, and in Equation (117), we have used Equation (107). In the last inequality, we have taken \(\epsilon=\frac{1}{100}\).
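The final numerical inequality can be double-checked with a throwaway snippet such as the following (with the same \(\epsilon=\frac{1}{100}\) as above); it is included only as a sanity check.

```python
# Sanity check of the last inequality in the proof of Lemma 6.3 (eps = 1/100).
eps = 1 / 100
single = 1 / (2 * (1 + eps)) - (1 + eps) ** 2 / (4 * (1 - eps) ** 2)  # value in Eq. (107)
bound = 2 * (1 - eps) / (1 + eps) * single ** 2                       # value in Eq. (118)
print(round(single, 4), round(bound, 4))  # ~0.2348 and ~0.1081, so the bound exceeds 0.1
assert single > 0 and bound > 0.1
```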
### Soundness
In this subsection, we show \(\frac{7}{8}\)-strong-soundness.
Proof of Theorem 6.2.: Our goal is to prove that for any PPT malicious prover \(\mathcal{P}^{*}\) and any polynomial \(p\),
\[\Pr_{r\leftarrow\mathcal{R}}\left[\Pr[\top\leftarrow\mathcal{V}_{2}(I):I \leftarrow\langle\mathcal{P}_{r}^{*}(1^{\lambda}),\mathcal{V}_{1}(1^{\lambda} )\rangle]\geq\frac{7}{8}+\frac{1}{p(\lambda)}\right]\leq\frac{1}{p(\lambda)} \tag{120}\]
for sufficiently large \(\lambda\) where \(\mathcal{R}\) is the randomness space for \(\mathcal{P}^{*}\) and \(\mathcal{P}_{r}^{*}\) is \(\mathcal{P}^{*}\) with the fixed randomness \(r\).
Toward contradiction, suppose that there are a PPT prover \(\mathcal{P}^{*}\) and a polynomial \(p\) such that
\[\Pr_{r\leftarrow\mathcal{R}}\left[\Pr[\top\leftarrow\mathcal{V}_{2}(I):I \leftarrow\langle\mathcal{P}_{r}^{*}(1^{\lambda}),\mathcal{V}_{1}(1^{\lambda} )\rangle]\geq\frac{7}{8}+\frac{1}{p(\lambda)}\right]>\frac{1}{p(\lambda)} \tag{121}\]
for infinitely many \(\lambda\). Then we prove the following lemma.
**Lemma 6.4**.: _There is an oracle-aided PPT algorithm \(\mathcal{B}\) that breaks the computational binding property of the commitment scheme if it is given black-box access to \(\mathcal{P}_{r}^{*}\) such that_
\[\Pr[\top\leftarrow\mathcal{V}_{2}(I):I\leftarrow\langle\mathcal{P }_{r}^{*}(1^{\lambda}),\mathcal{V}_{1}(1^{\lambda})\rangle]\geq\frac{7}{8}+ \frac{1}{p(\lambda)} \tag{122}\]
_for infinitely many \(\lambda\)._
By combining Equation (121) and Lemma 6.4, \(\mathcal{B}^{\mathcal{P}_{r}^{*}}\) for random \(r\leftarrow\mathcal{R}\) breaks the computational binding property, which is a contradiction. Thus, we only have to prove Lemma 6.4 for completing the proof of Theorem 6.2.
Proof of Lemma 6.4.: The proof is very similar to that of [13], which in turn is based on [11]. Nonetheless, there is a difference: we have to deal with the case where \(|X_{0,t}\cap h_{0}^{-1}(y)|=|X_{1,t}\cap h_{1}^{-1}(y)|=1\) is not satisfied. Thus, we provide the full proof even though we sometimes repeat the same arguments as those in [13]; some sentences are taken verbatim from there with notational adaptation.
We fix \(r\) and an infinite set \(\Gamma\subseteq\mathbb{N}\) such that Equation (122) holds for all \(\lambda\in\Gamma\). In the following, we simply write \(\mathcal{P}^{*}\) to mean \(\mathcal{P}_{r}^{*}\) and \(\Pr[\top\leftarrow\mathcal{V}_{2}]\) to mean \(\Pr[\top\leftarrow\mathcal{V}_{2}(I):I\leftarrow\langle\mathcal{P}_{r}^{*}(1 ^{\lambda}),\mathcal{V}_{1}(1^{\lambda})\rangle]\). We also often omit to say "for all \(\lambda\in\Gamma\)", but whenever we refer to some inequality where \(\lambda\) appears, we always mean it holds for all \(\lambda\in\Gamma\).
Define
\[\mathsf{Good}\coloneqq\left\{(t,h_{0},h_{1},y):\Pr[\top\leftarrow \mathcal{V}_{2}\mid(t,h_{0},h_{1},y)]\geq\frac{7}{8}+\frac{1}{2p(\lambda)} \right\}, \tag{123}\]
where \(\Pr[\top\leftarrow\mathcal{V}_{2}\mid(t,h_{0},h_{1},y)]\) denotes \(\mathcal{V}_{2}\)'s acceptance probability conditioned on a fixed \((t,h_{0},h_{1},y)\), and define
\[p_{\mathsf{Good}}:=\Pr[(t,h_{0},h_{1},y)\in\mathsf{Good}]. \tag{124}\]
Note that we have \(|X_{0,t}\cap h_{0}^{-1}(y)|=|X_{1,t}\cap h_{1}^{-1}(y)|=1\) for all \((t,h_{0},h_{1},y)\in\mathsf{Good}\) since otherwise \(\Pr[\top\leftarrow\mathcal{V}_{2}\mid(t,h_{0},h_{1},y)]=\frac{7}{8}\). Then we have
\[\Pr[\top\leftarrow\mathcal{V}_{2}] =\Pr[\top\leftarrow\mathcal{V}_{2}\wedge(t,h_{0},h_{1},y)\in \mathsf{Good}]+\Pr[\top\leftarrow\mathcal{V}_{2}\wedge(t,h_{0},h_{1},y)\notin \mathsf{Good}] \tag{125}\] \[\leq p_{\mathsf{Good}}+(1-p_{\mathsf{Good}})\cdot\left(\frac{7}{8} +\frac{1}{2p(\lambda)}\right). \tag{126}\]
By Equations (122) and (126), we have
\[p_{\mathsf{Good}}\geq\frac{1}{2p(\lambda)}. \tag{127}\]
We fix \((t,h_{0},h_{1},y)\in\mathsf{Good}\) until Equation (137).
For \(b\in\{0,1\}\), let \(x_{b}\in\{0,1\}^{\ell}\) be the unique element in \(X_{b,t}\cap h_{b}^{-1}(y)\). Note that it is well-defined since we assume \((t,h_{0},h_{1},y)\in\mathsf{Good}\), which implies \(|X_{0,t}\cap h_{0}^{-1}(y)|=|X_{1,t}\cap h_{1}^{-1}(y)|=1\).
We define the following probabilities all of which are conditioned on the fixed value of \((t,h_{0},h_{1},y)\):
* \(p_{0}\): The probability that \(\mathcal{V}_{2}\) returns \(\top\) conditioned on \(v_{1}=0\).
* \(p_{1}\): The probability that \(\mathcal{V}_{2}\) returns \(\top\) conditioned on \(v_{1}=1\).
* \(p_{1,0}\): The probability that \(\mathcal{V}_{2}\) returns \(\top\) conditioned on \(v_{1}=1\) and \(v_{2}=0\).
* \(p_{1,1}\): The probability that \(\mathcal{V}_{2}\) returns \(\top\) conditioned on \(v_{1}=1\) and \(v_{2}=1\).
Clearly, we have
\[\Pr[\top\leftarrow\mathcal{V}_{2}|(t,h_{0},h_{1},y)]=\frac{p_{0 }+p_{1}}{2} \tag{128}\]
and \[p_{1}=\frac{p_{1,0}+p_{1,1}}{2}. \tag{129}\]
By \((t,h_{0},h_{1},y)\in\mathsf{Good}\), Equations (123) and (128), and a trivial inequality \(p_{0},p_{1}\leq 1\), we have
\[p_{0}\geq\frac{3}{4}+\frac{1}{p(\lambda)} \tag{130}\]
and
\[p_{1}\geq\frac{3}{4}+\frac{1}{p(\lambda)}. \tag{131}\]
Let \(\mathcal{A}\) be a classical deterministic polynomial-time algorithm that works as follows:
1. \(\mathcal{A}\) takes \((t,h_{0},h_{1},y)\) and \(\xi\in\{0,1\}^{\ell}\) as input.
2. \(\mathcal{A}\) runs Step 6 of \(\mathcal{P}^{*}\) where the transcript of Step 1-4 is set to be \((t,h_{0},h_{1},y)\) and the transcript of Step 5 is set to be \((v_{1}=1,\xi)\). Let \(d\in\{0,1\}^{\ell}\) be the message sent from \(\mathcal{P}^{*}\) to \(\mathcal{V}_{1}\). Note that \(\mathcal{P}^{*}\)'s message is determined by the previous transcript since \(\mathcal{P}^{*}\) is deterministic. (Recall that \(\mathcal{P}^{*}\) is a shorthand of \(\mathcal{P}^{*}_{r}\) for a fixed randomness \(r\).)
3. \(\mathcal{A}\) runs Step 8 of \(\mathcal{P}^{*}\) where the transcript of Step 1-4 is set to be \((t,h_{0},h_{1},y)\), the transcript of Step 5 is set to be \((v_{1}=1,\xi)\), the transcript of Step 6 is set to be \(d\), and the transcript of Step 7 is set to be \(v_{2}=0\). Let \(\eta_{1,0}\) be the message sent from \(\mathcal{P}^{*}\) to \(\mathcal{V}_{1}\).
4. \(\mathcal{A}\) runs Step 8 of \(\mathcal{P}^{*}\) where the transcript of Step 1-4 is set to be \((t,h_{0},h_{1},y)\), the transcript of Step 5 is set to be \((v_{1}=1,\xi)\), the transcript of Step 6 is set to be \(d\), and the transcript of Step 7 is set to be \(v_{2}=1\). Let \(\eta_{1,1}\) be the message sent from \(\mathcal{P}^{*}\) to \(\mathcal{V}_{1}\).
5. \(\mathcal{A}\) outputs \(\eta_{1,0}\oplus\eta_{1,1}\oplus 1\).
By the union bound, the probability that both \((d,\eta_{1,0})\) and \((d,\eta_{1,1})\) pass the verification is at least
\[1-(1-p_{1,0})-(1-p_{1,1})=-1+2p_{1}\geq\frac{1}{2}+\frac{1}{p( \lambda)}, \tag{132}\]
where the equation follows from Equation (129) and the inequality follows from Equation (131). When this occurs, for each \(v_{2}\in\{0,1\}\), we have
\[(\xi\cdot x_{0}\neq\xi\cdot x_{1})\wedge(\eta_{1,v_{2}}=\xi\cdot x _{0}), \tag{133}\]
or
\[(\xi\cdot x_{0}=\xi\cdot x_{1})\wedge(\eta_{1,v_{2}}=v_{2}\oplus d \cdot(x_{0}\oplus x_{1})). \tag{134}\]
(Remark that the same \(d\) is used for both cases of \(v_{2}=0\) and \(v_{2}=1\).) In particular, if \(\xi\cdot x_{0}\neq\xi\cdot x_{1}\) then \(\eta_{1,0}=\eta_{1,1}\), and if \(\xi\cdot x_{0}=\xi\cdot x_{1}\) then \(\eta_{1,0}=\eta_{1,1}\oplus 1\). This implies that
\[\eta_{1,0}\oplus\eta_{1,1}\oplus 1=\xi\cdot(x_{0}\oplus x_{1}). \tag{135}\]
Therefore, we have
\[\Pr_{\xi\leftarrow\{0,1\}^{\ell}}[\mathcal{A}((t,h_{0},h_{1},y), \xi)=\xi\cdot(x_{0}\oplus x_{1})]\geq\frac{1}{2}+\frac{1}{p(\lambda)}. \tag{136}\]
Thus, by the Goldreich-Levin theorem [10], there is a PPT algorithm \(\mathcal{E}\) such that
\[\Pr[\mathcal{E}(t,h_{0},h_{1},y)=x_{0}\oplus x_{1}]\geq\frac{1}{p ^{\prime}(\lambda)} \tag{137}\]
for some polynomial \(p^{\prime}\). (Remark that what we have shown so far is that the above holds for any fixed \((t,h_{0},h_{1},y)\in\mathsf{Good}\).)
Then, we construct a PPT algorithm \(\mathcal{B}\) that breaks the computational binding property of the classical bit commitment scheme as follows:
1. \(\mathcal{B}\) interacts with the receiver \(\mathcal{R}\) in the same way as \(\mathcal{P}^{*}\) does in Step 1 of Algorithm 4, and let \(t\) be the transcript obtained from the execution.
2. \(\mathcal{B}\) chooses hash functions \(h_{0}\) and \(h_{1}\) as in Step 3 of Algorithm 4, and send them to \(\mathcal{P}^{*}\).
3. \(\mathcal{P}^{*}\) returns \(y\) as a message of Step 4 in Algorithm 4. At this point, \((x_{0},x_{1})\) is implicitly determined if \((t,h_{0},h_{1},y)\in\mathsf{Good}\).
4. \(\mathcal{B}\) sends \(v_{1}=0\) and \(\xi\leftarrow\{0,1\}^{\ell}\) to \(\mathcal{P}^{*}\) as a message of Step 5 in Algorithm 4.
5. \(\mathcal{P}^{*}\) returns \((b^{\prime},x^{\prime})\) as a message of the first case of Step 6 in Algorithm 4.
6. \(\mathcal{B}\) runs \(\mathcal{E}(t,h_{0},h_{1},y)\) and let \(z\) be the output.
7. \(\mathcal{B}\) sets \(x^{\prime}_{0}\coloneqq x^{\prime}\) and \(x^{\prime}_{1}\coloneqq x^{\prime}\oplus z\) if \(b^{\prime}=0\), and \(x^{\prime}_{0}\coloneqq x^{\prime}\oplus z\) and \(x^{\prime}_{1}\coloneqq x^{\prime}\) otherwise. For each \(b\in\{0,1\}\), \(\mathcal{B}\) generates a decommitment \(\mathsf{d}\mathsf{c}\mathsf{o}\mathsf{m}_{b}\) corresponding to the sender's randomness \(x^{\prime}_{b}\) and transcript \(t\). \(\mathcal{B}\) outputs \((0,\mathsf{d}\mathsf{c}\mathsf{o}\mathsf{m}_{0})\) and \((1,\mathsf{d}\mathsf{c}\mathsf{o}\mathsf{m}_{1})\).
Recall that we have shown that for any \((t,h_{0},h_{1},y)\in\mathsf{Good}\), Equations (130) and (137) hold. Thus, for any \((t,h_{0},h_{1},y)\in\mathsf{Good}\), we have
\[\Pr[x^{\prime}=x_{b^{\prime}}|(t,h_{0},h_{1},y)]\geq\frac{3}{4}+ \frac{1}{p(\lambda)} \tag{138}\]
and
\[\Pr[z=x_{0}\oplus x_{1}|(t,h_{0},h_{1},y)]\geq\frac{1}{p^{\prime }(\lambda)}. \tag{139}\]
Moreover, the two events \(x^{\prime}=x_{b^{\prime}}\) and \(z=x_{0}\oplus x_{1}\) are independent once we fix \((t,h_{0},h_{1},y)\). Therefore, for any \((t,h_{0},h_{1},y)\in\mathsf{Good}\), we have
\[\Pr[x^{\prime}=x_{b^{\prime}}\wedge z=x_{0}\oplus x_{1}|(t,h_{0}, h_{1},y)]\geq\frac{3}{4p^{\prime}(\lambda)}. \tag{140}\]
Combined with Equation (127), we have
\[\Pr[(t,h_{0},h_{1},y)\in\mathsf{Good}\wedge x^{\prime}_{0}=x_{0 }\wedge x^{\prime}_{1}=x_{1}]\geq\frac{3}{8p(\lambda)p^{\prime}(\lambda)}. \tag{141}\]
By the definition of \(x_{b}\), we have \(x_{b}\in X_{b,t}\) for \(b\in\{0,1\}\). Thus, by the perfect correctness of the commitment scheme, \(\mathsf{d}\mathsf{c}\mathsf{o}\mathsf{m}_{b}\) derived from \((x_{b},t)\) is a valid decommitment. Thus, Equation (141) implies that \(\mathcal{B}\) outputs valid decommitments for both messages \(0\) and \(1\) with probability at least \(\frac{3}{8p(\lambda)p^{\prime}(\lambda)}\) (for all \(\lambda\in\Gamma\)). This completes the proof of Lemma 6.4.
This completes the proof of Theorem 6.2.
### Computational Power of the Inefficient Verifier
In this subsection, we show that \(\mathcal{V}_{2}\) can be a classical deterministic polynomial-time algorithm querying an \(\mathbf{NP}\) oracle. (The inefficient verifier in our construction actually uses randomness when \(|X_{0,t}\cap h_{0}^{-1}(y)|=|X_{1,t}\cap h_{1}^{-1}(y)|=1\) is not satisfied, but the inefficient verifier can be deterministic if we let the first phase verifier append the randomness to the transcript.)
The inefficient parts of \(\mathcal{V}_{2}\) are verifying \(|X_{0,t}\cap h_{0}^{-1}(y)|=|X_{1,t}\cap h_{1}^{-1}(y)|=1\) and finding \((x_{0},x_{1})\), where \(x_{b}\) is the single element of \(X_{b,t}\cap h_{b}^{-1}(y)\). We show that these two tasks can be done in classical deterministic polynomial-time querying an \(\mathbf{NP}\) oracle. Note that the membership of \(x\in X_{b,t}\cap h_{b}^{-1}(y)\) can be decided in a classical deterministic polynomial time. Therefore, the decision problem
* (Yes) There exists \(x\in\{0,1\}^{\ell}\) such that \(x\in X_{b,t}\cap h_{b}^{-1}(y)\).
* (No) For any \(x\in\{0,1\}^{\ell}\), \(x\notin X_{b,t}\cap h_{b}^{-1}(y)\).
is in \(\mathbf{NP}\).
First, \(\mathcal{V}_{2}\) queries the above decision problem to the \(\mathbf{NP}\) oracle for each \(b\in\{0,1\}\). If the answer is no for a \(b\in\{0,1\}\), it means that \(|X_{b,t}\cap h_{b}^{-1}(y)|=0\). In that case, \(\mathcal{V}_{2}\) concludes that \(|X_{0,t}\cap h_{0}^{-1}(y)|=|X_{1,t}\cap h_{1}^{-1}(y)|=1\) is not satisfied. If the answer is yes for both \(b\in\{0,1\}\), \(|X_{0,t}\cap h_{0}^{-1}(y)|\geq 1\) and \(|X_{1,t}\cap h_{1}^{-1}(y)|\geq 1\) are guaranteed.
Then, \(\mathcal{V}_{2}\) finds an element \(x_{b}\in X_{b,t}\cap h_{b}^{-1}(y)\) for each \(b\in\{0,1\}\). Finding such an element is just an \(\mathbf{NP}\) search problem, which can be solved in classical deterministic polynomial-time by querying the \(\mathbf{NP}\) oracle polynomially many times.
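As a reminder of what this standard search-to-decision reduction looks like, here is a generic sketch in Python (this is textbook material rather than anything specific to our construction; the decision oracle is an abstract placeholder that, in our setting, would be instantiated by \(\mathbf{NP}\) queries about membership in \(X_{b,t}\cap h_{b}^{-1}(y)\) restricted to a given prefix).

```python
# Generic prefix search: recover one element of a non-empty set S ⊆ {0,1}^l using a
# decision oracle answering "does S contain a string starting with this prefix?".
# The oracle is an abstract placeholder here (a hypothetical stand-in for the NP oracle).
def prefix_search(ell, has_element_with_prefix):
    prefix = []
    for _ in range(ell):
        if has_element_with_prefix(prefix + [0]):
            prefix.append(0)
        else:                 # S contains a string with the current prefix,
            prefix.append(1)  # so some element must continue with a 1 instead
    return prefix

# Toy usage with S given explicitly (standing in for the oracle):
S = {(0, 1, 1, 0), (1, 0, 0, 1)}
oracle = lambda p: any(s[: len(p)] == tuple(p) for s in S)
assert tuple(prefix_search(4, oracle)) in S
```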
\(\mathcal{V}_{2}\) finally queries the following decision problem to the \(\mathbf{NP}\) oracle for each \(b\in\{0,1\}\).
* (Yes) There exists \(x\in\{0,1\}^{\ell}\) such that \(x\in(X_{b,t}\cap h_{b}^{-1}(y))\setminus\{x_{b}\}\).
* (No) For any \(x\in\{0,1\}^{\ell}\), \(x\notin(X_{b,t}\cap h_{b}^{-1}(y))\setminus\{x_{b}\}\).
If the answer is yes for a \(b\in\{0,1\}\), it means that \(|X_{b,t}\cap h_{b}^{-1}(y)|\geq 2\). In that case, \(\mathcal{V}_{2}\) concludes that \(|X_{0,t}\cap h_{0}^{-1}(y)|=|X_{1,t}\cap h_{1}^{-1}(y)|=1\) is not satisfied. If the answer is no for both \(b\in\{0,1\}\), \(\mathcal{V}_{2}\) concludes that \(|X_{0,t}\cap h_{0}^{-1}(y)|=|X_{1,t}\cap h_{1}^{-1}(y)|=1\) is satisfied.
## 7 Implausibility of Two-Round AI-IV-PoQ
In this section, we prove Theorems 1.8 and 1.9.
### Impossibility of Classical Reduction
In this subsection, we formally state Theorem 1.8 and prove it.
First, we define game-based assumptions. The definition is identical to _falsifiable assumptions_ in [11] except for the important difference that the challenger can be unbounded-time.
**Definition 7.1** (Game-based assumptions).: _A game-based assumption consists of a possibly unbounded-time interactive machine \(\mathcal{C}\) (the challenger) and a constant \(t\in[0,1)\). On the security parameter \(1^{\lambda}\), the challenger \(\mathcal{C}(1^{\lambda})\) interacts with a classical or quantum machine \(\mathcal{A}\) (the adversary) over a classical channel and finally outputs a bit \(b\). We denote this execution by \(b\leftarrow\langle\mathcal{A}(1^{\lambda}),\mathcal{C}(1^{\lambda})\rangle\)._
_We say that a game-based assumption \((\mathcal{C},t)\) holds against classical (resp. quantum) adversaries if for any PPT (resp. QPT) adversary \(\mathcal{A}\), \(|\Pr[1\leftarrow\langle\mathcal{A}(1^{\lambda}),\mathcal{C}(1^{\lambda}) \rangle]-t|\leq\mathsf{negl}(\lambda)\)._
_Remark 7.2_ (Examples).: As explained in [11], the above definition captures a very wide class of assumptions used in cryptography even if we restrict the challenger to be PPT. They include (but are not limited to) general assumptions such as the security of OWFs, public-key encryption, digital signatures, oblivious transfers, etc., as well as concrete assumptions such as the hardness of factoring, discrete logarithm, LWE, etc. In addition, since we allow the challenger to be unbounded-time, it also captures some non-falsifiable assumptions such as the hardness of indistinguishability obfuscation [12, 13] or succinct arguments [14], etc. Examples of assumptions that are not captured by the above definition include so-called knowledge assumptions [1, 1, 2, 1] and zero-knowledge proofs with non-black-box zero-knowledge [17, 1].
We clarify meanings of several terms used in the statement of our theorem.
**Definition 7.3** (Classical oracle-access to a cheating prover).: _Let \(\Pi=(\mathcal{P},\mathcal{V}=(\mathcal{V}_{1},\mathcal{V}_{2}))\) be a two-round AI-IV-PoQ. We say that a (possibly unbounded-time stateless) randomized classical machine \(\mathcal{P}^{*}\) breaks \(s\)-soundness of \(\Pi\) if there is a polynomial \(\operatorname{poly}\) such that \(\Pr[\top\leftarrow\mathcal{V}_{2}(I):I\leftarrow\langle\mathcal{P}^{*}( \sigma),\mathcal{V}_{1}(\sigma)\rangle]\geq s(|\sigma|)+1/\operatorname{poly} (|\sigma|)\) for all but finitely many \(\sigma\in\Sigma\). We say that an oracle-aided classical machine \(\mathcal{R}\) is given oracle access to \(\mathcal{P}^{*}\) if it can query an auxiliary input \(\sigma\) and the first-round message \(m_{1}\) of \(\Pi\) and the oracle returns the second-round message \(m_{2}\) generated by \(\mathcal{P}^{*}\) with a fresh randomness \(r\) in each invocation._
_Remark 7.4_.: Since we consider two-round protocols, we can assume that \(\mathcal{P}^{*}\) is stateless without loss of generality.
Then we state the formal version of Theorem 1.8.
**Theorem 7.5**.: _Let \(\Pi\) be a two-round AI-IV-PoQ that satisfies \(c\)-completeness and \(s\)-soundness where \(c(\lambda)-s(\lambda)\geq 1/\operatorname{poly}(\lambda)\) and let \((\mathcal{C},t)\) be a game-based assumption. Suppose that there is an oracle-aided PPT machine \(\mathcal{R}\) (the reduction algorithm) such that for any (possibly unbounded-time stateless) randomized classical machine \(\mathcal{P}^{*}\) that breaks \(s\)-soundness of \(\Pi\), \(|\Pr[1\leftarrow\langle\mathcal{R}^{\mathcal{P}^{*}}(1^{\lambda}),\mathcal{C}( 1^{\lambda})\rangle]-t|\) is non-negligible. Then the game-based assumption \((\mathcal{C},t)\) does not hold against quantum adversaries._
Proof.: Let \(\widetilde{\mathcal{P}}^{*}\) be an unbounded-time randomized classical machine that simulates the honest QPT prover \(\mathcal{P}\). Then \(\widetilde{\mathcal{P}}^{*}\) breaks \(s\)-soundness of \(\Pi\) because of \(c\)-completeness and \(c(\lambda)-s(\lambda)\geq 1/\operatorname{poly}(\lambda)\). Thus, \(|\Pr[1\leftarrow\langle\mathcal{R}^{\widetilde{\mathcal{P}}^{*}}(1^{\lambda}),\mathcal{C}(1^{\lambda})\rangle]-t|\) is non-negligible. Since \(\widetilde{\mathcal{P}}^{*}\) is simulatable by a QPT machine, \(\mathcal{R}^{\widetilde{\mathcal{P}}^{*}}\) is simulatable by a QPT machine. Thus, \((\mathcal{C},t)\) does not hold against quantum adversaries.
_Remark 7.6_.: One might think that a similar proof extends to the case of quantum reductions. However, we believe that this is non-trivial. For example, suppose that a quantum reduction algorithm \(\mathcal{R}\) queries a uniform superposition \(\sum_{m_{1}}|m_{1}\rangle\) to the oracle where we omit an auxiliary input for simplicity. If the oracle is a classical randomized cheating prover \(\mathcal{P}^{*}\), then it should return a state of the form \(\sum_{m_{1}}|m_{1}\rangle\,|\mathcal{P}^{*}(m_{1};r)\rangle\) for a randomly chosen \(r\) where \(\mathcal{P}^{*}(m_{1};r)\) is the second message \(m_{2}\) sent by \(\mathcal{P}^{*}\) given the first-round message \(m_{1}\) and randomness \(r\). On the other hand, if we try to simulate the oracle by using the honest QPT prover \(\mathcal{P}\), then the resulting state is of the form \(\sum_{m_{1},m_{2}}|m_{1}\rangle\,|m_{2}\rangle\,|garbage_{m_{1},m_{2}}\rangle\). Due to potential entanglement between the first two registers and the third register, this does not correctly simulate the situation with a classical prover.
### Oracle Separation
In this subsection, we formally state Theorem 1.9 and prove it.
First, we define cryptographic primitives. The following definition is taken verbatim from [13] except for the difference that we consider quantum security. We remark that we restrict primitives themselves to be classical and only allow the adversary (the machine \(M\)) to be quantum.
**Definition 7.7** (Cryptographic primitives; quantumly-secure version of [13, Definition 2.1]).: _A primitive \(\mathsf{P}\) is a pair \((F_{\mathsf{P}},R_{\mathsf{P}})\), where \(F_{\mathsf{P}}\) is a set of functions \(f:\{0,1\}^{*}\rightarrow\{0,1\}^{*}\), and \(R_{\mathsf{P}}\) is a relation over pairs \((f,M)\) of a function \(f\in F_{\mathsf{P}}\) and an interactive quantum machine \(M\). The set \(F_{\mathsf{P}}\) is required to contain at least one function which is computable by a PPT machine._
_A function \(f:\{0,1\}^{*}\rightarrow\{0,1\}^{*}\) implements \(\mathsf{P}\) or is an implementation of \(\mathsf{P}\) if \(f\in F_{\mathsf{P}}\). An efficient implementation of \(\mathsf{P}\) is an implementation of \(\mathsf{P}\) which is computable by a PPT machine. A machine \(M\)\(\mathsf{P}\)-breaks \(f\in F_{\mathsf{P}}\) if \((f,M)\in R_{\mathsf{P}}\). A quantumly-secure implementation of \(\mathsf{P}\) is an implementation of \(\mathsf{P}\) such that no QPT machine \(\mathsf{P}\)-breaks \(f\)._
It was pointed out in [1] that the above definition was too general and that there are subtle logical gaps and counterexamples in their claims. In particular, [13] implicitly assumes that if two machines \(M\) and \(M^{\prime}\) behave identically, then \((f,M)\in R_{\mathsf{P}}\) and \((f,M^{\prime})\in R_{\mathsf{P}}\) are equivalent. We formalize this property following [1], where the definitions are taken verbatim except for adaptation to the quantumly-secure setting.
**Definition 7.8** (Output distribution [12, Definition B.1 in the ePrint version]).: _An interactive (oracle-aided) quantum Turing machine \(M\) together with its oracle defines an output distribution, namely, each fixed finite sequence of inputs fed to \(M\) induces a distribution on the output sequences by considering all random choices of \(M\) and its oracle. The output distribution of \(M\) is defined to be the set of these distributions, indexed by the finite sequences of input values._
**Definition 7.9** (Semantical cryptographic primitive [12, Definition B.2 in the ePrint version]).: _A cryptographic primitive \(\mathsf{P}=(F_{\mathsf{P}},R_{\mathsf{P}})\) is called semantical, if for all \(f\in F_{\mathsf{P}}\) and all interactive (oracle-aided) quantum Turing machines \(M\) and \(M^{\prime}\) (including their oracles), it holds: If \(M\) induces the same output distribution as \(M^{\prime}\), then \((f,M)\in R_{\mathsf{P}}\) if and only if \((f,M^{\prime})\in R_{\mathsf{P}}\)._
_Remark 7.10_ (Examples).: As explained in [10, 12], most cryptographic primitives considered in the literature are captured by semantical cryptographic primitives. They include (but are not limited to) OWFs, public-key encryption, digital signatures, oblivious transfers, indistinguishability obfuscation, etc. On the other hand, we note that it does not capture concrete assumptions such as the hardness of factoring, discrete logarithm, LWE, etc., unlike the game-based assumptions defined in Definition 7.1. Similarly to game-based assumptions, semantical cryptographic primitives do not capture knowledge-type assumptions or zero-knowledge proofs (with non-black-box zero-knowledge).
Next, we define secure implementation relative to oracles following [10].
**Definition 7.11** (Secure implementation relative to oracles [10, Definition 2.2]).: _A quantumly-secure implementation of primitive \(\mathsf{P}\) exists relative to an oracle \(O\) if there exists an implementation of \(f\) of \(\mathsf{P}\) which is computable by a PPT oracle machine with access to \(O\) and such that no QPT oracle machine with access to \(O\)\(\mathsf{P}\)-breaks \(f\)._
_Remark 7.12_ (Example).: A quantumly-secure implementation of OWFs and collision-resistant hash functions exists relative to a random oracle [12, 13]. [11] implicitly proves that a quantumly-secure implementation of trapdoor permutations exists relative to a classical oracle. We believe that we can prove similar statements for most cryptographic primitives by appropriately defining oracles.
Now, we are ready to state the formal version of Theorem 1.9.
**Theorem 7.13**.: _Suppose that a semantical cryptographic primitive \(\mathsf{P}=(F_{\mathsf{P}},R_{\mathsf{P}})\) has a quantumly-secure implementation relative to a classical oracle. Then there is a randomized classical oracle relative to which two-round AI-IV-PoQ do not exist but a quantumly-secure implementation of \(\mathsf{P}\) exists._
_Remark 7.14_.: If we assume that a quantumly-secure implementation of a semantical cryptographic primitive \(\mathsf{P}\) exists in the unrelativized world, then the assumption of the theorem is trivially satisfied relative to a trivial oracle that does nothing. Thus, the above theorem can be understood as a negative result on constructing two-round AI-IV-PoQ from any primitive whose quantumly-secure implementation is believed to exist.
Proof.: Let \(f\) be a quantumly-secure implementation of \(\mathsf{P}\) relative to a classical oracle \(O\). Let \(Q^{O}\) be a randomized oracle that takes a description of an \(n\)-qubit input quantum circuit \(C^{O}\) with \(O\)-gates and a classical string \(x\in\{0,1\}^{n}\) as input and returns a classical string according to the distribution of \(C^{O}(x)\). We prove that two-round AI-IV-PoQ do not exist but \(f\) is a quantumly-secure implementation of \(\mathsf{P}\) relative to the oracles \((O,Q^{O})\).
Let \(\Pi=(\mathcal{P},\mathcal{V}=(\mathcal{V}_{1},\mathcal{V}_{2}))\) be a two-round AI-IV-PoQ that satisfies \(s\)-soundness relative to \((O,Q^{O})\). Since \(Q^{O}\) is simulatable in QPT with oracle access to \(O\), given the auxiliary input \(\sigma\), one can generate a description of a quantum circuit \(C^{O}\) with \(O\)-gates that simulates \(\mathcal{P}^{O,Q^{O}}(\sigma)\) in classical polynomial time. Then let us consider a classical cheating prover \(\mathcal{P}^{*}\) relative to the oracles \((O,Q^{O})\) that works as follows: Receiving the auxiliary input \(\sigma\) and the first-round message \(m_{1}\) from the external verifier, \(\mathcal{P}^{*}\) generates the above quantum circuit \(C^{O}\), queries \((C^{O},m_{1})\) to the oracle \(Q^{O}\) to receive the response \(m_{2}\), and sends \(m_{2}\) to the external verifier. Clearly, \(\mathcal{P}^{*}\) passes the verification with the same probability as \(\mathcal{P}\) does. Therefore, \(\Pi\) cannot satisfy \(c\)-completeness for any \(c\) such that \(c(\lambda)-s(\lambda)\geq 1/\operatorname{poly}(\lambda)\). This means that there is no two-round AI-IV-PoQ relative to \((O,Q^{O})\).
On the other hand, if \(f\) is not a quantumly-secure implementation of \(\mathsf{P}\) relative to \((O,Q^{O})\), then there is a QPT oracle-aided machine \(M\) such that \((f,M^{O,Q^{O}})\in R_{\mathsf{P}}\). Again, since \(Q^{O}\) is simulatable in QPT with oracle access to \(O\), there is a QPT oracle-aided machine \(\widetilde{M}\) such that \(M^{O,Q^{O}}\) and \(\widetilde{M}^{O}\) induce the same output distributions. Since
\(\mathsf{P}\) is semantical, we have \((f,\widetilde{M}^{O})\in R_{\mathsf{P}}\). This contradicts the assumption that \(f\) is a quantumly-secure implementation of \(\mathsf{P}\) relative to \(O\). Therefore, \(f\) is a quantumly-secure implementation of \(\mathsf{P}\) relative to \((O,Q^{O})\).
## 8 Variants of PoQ from QE-OWFs
**Definition 8.1** (Qe-Owfs).: _A function \(f:\{0,1\}^{*}\to\{0,1\}^{*}\) is a (classically-secure) quantum-evaluation OWF (QE-OWF) if the following two properties are satisfied._
* _There exists a QPT algorithm_ \(\mathsf{QEval}\) _such that_ \(\Pr[f(x)\leftarrow\mathsf{QEval}(x)]\geq 1-2^{-|x|}\) _for all_ \(x\in\{0,1\}^{*}\)_._23__ Footnote 23: Actually, the threshold can be any value larger than \(1/2\), because amplification is possible._
* _For any PPT adversary_ \(\mathcal{A}\)_, there exists a negligible function_ \(\mathsf{negl}\) _such that for any_ \(\lambda\)_,_ \[\Pr[f(x^{\prime})=f(x):x^{\prime}\leftarrow\mathcal{A}(1^{\lambda},f(x)),x \leftarrow\{0,1\}^{\lambda}]\leq\mathsf{negl}(\lambda).\] (142)
_Remark 8.2_.: It is usually useless to consider OWFs whose evaluation algorithm is QPT but whose security is against PPT adversaries. However, for our applications, classical security is enough. We therefore consider classically-secure QE-OWFs, because it only makes our result stronger.
Before explaining our construction of variants of PoQ from QE-OWFs, we point out that QE-OWFs seem to be weaker than classically-secure and classical-evaluation OWFs. Let \(g\) be a classically-secure and classical-evaluation OWF. Let \(L\) be any language in \(\mathbf{BQP}\). From them, we construct the function \(f\) as follows: \(f(x,y)\coloneqq L(x)\|g(y)\), where \(L(x)=1\) if \(x\in L\) and \(L(x)=0\) if \(x\notin L\). Then we have the following lemma.
**Lemma 8.3**.: \(f\) _is a QE-OWF. Moreover, if \(\mathbf{BQP}\neq\mathbf{BPP}\), \(f\) cannot be evaluated in classical polynomial-time._
Proof.: First, it is clear that there exists a QPT algorithm \(\mathsf{QEval}\) such that for any \(x,y\)
\[\Pr[f(x,y)\leftarrow\mathsf{QEval}(x,y)]\geq 1-2^{-|x\|y|}. \tag{143}\]
Second, let us show the one-wayness of \(f\). Assume that it is not one-way. Then, there exists a PPT adversary \(\mathcal{A}\) and a polynomial \(p\) such that
\[\frac{1}{2^{n}}\sum_{x\in\{0,1\}^{n}}\frac{1}{2^{m}}\sum_{y\in\{0,1\}^{m}}\Pr[ L(x)=L(x^{\prime})\wedge g(y)=g(y^{\prime}):(x^{\prime},y^{\prime})\leftarrow \mathcal{A}(L(x)\|g(y))]\geq\frac{1}{p}. \tag{144}\]
From this \(\mathcal{A}\), we can construct a PPT adversary \(\mathcal{B}\) that breaks the one-wayness of \(g\) as follows.
1. On input \(g(y)\), sample \(x\leftarrow\{0,1\}^{n}\) and \(b\leftarrow\{0,1\}\).
2. Run \((x^{\prime},y^{\prime})\leftarrow\mathcal{A}(b\|g(y))\).
3. Output \(y^{\prime}\).
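Schematically (and only schematically: the function \(g\) and the adversary \(\mathcal{A}\) are abstract callables supplied by the caller, and the names are ours), the reduction has the following shape in code.

```python
# Shape of the adversary B in the proof of Lemma 8.3 (illustration only; `g_of_y` is
# the challenge g(y), and `adversary_A` is an abstract placeholder for the inverter A).
import secrets

def adversary_B(g_of_y, n, adversary_A):
    x = secrets.randbits(n)                    # sampled as in the description (unused below)
    b = secrets.randbits(1)                    # uniform guess for the bit L(x)
    x_prime, y_prime = adversary_A(b, g_of_y)  # run A on the string b || g(y)
    return y_prime
```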
The probability that \(\mathcal{B}\) breaks the one-wayness of \(g\) is
\[\frac{1}{2^{m}}\sum_{y\in\{0,1\}^{m}}\frac{1}{2^{n}}\sum_{x\in\{0, 1\}^{n}}\frac{1}{2}\sum_{b\in\{0,1\}}\sum_{x^{\prime},y^{\prime}}\Pr[(x^{ \prime},y^{\prime})\leftarrow\mathcal{A}(b\|g(y))]\delta_{g(y),g(y^{\prime})} \tag{145}\] \[\geq\frac{1}{2}\frac{1}{2^{n}}\sum_{x\in\{0,1\}^{n}}\frac{1}{2^{ m}}\sum_{y\in\{0,1\}^{m}}\sum_{x^{\prime},y^{\prime}}\Pr[(x^{\prime},y^{\prime}) \leftarrow\mathcal{A}(L(x)\|g(y))]\delta_{g(y),g(y^{\prime})}\] (146) \[\geq\frac{1}{2}\frac{1}{2^{n}}\sum_{x\in\{0,1\}^{n}}\frac{1}{2^{ m}}\sum_{y\in\{0,1\}^{m}}\sum_{x^{\prime},y^{\prime}}\Pr[(x^{\prime},y^{\prime}) \leftarrow\mathcal{A}(L(x)\|g(y))]\delta_{g(y),g(y^{\prime})}\delta_{L(x),L(x^ {\prime})}\] (147) \[\geq\frac{1}{2p}, \tag{148}\]
which is non-negligible.
Finally, it is clear that if there exists a PPT algorithm that computes \(f(x,y)\) for any \(x,y\) with probability at least \(1-2^{-|x\|y|}\), then the algorithm can solve \(L\), which contradicts \(\mathbf{BQP}\neq\mathbf{BPP}\).
Now we show the main result of this section.
**Theorem 8.4**.: _If (classically-secure) QE-OWFs exist, then QV-PoQ exist or (classically-secure and classical-evaluation) infinitely-often OWFs exist._
Proof.: Let \(f\) be a classically-secure QE-OWF. From the \(f\), we construct a QV-PoQ \((\mathcal{P},\mathcal{V})\) as follows.
1. The verifier \(\mathcal{V}\) chooses \(x\leftarrow\{0,1\}^{\lambda}\), and sends it to the prover \(\mathcal{P}\).
2. \(\mathcal{P}\) runs \(y\leftarrow\mathsf{QEval}(x)\), and sends \(y\) to \(\mathcal{V}\).
3. \(\mathcal{V}\) runs \(y^{\prime}\leftarrow\mathsf{QEval}(x)\). If \(y=y^{\prime}\), \(\mathcal{V}\) outputs \(\top\). Otherwise, \(\mathcal{V}\) outputs \(\bot\).
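The message flow is simple enough to summarize in a short sketch (illustration only: qeval is an abstract stand-in for the QPT algorithm \(\mathsf{QEval}\), so this cannot actually be run without a quantum device, and the names are ours).

```python
# Message flow of the QV-PoQ built from a QE-OWF (illustration only; `qeval`
# abstracts the QPT algorithm QEval, which requires a quantum computer).
import secrets

def run_qv_poq(lam, qeval):
    x = secrets.randbits(lam)   # Step 1: V samples x <- {0,1}^lam and sends it to P
    y = qeval(x)                # Step 2: P replies with QEval(x)
    y_prime = qeval(x)          # Step 3: V independently recomputes QEval(x)
    return y == y_prime         # V outputs ⊤ iff the two evaluations agree
```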
The \(1\)-completeness is shown as follows. The probability that \(\mathcal{V}\) accepts with the honest prover is
\[\frac{1}{2^{\lambda}}\sum_{x}\sum_{y}\Pr[y\leftarrow\mathsf{QEval }(x)]^{2} \geq\frac{1}{2^{\lambda}}\sum_{x}\Pr[f(x)\leftarrow\mathsf{QEval }(x)]^{2} \tag{149}\] \[\geq\frac{1}{2^{\lambda}}\sum_{x}(1-2^{-\lambda})^{2}\] (150) \[\geq 1-\mathsf{negl}(\lambda). \tag{151}\]
If the soundness is also satisfied, then we have a QV-PoQ.
Assume that the soundness is not satisfied. Then there exists a PPT algorithm \(P^{*}\) such that for any polynomial \(\operatorname{poly}\),
\[\frac{1}{2^{\lambda}}\sum_{x}\sum_{y}\Pr[y\gets P^{*}(x)]\Pr[y \leftarrow\mathsf{QEval}(x)]\geq 1-\frac{1}{\operatorname{poly}(\lambda)} \tag{152}\]
for infinitely many \(\lambda\). Then we have for any polynomial \(\operatorname{poly}\)
\[1-\frac{1}{\operatorname{poly}(\lambda)} \leq\frac{1}{2^{\lambda}}\sum_{x}\sum_{y}\Pr[y\gets P^{*}(x) ]\Pr[y\leftarrow\mathsf{QEval}(x)] \tag{153}\] \[=\frac{1}{2^{\lambda}}\sum_{x}\Pr[f(x)\gets P^{*}(x)]\Pr[f(x )\leftarrow\mathsf{QEval}(x)]\] (154) \[+\frac{1}{2^{\lambda}}\sum_{x}\sum_{y\neq f(x)}\Pr[y\gets P^ {*}(x)]\Pr[y\leftarrow\mathsf{QEval}(x)]\] (155) \[\leq\frac{1}{2^{\lambda}}\sum_{x}\Pr[f(x)\gets P^{*}(x)]+ \frac{1}{2^{\lambda}}\sum_{x}\sum_{y\neq f(x)}\Pr[y\leftarrow\mathsf{QEval}(x)]\] (156) \[\leq\frac{1}{2^{\lambda}}\sum_{x}\Pr[f(x)\gets P^{*}(x)]+ \frac{1}{2^{\lambda}}\sum_{x}2^{-\lambda} \tag{157}\]
for infinitely many \(\lambda\), which gives that for any polynomial \(\operatorname{poly}\)
\[\frac{1}{2^{\lambda}}\sum_{x}\Pr[f(x)\gets P^{*}(x)]\geq 1- \frac{1}{\operatorname{poly}(\lambda)} \tag{158}\]
for infinitely many \(\lambda\). If we write the random seed for \(P^{*}\) explicitly, it means that for any polynomial \(\operatorname{poly}\)
\[\frac{1}{2^{\lambda+p(\lambda)}}\sum_{x\in\{0,1\}^{\lambda}}\sum_{r\in\{0,1 \}^{p(\lambda)}}\delta_{f(x),P^{*}(x;r)}\geq 1-\frac{1}{\operatorname{poly}(\lambda)} \tag{159}\]
for infinitely many \(\lambda\), where \(p(\lambda)\) is the length of the seed, and \(\delta_{\alpha,\beta}=1\) if \(\alpha=\beta\) and it is 0 otherwise. Define the set
\[G\coloneqq\{(x,r)\in\{0,1\}^{\lambda}\times\{0,1\}^{p}:f(x)=P^{*}(x;r)\}. \tag{160}\]
Then, from Equation (159), we have for any polynomial \(\mathrm{poly}\)
\[\frac{2^{\lambda+p}-|G|}{2^{\lambda+p}}\leq\frac{1}{\mathrm{poly}(\lambda)} \tag{161}\]
for infinitely many \(\lambda\).
Define the function \(g:(x,r)\to P^{*}(x;r)\). We show that it is a classically-secure and classical-evaluation infinitely-often distributionally OWF. (For the definition of distributionally OWFs, see Appendix D.) It is enough because distributionally OWFs imply OWFs (Lemma D.2). To show it, assume that it is not. Then, for any polynomial \(\mathrm{poly}\) there exists a PPT algorithm \(\mathcal{A}\) such that
\[\left\|\frac{1}{2^{\lambda+p}}\sum_{x,r}(x,r)\otimes g(x,r)-\frac{1}{2^{ \lambda+p}}\sum_{x,r}\mathcal{A}(g(x,r))\otimes g(x,r)\right\|_{1}\leq\frac{1} {\mathrm{poly}(\lambda)} \tag{162}\]
for infinitely many \(\lambda\). Here, we have used quantum notations although everything is classical, because it is simpler. Moreover, for the notational simplicity, we omit bras and kets: \((x,r)\) means \(|(x,r)\rangle\langle(x,r)|\), \(g(x,r)\) means \(|g(x,r)\rangle\langle g(x,r)|\), and \(\mathcal{A}(g(x,r))\) is the (diagonal) density matrix that represents the classical output distribution of the algorithm \(\mathcal{A}\) on input \(g(x,r)\).
From the algorithm \(\mathcal{A}\), we construct a PPT adversary \(\mathcal{B}\) that breaks the distributional one-wayness of \(f\) as follows:
1. On input \(f(x)\), sample \(r\leftarrow\{0,1\}^{p}\).
2. Run \((x^{\prime},r^{\prime})\leftarrow\mathcal{A}(f(x))\).
3. Output \(x^{\prime}\).
Then for any polynomial poly
\[\left\|\frac{1}{2^{\lambda}}\sum_{x}x\otimes f(x)-\frac{1}{2^{ \lambda}}\sum_{x}\mathcal{B}(f(x))\otimes f(x)\right\|_{1} \tag{163}\] \[=\left\|\frac{1}{2^{\lambda}}\sum_{x}x\otimes f(x)-\frac{1}{2^{ \lambda}}\sum_{x}\frac{1}{2^{p}}\sum_{r}\text{Tr}_{R}[\mathcal{A}(f(x))] \otimes f(x)\right\|_{1}\] (164) \[=\left\|\frac{1}{2^{\lambda}}\sum_{x}\sum_{r}\frac{1}{2^{p}} \text{Tr}(r)x\otimes f(x)-\frac{1}{2^{\lambda}}\sum_{x}\frac{1}{2^{p}}\sum_{r} \text{Tr}_{R}[\mathcal{A}(f(x))]\otimes f(x)\right\|_{1}\] (165) \[\leq\left\|\frac{1}{2^{\lambda}}\sum_{x}\sum_{r}\frac{1}{2^{p}} x\otimes r\otimes f(x)-\frac{1}{2^{\lambda}}\sum_{x}\frac{1}{2^{p}}\sum_{r} \mathcal{A}(f(x))\otimes f(x)\right\|_{1}\] (166) \[=\left\|\frac{1}{2^{\lambda+p}}\sum_{x,r}[x\otimes r\otimes f(x) -\mathcal{A}(f(x))\otimes f(x)]\right\|_{1}\] (167) \[=\left\|\frac{1}{2^{\lambda+p}}\sum_{(x,r)\in G}[x\otimes r \otimes f(x)-\mathcal{A}(f(x))\otimes f(x)]\right.\] (168) \[+\frac{1}{2^{\lambda+p}}\sum_{(x,r)\notin G}[x\otimes r\otimes f (x)-\mathcal{A}(f(x))\otimes f(x)]\] (169) \[+\frac{1}{2^{\lambda+p}}\sum_{(x,r)\notin G}[x\otimes r\otimes P ^{*}(x;r)-\mathcal{A}(P^{*}(x;r))\otimes P^{*}(x;r)]\] (170) \[-\frac{1}{2^{\lambda+p}}\sum_{(x,r)\notin G}[x\otimes r\otimes P ^{*}(x;r)-\mathcal{A}(P^{*}(x;r))\otimes P^{*}(x;r)]\Big{\|}_{1}\] (171) \[=\left\|\frac{1}{2^{\lambda+p}}\sum_{(x,r)\in G}[x\otimes r \otimes P^{*}(x;r)-\mathcal{A}(P^{*}(x;r))\otimes P^{*}(x;r)]\right.\] (172) \[+\frac{1}{2^{\lambda+p}}\sum_{(x,r)\notin G}[x\otimes r\otimes f (x)-\mathcal{A}(f(x))\otimes f(x)]\] (173) \[+\frac{1}{2^{\lambda+p}}\sum_{(x,r)\notin G}[x\otimes r\otimes P ^{*}(x;r)-\mathcal{A}(P^{*}(x;r))\otimes P^{*}(x;r)]\] (174) \[-\frac{1}{2^{\lambda+p}}\sum_{(x,r)\notin G}[x\otimes r\otimes P ^{*}(x;r)-\mathcal{A}(P^{*}(x;r))\otimes P^{*}(x;r)]\Big{\|}_{1}\] (175) \[=\left\|\frac{1}{2^{\lambda+p}}\sum_{x,r}[x\otimes r\otimes P^{*} (x;r)-\mathcal{A}(P^{*}(x;r))\otimes P^{*}(x;r)]\right.\] (176) \[+\frac{1}{2^{\lambda+p}}\sum_{(x,r)\notin G}[x\otimes r\otimes f (x)-\mathcal{A}(f(x))\otimes f(x)]\] (177) \[-\frac{1}{2^{\lambda+p}}\sum_{(x,r)\notin G}[x\otimes r\otimes P ^{*}(x;r)-\mathcal{A}(P^{*}(x;r))\otimes P^{*}(x;r)]\Big{\|}_{1}\] (178) \[\leq\left\|\frac{1}{2^{\lambda+p}}\sum_{x,r}[x\otimes r\otimes P ^{*}(x;r)-\mathcal{A}(P^{*}(x;r))\otimes P^{*}(x;r)]\right\|_{1}\] (179) \[+\frac{1}{2^{\lambda+p}}\sum_{(x,r)\notin G}\left\|x\otimes r \otimes f(x)-\mathcal{A}(f(x))\otimes f(x)\right\|_{1}\] (180) \[+\frac{1}{2^{\lambda+p}}\sum_{(x,r)\notin G}\left\|x\otimes r \otimes P^{*}(x;r)-\mathcal{A}(P^{*}(x;r))\otimes P^{*}(x;r)\right\|_{1}\] (181) \[\leq\frac{1}{\text{poly}(\lambda)} \tag{182}\]
for infinitely many \(\lambda\), which means that \(f\) is not distributionally one-way. It contradicts the assumption that \(f\) is one-way, because one-wayness implies distributional one-wayness. In Equation (164), \(R\) is the register of the state \(\mathcal{A}(f(x))\) that contains "the output \(r\)" of the algorithm \(\mathcal{A}\). The last inequality comes from Equation (162) and Equation (161).
**Acknowledgements.** We thank Mark Zhandry for pointing out Remark 7.6. TM is supported by JST Moonshot R&D JPMJMS2061-5-1-1, JST FOREST, MEXT QLEAP, the Grant-in-Aid for Scientific Research (B) No.JP19H04066, the Grant-in Aid for Transformative Research Areas (A) 21H05183, and the Grant-in-Aid for Scientific Research (A) No.22H00522.
|
2307.08975 | A Bayesian Framework for Multivariate Differential Analysis accounting
for Missing Data | Current statistical methods in differential proteomics analysis generally
leave aside several challenges, such as missing values, correlations between
peptide intensities and uncertainty quantification. Moreover, they provide
point estimates, such as the mean intensity for a given peptide or protein in a
given condition. The decision of whether an analyte should be considered as
differential is then based on comparing the p-value to a significance
threshold, usually 5%. In the state-of-the-art limma approach, a hierarchical
model is used to deduce the posterior distribution of the variance estimator
for each analyte. The expectation of this distribution is then used as a
moderated estimation of variance and is injected directly into the expression
of the t-statistic. However, instead of merely relying on the moderated
estimates, we could provide more powerful and intuitive results by leveraging a
fully Bayesian approach and hence allow the quantification of uncertainty. The
present work introduces this idea by taking advantage of standard results from
Bayesian inference with conjugate priors in hierarchical models to derive a
methodology tailored to handle multiple imputation contexts. Furthermore, we
aim to tackle a more general problem of multivariate differential analysis, to
account for possible inter-peptide correlations. By defining a hierarchical
model with prior distributions on both mean and variance parameters, we achieve
a global quantification of uncertainty for differential analysis. The inference
is thus performed by computing the posterior distribution for the difference in
mean peptide intensities between two experimental conditions. In contrast to
more flexible models that can be achieved with hierarchical structures, our
choice of conjugate priors maintains analytical expressions for direct sampling
from posterior distributions without requiring expensive MCMC methods. | Marie Chion, Arthur Leroy | 2023-07-18T05:14:29Z | http://arxiv.org/abs/2307.08975v1 | # A Bayesian Framework for Multivariate Differential Analysis accounting for Missing Data
###### Abstract
Current statistical methods in differential proteomics analysis generally leave aside several challenges, such as missing values, correlations between peptides' intensities and uncertainty quantification. Moreover, they provide point estimates, such as the mean intensity for a given peptide or protein in a given condition. The decision of whether an analyte should be considered as _differential_ is then based on comparing the p-value to a significance threshold, usually 5%. In the state-of-the-art limma approach, a hierarchical model is used to deduce the posterior distribution of the variance estimator for each analyte. The expectation of this distribution is then used as a moderated estimation of variance and is injected directly into the expression of the t-statistic. However, instead of merely relying on the moderated estimates, we could provide more powerful and intuitive results by leveraging a fully Bayesian approach and hence allow the quantification of uncertainty. The present work introduces this idea by taking advantage of standard results from Bayesian inference with conjugate priors in hierarchical models to derive a methodology tailored to handle multiple imputation contexts. Furthermore, we aim to tackle the more general problem of multivariate differential analysis, to account for possible inter-peptide correlations. By defining a hierarchical model with prior distributions on both mean and variance parameters, we achieve a global quantification of the uncertainty for differential analysis. The inference is thus performed by computing the posterior distribution for the difference in mean peptide intensities between two experimental conditions. In contrast to more flexible models that can be achieved with hierarchical structures, our choice of conjugate priors maintains analytical expressions for direct sampling from posterior distributions without requiring expensive MCMC methods. The performance of this approach has been assessed through extensive simulation studies as well as application to a real-world controlled dataset. We demonstrated its ability to provide more accurate and intuitive results than standard t-tests for handling differential analysis in proteomics.
## 1 Introduction
**Context.** Differential proteomics analysis aims to compare peptide and/or protein expression levels across several biological conditions. The massive data provided by label-free mass spectrometry-based quantitative proteomics experiments requires reliable statistical modelling tools to assess which protein is differentially abundant. Table 1 shows the state-of-the-art tools for differential proteomics analysis. They are based on well-known statistical methods, yet they are faced with several challenges. First, they rely on complete datasets. However, quantitative proteomics data usually contains missing values. In label-free quantitative proteomics, missing value proportion ranges between 10% and 50% according to Lazar et al. (2016). Imputation remedies this problem by replacing a missing value with a user-defined one. In particular, multiple imputation (Little and Rubin, 2019) consists in generating several imputed datasets that are then combined in order to get an estimator of the parameter of interest (often the mean peptide or protein intensity in a given condition) and an estimator of its variability. Recent work from Chion et al. (2022) includes the uncertainty induced by the
multiple imputation process in the previously described moderated \(t\)-testing framework from Smyth (2004). This method relies on a hierarchical model used to deduce the posterior distribution of the variance estimator for each analyte. The expectation of this distribution is then used as a moderated estimation of variance and is injected directly into the expression of the \(t\)-statistic. However, instead of relying simply on the moderated estimates, taking advantage of a fully Bayesian approach could make sense. The topic of missing data has been under investigation in the Bayesian community for a long time, particularly in simple cases involving conjugate priors (Dominici et al., 2000). Despite such theoretical advances, practitioners in proteomics often still rely on old-fashioned tools, like \(t\)-tests, for conducting most differential analyses. Recently, some authors provided convenient approaches and associated implementations (Kruschke, 2013) for handling differential analysis problems with Bayesian inference. For instance, the R package BEST (standing for Bayesian Estimation Supersedes T-test) has widely contributed to the diffusion of those practices. Crook et al. (2022) reviewed the contributions of Bayesian statistics to proteomics data analysis. O'Brien et al. (2018) suggested a Bayesian selection model to mitigate the problem of missing values. The and Kall (2019) implemented in Triqler a probabilistic model that accounts for different sources of variability from identification and quantification to differential analysis.
**Contribution.** The present article follows a similar idea by taking advantage of standard results from Bayesian inference with conjugate priors in hierarchical models to derive a methodology tailored to handle our multiple imputation context. Furthermore, we aim to tackle the more general problem of multivariate differential analysis, to account for possible correlations between analytes. Existing approaches often consider each analyte independently from the other ones. Hence, they do not take advantage of the possible correlations between peptides that belong to the same protein (for example). By defining a hierarchical model with prior distributions on both mean and variance parameters, we aim to provide an adequate quantification of the uncertainty for differential analysis. Inference is thus performed by computing the posterior distribution for the difference in mean peptide intensity between two experimental conditions. In contrast to more flexible models that can be achieved with hierarchical structures, our choice of conjugate priors maintains analytical expressions for directly sampling from posterior distributions without needing MCMC methods. This results in a fast inference procedure in practice.
**Outline.** The paper is organised as follows: Section 2.1 presents well-known results about Bayesian inference for Gaussian-inverse-gamma conjugated priors. Following analogous results for the multivariate case, Section 2.2 introduces a general Bayesian framework for evaluating mean differences in our differential proteomics context. Section 2.3 provides insights on the particular case where the considered analytes are uncorrelated. The proofs of these methodological developments can be found in Section 7. Section 3 evaluates the framework through a simulation study, illustrates hands-on examples on a real proteomics dataset and highlights the benefits of such a multivariate Bayesian framework for practitioners.
| Method | Software |
| --- | --- |
| t-tests | Perseus (Tyanova et al., 2016); DAPAR (Wieczorek et al., 2017); PANDA-view (Chang et al., 2018) |
| ANOVA | Perseus (Tyanova et al., 2016); PANDA-view (Chang et al., 2018) |
| Moderated t-test (limma) | DAPAR (Wieczorek et al., 2017); mi4p (Chion et al., 2022) |
| Linear model | MSstats (Choi et al., 2014); proDA (Ahlmann-Eltze and Anders, 2020) |

Table 1: State-of-the-art software for differential proteomics analysis
## 2 Modelling
### 2.1 Bayesian inference for Normal-Inverse-Gamma conjugated priors
Before deriving our complete workflow, let us recall some classical Bayesian inference results that will further serve our aim. We assume a generative model such as:
\[y=\mu+\varepsilon,\]
* \(\mu\mid\sigma^{2}\sim\mathcal{N}\left(\mu_{0},\frac{1}{\lambda_{0}}\sigma^{2}\right)\) is the prior distribution over the mean,
* \(\varepsilon\sim\mathcal{N}(0,\sigma^{2})\) is the error term,
* \(\sigma^{2}\sim\Gamma^{-1}(\alpha_{0},\beta_{0})\) is the prior distribution over the variance,
with \(\{\mu_{0},\lambda_{0},\alpha_{0},\beta_{0}\}\) an arbitrary set of prior hyper-parameters. In Figure 1, we provide an illustration of the hypotheses taken over such a hierarchical generative model.
From the previous hypotheses, we can deduce the likelihood of the model for a sample of observations \(\mathbf{y}=\{y_{1},\ldots,y_{N}\}\):
\[p(\mathbf{y}\mid\mu,\sigma^{2}) =\prod_{n=1}^{N}p(y_{n}\mid\mu,\sigma^{2})\] \[=\prod_{n=1}^{N}\mathcal{N}\left(y_{n};\mu,\sigma^{2}\right),\]
Let us recall that such assumptions define a prior Gaussian-inverse-gamma distribution, which is conjugated with the Gaussian distribution with unknown mean \(\mu\) and variance \(\sigma^{2}\). The probability density function (PDF) of such a prior distribution can be written as follows:
\[p(\mu,\sigma^{2}\mid\mu_{0},\lambda_{0},\alpha_{0},\beta_{0})=\frac{\sqrt{ \lambda_{0}}}{\sqrt{2\pi}}\frac{\beta_{0}^{\alpha_{0}}}{\Gamma(\alpha_{0})} \left(\frac{1}{\sigma^{2}}\right)^{\alpha_{0}+\frac{3}{2}}\exp\left(-\frac{2 \beta_{0}+\lambda_{0}(\mu-\mu_{0})^{2}}{2\sigma^{2}}\right).\]
In this particular case, it is a well-known result that the inference is tractable and the posterior distribution remains a Gaussian-inverse-gamma (Murphy, 2007). The proof is available in section 7.1.
Therefore, the joint posterior distribution can be expressed as:
\[\mu,\sigma^{2}\mid\mathbf{y}\sim\mathcal{N}\Gamma^{-1}\left(\mu_{N},\lambda_{N}, \alpha_{N},\beta_{N}\right) \tag{1}\]
with:
* \(\mu_{N}=\frac{N\bar{y}+\lambda_{0}\mu_{0}}{\lambda_{0}+N}\),
* \(\lambda_{N}=\lambda_{0}+N\),
* \(\alpha_{N}=\alpha_{0}+\frac{N}{2}\),
* \(\beta_{N}=\beta_{0}+\frac{1}{2}\sum\limits_{n=1}^{N}(y_{n}-\bar{y})^{2}+\frac{\lambda_{0}N}{2(\lambda_{0}+N)}(\bar{y}-\mu_{0})^{2}\).

Figure 1: Graphical model of the hierarchical structure when assuming a Gaussian-inverse-gamma prior, conjugated with a Gaussian likelihood with unknown mean and variance.
Although these update formulas provide a valuable result, we shall see in the sequel that we are more interested in the marginal distribution over the mean parameter \(\mu\) for comparison purposes. Computing this marginal from the joint posterior in Equation (1) remains tractable as well by integrating over \(\sigma^{2}\):
\[p(\mu\mid\mathbf{y}) =\int p(\mu,\sigma^{2}\mid\mathbf{y})\,\mathrm{d}\sigma^{2}\] \[=\frac{\sqrt{\lambda_{N}}}{\sqrt{2\pi}}\frac{\beta_{N}^{\alpha_{N }}}{\Gamma(\alpha_{N})}\int\left(\frac{1}{\sigma^{2}}\right)^{\alpha_{N}+ \frac{3}{2}}\exp\left(-\frac{2\beta_{N}+\lambda_{N}(\mu-\mu_{N})^{2}}{2 \sigma^{2}}\right)\mathrm{d}\sigma^{2}\] \[=\frac{\Gamma(\frac{\nu+1}{2})}{\Gamma(\frac{\nu}{2})}\frac{1}{ \sqrt{\pi\nu\hat{\sigma}^{2}}}(1+\frac{1}{\nu}\frac{(\mu-\mu_{N})^{2}}{\hat{ \sigma}^{2}})^{-\frac{\nu+1}{2}}\] \[=T_{\nu}(\mu;\ \mu_{N},\hat{\sigma}^{2}),\]
with:
* \(\nu=2\alpha_{N}\),
* \(\hat{\sigma}^{2}=\frac{\beta_{N}}{\alpha_{N}\lambda_{N}}\).
The marginal posterior distribution over \(\mu\) can thus be expressed as a non-standardised Student's \(t\)-distribution that we express below in terms of the initial hyper-parameters:
\[\mu\mid\mathbf{y}\sim T_{2\alpha_{0}+N}\left(\frac{N\bar{y}+\lambda_{0}\mu_{0}}{ \lambda_{0}+N},\frac{\beta_{0}+\frac{1}{2}\sum\limits_{n=1}^{N}(y_{n}-\bar{y}) ^{2}+\frac{\lambda_{0}N}{2(\lambda_{0}+N)}(\bar{y}-\mu_{0})^{2}}{(\alpha_{0} +\frac{N}{2})(\lambda_{0}+N)}\right). \tag{2}\]
The derivation of this analytical formula provides a valuable tool for computing the posterior distribution of the mean parameter in a straightforward manner in such a context. We shall see in the next section how to leverage this approach to introduce a novel means' comparison methodology in a more general framework handling both multidimensional and missing data.
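For concreteness, the update of Equation (2) can be sketched with SciPy as follows. This is our own minimal illustration (not the ProteoBayes implementation); the function name, default hyper-parameter values and usage comment are ours:

```python
import numpy as np
from scipy import stats

def posterior_mu(y, mu0=0.0, lam0=1.0, alpha0=1.0, beta0=1.0):
    """Marginal posterior of mu under the Normal-inverse-gamma prior (Equation 2).

    Returns a frozen non-standardised Student t distribution.
    """
    y = np.asarray(y, dtype=float)
    N, ybar = y.size, y.mean()
    beta_N = (beta0 + 0.5 * np.sum((y - ybar) ** 2)
              + lam0 * N * (ybar - mu0) ** 2 / (2 * (lam0 + N)))
    return stats.t(df=2 * alpha0 + N,                           # degrees of freedom
                   loc=(N * ybar + lam0 * mu0) / (lam0 + N),    # posterior location
                   scale=np.sqrt(beta_N / ((alpha0 + N / 2) * (lam0 + N))))

# Empirical posterior of a difference of means between two groups of observations:
# diff = posterior_mu(y_a).rvs(10_000) - posterior_mu(y_b).rvs(10_000)
```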
### 2.2 General Bayesian framework for evaluating mean differences
Let us recall our differential proteomics context, which assesses the differences in mean intensity values for \(P\) peptides or proteins quantified in \(N\) samples divided into \(K\) conditions. As before, Figure 2 illustrates the hierarchical generative structure assumed for each group \(k=1,\ldots,K\).
Maintaining the notation analogous to previous ones, the generative model for \(\mathbf{y}_{k}\in\mathbb{R}^{P}\), can be written as:
\[\mathbf{y}_{k}=\mathbf{\mu}_{k}+\mathbf{\varepsilon}_{k},\ \forall k=1,\ldots,K,\]
where:
* \(\mathbf{\mu}_{k}\mid\mathbf{\Sigma}_{k}\sim\mathcal{N}\left(\mathbf{\mu}_{0},\frac{1}{ \lambda_{0}}\mathbf{\Sigma}_{k}\right)\) is the prior mean intensities vector of the \(k\)-th group,
* \(\mathbf{\varepsilon}_{k}\sim\mathcal{N}(0,\mathbf{\Sigma}_{k})\) is the error term of the \(k\)-th group,
* \(\mathbf{\Sigma}_{k}\sim\mathcal{W}^{-1}(\mathbf{\Sigma}_{0},\nu_{0})\) is the prior variance-covariance matrix of the \(k\)-th group,
with \(\{\mathbf{\mu}_{0},\lambda_{0},\mathbf{\Sigma}_{0},\nu_{0}\}\) a set of hyper-parameters that need to be chosen as modelling hypotheses, and where \(\mathcal{W}^{-1}\) denotes the inverse-Wishart distribution, used as the conjugate prior for the unknown covariance matrix of a multivariate Gaussian distribution (Bishop, 2006).
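Purely as an illustrative aside (ours, not part of the original text), data can be simulated from this hierarchical generative model with SciPy; the sizes and hyper-parameter values below are arbitrary:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
P, N_k = 4, 5                            # peptides per group, biological samples per group
mu_0, lam_0 = np.zeros(P), 1.0           # prior mean and precision-scaling factor
Sigma_0, nu_0 = np.eye(P), P + 2         # inverse-Wishart scale matrix and degrees of freedom

Sigma_k = stats.invwishart(df=nu_0, scale=Sigma_0).rvs(random_state=rng)   # group covariance
mu_k = rng.multivariate_normal(mu_0, Sigma_k / lam_0)                      # group mean vector
y_k = rng.multivariate_normal(mu_k, Sigma_k, size=N_k)                     # N_k x P intensities
```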
Traditionally, in Bayesian inference, those quantities must be carefully chosen for the estimation to be as accurate as possible, particularly with low sample sizes. Incorporating expert or prior knowledge in the model would also come from the adequate setting of these hyper-parameters. However, our final purpose in this article is not much about estimating but comparing groups' mean (i.e. differential analysis). Interestingly, providing a perfect estimation of the posterior distributions over \(\{\mathbf{\mu}_{k}\}_{k=1,\ldots,K}\) does not appear as the main concern here, as the posterior difference of means (i.e. \(p(\mathbf{\mu}_{k}-\mathbf{\mu}_{k^{\prime}}|\mathbf{y}_{k},\mathbf{y}_{k^{\prime}})\)) represents the actual quantity of interest. Although providing meaningful prior hyper-parameters leads to adequate uncertainty quantification, we shall, above all, take those quantities equal for all groups. This choice would ensure an unbiased comparison, constituting a valuable alternative to the traditional and somehow limited \(t\)-tests. Indeed, inference based on hypothesis testing and p-values has been widely questioned over the past decade (Wasserstein et al., 2019). Additionally, \(t\)-tests do not provide insight into effect sizes or uncertainty quantification (in contrast to Bayesian inference as emphasised by Kruschke and Liddell (2018)).
The present framework aspires to estimate a posterior distribution for each mean parameter vector \(\mathbf{\mu}_{k}\), starting from the same prior assumptions in each group. The comparison between the means of all groups would then only rely on the ability to sample directly from these distributions and compute empirical posteriors for the means' difference. As a bonus, this framework remains compatible with multiple imputations strategies previously introduced to handle missing data that frequently arise in applicative contexts (Chion et al., 2022).
From the previous hypotheses, we can deduce the likelihood of the model for an i.i.d. sample \(\{\mathbf{y}_{k,1},\ldots,\mathbf{y}_{k,N_{k}}\}\):
\[p(\mathbf{y}_{k,1},\ldots,\mathbf{y}_{k,N_{k}}\mid\mathbf{\mu}_{k},\mathbf{ \Sigma}_{k}) =\prod_{n=1}^{N_{k}}p(\mathbf{y}_{k,n}\mid\mathbf{\mu}_{k},\mathbf{\Sigma}_{k})\] \[=\prod_{n=1}^{N_{k}}\mathcal{N}\left(\mathbf{y}_{k,n};\;\mathbf{\mu}_{k}, \mathbf{\Sigma}_{k}\right),\]
However, as previously pointed out, such datasets often contain missing data, and we shall introduce here consistent notation. Assume \(\mathcal{H}\) to be the set of all observed data, we additionally define:
* \(\mathbf{y}_{k}^{(0)}=\{y_{k,n}^{p}\in\mathcal{H},\;n=1,\ldots N_{k},\;p=1,\ldots,P\}\), the set of elements that are observed in the \(k\)-th group,
* \(\mathbf{y}_{k}^{(1)}=\{y_{k,n}^{p}\notin\mathcal{H},\;n=1,\ldots N_{k},\;p=1, \ldots,P\}\), the set of elements that are missing the \(k\)-th group.
Figure 2: Graphical model of the hierarchical structure of the generative model for the vector \(\mathbf{y}_{k}\) of peptide intensities in \(K\) groups of biological samples, _i.e._\(K\) experimental conditions.
Moreover, as we remain in the context of multiple imputation, we define \(\{\tilde{\mathbf{y}}_{k}^{(1),1},\ldots,\tilde{\mathbf{y}}_{k}^{(1),D}\}\) as the set of \(D\) draws of an imputation process applied on missing data in the \(k\)-th group. In such context, a closed-form approximation for the multiple-imputed posterior distribution of \(\mathbf{\mu}_{k}\) can be derived for each group as stated in Proposition 1.
Proposition 1.: _For all \(k=1,\ldots,K\), the posterior distribution of \(\mathbf{\mu}_{k}\) can be approximated by a mixture of multiple-imputed multivariate \(t\)-distributions, such as:_
\[p(\mathbf{\mu}_{k}\mid\mathbf{y}_{k}^{(0)})\simeq\frac{1}{D}\sum_{d=1}^{D}T_{\nu_{k}} \left(\mathbf{\mu};\tilde{\mathbf{\mu}}_{k}^{(d)},\tilde{\mathbf{\Sigma}}_{k}^{(d)}\right)\]
_with:_
* \(\nu_{k}=\nu_{0}+N_{k}-P+1\)_,_
* \(\tilde{\mathbf{\mu}}_{k}^{(d)}=\frac{\lambda_{0}\mathbf{\mu}_{0}+N_{k}\tilde{\mathbf{y}}_{ k}^{(d)}}{\lambda_{0}+N_{k}}\) _,_
* \(\tilde{\mathbf{\Sigma}}_{k}^{(d)}=\frac{\mathbf{\Sigma}_{0}+\sum\limits_{n=1}^{N_{k}}( \tilde{\mathbf{y}}_{k,n}^{(d)}-\bar{\mathbf{y}}_{k}^{(d)})(\tilde{\mathbf{y}}_{k,n}^{(d)} -\bar{\mathbf{y}}_{k}^{(d)})^{\intercal}+\frac{\lambda_{0}N_{k}}{(\lambda_{0}+N_{ k})}(\bar{\mathbf{y}}_{k}^{(d)}-\mathbf{\mu}_{0})(\bar{\mathbf{y}}_{k}^{(d)}-\mathbf{\mu}_{0})^{ \intercal}}{(\nu_{0}+N_{k}-P+1)(\lambda_{0}+N_{k})}\)_,_
_where we introduced the shorthand \(\tilde{\mathbf{y}}_{k,n}^{(d)}=\begin{bmatrix}\mathbf{y}_{k,n}^{(0)}\\ \tilde{\mathbf{y}}_{k,n}^{(1),d}\end{bmatrix}\) to represent the \(d\)-th imputed vector of observed data, and the corresponding average vector \(\bar{\mathbf{y}}_{k}^{(d)}=\frac{1}{N_{k}}\sum\limits_{n=1}^{N_{k}}\tilde{\mathbf{y}}_ {k,n}^{(d)}\)._
This analytical formulation is particularly convenient for our purpose and, as we shall see in the proof in section 7.2, merely comes from imputation.
Thanks to Proposition 1, we have an explicit formula for approximating, using multiple-imputed datasets, the posterior distribution of the mean vector for each group. Although such a linear combination of multivariate \(t\)-distributions is not a known specific distribution in itself, it is now straightforward to generate realisations of samples of the posterior by simply drawing from the \(D\) multivariate \(t\)-distributions, each being specific to an imputed dataset, and then compute the mean of the \(D\) vectors. Therefore, the empirical distribution resulting from a high number of samples generated by this procedure would be easy to visualise and manage for comparison purposes. Generating the empirical distribution of the mean's difference between two groups \(k\) and \(k^{\prime}\) then comes directly by computing the difference between each couple of samples drawn from both posterior distributions \(p(\mathbf{\mu}_{k}\mid\mathbf{y}_{k}^{(0)})\) and \(p(\mathbf{\mu}_{k}^{\prime}\mid\mathbf{y}_{k^{\prime}}^{(0)})\). In Bayesian statistics, relying on empirical distributions drawn from the posterior is common practice in the context of Markov chain Monte Carlo (MCMC) algorithms but often comes at a high computational cost. In our framework, we managed to maintain the best of both worlds since deriving analytical distributions from model hypotheses offers the benefits of probabilistic inference with adequate uncertainty quantification while remaining tractable and not relying on MCMC procedures. The computational cost of the method thus roughly remains as low as frequentist counterparts since merely a few updating calculus and drawing from \(t\)-distributions are needed. Empirical evidence of this claim is provided in the dedicated simulation study proposed further in Table 3.
As usual, when it comes to comparing the mean between two groups, we still need to assess if the posterior distribution of the difference appears, in a sense, to be sufficiently away from zero. This practical inference choice is not specific to our context and remains highly dependent on the context of the study. Moreover, as the present model is multi-dimensional, we may also question the metric used to compute the difference between vectors. In a sense, our posterior distribution of the mean's differences offers an elegant solution to the traditional problem of multiple testing often encountered in applied science and allows tailored definitions of what could be called a _meaningful_ result (_significant_ does not appear anymore as an appropriate term in this more general context). For example, displaying the distribution of the squared difference would penalise large differences in elements of the mean vector. In contrast, the absolute difference would give a more balanced
conception of the average divergence from one group to the other. Clearly, as any marginal of a multivariate \(t\)-distribution remains a (multivariate) \(t\)-distribution, comparing specific elements of the mean vectors merely by restraining to the appropriate dimension is also straightforward. In particular, comparing two groups in the univariate case would be a particular case of Proposition 1 with \(P=1\). Recalling our proteomics context, we could still compare the mean intensity of peptides between groups, one peptide at a time, or choose to compare all peptides at once and thus accounting for possible correlations between peptides in each group. However, an appropriate manner of accounting for those correlations could be to subset peptides using their protein groups.
Let us provide in Algorithm 1 a summary of the procedure for comparing the mean vectors of two different experimental conditions in terms of their posterior distributions.
```
Initialise the prior hyper-parameters identically in both groups: \(\mathbf{\mu}_{0}^{k}=\mathbf{\mu}_{0}^{k^{\prime}}\), \(\lambda_{0}^{k}=\lambda_{0}^{k^{\prime}}\), \(\mathbf{\Sigma}_{0}^{k}=\mathbf{\Sigma}_{0}^{k^{\prime}}\), \(\nu_{0}^{k}=\nu_{0}^{k^{\prime}}\)
for \(d=1,\ldots,D\) do
    Compute \(\{\mathbf{\mu}_{N}^{k,(d)},\lambda_{N}^{k},\mathbf{\Sigma}_{N}^{k,(d)},\nu_{N}^{k}\}\) and \(\{\mathbf{\mu}_{N}^{k^{\prime},(d)},\lambda_{N}^{k^{\prime}},\mathbf{\Sigma}_{N}^{k^{\prime},(d)},\nu_{N}^{k^{\prime}}\}\) from the hyper-parameters and the data
    Draw \(R\) realisations \(\hat{\mathbf{\mu}}_{k}^{(d)[r]}\sim T_{\nu_{N}^{k}}\left(\mathbf{\mu}_{N}^{k,(d)},\frac{\mathbf{\Sigma}_{N}^{k,(d)}}{\lambda_{N}^{k}\nu_{N}^{k}}\right)\) and \(\hat{\mathbf{\mu}}_{k^{\prime}}^{(d)[r]}\sim T_{\nu_{N}^{k^{\prime}}}\left(\mathbf{\mu}_{N}^{k^{\prime},(d)},\frac{\mathbf{\Sigma}_{N}^{k^{\prime},(d)}}{\lambda_{N}^{k^{\prime}}\nu_{N}^{k^{\prime}}}\right)\)
end for
for \(r=1,\ldots,R\) do
    Compute \(\hat{\mathbf{\mu}}_{k}^{[r]}=\frac{1}{D}\sum_{d=1}^{D}\hat{\mathbf{\mu}}_{k}^{(d)[r]}\) and \(\hat{\mathbf{\mu}}_{k^{\prime}}^{[r]}=\frac{1}{D}\sum_{d=1}^{D}\hat{\mathbf{\mu}}_{k^{\prime}}^{(d)[r]}\) to combine samples across imputations
    Generate a realisation \(\hat{\mathbf{\mu}}_{\Delta}^{[r]}=\hat{\mathbf{\mu}}_{k}^{[r]}-\hat{\mathbf{\mu}}_{k^{\prime}}^{[r]}\) from the difference's distribution
end for
return \(\{\hat{\mathbf{\mu}}_{\Delta}^{[1]},\ldots,\hat{\mathbf{\mu}}_{\Delta}^{[R]}\}\), an \(R\)-sample drawn from the posterior distribution of the mean's difference
```
**Algorithm 1** Posterior distribution of the vector of mean's difference
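For readers who prefer code to pseudo-code, the following is a minimal NumPy/SciPy sketch of the same procedure, following Proposition 1. It is our own illustration, not the packaged ProteoBayes implementation; the function and argument names are ours:

```python
import numpy as np
from scipy import stats

def posterior_mu_samples(imputed, mu0, lam0, Sigma0, nu0, R=10_000, rng=None):
    """Draw R samples of mu_k from the mixture of Proposition 1.

    `imputed` is a list of D arrays of shape (N_k, P), one per imputed dataset.
    """
    rng = np.random.default_rng() if rng is None else rng
    draws = []
    for Y in imputed:                                   # loop over the D imputed datasets
        N, P = Y.shape
        ybar = Y.mean(axis=0)
        nu = nu0 + N - P + 1                            # posterior degrees of freedom
        loc = (lam0 * mu0 + N * ybar) / (lam0 + N)      # posterior location
        S = (Y - ybar).T @ (Y - ybar)                   # centred scatter matrix
        shape = (Sigma0 + S
                 + lam0 * N / (lam0 + N) * np.outer(ybar - mu0, ybar - mu0)) / (nu * (lam0 + N))
        draws.append(stats.multivariate_t(loc=loc, shape=shape, df=nu).rvs(R, random_state=rng))
    return np.mean(draws, axis=0)                       # average over imputations, as in Algorithm 1

# Posterior draws of the difference between two conditions k and k' (hypothetical inputs):
# diff = posterior_mu_samples(imputed_k, mu0, lam0, Sigma0, nu0) \
#        - posterior_mu_samples(imputed_kp, mu0, lam0, Sigma0, nu0)
```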
### 2.3 The uncorrelated case: no more multiple testing nor imputation
Let us notice that modelling covariances between all variables as in Proposition 1 often constitutes a challenge, which is computationally expensive in high dimensions and not always adapted. However, we detailed in Section 2.1 results that are classical in Bayesian inference but not widespread enough in applied science, especially when comparing means. In particular, we can leverage these results to adapt Algorithm 1 to the univariate case for handling the same problem as in Chion et al. (2022) with a more probabilistic flavour. Indeed, when the absence of correlations between peptides is assumed (_i.e._\(\mathbf{\Sigma}\) being diagonal), the problem reduces to the analysis of \(P\) independent inference problems (as \(\mathbf{\mu}\) is supposed Gaussian) and the posterior distributions can be derived in closed form, as we recalled in Equation (1). Moreover, let us highlight that a nice property coming with this relaxing assumption is that (multiple) imputation is no longer needed in this context. Using the same notation as before and the uncorrelated assumption (and thus the induced independence between analytes for \(p\neq p^{\prime}\)), we can write:
\[p\left(\mathbf{\mu}_{k}\mid\mathbf{y}_{k}^{(0)}\right) =\int p\left(\mathbf{\mu}_{k},\mathbf{y}_{k}^{(1)}\mid\mathbf{y}_{k}^{(0)} \right)\mathrm{d}\mathbf{y}_{k}^{(1)} \tag{3}\] \[=\int p\left(\mathbf{\mu}_{k}\mid\mathbf{y}_{k}^{(0)},\mathbf{y}_{k}^{(1)} \right)p\left(\mathbf{y}_{k}^{(1)}\mid\mathbf{y}_{k}^{(0)}\right)\mathrm{d}\mathbf{y}_{k} ^{(1)}\] (4) \[=\int\prod_{p=1}^{P}\left\{p\left(\mu_{k}^{p}\mid y_{k}^{p,(0)}, y_{k}^{p,(1)}\right)p\left(y_{k}^{p,(1)}\mid y_{k}^{p,(0)}\right)\right\} \mathrm{d}\mathbf{y}_{k}^{(1)} \tag{5}\]
\[=\prod_{p=1}^{P}\int\left\{p\left(\mu_{k}^{p}\mid y_{k}^{p,(0)},y_{k} ^{p,(1)}\right)p\left(y_{k}^{p,(1)}\mid y_{k}^{p,(0)}\right)\mathrm{d}y_{k}^{p, (1)}\right\} \tag{6}\] \[=\prod_{p=1}^{P}p\left(\mu_{k}^{p}\mid y_{k}^{p,(0)}\right)\] (7) \[=\prod_{p=1}^{P}T_{2\alpha_{0}^{p}+N_{\mathrm{c}}^{p}}\left(\mu_{ k}^{p};\ \mu_{k,N}^{p},\ \hat{\sigma_{k}^{p}}^{2}\right), \tag{8}\]
with:
* \(\mu_{k,N}^{p}=\dfrac{N_{k}^{p}\bar{y}_{k}^{p,(0)}+\lambda_{0}^{p}\mu_{0}^{p}}{ \lambda_{0}^{p}+N_{\mathrm{c}}^{p}}\),
* \(\hat{\sigma_{k}^{p}}^{2}=\dfrac{\beta_{0}^{p}+\dfrac{1}{2}\sum _{n=1}^{N_{\mathrm{c}}^{p}}(y_{k,n}^{p,(0)}-\bar{y}_{k}^{p,(0)})^{2}+\dfrac{ \lambda_{0}N_{k}^{p}}{2(\lambda_{0}^{p}+N_{k}^{p})}(\bar{y}_{k}^{p,(0)}-\mu_{0 }^{p})^{2}}{(\alpha_{0}^{p}+\dfrac{N_{k}^{p}}{2})(\lambda_{0}^{p}+N_{k}^{p})}\).
In this context, it can be noticed that \(p\left(\boldsymbol{\mu}_{k}\mid\boldsymbol{y}_{k}^{(0)}\right)\) factorises naturally over \(p=1,\ldots,P\), and thus only depends upon the data that have actually been observed for each peptide. Indeed, we observe that the integration over the missing data \(\boldsymbol{y}_{k}^{(1)}\) is straightforward in this framework, and neither Rubin's approximation nor even imputation (whether multiple or not) appears necessary. The observed data \(\boldsymbol{y}_{k}^{(0)}\) already bear all the useful information as if each unobserved value could simply be ignored without effect on the posterior distribution.
Let us emphasise that this property of factorisation and tractable integration over missing data comes directly from the covariance structure as a diagonal matrix and thus only constitutes a particular case of the previous model, though convenient. However, in the context of differential analysis in proteomics, analysing each peptide as an independent problem is a common practice, as seen in Chion et al. (2022), and we shall notice that the Bayesian framework tackles this issue in an elegant and somehow simpler way. In particular, the classical inference approach based on hypothesis testing performs numerous successive tests for all peptides. Such an approach often leads to the pitfall of multiple testing that must be carefully dealt with. Interestingly, we notice that the above model also avoids multiple testing (as it does not rely on hypothesis testing and the definition of some threshold) while maintaining the convenient interpretations of Bayesian probabilistic inference. To conclude, whereas the analytical derivation of posterior distributions with Gaussian-inverse-gamma constitutes a well-known result, our proposition to define such probabilistic mean's comparison procedure provides, under the standard uncorrelated-peptides assumption, an elegant and handy alternative to classical techniques that naturally tackles both the imputation and multiple testing issues.
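As a rough sketch of how such a univariate, peptide-by-peptide comparison could look in practice (our own illustration; this is not the paper's Algorithm 2), one can simply drop the missing entries of each peptide and reuse the posterior of Equation (2) column by column:

```python
import numpy as np
from scipy import stats

def mu_posterior(y, mu0=0.0, lam0=1.0, alpha0=1.0, beta0=1.0):
    """Posterior of the mean for a single peptide (Eq. 2); NaN entries are simply ignored."""
    y = np.asarray(y, dtype=float)
    y = y[~np.isnan(y)]                                  # only observed intensities are informative
    N, ybar = y.size, y.mean()
    bN = beta0 + 0.5 * np.sum((y - ybar) ** 2) + lam0 * N * (ybar - mu0) ** 2 / (2 * (lam0 + N))
    return stats.t(df=2 * alpha0 + N,
                   loc=(N * ybar + lam0 * mu0) / (lam0 + N),
                   scale=np.sqrt(bN / ((alpha0 + N / 2) * (lam0 + N))))

def peptide_differences(Y_k, Y_kp, R=10_000):
    """Posterior draws of mu_k - mu_k', peptide by peptide, for two (samples x peptides) matrices."""
    return np.column_stack([mu_posterior(Y_k[:, p]).rvs(R) - mu_posterior(Y_kp[:, p]).rvs(R)
                            for p in range(Y_k.shape[1])])
```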
Let us provide in Algorithm 2 the pseudo-code of the inference procedure to highlight differences with the fully-correlated case:
[MISSING_PAGE_POST]
t-test, for which the associated p-value is reported. Throughout the experiment section, we used the following values for prior parameters:
* \(\mu_{0}=\bar{y}\),
* \(\lambda=1\),
* \(\alpha_{0}=1\),
* \(\beta_{0}=1\),
* \(\mathbf{\Sigma}_{0}=I_{P}\),
* \(\nu_{0}=10\),
where \(\bar{y}\) represents the average of observed values computed over all groups. These values correspond to the practical insights acquired from our previous studies while remaining relatively vague in terms of prior variance. As previously stated, it is essential for these values to be identical in all groups to ensure a fair and unbiased comparison. In the case where more expert information would be accessible, its incorporation would be possible, for instance, through the definition of a more precise prior mean (\(\mu_{0}\)) associated with a more confident prior variance (encoded through \(\alpha_{0}\) and \(\beta_{0}\)).
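In code, and using the conventions of the sketches above (variable names and placeholder intensities are ours), these settings could be collected as follows:

```python
import numpy as np

rng = np.random.default_rng(0)
P = 9                                          # peptides in the protein of interest
y_all = rng.normal(20.0, 2.0, size=(6, P))     # placeholder for the pooled observed intensities
priors = dict(
    mu0=float(np.nanmean(y_all)),              # mu_0: average of observed values over all groups
    lam0=1.0, alpha0=1.0, beta0=1.0,           # univariate hyper-parameters
    Sigma0=np.eye(P), nu0=10,                  # multivariate hyper-parameters
)
```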
### Real datasets
To illustrate our methodology, we used a real proteomics dataset already introduced in Chion et al. (2022), namely the _Arabidopsis thaliana_ + UPS dataset, with the Match between Runs algorithm and at least one quantified value in each experimental condition. Briefly, let us recall that UPS proteins were spiked in increasing amounts into a constant background of _Arabidopsis thaliana_ (ARATH) protein lysate. Hence, UPS proteins are differentially expressed, and ARATH proteins are not. For illustration purposes, we arbitrarily focused the examples on the P12081ups|SYHC_HUMAN_UPS and the sp|F4I893|ILA_ARATH proteins. Note that both proteins have nine quantified peptides. Unless otherwise stated, we took the examples of the AALEELVK UPS peptide and the VPLLIIPILSK ARATH peptide, and the same values as for synthetic data have been set for the prior hyper-parameters.
Additionally, let us recall that in our real datasets, the constants have the following values:
* \(\forall k=1,\ldots,K,\ N_{k}=3\) data points, in the absence of missing data,
* \(P=9\) peptides, when using the multivariate model,
* \(D=7\) draws of imputation,
* \(R=10^{4}\) sample points from the posterior distributions.
In this context, where the number \(N_{k}\) of observed biological samples is extremely low, particularly when data are missing, we should expect a perceptible influence of the prior hyper-parameters and a perceptible influence of inherent uncertainty in the posteriors. However, this influence has been reduced to the minimum in all the subsequent graphs for the sake of clarity and to ensure a good understanding of the underlying properties of the methodology. The high number \(R\) of sample points drawn from the posteriors assures the empirical distribution to be smoothly displayed on the graph. Still, one should note that sampling is really quick in practice, and this number can be easily increased if necessary.
### Univariate Bayesian inference for differential analysis
First, let us illustrate the univariate framework described in Section 2.3. In this experiment, we compared the intensity means in the lowest (0.05 fmol UPS) and the highest points (10 fmol UPS) of the UPS spike range. Let us recall that our univariate algorithm does not rely on imputation and should be applied directly to raw data. For the sake of illustration, the chosen peptides were observed entirely in all three biological samples of both experimental conditions. Resulting from the application of our univariate algorithm, posterior distributions of the mean difference for both peptides are represented on Figure 3. As the analysis consists of a comparison between conditions, the 0 value has been highlighted on the x-axis for assessing both the direction and the magnitude of the difference. The distance to zero of the distributions indicates whether the peptide is differentially expressed or not. In particular, Figure 3(a) shows the posterior distribution of the means' difference for the UPS peptide. Its location, far from zero, indicates a high probability (almost surely in this case) that the mean intensity of this peptide differs between the two considered groups. Conversely, the posterior distribution of the difference of means for the ARATH peptide (Figure 3(b)) suggests that the probability that the means differ is low. Those conclusions support the summaries of raw data depicted on the bottom panel of Figure 3. Moreover, the posterior distribution provides additional insights into whether a peptide is under-expressed or over-expressed in a condition compared to another. For example, looking back to the UPS peptide, Figure 3(a) suggests an over-expression of the AALEELVK peptide in the seventh group (being the condition with the highest amount of UPS spike) compared to the first group (being the condition with the lowest amount of UPS spike), which is consistent with the experimental design. Furthermore, the middle panel merely highlights the fact that the posterior distribution of the difference \(\mu_{1}-\mu_{7}\) is symmetric to that of \(\mu_{7}-\mu_{1}\); thus, the direction of the comparison remains merely an aesthetic choice.
To pursue further the illustration and evaluation of ProteoBayes at a larger scale, we provided in Table 2 a thorough analysis of mean differences computation for various effect size and variance combinations. One can notice that, in all cases, we recover values that are, on average, remarkably close to the true mean difference. As expected, increasing the variance in the data results in larger credible intervals, as the computed posterior distributions adapt to the higher uncertainty context. Even though this issue is often pointed out in the literature, p-values coming from the t-test in these experiments seem particularly uninformative in this context. Their values are so close to 0 that it is generally difficult to assess how close the two groups are with an adequate degree of caution. Moreover, these results were all computed for a sample size equal to 5, though it is well known that p-values can change dramatically depending on sample size, regardless of the true underlying difference between groups. A drawback that is often associated with Bayesian methods lies in the increasing computational burden compared to frequentist counterparts. However, by leveraging conjugate priors in our model and relying on sampling from analytical distributions to conduct inference, we managed to maintain a (univariate) algorithm as quick as t-tests in practice, as illustrated in Table 3. As expected, the multivariate version is generally slightly longer to run as we need to estimate covariance matrices that typically grow quickly with the number of peptides simultaneously modelled. That said, let us point out that we can still easily scale up to many thousands of peptides in a reasonable time (from a few seconds to minutes).

| | Mean difference (ProteoBayes) | CI\({}_{95}\) width (ProteoBayes) | p-value (t-test) | RMSE | CIC\({}_{95}\) |
| --- | --- | --- | --- | --- | --- |
| \(\mathcal{N}(1,1)\) | 1 (0.04) | 0.12 (0.003) | \(10^{-79}\) (\(10^{-78}\)) | 0.03 (0.04) | 95.7 (20.3) |
| \(\mathcal{N}(5,1)\) | 4.99 (0.04) | 0.12 (0.003) | 0 (0) | 0.03 (0.04) | 94.6 (22.61) |
| \(\mathcal{N}(10,1)\) | 9.99 (0.04) | 0.13 (0.003) | 0 (0) | 0.03 (0.04) | 95.9 (19.84) |
| \(\mathcal{N}(1,5)\) | 1 (0.16) | 0.6 (0.01) | \(10^{-6}\) (\(10^{-6}\)) | 0.16 (0.19) | 95.5 (20.74) |
| \(\mathcal{N}(1,10)\) | 0.99 (0.31) | 1.2 (0.02) | 0.03 (0.08) | 0.31 (0.37) | 95 (21.81) |
| \(\mathcal{N}(1,20)\) | 1.04 (0.58) | 2.4 (0.06) | 0.22 (0.26) | 0.62 (0.75) | 95.2 (21.39) |

Table 2: Simulation study reporting performances of univariate ProteoBayes compared to a standard t-test. The last two columns report the quality of estimation. All distributions are compared with the univariate Gaussian baseline \(\mathcal{N}(0,1)\). All results are averaged over 1000 repetitions of the experiments and reported using the format _Mean (Sd)_.
| | Univariate | Multivariate | t-test |
| --- | --- | --- | --- |
| \(P=10^{2}\) | 0.08 | 0.8 | 0.22 |
| \(P=10^{3}\) | 1.75 | 4.91 | 1.33 |
| \(P=10^{4}\) | 6.80 | 58.96 | 6.38 |

Table 3: Running times (in seconds) of univariate and multivariate ProteoBayes compared with standard t-test for increasing numbers of peptides.
Figure 3: **Posterior distributions of the difference of means between the 0.05 fmol UPS spike condition (\(\mu_{1}\)) and the 10 fmol UPS spike condition (\(\mu_{7}\)) and the corresponding boxplots summarising the observed data.** The 95% credible interval is indicated by the blue central region.
### The benefit of intra-protein correlation
One of the main benefits of our methodology is to account for between-peptides correlation, as described in Section 2.2. As the first illustration of such a property, we modelled correlations between all quantified peptides derived from the same protein. In order to highlight the gains that we may expect from such modelling, we displayed on Figure 4 the comparison between a differential analysis using our univariate method or using the multivariate approach. In this example, we purposefully considered a group of 9 peptides coming from the same protein (P12081ups|SYHC_HUMAN_UPS), which intensities may undoubtedly be correlated to some degree. We consider in this section the comparison of intensity means between the fifth point (2.5 fmol UPS - \(\mathbf{\mu}_{5}\)) and the seventh point (10 fmol UPS - \(\mathbf{\mu}_{7}\)) of the UPS spike range. The posterior difference of the mean vector \(\mathbf{\mu}_{5}-\mathbf{\mu}_{7}\) between two conditions has been computed, and the first peptide (AALELVK) has been extracted for graphical visualisation. Meanwhile, the univariate algorithm has also been applied to compute the posterior difference \(\mu_{5}-\mu_{7}\), solely on the peptide AALELVK. The top panel of Figure 4 displays the latter approach, while the multivariate case is exhibited on the bottom panel. One should observe clearly that, while the location parameter of the two distributions is close as expected, the multivariate
| Version | Data | Mean difference | CI\({}_{95}\) width |
| --- | --- | --- | --- |
| Univariate | \(\mathcal{N}_{10}(\mathbf{0}_{10},0.9\times\mathbf{I}_{10}+\mathbf{0.1}_{10\times 10})\) | 0.92 (0.02) | 1.29 (0.03) |
| Univariate | \(\mathcal{N}_{10}(\mathbf{0}_{10},\mathbf{0.1}_{10\times 10})\) | 0.9 (0.03) | 1.69 (0.04) |
| Multivariate | \(\mathcal{N}_{10}(\mathbf{0}_{10},0.9\times\mathbf{I}_{10}+\mathbf{0.1}_{10\times 10})\) | 0.93 (0.02) | 0.93 (0.04) |
| Multivariate | \(\mathcal{N}_{10}(\mathbf{0}_{10},\mathbf{1}_{10\times 10})\) | 0.89 (0.03) | 1.28 (0.06) |

Table 4: Comparison of univariate and multivariate versions of ProteoBayes in terms of computed mean differences and associated uncertainty. This baseline comparison is the multivariate Gaussian \(\mathcal{N}_{10}(\mathbf{0}_{10},I_{10})\).
Figure 4: Posterior distributions of the mean difference \(\mu_{5}-\mu_{7}\) for the AALELVK peptide from the P12081ups|SYHC_HUMAN_UPS protein using the univariate approach (top) and the multivariate approach (bottom). The blue central region indicates the 95% credible interval.
approach takes advantage of the information coming from the correlated peptides to reduce the uncertainty in the posterior estimation. To confirm this visual intuition, we provided in Table 4 additional evidence from synthetic datasets highlighting the tighter credible intervals obtained thanks to the multivariate modelling and accounting for inter-peptide correlations. This tighter range of probable values leads to a more precise estimation of the effect size and increased confidence in the resulting inference (deciding whether the peptide is differential or not).
### The mirage of imputed data
After discussing the advantages and the valuable interpretative properties of our methods, let us mention a pitfall that one should avoid for the inferences to remain valid. In the case of univariate analysis, we pointed out with Equation (3) that all the useful information is contained in observed data, and no imputation is needed since we already integrated out all missing data. Imputation does actually not even make sense in one dimension since, by definition, a missing data point is simply equivalent to an unobserved one, and we shall gain more information only by collecting more data. Therefore, one should be really careful when dealing with imputed datasets and keep in mind that imputation somehow _creates_ new data points that do not bear any additional information. Thus, there is a risk of artificially decreasing the uncertainty of our estimated posterior distributions simply by considering more data points in the computations than what was genuinely observed. For instance, imagine a dummy example where 10 points are effectively observed, and 1000 remain missing. It would be a massive error and underestimation of the true variance to impute the 1000 missing points (say with the average of the ten observed ones) and use the resulting 1010-dimensional vector for computing the posterior distributions of the mean. Let us mention that such a problem is not specific to our framework and, more generally, also applies to Rubin's rules. One should keep in mind that those approximations only hold for a reasonable ratio of missing data. Otherwise, one may consider adapting the method, for example, by penalising the degree of freedom in the relevant \(t\)-distributions. To illustrate this issue, we displayed on Figure 5 an example of our univariate algorithm
Figure 5: Posterior distributions of the mean difference \(\mu_{1}-\mu_{4}\) for the EVQELAQEAER peptide from the sp\(|\)F4I893\(|\)ILA_ARATH protein using the observed dataset (top) and the imputed dataset (bottom)
applied both on the observed dataset (top panel) and the imputed dataset (bottom panel). In this context, we observe a reduced variance for the imputed data. However, this behaviour is just an artefact of the phenomenon mentioned above: the bottom graph is merely not valid, and only raw data should be used in our univariate algorithm to avoid spurious inference results. More generally, while imputation is sometimes needed for the methods to work, one should always keep in mind that it always constitutes a bias (although controlled) that should be accounted for with tailored solutions, as this manuscript intends to provide.
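To make this phenomenon tangible, here is a small numerical illustration of our own (for simplicity it uses a plain frequentist \(t\)-interval rather than the posterior scale, but the same shrinkage would be observed for the posterior of Equation (2)): naively imputing 1000 values on top of 10 genuine observations makes the interval absurdly narrow.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
y_obs = rng.normal(20.0, 2.0, size=10)                  # what was actually measured
y_imp = np.r_[y_obs, np.full(1000, y_obs.mean())]       # naive mean imputation of 1000 "values"

for y in (y_obs, y_imp):
    lo, hi = stats.t.interval(0.95, df=y.size - 1, loc=y.mean(), scale=stats.sem(y))
    print(y.size, round(hi - lo, 3))                    # the imputed interval is spuriously narrow
```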
### Acknowledging the effect size
After discussing methodological aspects, let us dive into more biological-related properties displayed on Figure 6. The three panels describe the increasing differences that can be observed when we sequentially compare the first point (0.05 fmol UPS) of the UPS spike range (\(\mu_{1}\)) to the second one (0.25 fmol UPS - \(\mu_{2}\)), the fourth one (1.25 fmol UPS - \(\mu_{4}\)) and the highest one (25 fmol UPS - \(\mu_{7}\)). The experimental design suggests that the difference in means for a UPS peptide should increase with respect to the amount of UPS proteins that was spiked in the biological sample (Chion et al., 2022). This illustration offers a perspective on how this difference becomes more and more noticeable, though mitigated by the inherent variability. Such an explicit and adequately quantified variance, and the induced uncertainty in the estimation, should help practitioners to make more educated decisions with the appropriate degree of caution. In particular, Figure 6 highlights the importance of considering the effect size (increasing here), which is crucial when studying the underlying biological phenomenon. Such a graph may remind us that statistical inference should be more about offering helpful insights to experts of a particular domain rather than defining automatic and blind decision-making procedures (Betensky, 2019). Moreover, let us point out that current statistical tests used for differential analysis express their results solely as \(p\)-values. One should keep in mind that, no matter their value, they do not provide any information about the effect size of the phenomenon (Sullivan and Feinn, 2012).
### About protein inference
To conclude on the practical usage of the proposed multivariate algorithm, let us develop ideas for comparing simultaneously multiple peptides or proteins. As highlighted before, accounting for the covariances between peptides tends to reduce the uncertainty on the posterior distribution of a unique peptide. However, we only exhibited examples comparing one peptide at a time between two conditions, although in applications, practitioners often need to compare thousands of them simultaneously. From a practical point of view, while possible in theory, we probably want to avoid modelling the correlations between every combination of peptides into a full rank matrix for at least two reasons.
First, it probably does not bear much sense to assume that all peptides in a biological sample interact with no particular structure. Secondly, it appears unreasonable to do so from a statistical and practical point of view. Computing and storing a matrix with roughly \(10^{4}\) rows and columns induces a computational and memory burden that would complicate the procedure while potentially leading to unreliable objects if matrices are estimated merely on a few data points, as for our example. However, a more promising approach would consist in deriving a sparse approach by levering the underlying structure of data from a biological perspective. If we reasonably assume, as before, that only peptides from common proteins present non-negligible correlations, it is then straightforward to define a block-diagonal matrix for the complete vector of peptides, which would be far more reasonable to estimate. Such an approach would take advantage of both of our algorithms by using the factorisation (as in Equation (3)) over thousands of proteins to sequentially estimate a high number of low dimensional mean vectors. Assuming an example with a thousand proteins containing ten peptides each, the approximate computing and storage requirements would be reduced from a \((10^{4})^{2}=10^{8}\) order of magnitude (due to one high-dimensional matrix) to \(10^{3}\times 10^{2}=10^{5}\) (a thousand of small matrices). In our applicative context, the strategy of dividing a big problem into independent smaller ones appears beneficial from both the applicative and statistical perspective.
This being said, the question of the _global_ inference, in contrast with a peptide-by-peptide approach, remains pregnant. To illustrate this topic, let us provide on Figure 7 an example of simultaneous differential analysis for nine peptides from the same protein. According to our previous recommendations, we accounted for the correlations through the multivariate algorithm and displayed the results in posterior mean's differences for each peptide from the P12081ups[SYHC_HUMAN_UPS protein at once (_i.e._\(\boldsymbol{\mu}_{1}-\boldsymbol{\mu}_{7}\)). In this example, eight peptides over nine contained in the protein are clearly differential in the same direction with comparable effect sizes, corroborating our intuition of correlated quantities. However, the situation may become far trickier when distributions lie closer to 0 on the x-axis or if only one peptide presents a clear differential pattern. As multiple and heterogeneous situations could be encountered, we do not provide recommendations here for directly dealing with protein-scale inference. Once again, the criterion for deciding what should be considered as _different enough_ is highly dependent on the context and reasonable hypotheses, and no arbitrary threshold may bear any kind of general relevancy. However, we should still point out that our Bayesian framework provides convenient and natural interpretations in terms of probability for each peptide individually. It is then straightforward to construct probabilistic decision rules and combine them to reach a multivariate inference tool, for instance, by computing an average probability for the means' difference to be below 0 across all peptides. However, one should note that probability rules prevent directly deriving global probabilistic statements without closely looking at dependencies between the single events (for instance, the factorisation in Equation (3) holds thanks to the induced independence between peptides). Although such an automatic procedure cannot replace expert analysis, it may still provide a handy tool for extracting the most noteworthy results from a massive number of comparisons, which the practitioner should look at more closely afterwards. Therefore, once a maximal risk of the adverse event or a minimum probability of the desired outcome has been defined, one may derive the adequate procedure to reach those properties.
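As one possible, deliberately simple instance of such a rule (our own illustration; the 0.95 threshold is arbitrary and context-dependent), the per-peptide directional probabilities can be computed from posterior draws and then aggregated at the protein level:

```python
import numpy as np

def prob_below_zero(diff_samples):
    """Per-peptide posterior probability that the mean difference mu_k - mu_k' is negative.

    `diff_samples` is an (R, P) array of posterior draws, e.g. the output of the sketches above.
    """
    return (np.asarray(diff_samples) < 0).mean(axis=0)

def flag_protein(diff_samples, threshold=0.95):
    """One possible protein-level rule: flag when the average directional probability is extreme."""
    p = prob_below_zero(diff_samples).mean()
    return max(p, 1.0 - p) >= threshold
```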
## 4 Conclusion and perspectives
This article presents a Bayesian inference framework to tackle the problem of differential analysis in both univariate and multivariate contexts while accounting for possible missing data. We proposed two algorithms, leveraging classical results from conjugate priors to compute posterior distributions and
easily sample the difference of means when comparing groups of interest. For handling the recurrent problem of missing data, our multivariate approach takes advantage of the multiple imputations' approximation, while the univariate framework allows us to merely ignore this issue. In addition, this methodology aims at providing information not only on the probability of the means' difference to be null but also on the uncertainty quantification as well as the effect sizes, which are crucial in a biological framework.
We believe that such probabilistic statements offer valuable inference tools to practitioners. In the particular context of differential proteomics, this methodology allows us to account for between-peptides correlations. With an adequate decision rule and an appropriate correlation structure, Bayesian inference could be used in large-scale proteomics experiments, such as label-free global quantification strategies. Nevertheless, targeted proteomics experiments could already benefit from this approach, as the set of considered peptides is restricted. Furthermore, such experiments used in biomarker research could greatly benefit from the quantification of the uncertainty and the assessment of the effect sizes.
## 5 Code availability
The work described in the present article was implemented as an R package called _ProteoBayes_, available on CRAN, while a development version can be found on GitHub (github.com/mariechion/ProteoBayes). A web app has also been developed and can be accessed at arthurleroy.shinyapps.io/ProteoBayes.
## 6 Data availability
The _Arabidopsis thaliana_ spiked dataset is public and accessible on the ProteomeXchange website using the PXD027800 identifier.
Figure 7: Posterior distributions of mean difference \(\mu_{1}-\mu_{7}\) for the nine peptides from the P12081ups\(|\)SYHC_HUMAN_UPS protein using the multivariate approach.
## 7 Proofs
### 7.1 Proof of Bayesian inference for Normal-Inverse-Gamma conjugated priors
Let us recall below the complete development of this derivation by identification of the analytical form (we ignore conditioning over the hyper-parameters for convenience):
\[p(\mu,\sigma^{2}\mid\mathbf{y}) \propto p(\mathbf{y}\mid\mu,\sigma^{2})\times p(\mu,\sigma^{2})\] \[=\left(\frac{1}{2\pi\sigma^{2}}\right)^{\frac{N}{2}}\exp\left(- \frac{1}{2\sigma^{2}}\sum\limits_{n=1}^{N}(y_{n}-\mu)^{2}\right)\] \[\quad\times\frac{\sqrt{\lambda_{0}}}{\sqrt{2\pi}}\frac{\beta_{0} ^{\alpha_{0}}}{\Gamma(\alpha_{0})}\left(\frac{1}{\sigma^{2}}\right)^{\alpha_{0 }+\frac{3}{2}}\exp\left(-\frac{2\beta_{0}+\lambda_{0}(\mu-\mu_{0})^{2}}{2 \sigma^{2}}\right)\] \[\propto\left(\frac{1}{\sigma^{2}}\right)^{\alpha_{0}+\frac{N+3}{ 2}}\exp\left(\underbrace{-\frac{2\beta_{0}+\lambda_{0}(\mu-\mu_{0})^{2}+\sum \limits_{n=1}^{N}(y_{n}-\mu)^{2}}{2\sigma^{2}}}_{\mathcal{A}}\right).\]
Let us introduce Lemma 1 below to decompose the term \(\mathcal{A}\) as desired:
Lemma 1.: _Assume a set \(\mathbf{x}_{1},\ldots,\mathbf{x}_{N}\in\mathbb{R}^{q}\), and note \(\bar{\mathbf{x}}=\frac{1}{N}\sum\limits_{n=1}^{N}\mathbf{x}_{n}\) the associated average vector. For any \(\mathbf{\mu}\in\mathbb{R}^{q}\):_
\[\sum\limits_{n=1}^{N}\left(\mathbf{x}_{n}-\mathbf{\mu}\right)(\mathbf{x}_{n}-\mathbf{\mu})^{ \intercal}=N(\bar{\mathbf{x}}-\mathbf{\mu})(\bar{\mathbf{x}}-\mathbf{\mu})^{\intercal}+\sum \limits_{n=1}^{N}\left(\mathbf{x}_{n}-\bar{\mathbf{x}}\right)(\mathbf{x}_{n}-\bar{\mathbf{x}} )^{\intercal}.\]
Proof.: \[\sum\limits_{n=1}^{N}\left(\mathbf{x}_{n}-\mathbf{\mu}\right)(\mathbf{x}_{n}-\mathbf{\mu})^{\intercal} =\sum\limits_{n=1}^{N}\left(\mathbf{x}_{n}\mathbf{x}_{n}{}^{\intercal}+\mathbf{\mu}\mathbf{\mu}^{\intercal}-2\mathbf{x}_{n}\mathbf{\mu}^{\intercal}\right)\] \[=N\mathbf{\mu}\mathbf{\mu}^{\intercal}-2N\bar{\mathbf{x}}\mathbf{\mu}^{\intercal}+\sum\limits_{n=1}^{N}\mathbf{x}_{n}\mathbf{x}_{n}{}^{\intercal}\] \[=N\mathbf{\mu}\mathbf{\mu}^{\intercal}+N\bar{\mathbf{x}}\bar{\mathbf{x}}^{\intercal}+N\bar{\mathbf{x}}\bar{\mathbf{x}}^{\intercal}-2N\bar{\mathbf{x}}\bar{\mathbf{x}}^{\intercal}-2N\bar{\mathbf{x}}\mathbf{\mu}^{\intercal}+\sum\limits_{n=1}^{N}\mathbf{x}_{n}\mathbf{x}_{n}{}^{\intercal}\] \[=N\left(\bar{\mathbf{x}}\bar{\mathbf{x}}^{\intercal}+\mathbf{\mu}\mathbf{\mu}^{\intercal}-2\bar{\mathbf{x}}\mathbf{\mu}^{\intercal}\right)+\sum\limits_{n=1}^{N}\left(\mathbf{x}_{n}\mathbf{x}_{n}{}^{\intercal}+\bar{\mathbf{x}}\bar{\mathbf{x}}^{\intercal}-2\mathbf{x}_{n}\bar{\mathbf{x}}^{\intercal}\right)\] \[=N\left(\bar{\mathbf{x}}-\mathbf{\mu}\right)(\bar{\mathbf{x}}-\mathbf{\mu})^{\intercal}+\sum\limits_{n=1}^{N}\left(\mathbf{x}_{n}-\bar{\mathbf{x}}\right)(\mathbf{x}_{n}-\bar{\mathbf{x}})^{\intercal}.\]
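As a quick numerical sanity check (ours, not part of the original proof), Lemma 1 can be verified on a random instance:

```python
import numpy as np

rng = np.random.default_rng(1)
N, q = 7, 3
X = rng.normal(size=(N, q))          # N random vectors in R^q
mu = rng.normal(size=q)              # an arbitrary vector mu
xbar = X.mean(axis=0)

lhs = sum(np.outer(x - mu, x - mu) for x in X)
rhs = N * np.outer(xbar - mu, xbar - mu) + sum(np.outer(x - xbar, x - xbar) for x in X)
assert np.allclose(lhs, rhs)         # Lemma 1 holds on this random instance
```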
Applying this result in our context for \(q=1\), we obtain:
\[\begin{aligned}\mathcal{A}&=-\frac{1}{2\sigma^{2}}\left(2\beta_{0}+\lambda_{0}(\mu-\mu_{0})^{2}+N(\bar{y}-\mu)^{2}+\sum\limits_{n=1}^{N}(y_{n}-\bar{y})^{2}\right)\\ &=-\frac{1}{2\sigma^{2}}\left(2\beta_{0}+\sum\limits_{n=1}^{N}(y_{n}-\bar{y})^{2}+(\lambda_{0}+N)\mu^{2}-2\mu(N\bar{y}+\lambda_{0}\mu_{0})+N\bar{y}^{2}+\lambda_{0}\mu_{0}^{2}\right)\\ &=-\frac{1}{2\sigma^{2}}\Bigg(2\beta_{0}+\sum\limits_{n=1}^{N}(y_{n}-\bar{y})^{2}+N\bar{y}^{2}+\lambda_{0}\mu_{0}^{2}-\frac{(N\bar{y}+\lambda_{0}\mu_{0})^{2}}{\lambda_{0}+N}+(\lambda_{0}+N)\left(\mu-\frac{N\bar{y}+\lambda_{0}\mu_{0}}{\lambda_{0}+N}\right)^{2}\Bigg)\\ &=-\frac{1}{2\sigma^{2}}\Bigg(2\beta_{0}+\sum\limits_{n=1}^{N}(y_{n}-\bar{y})^{2}+\frac{\lambda_{0}N}{\lambda_{0}+N}(\bar{y}-\mu_{0})^{2}+(\lambda_{0}+N)\left(\mu-\frac{N\bar{y}+\lambda_{0}\mu_{0}}{\lambda_{0}+N}\right)^{2}\Bigg).\end{aligned}\]
### Proof of General Bayesian framework for evaluating mean differences
Proof.: For the sake of clarity, let us omit the \(K\) groups here and first consider a general case with \(\boldsymbol{y}_{k}=\boldsymbol{y}\in\mathbb{R}^{P}\). Moreover, let us focus on only one imputed dataset and maintain the notation \(\tilde{\boldsymbol{y}}_{1}^{(d)},\ldots,\tilde{\boldsymbol{y}}_{N}^{(d)}= \boldsymbol{y}_{1},\ldots,\boldsymbol{y}_{N}\) for convenience. From the hypotheses of the model, we can derive \(\mathcal{L}\), the posterior log-PDF over \((\boldsymbol{\mu},\boldsymbol{\Sigma})\), following the same idea as for the univariate case presented in Section 2.1:
\[\mathcal{L} =\log p(\boldsymbol{\mu},\boldsymbol{\Sigma}\mid\boldsymbol{y}_{ 1},\ldots,\boldsymbol{y}_{N})\] \[=\log\underbrace{p(\boldsymbol{y}_{1},\ldots,\boldsymbol{y}_{N} \mid\boldsymbol{\mu},\boldsymbol{\Sigma})}_{\mathcal{N}(\boldsymbol{\mu}, \boldsymbol{\Sigma})}+\log\underbrace{p(\boldsymbol{\mu},\boldsymbol{\Sigma})}_ {\mathcal{N}\mathcal{N}^{-1}(\boldsymbol{\mu}_{0},\lambda_{0},\boldsymbol{ \Sigma}_{0},\mu_{0})}+C_{1}\] \[=-\frac{N}{2}\log|\boldsymbol{\Sigma}|-\frac{1}{2}\left(\sum_{n= 1}^{N}(\boldsymbol{y}_{n}-\boldsymbol{\mu})^{\intercal}\boldsymbol{\Sigma}^{- 1}(\boldsymbol{y}_{n}-\boldsymbol{\mu})\right)\] \[\qquad-\frac{\nu_{0}+P+2}{2}\log|\boldsymbol{\Sigma}|-\frac{1}{ 2}\left(\operatorname{tr}\left(\boldsymbol{\Sigma}_{0}\boldsymbol{\Sigma}^{- 1}\right)-\frac{\lambda_{0}}{2}(\boldsymbol{\mu}-\boldsymbol{\mu}_{0})^{ \intercal}\boldsymbol{\Sigma}^{-1}(\boldsymbol{\mu}-\boldsymbol{\mu}_{0}) \right)+C_{2}\] \[=-\frac{1}{2}\Bigg{[}\left(\nu_{0}+P+2+N\right)\log|\boldsymbol{ \Sigma}|+\operatorname{tr}\left(\boldsymbol{\Sigma}^{-1}\Big{\{}\boldsymbol{ \Sigma}_{0}+\lambda_{0}(\boldsymbol{\mu}-\boldsymbol{\mu}_{0})(\boldsymbol{ \mu}-\boldsymbol{\mu}_{0})^{\intercal}\] \[\qquad+\underbrace{N(\bar{\boldsymbol{y}}-\boldsymbol{\mu})(\bar {\boldsymbol{y}}-\boldsymbol{\mu})^{\intercal}+\sum_{n=1}^{N}(\boldsymbol{y} _{n}-\bar{\boldsymbol{y}})(\boldsymbol{y}_{n}-\bar{\boldsymbol{y}})^{ \intercal}}_{\text{Lemma 1}}\right)\Bigg{]}+C_{2}\] \[=-\frac{1}{2}\Bigg{[}\left(\nu_{0}+P+2+N\right)\log|\boldsymbol{ \Sigma}|+\operatorname{tr}\Bigg{(}\boldsymbol{\Sigma}^{-1}\Big{\{}\boldsymbol{ \Sigma}_{0}+\sum_{n=1}^{N}(\boldsymbol{y}_{n}-\bar{\boldsymbol{y}})( \boldsymbol{y}_{n}-\bar{\boldsymbol{y}})^{\intercal}\] \[\qquad+(N+\lambda_{0})\boldsymbol{\mu}\boldsymbol{\mu}^{\intercal} -\boldsymbol{\mu}\left(N\bar{\boldsymbol{y}}^{\intercal}+\lambda_{0}\boldsymbol{ \mu}_{0}^{\intercal}\right)-(\lambda_{0}\boldsymbol{\mu}_{0}+N\bar{\boldsymbol{y }})\boldsymbol{\mu}^{\intercal}+\lambda_{0}\boldsymbol{\mu}_{0}\boldsymbol{\mu}_{ 0}^{\intercal}+N\bar{\boldsymbol{y}}\bar{\boldsymbol{y}}^{\intercal}\Big{\}} \Bigg{)}\Bigg{]}+C_{2}\] \[=-\frac{1}{2}\Bigg{[}\left(\nu_{0}+P+2+N\right)\log|\boldsymbol{ \Sigma}|\]
\[\times\left(1+\frac{\lambda_{N}(\nu_{N}-P+1)}{(\nu_{N}-P+1)} \left(\mathbf{\mu}-\mathbf{\mu}_{N}\right)^{\intercal}\mathbf{\Sigma}_{N}^{-1}\left(\mathbf{ \mu}-\mathbf{\mu}_{N}\right)\right)^{-\frac{\nu_{N}+1}{2}}\] \[=\frac{\Gamma\left(\frac{(\nu_{N}-P+1)+P}{2}\right)}{\Gamma\left( \frac{\nu_{N}-P+1}{2}\right)\left[\pi(\nu_{N}-P+1)\right]^{\frac{P}{2}}|\frac{ \mathbf{\Sigma}_{N}}{\lambda_{N}(\nu_{N}-P+1)}|^{\frac{1}{2}}}\] \[\times\left(1+\frac{1}{\nu_{N}-P+1}\left(\mathbf{\mu}-\mathbf{\mu}_{N} \right)^{\intercal}\left(\frac{\mathbf{\Sigma}_{N}}{\lambda_{N}(\nu_{N}-P+1)} \right)^{-1}\left(\mathbf{\mu}-\mathbf{\mu}_{N}\right)\right)^{-\frac{(\nu_{N}-P+1)+P}{ 2}}.\]
The above expression corresponds to the PDF of a multivariate \(t\)-distribution \(\mathcal{T}_{\nu}\left(\mathbf{\mu}_{N},\hat{\mathbf{\Sigma}}\right)\), with:
* \(\nu=\nu_{N}-P+1\),
* \(\hat{\mathbf{\Sigma}}=\dfrac{\mathbf{\Sigma}_{N}}{\lambda_{N}(\nu_{N}-P+1)}\).
Therefore, we demonstrated that for each group and imputed dataset, the complete-data posterior distribution over \(\mathbf{\mu}_{k}\) is a multivariate \(t\)-distribution. Thus, following Rubin's rules for multiple imputation (see Little and Rubin, 2019), we can propose an approximation to the true posterior distribution (which is conditioned only on the observed values):
\[p\left(\mathbf{\mu}_{k}\mid\mathbf{y}_{k}^{(0)}\right) =\int p\left(\mathbf{\mu}_{k}\mid\mathbf{y}_{k}^{(0)},\mathbf{y}_{k}^{(1)} \right)p\left(\mathbf{y}_{k}^{(1)}\mid\mathbf{y}_{k}^{(0)}\right)\mathrm{d}\mathbf{y}_{k}^ {(1)}\] \[\simeq\dfrac{1}{D}\sum_{d=1}^{D}p\left(\mathbf{\mu}_{k}\mid\mathbf{y}_{k}^ {(0)},\tilde{\mathbf{y}}_{k}^{(1),d}\right),\] where \(D\) is the number of imputed datasets.
This leads to the desired result when evaluating the previously derived posterior distribution on each multiply-imputed dataset.
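A small numerical sketch of this mixture approximation is given below (our own illustration in Python, not the ProteoBayes code). It uses the standard Normal-Inverse-Wishart conjugate update, whose marginal posterior over \(\mathbf{\mu}\) is the multivariate \(t\)-distribution stated above, together with SciPy's `multivariate_t`; the variable names and the toy data are ours.

```python
import numpy as np
from scipy.stats import multivariate_t

def posterior_t_params(Y, mu0, lambda0, Sigma0, nu0):
    """Standard Normal-Inverse-Wishart update; the marginal posterior of mu is a
    multivariate t with df = nu_N - P + 1, loc = mu_N, shape = Sigma_N/(lambda_N df)."""
    Y = np.asarray(Y, dtype=float)
    N, P = Y.shape
    ybar = Y.mean(axis=0)
    S = (Y - ybar).T @ (Y - ybar)
    lambda_N, nu_N = lambda0 + N, nu0 + N
    mu_N = (lambda0 * mu0 + N * ybar) / lambda_N
    Sigma_N = Sigma0 + S + lambda0 * N / lambda_N * np.outer(ybar - mu0, ybar - mu0)
    df = nu_N - P + 1
    return mu_N, Sigma_N / (lambda_N * df), df

def mixture_pdf(mu, imputed_datasets, prior):
    """Equally-weighted mixture over the D imputed datasets (the approximation above)."""
    return np.mean([multivariate_t(*posterior_t_params(Y, *prior)).pdf(mu)
                    for Y in imputed_datasets])

# Toy usage: D = 5 imputed datasets of N = 20 samples for P = 3 peptides.
rng = np.random.default_rng(1)
prior = (np.zeros(3), 1.0, np.eye(3), 5.0)          # (mu0, lambda0, Sigma0, nu0)
imputed = [rng.normal(size=(20, 3)) for _ in range(5)]
print(mixture_pdf(np.zeros(3), imputed, prior))
```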
|
2303.04460 | A New Scenario of Solar Modulation Model during the Polarity Reversing | When the Galactic Cosmic Rays (GCRs) entering the heliosphere, they encounter
the solar wind plasma, and their intensity is reduced, so-called solar
modulation. The modulation is caused by the combination of a few factors, such
as particle energies, solar activity and solar disturbance. In this work, a 2D
numerical method is adopted to simulate the propagation of GCRs in the
heliosphere with SOLARPROP, and to overcome the time-consuming issue, the
machine learning technique is also applied. With the obtained proton local
interstellar spectra (LIS) based on the observation from Voyager 1 and AMS-02,
the solar modulation parameters during the solar maximum activity of cycle 24
have been found. It shows the normalization and index of the diffusion
coefficient indeed reach a maximal value in February 2014. However, after
taking into account the travel time of particles with different energies, the
peak time was found postponed to November 2014 as expected. The nine-month late
is so-called time lag. | Jieteng Jiang, Sujie Lin, Lili Yang | 2023-03-08T09:18:34Z | http://arxiv.org/abs/2303.04460v2 | # A New Scenario of Solar Modulation Model during the Polarity Reversing
###### Abstract
When Galactic Cosmic Rays (GCRs) enter the heliosphere, they encounter the solar wind plasma and their intensity is reduced, the so-called solar modulation. The modulation is caused by the combination of a few factors, such as the particle energies, solar activity and solar disturbances. In this work, a 2D numerical method is adopted to simulate the propagation of GCRs in the heliosphere with SOLARPROP, and to overcome the time-consuming issue, a machine learning technique is also applied. With the obtained proton local interstellar spectra (LIS) based on the observations from Voyager 1 and AMS-02, the solar modulation parameters during the solar maximum activity of cycle 24 have been found. It shows that the normalization and index of the diffusion coefficient indeed reach a maximal value in February 2014. However, after taking into account the travel time of particles with different energies, the peak time is found to be postponed to November 2014, as expected. This nine-month delay is the so-called time lag.
keywords: galactic cosmic ray, AMS-02, machine learning, solar modulation, proton
## 1 Introduction
Galactic cosmic rays (GCRs) have been widely studied for more than a hundred years, since their first discovery by the Austrian-American physicist Victor Hess. GCRs are charged, energetic nuclei coming from far beyond the solar system and are believed to originate from extreme phenomena in the universe. Specifically, GCRs are particles accelerated to high energies by powerful astronomical objects or magnetic fields in our Milky Way.
When the GCRs cross the heliopause (HP), the boundary of the solar system, they enter the heliosphere where they collide with the solar wind moving outward and are affected by the heliospheric magnetic field (HMF) Parker (1958). As a result, their intensity is modulated, which is varied for different types and energies of particles. This significantly changes their energy spectrum, making it different from the energy spectrum on the boundary, namely the local interstellar spectrum (LIS), especially at lower energies (below 30 GeV). This process is called solar modulation. It encompasses various effects such as diffusion, drift, convection, and adiabatic energy changes (see reviews by e.g. Heber and Potgieter (2006); Moraal (2013); Cliver et al. (2013); Kota (2013); Potgieter (2013); Engelbrecht et al. (2017)).
The study of solar modulation is essential not only for comprehending the modulation process but also for advancing related research. For instance, studies of the GCR transport model within the Galaxy Yuan et al. (2017) and indirect searches for dark matter with the anomalous CR antiproton flux Lin et al. (2019) are hindered by the uncertainties in the LIS. A better understanding of the modulation model could help us determine the LIS more accurately and thereby improve these research areas.
Fortunately, we have obtained highly precise GCR data over several months, which has allowed us to gain a deeper understanding of solar modulation. For example, the Voyager 1 spacecraft, launched in 1977, provided proton data at a few MeV upon crossing the heliopause in August 2012 Stone et al. (2013). Additionally, the Alpha Magnetic Spectrometer (AMS-02) has offered precise measurements of protons across a broad energy range of 0.5 GeV to a few TeV Aguilar et al. (2018). We derived the LIS of protons for energies from 0.5 GeV to 30 GeV in two periods of low solar activity by interpolating the Voyager 1 data and fitting the modulated AMS-02 data Wang et al. (2019, 2022). To numerically describe the instantaneous propagation of GCRs, researchers have widely applied Parker's equation, which is a type of Fokker-Planck equation. With the heliosphere model, the solar modulation parameters, and the LIS, the propagation of GCRs can be simulated with tools like SOLARPROP, allowing for the calculation of their energy spectrum at Earth. Moreover, the physical heliosphere model and the numerical solution to Parker's equation have made significant progress over recent decades (e.g., Fisk (1971); Gleeson et al. (1979); Potgieter & Moraal (1985); Jokipii & Thomas (1981); Potgieter (2000); Potgieter et al. (2014)).
At present, although some results of solar modulation fit the data well during quiet solar epochs, modelling remains a challenge during maximum activity, because the corona has a more complex structure McComas et al. (2001), which makes the solar wind and HMF behave in a more complicated manner. Nonetheless, some research groups have made progress for the maximum activity period. For example, Song et al. (2021) used five modulation parameters to fit the observed data, Shen et al. (2021) employed a force-field approach to obtain the best-fit parameters, and Fiandrini et al. (2021) introduced a weight to linearly combine the fluxes of the two polarities, among other methods. In this work, we redefine the weight and consider the differences between particles with varying energies to successfully obtain the best-fit parameters during maximum activity.
This paper is organized as follows. In Section 2, the heliosphere model and diffusion model are described in detail. In Section 3, we analyze the cycle 24 data of the active periods and apply the machine learning method for better efficiency. In Section 4, the modeling results are given, including the proton LIS and the best-fit parameters from May 2011 to October 2016. A summary and conclusion are presented in Section 5.
## 2 Numerical model
To comprehend solar modulation, the propagation of cosmic rays inside the heliosphere has to be understood. When GCRs enter the solar system, they suffer energy losses and direction changes, which results in a reduction of their intensity. The propagation of these charged particles can be described by the transport equation, first given by Parker in 1965 Parker (1965) in the form of a Fokker-Planck equation (FPE) without sources
\[\frac{\partial f(\mathbf{r},\mathbf{p},t)}{\partial t}= \nabla(\mathbf{K^{S}}\cdot\nabla f(\mathbf{r},\mathbf{p},t))+ \frac{1}{3}(\nabla\cdot\mathbf{V_{SW}})\frac{\partial f(\mathbf{r},\mathbf{p},t)}{\partial ln\ p} \tag{1}\] \[-(\mathbf{V_{SW}}+\mathbf{V_{D}})\cdot\nabla f(\mathbf{r}, \mathbf{p},t),\]
where \(f(\mathbf{r},\mathbf{p},t)\), as a function of position \(\mathbf{r}\), momentum \(\mathbf{p}\), and temporal variable \(t\), describes the dynamic phase-space distribution of GCRs. On the right side of Equation 1, there are three terms describing the CR transportation processes of diffusion, adiabatic energy loss, and convection and drift in the heliosphere respectively. The physical quantities involved include diffusion coefficient \(\mathbf{K^{S}}\), solar wind velocity \(\mathbf{V_{SW}}\) and drift velocity \(\mathbf{V_{D}}\). Here \(\mathbf{V_{D}}\) includes gradient-curvature drift Jokipii et al. (1977); Jokipii & Kopriva (1979) and the heliosphere current sheet (HCS) drift Potgieter & Moraal (1985); Burger & Potgieter (1989); Hoeksema (1992) and diffusion velocity.
To find the solution to the FPE, the time-backward numerical method with stochastic differential equations (SDEs) has become popular. In this approach, pseudo-particles are simulated from the moment they reach the Earth and traced backward until they reach the heliopause Yamada et al. (1998); Zhang (1999); Kopp et al. (2012); Kappl (2016). For a stochastic process driven by a Wiener process, the SDEs describe the particle position \(\mathrm{d}\mathbf{r}\) in the form of,
\[\mathrm{d}\mathbf{r}=(\nabla\cdot\mathbf{K^{S}}-\mathbf{V})\mathrm{d}t+ \overset{\leftrightarrow}{\sigma}\cdot\mathrm{d}\mathbf{W}, \tag{2}\]
where \(\mathbf{V}=\mathbf{V_{SW}}+\mathbf{V_{D}}\) is the global velocity of the particles, \(\overset{\leftrightarrow}{\sigma}\) is a third-order matrix satisfying \(\overset{\leftrightarrow}{\sigma}\cdot\overset{\leftrightarrow}{\sigma}=2 \mathbf{K^{S}}\), \(\mathrm{d}\mathbf{W}\) is a Wiener process related to a standard normal distribution \(N(0,1)\). The kinetic energy \(\mathrm{d}T\) of a cosmic ray particle with mass \(m\) in \(\mathrm{d}t\) time interval can be found as
\[\mathrm{d}T=\frac{2|\mathbf{V_{SW}}|}{3|\mathbf{r}|}\frac{T^{2}+2Tm}{T+m} \mathrm{d}t, \tag{3}\]
here \(m\) is the mass of the particle. With the numerical method constructed above, we adopted the public code SOLARPROP Kappl (2016) to perform the simulation. Based on this framework, one can change the propagation model according to various presumptions on the physical quantities. In this work, we adopted a 2D model to describe these quantities inside the heliosphere following Ref. Potgieter et al. (2014).
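For concreteness, a single backward Euler-Maruyama step of Equations 2 and 3 can be sketched as follows. This is an illustrative fragment only (isotropic diffusion, with units, numbers, and step size chosen by us), not the SOLARPROP implementation:

```python
import numpy as np

def backward_step(r, T, K, divK, V, Vsw_r, m, dt, rng):
    """One pseudo-particle step of Eqs. (2)-(3) in backward time.
    r [AU], T and m [GeV], K [AU^2/s] (isotropic here), V and divK [AU/s],
    Vsw_r = radial solar-wind speed [AU/s], dt [s]."""
    dW = rng.normal(0.0, np.sqrt(dt), size=3)        # Wiener increments, variance = dt
    sigma = np.sqrt(2.0 * K)                         # sigma sigma^T = 2 K  (Eq. 2)
    r_new = r + (divK - V) * dt + sigma * dW         # Eq. (2)
    dT = (2.0 * Vsw_r / (3.0 * np.linalg.norm(r))
          * (T ** 2 + 2.0 * T * m) / (T + m) * dt)   # Eq. (3), adiabatic energy change
    return r_new, T + dT

# Rough illustrative numbers: a 1 GeV proton starting at the Earth.
rng = np.random.default_rng(0)
r, T = np.array([1.0, 0.0, 0.0]), 1.0
r, T = backward_step(r, T, K=4e-4, divK=0.0, V=np.array([2.9e-6, 0.0, 0.0]),
                     Vsw_r=2.9e-6, m=0.938, dt=600.0, rng=rng)
```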
### Heliosphere Model
Both the diffusion coefficient \(\mathbf{K^{S}}\) and the drift velocity \(\mathbf{V_{D}}\) depend on the HMF and solar wind. Previously the large-scale HMF, embedded into the outward-flowing solar wind, was given by Parker as an Archimedean spiral
field Parker (1958). However, as the transverse component of the HMF decreases as \(1/r\) while the radial component decreases as \(1/r^{2}\), the transverse perturbation near the sun would significantly enhance the average magnitude of the magnetic field in the polar region Jokipii and Kota (1989). In this work, we adopt the HMF model performed in Ref. Fichtner et al. (1996), which takes the transverse perturbation into account by modifying the magnitude of the Archimedean spiral field. This modification is supported by measurements of the magnetic field in the polar regions of the heliosphere by Ulysses Balogh et al. (1995). The modified HMF model can be written in the form
\[\left\{\begin{aligned} \mathbf{B}&=A\ B_{0}\frac{r_{0}^{2}}{r^{ 2}}(\mathbf{e_{r}}+\zeta\mathbf{e_{\theta}}-\psi\mathbf{e_{\varphi}})\\ \zeta&=\frac{r\delta(\theta)}{r_{\odot}sin(\theta) }\\ \psi&=\frac{\Omega(r-r_{\odot})sin(\theta)}{V_{SW}} \end{aligned}\right. \tag{4}\]
where \(\Omega=2.7\times 10^{-6}\text{rad/s}\) is the rotational angular velocity of the Sun, \(r_{\odot}=3\times 695500\) km is the radius of the corona, \(V_{SW}\) is the velocity of the solar wind, \(B_{0}\) is the HMF observed at the reference point \(r_{0}\), and \(A\) is the polarity of the field, which can only be 1 or \(-1\); the N pole of the HMF is located in the northern solar hemisphere in the case \(A=1\) and vice versa. The function \(\delta(\theta)\) is presumed to follow the expression Fiandrini et al. (2021)
\[\delta(\theta)=\left\{\begin{aligned} 3\times 10^{-3}\sin( \theta),& 1.7^{\circ}<\theta<178.3^{\circ}\\ 8.7\times 10^{-5},&\text{else}\end{aligned}\right.. \tag{5}\]
Observations show that the solar wind speed \(\mathbf{V_{SW}}\) changes with radial and polar position during periods of minimum solar activity Bame et al. (1992); Heber and Potgieter (2006). Along the radial direction in the equatorial plane, the wind speed stays constant at 430 km/s until reaching the termination shock (TS). It decreases to about 170 km/s after crossing the TS and finally becomes zero or turns tail-ward in the inner heliosheath because of the barrier of the heliopause (HP) Krimigis et al. (2011). In the polar direction, \(\mathbf{V_{SW}}\) increases from about 430 km/s near the equator to 800 km/s in the high polar region, as observed by Heber and Potgieter (2006). The solar wind speed was given by Potgieter et al. (2014),
\[\begin{aligned} \mathbf{V_{SW}}(r,\theta)=& V_{0}(1.475\mp 0.4\tanh[6.8(\theta-\frac{\pi}{2})\pm( \frac{15\pi}{180}+\alpha)])\\ &\times[\frac{s+1}{2s}-\frac{s-1}{2s}\tanh(\frac{r-r_{TS}}{L})] \mathbf{e_{r}}\end{aligned} \tag{6}\]
where \(V_{0}=400\) km/s, \(\theta\) is the polar angle, the distance of the termination shock is \(r_{TS}=90\) AU, \(s=2.5\), \(L=1.2\) AU, and \(\alpha\) is the tilt angle that describes the waviness of the HCS. For a given \(\theta\), the radial factor in Equation 6 is essentially constant inside the TS, while the polar factor changes the speed from 430 km/s near the equator to 800 km/s in the polar region. The HMF strength around the Earth \(B_{0}\), the polarity \(A\), and the tilt angle \(\alpha\) in Equations 4 and 6 have to be obtained from observations.
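A direct transcription of Equations 4-6 is sketched below for reference; this is our own illustration, and in particular the hemisphere sign convention in Equation 6 (upper signs taken for the northern hemisphere) and the constants outside the equations are our assumptions:

```python
import numpy as np

OMEGA = 2.7e-6             # solar rotation rate [rad/s]
R_SUN = 3 * 695500.0       # corona radius r_sun of Eq. (4) [km]
AU_KM = 1.495978707e8      # 1 AU in km

def delta(theta):
    """Polar perturbation delta(theta) of Eq. (5)."""
    t = np.degrees(theta)
    return 3e-3 * np.sin(theta) if 1.7 < t < 178.3 else 8.7e-5

def solar_wind_speed(r_au, theta, alpha, V0=400.0, r_ts=90.0, s=2.5, L=1.2):
    """Eq. (6); upper signs read as the northern hemisphere (theta <= pi/2)."""
    sign = 1.0 if theta <= np.pi / 2 else -1.0
    lat = 1.475 - sign * 0.4 * np.tanh(6.8 * (theta - np.pi / 2)
                                       + sign * (15 * np.pi / 180 + alpha))
    rad = (s + 1) / (2 * s) - (s - 1) / (2 * s) * np.tanh((r_au - r_ts) / L)
    return V0 * lat * rad                               # [km/s]

def hmf_magnitude(r_au, theta, alpha, B0=5.0, r0_au=1.0):
    """|B| of the modified Parker spiral, Eqs. (4)-(5) [nT]."""
    r_km = r_au * AU_KM
    vsw = solar_wind_speed(r_au, theta, alpha)
    zeta = r_km * delta(theta) / (R_SUN * np.sin(theta))
    psi = OMEGA * (r_km - R_SUN) * np.sin(theta) / vsw
    return B0 * (r0_au / r_au) ** 2 * np.sqrt(1 + zeta ** 2 + psi ** 2)

print(hmf_magnitude(1.0, np.pi / 2, np.radians(30.0)))  # a few nT near the Earth
```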
### Diffusion Model
In Equation 1, the spatial diffusion coefficient tensor \(\mathbf{K^{S}}\) describes the diffusion of GCRs. In general, the full diffusion tensor is expressed as \(\mathbf{K}=\mathbf{K^{S}}+\mathbf{K^{A}}\). It includes the symmetric diffusion tensor \(\mathbf{K^{S}}\), which is diagonal, and the asymmetric diffusion tensor \(\mathbf{K^{A}}\), as follows,
\[\begin{aligned} \mathbf{K}=\left[\begin{array}{ccc}K_{r\perp}&-K_{A}&0 \\ K_{A}&K_{\theta\perp}&0\\ 0&0&K_{\parallel}\end{array}\right]&=\underbrace{\left[\begin{array}{ccc}K_{r \perp}&0&0\\ 0&K_{\theta\perp}&0\\ 0&0&K_{\parallel}\end{array}\right]}_{\mathbf{K^{S}}}\\ &\hskip 14.226378pt+\underbrace{\left[\begin{array}{ccc}0&-K_{A}&0 \\ K_{A}&0&0\\ 0&0&0\end{array}\right]}_{\mathbf{K^{A}}}\end{aligned} \tag{7}\]
The symmetric part describes the normal diffusion effect while the asymmetric part describes the drift effect. In the symmetric part, \(K_{\parallel}\) is the diffusion component parallel to the direction of the magnetic field, while \(K_{r\perp}\) and \(K_{\theta\perp}\) are the two perpendicular diffusion coefficients in the radial and polar directions, respectively. A typical empirical expression for \(K_{\parallel}\) is given by Ref. Potgieter et al. (2014) in the form of
\[K_{\parallel}=(K_{0})\,\beta\left(\frac{B_{0}}{|\mathbf{B}|}\right)\left(\frac {R}{R_{0}}\right)^{a}\left(\frac{\left(\frac{R}{R_{0}}\right)^{m}+\left(\frac{R _{k}}{R_{0}}\right)^{m}}{1+\left(\frac{R_{k}}{R_{0}}\right)^{m}}\right)^{ \frac{b-a}{m}}, \tag{8}\]
where \(K_{0}\) is a constant of order \(10^{23}\text{cm}^{2}\text{s}^{-1}\), \(\beta=v/c\) is the speed of the particle in natural units, \(B_{0}\) is the value of the HMF detected around the Earth, \(R=p/Z\) is the particle rigidity, the reference rigidity is \(R_{0}=1\) GV, and \(m=3.0\) guarantees the smoothness of the transition. The indices \(a\) and \(b\) determine the slope of the rigidity dependence respectively below and above a rigidity with the value \(R_{k}=3\) GV.
The perpendicular diffusion term in the radial direction is presumed to be Giacalone and Jokipii (1999)
\[K_{r\perp}=0.02\ K_{\parallel}, \tag{9}\]
while the polar perpendicular diffusion term is given as Ref. Potgieter (2000); Balogh et al. (2008)
\[K_{\perp\theta}=0.02K_{\parallel}f_{\perp\theta}. \tag{10}\]
The factor \(f_{\perp\theta}\) satisfies the expression
\[f_{\perp\theta}=A^{+}\mp A^{-}\tanh[8(\theta_{A}-90^{\circ}\pm\theta_{F})], \tag{11}\]
where \(A^{\pm}=(d\pm 1)/2\), \(\theta_{F}=35^{\circ}\), and \(\theta_{A}=90^{\circ}-|90^{\circ}-\theta|\). This means that \(K_{\perp\theta}\) is enhanced towards the poles by a factor of \(d\) with respect to its value in the equatorial regions of the heliosphere. The enhancement factor \(d\) is set to 3.
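Equations 8-11 can be coded directly; the sketch below is our own illustration (the default parameter values are placeholders of the right order, and we take the upper signs in Equation 11, which together with the folded angle \(\theta_{A}\) gives the enhancement \(d\) at both poles and 1 at the equator):

```python
import numpy as np

def k_parallel(R, beta, B, K0=0.3e23, B0=5.0, a=1.8, b=1.0, Rk=3.0, R0=1.0, m=3.0):
    """Parallel diffusion coefficient of Eq. (8) [cm^2/s]; R in GV, B and B0 in nT."""
    smooth = (((R / R0) ** m + (Rk / R0) ** m) / (1.0 + (Rk / R0) ** m)) ** ((b - a) / m)
    return K0 * beta * (B0 / B) * (R / R0) ** a * smooth

def f_perp_theta(theta, d=3.0, theta_F=np.radians(35.0)):
    """Polar enhancement of Eq. (11) (upper signs); theta_A folds both hemispheres,
    so f goes from 1 at the equator to d at either pole."""
    Ap, Am = (d + 1.0) / 2.0, (d - 1.0) / 2.0
    theta_A = np.pi / 2 - abs(np.pi / 2 - theta)
    return Ap - Am * np.tanh(8.0 * (theta_A - np.pi / 2 + theta_F))

def diffusion_tensor(R, beta, B, theta):
    """Symmetric tensor K^S = diag(K_r_perp, K_theta_perp, K_parallel), Eqs. (8)-(11)."""
    kpar = k_parallel(R, beta, B)
    return np.diag([0.02 * kpar, 0.02 * kpar * f_perp_theta(theta), kpar])
```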
Plugging the asymmetric part into the diffusion term \(\nabla(\mathbf{K^{A}}\cdot\nabla f)\) would lead to a cross-product-like result in the form of \(\nabla\times\mathbf{B}\cdot\nabla f\). This term could describe the drift effect caused by the uneven magnetic field, thus it was written as the drift velocity in Equation 1. Under the assumption of weak scattering and full drift process, the average drift velocity is related to the rigidity \(R\) and the charge \(q\) of particles, and the strength of magnetic field \(B\) Burger et al. (1985, 1987):
\[\langle\mathbf{V_{D}}\rangle=\nabla\times\left(\frac{qR\beta}{3B}\frac{\mathbf{B}}{B}\right). \tag{12}\]
The drift velocity can be divided into two parts, gradient-curvature drift velocity \(\mathbf{V_{G}}\) from the magnetic field itself and HCS drift velocity \(\mathbf{V_{HCS}}\) from the HCS. The two drift velocities are expressed with two factors, \(f(\theta)\) and \(\zeta(R)\), given as Potgieter & Moraal (1985); Burger et al. (2000),
\[\left\{\begin{array}{l}\mathbf{V_{G}}=f(\theta)\zeta(R)\cdot\nabla\times( \frac{qR\beta}{3B}\frac{\mathbf{B}}{B})\\ \mathbf{V_{HCS}}=\zeta(R)\frac{qR\beta}{3B}\frac{\mathbf{B}}{B}\nabla\times f (\theta)\\ f(\theta)=\frac{1}{\alpha_{h}}\tan^{-1}[(1-\frac{2\theta}{\pi})\tan \alpha_{h}]\\ \zeta(R)=\frac{(R/R_{A})^{2}}{1+(R/R_{A})^{2}}\end{array}\right.. \tag{13}\]
Here the cut-off value \(R_{A}\) is fixed to 0.5 GV according to Fiandrini et al. (2021), \(f(\theta)\) is a transition function that models a wavy neutral sheet near the equatorial plane, and \(\zeta(R)\) is a reduction function, which describes the change of the drift velocity for particles with different momenta. The angle \(\alpha_{h}\) equals \(\arccos(\frac{\pi}{2c_{h}}-1)\), where \(c_{h}=\frac{\pi}{2}-\frac{1}{\pi}\sin(\alpha+\frac{2r_{L}}{r})\), \(\alpha\) is the tilt angle and \(r_{L}\) depends on the maximum distance that a particle can be away from the HCS.
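The two factors of Equation 13 are simple scalar functions; an illustrative transcription (ours) is:

```python
import numpy as np

def drift_suppression(R, R_A=0.5):
    """zeta(R) of Eq. (13): suppresses drifts below the cut-off rigidity R_A [GV]."""
    return (R / R_A) ** 2 / (1.0 + (R / R_A) ** 2)

def hcs_transition(theta, alpha_h):
    """f(theta) of Eq. (13): smooth transition across the wavy neutral sheet."""
    return np.arctan((1.0 - 2.0 * theta / np.pi) * np.tan(alpha_h)) / alpha_h
```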
In summary, the diffusion coefficient is fully specified under our assumptions, except for three parameters: \(K_{0}\) and the indices \(a\) and \(b\), which are obtained from the analysis of the experimental data.
## 3 Analysis and Calculation
### The Model Parameters
To calculate the spectrum of GCRs using the heliosphere model and the diffusion model, six parameters are required, consisting of three heliospheric parameters related to the solar system and three diffusion parameters. The heliospheric parameters are the strength \(B_{0}\) of the HMF near the Earth, the tilt angle \(\alpha\) of the HCS, and the polarity \(A\) of the HMF, which can be obtained from observations, as shown in Figure 1. The value of \(B_{0}\) is provided by the Advanced Composition Explorer (ACE), while the tilt angle and polarity are provided by the Wilcox Solar Observatory (WSO), represented by the solid lines in the two top panels. It takes time for a change in the magnetic field embedded in the solar wind to affect the motion of GCRs, typically around nine months, which is referred to as the time lag Tomassetti et al. (2017); Orcinha et al. (2019). Considering that, we calculate the average field and tilt angle encountered by GCR particles during their journey from the heliopause to the Earth, represented by the square symbols. The last panel in Figure 1 shows the sunspot number (SSN) as a reference for comparing with the trends of \(B_{0}\) and \(\alpha\). It can be observed that \(B_{0}\) and \(\alpha\) increase with SSN, reaching maximum values in February 2014, and that the polarity reverses around this time. The other three diffusion parameters (the normalization factor of diffusion \(K_{0}\) and the two spectral indices \(a\) and \(b\) in Equation 8) can be obtained by fitting the observed data.
### Application of Machine Learning
In this work, we applied the heliospheric model described in Section 2 and utilized SOLARPROP to simulate the propagation of GCRs in the heliosphere. There are 30 energy bins and each has 2000 particles, starting from the Earth. On average, it takes 1500 steps for one particle to reach the HP; therefore a total of billions of steps are taken for all particles, and SOLARPROP needs about 10 minutes to complete one simulation. Running thousands of simulations can be quite time-consuming. To improve efficiency, we employed a machine learning method, the LIBSVM library for Support Vector Machines (SVM) Chang & Lin (2011), to replace the calculations of SOLARPROP. In order to construct the SVM model, we set a 5D parameter space with the following ranges:
* \(B_{0}\) in the range of (\(3\sim 8\))nT
* \(\alpha\) in the range of \(15^{\circ}\sim 75^{\circ}\)
* \(K_{0}\) in the range of \((0.001\sim 1.5)\times 10^{23}\,\mathrm{cm^{2}\,s^{-1}}\)
* the indices \(a\) and \(b\) in the range of \(0.001\sim 3\)
We randomly picked about 40000 samples for \(A=1\) and about 50000 samples for \(A=-1\) from this parameter space to train the SVM model. To ensure the reliability of the machine learning method, we performed detailed tests in Section 4.1.
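The emulation step can be sketched as follows. The paper uses LIBSVM directly; the example below instead uses scikit-learn's `SVR` (a wrapper around LIBSVM) purely to illustrate the idea, with a reduced sample size for speed and a synthetic placeholder target standing in for the SOLARPROP-modulated flux:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Random draws from the 5D parameter space listed above (2000 points here;
# the paper trains on roughly 40 000-50 000 SOLARPROP samples per polarity).
n = 2000
X = np.column_stack([
    rng.uniform(3.0, 8.0, n),      # B0 [nT]
    rng.uniform(15.0, 75.0, n),    # tilt angle alpha [deg]
    rng.uniform(0.001, 1.5, n),    # K0 [1e23 cm^2/s]
    rng.uniform(0.001, 3.0, n),    # index a
    rng.uniform(0.001, 3.0, n),    # index b
])

# Placeholder target: in practice each y value is the SOLARPROP-modulated flux
# (or modulation factor) in one energy bin for the corresponding parameter set.
y = np.exp(-0.5 / X[:, 2]) * (1.0 - 0.003 * X[:, 1])

emulator = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=1e-3))
emulator.fit(X, y)                    # cheap surrogate for SOLARPROP
print(emulator.predict(X[:3]))        # near-instant evaluations during fitting
```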
### Analysis for Solar Cycle 24
In a recent study of the solar polar magnetic field during the activity maximum of cycle 24, researchers observed that the magnetic field underwent three reversals in the northern hemisphere (in May 2012, February 2014, and July 2014) and only one reversal in the southern hemisphere (in November 2013). This asymmetry of the magnetic field reversals makes it challenging to simulate solar modulation during this period, as the particles experience magnetic fields with different signs. To address this issue, various methods have been proposed, such as adding more modulation parameters (as done by Song et al. (2021)) or writing the particle flux as a weighted sum of two spectra with different polarities (as proposed by Fiandrini et al. (2021)). We adopt the latter method and give the weight a physical meaning: the ratio of the space occupied by the N-pole magnetic field in the heliosphere to the total space. Meanwhile, we also take into account the different travel times of particles with different energies.
## 4 Results
### The Local Interstellar Spectrum of proton
The local interstellar spectrum (LIS) of protons represents the energy spectrum outside the heliopause. Voyager 1 crossed the heliopause in August 2012 and provided the LIS for protons at low energy (\(<0.5\) GeV). Additionally, energy spectra above a few GeV were measured by AMS-02 near the Earth. Solar modulation effects are significant below 30 GeV, but no observed LIS of protons is available in the energy range between 0.4 GeV and 30 GeV.
Figure 1: The observed data (solid lines) of the HMF \(B_{0}\), tilt angle \(\alpha\), polarity \(A\) and sunspot number from the Advanced Composition Explorer (ACE) and the Wilcox Solar Observatory (WSO), respectively, from 2011 to 2017. The square symbols are the parameters averaged over ten Carrington rotation periods. In the last two panels, the shaded regions represent the period from May 2012 to March 2015.
Figure 3: LIS-avg compared with the Voyager data at the heliopause, shown as square symbols.
Figure 2: The derived best-fit LIS of protons, constrained by the data from Voyager 1 and AMS-02. In the upper panel, LIS-n/LIS-p with negative/positive polarity are shown as dotted/solid lines, together with the average of LIS-n and LIS-p, labeled LIS-avg. The ratio of LIS-n to LIS-p is shown in the second panel.
Therefore, the LIS in this range needs to be calculated. With cubic spline interpolation, a complete LIS is obtained that respects the constraints from the Voyager 1 observations and fits the AMS-02 data after solar modulation.
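A minimal sketch of such an interpolation is shown below. The anchor values are placeholders (not the published spectra), and interpolating in log-log space is our own choice, since the text only specifies a cubic spline:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Anchor points for the proton LIS: Voyager 1 constrains the low-energy end
# (< 0.5 GeV) and the fitted values constrain 0.5-30 GeV; numbers are placeholders.
E_anchor = np.array([0.01, 0.1, 0.3, 1.0, 3.0, 10.0, 30.0])    # [GeV]
J_anchor = np.array([3e3, 1.5e3, 8e2, 3e2, 4e1, 1.5, 4e-2])     # [arb. units]

# A cubic spline of log J versus log E stays smooth over several decades.
lis = CubicSpline(np.log(E_anchor), np.log(J_anchor))

def lis_flux(E):
    return np.exp(lis(np.log(E)))

print(lis_flux(0.47), lis_flux(24.7))
```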
To study the LIS, the proton data observed by AMS-02 in two quiet periods were chosen, corresponding to Bartels rotation numbers 2426-2437 and 2470-2487 and to different HMF polarities. Two independent fittings were performed, and two proton LIS were obtained for the two periods (Figure 2), named LIS-n (dotted line) for negative polarity and LIS-p (solid line) for positive polarity. These two LIS are quite close, with a relative difference of less than 10%. It is well known that cosmic-ray particles propagate along different paths for different HMF polarities because of the drift direction: a positively charged particle is likely to propagate inward along the heliospheric current sheet (HCS) in the negative polarity period, while it is likely to propagate along the polar regions in the positive polarity period; the reverse applies to a negatively charged particle. The two LIS were averaged to obtain a unified LIS (LIS-avg) for the following work, shown as square symbols in Figure 3.
To evaluate the validity of the machine learning approach, SOLARPROP and LIBSVM were both applied with the given LIS-n and LIS-p, and the modulated fluxes \(\phi_{prop}\) and \(\phi_{svm}\) were obtained using the best-fit parameters for every energy bin. The ratio of the difference between \(\phi_{prop}\) and \(\phi_{svm}\) to the total error of AMS-02 is shown in Figure 4. This ratio is mostly less than one for both LIS-n and LIS-p, which means the difference between the two methods is smaller than the total error and can be neglected in our analysis.
### Quiet Periods
To determine the best-fit parameters for cycle 24 during solar maximum activity, we analyzed the full data set from May 2011 to October 2016. We bin the data into 34 energy bins from 0.47 GeV to 24.71 GeV, i.e. the number of degrees of freedom is 34. The resulting values of the parameters \(K_{0}\), \(a\), and \(b\) are presented in Figure 5. Notably, two parameters, \(K_{0}\) and the index \(a\), exhibit a sudden change in November 2013: the diffusion coefficient \(K_{0}\) decreases, while the index \(a\) increases. This change can be attributed to the polarity reversal illustrated in the third panel of Figure 1. In contrast, the value of the index \(b\) remained stable throughout the analyzed period.
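The fit quality is quantified by the reduced \(\chi^{2}\) over the 34 energy bins; a one-function version of this statistic (our own sketch, using the text's convention that the degrees of freedom equal the number of bins) is:

```python
import numpy as np

def reduced_chi2(flux_model, flux_obs, sigma_obs, dof=None):
    """Reduced chi^2 between the modulated model flux and the AMS-02 data.
    Following the text, dof defaults to the number of energy bins."""
    flux_model, flux_obs, sigma_obs = map(np.asarray, (flux_model, flux_obs, sigma_obs))
    chi2 = np.sum(((flux_model - flux_obs) / sigma_obs) ** 2)
    return chi2 / (dof if dof is not None else flux_obs.size)
```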
As seen in the last panel of Figure 5, the reduced chi-square is less than 1 during the two quiet periods. However, it increases above 1 (by more than 60%) and stays there for an extended period, with transition points in May 2012 and May 2015. We therefore infer that the solar magnetic field reversal occurred between May 2012 and May 2015. Other works reach the same conclusion from the observed polar magnetic field on the surface of the Sun, e.g. Pishkalo & Leiko (2016); Gopalswamy et al. (2016). According to the polar field observations (above 55\({}^{\circ}\)) from the Wilcox Solar Observatory shown in Figure 6, the polar magnetic field in the northern hemisphere first changed sign in June 2012 and last changed in August 2014; after adding the nine-month time lag of the surface magnetic field, the latter corresponds to May 2015.
For the high values of the reduced \(\chi^{2}\) (\(>2\)) in the last panel of Figure 5, especially in the period from December 2013 to November 2014, the three diffusion-model parameters cannot fit the observed data.
Figure 4: The time profile of the ratio of the difference between \(\phi_{prop}\) and \(\phi_{svm}\), calculated from SOLARPROP and LIBSVM respectively, to the total error of AMS-02.
Therefore, to obtain more reasonable results, these data have to be analysed with a different method during this maximum-activity period.
### Maximum Activity
During periods of maximum activity, the sign of the large-scale magnetic field can vary at different positions, even within the same hemisphere. As a result, cosmic rays will encounter the HMF with different polarities along their path. Ideally, the magnetic field at the location of each cosmic ray should be simulated, but currently, it is not possible to detect the magnetic polarity and path of every cosmic ray within the heliosphere. Nonetheless, some progress has been made in simplifying this process. For example, Fiandrini et al. (2021) introduced a weight term, denoted by \(P\), to calculate the final spectrum, \(\phi_{f}\) near the Earth. This spectrum is the weighted sum of the spectra with two polarities, \(\phi^{-}(E)\) with \(A=-1\), and \(\phi^{+}(E)\) with \(A=1\).
\[\phi_{f}(E)=\phi^{-}(E)(1-P)+\phi^{+}(E)P \tag{14}\]
Here we employ a similar approach. Considering that the magnetic field is transported outward at the solar wind speed, we define the weight as the ratio of the space occupied by the N-pole magnetic field in the heliosphere to the total space. Figure 6 shows that the directions of the polar magnetic field (above \(55^{\circ}\)) have changed over time in both the northern and southern hemispheres. The northern hemisphere experienced magnetic field reversals three times, in May 2012, February 2014, and August 2014, while the southern hemisphere experienced one in July 2013. The calculated weight is presented in the fourth panel of Figure 7, where two structures, a plateau (A) and a valley (B), are evident. These structures can be explained by the temporary stability of the solar field in the first half of 2013 and the change to a negative field in the northern hemisphere in the first half of 2014. Using the specified weight, the best-fit parameters are shown in the first three panels of Figure 7, where the parameters change continuously. The parameter \(K_{0}\) reaches a minimum in February 2014, and the index \(a\) reaches a maximum at the same time. Compared to the SSN in Figure 1, which also reaches an extreme value in February 2014, the parameters show clear trends with the change in solar activity. However, although these three parameters show clear trends, the reduced \(\chi^{2}\) in the last panel of Figure 7 still takes high values, especially from August 2014 to April 2015, making the results somewhat unsatisfactory. Thus, further improvement is necessary.
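Equation 14 and the weight definition can be sketched as follows; the spherical-heliosphere volume fraction and the heliopause distance of roughly 122 AU used below are our own simplifying assumptions, not values given in the text:

```python
import numpy as np

AU_KM = 1.495978707e8
R_HP_AU = 122.0            # approximate heliopause distance (our assumption)

def polarity_weight(days_since_flip, v_sw_kms=400.0):
    """Toy estimate of the weight P: fraction of a spherical heliosphere already
    filled by the new-polarity field carried outward at the solar wind speed."""
    r_front = np.clip(days_since_flip * v_sw_kms * 86400.0 / AU_KM, 0.0, R_HP_AU)
    return (r_front / R_HP_AU) ** 3

def mixed_spectrum(phi_minus, phi_plus, P):
    """Eq. (14): weighted sum of the A = -1 and A = +1 modulated spectra."""
    return (1.0 - P) * phi_minus + P * phi_plus
```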
To improve our results, we take into account the different travel times of GCRs with different energies. To achieve this, we have utilized the travel times simulated by SOLARPROP as a reference, which provide the proton travel time for each energy. We have used average values of the parameters, \(K_{0}=0.3\times 10^{23}\,\mathrm{cm^{2}\,s^{-1}}\), \(a=1.80\), \(b=0.989\), \(B_{0}=5\) nT, \(\alpha=70^{\circ}\), and \(A=1\), selected based on the points with \(\chi^{2}/\mathrm{dof}\) greater than 1 in the last panel of Figure 7. The travel time for each energy bin lies between 1.87 and 163 days, and it decreases as the particle energy increases.
Figure 5: Results of the best-fit parameters, \(K_{0}\), \(a\) and \(b\) for two periods, \(A=-1\) (square) and \(A=1\) (triangle), and the corresponding reduced chi square in the last panel, here the dof equals to 34. The vertical dashed lines indicate the beginning and the end of the reversal epoch.
Figure 6: The two polar magnetic fields (above \(55^{\circ}\)), northern pole (solid line) and southern pole (dotted line). The wave line is 10 days averaged, Data is from the Wilcox Solar Observatory.
With this information added, the best-fit parameters and the reduced \(\chi^{2}\) are shown in Figure 8. The three diffusion parameters have the same trends as in Figure 7, but the extreme values occur in November 2014, and the index \(b\) is stable as before. Compared with the time of the extreme values in Figure 7, the difference is nearly nine months, which is exactly the time lag. The reduced \(\chi^{2}\) in the last panel of Figure 8 shows that 60% of the values are less than 1, 90% are less than 2, and the single largest value is 3.7. Therefore, we conclude that these best-fit parameters are reliable.
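The travel-time correction amounts to evaluating the weight at an earlier epoch for each energy bin; a minimal sketch (ours) of that bookkeeping, reusing the illustrative `polarity_weight` from above, is:

```python
import numpy as np

def energy_dependent_weight(days_since_flip_at_obs, travel_time_days, weight_fn):
    """Evaluate the mixing weight at the epoch each energy bin actually sampled:
    the observation time minus the energy-dependent travel time (1.87-163 days
    in the text, shorter for higher energies)."""
    shifted = days_since_flip_at_obs - np.asarray(travel_time_days, dtype=float)
    return np.array([weight_fn(t) for t in shifted])

# e.g. P_bins = energy_dependent_weight(300.0, [163.0, 60.0, 10.0, 1.87], polarity_weight)
```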
## 5 Conclusion
This study examines the solar modulation of Galactic Cosmic Rays (GCRs) and presents a new Local Interstellar Spectrum (LIS) of protons during the activity of solar cycle 24, shown in Figure 3. The final spectrum near the Earth during the period of maximum solar activity is obtained using the weight factor in Equation 14, defined as the ratio of the space occupied by the N-pole magnetic field in the heliosphere to the total space; the final spectrum is a weighted sum of the two spectra with opposite polarities. The best-fit diffusion parameters are then determined, and their trends are shown in Figure 7. The normalization of the diffusion coefficient \(K_{0}\) reached a minimum in February 2014, while the index \(a\) reached its maximum at the same time. In contrast, the index \(b\) does not exhibit a regular change. However, because particles with different energies spend different times in the heliosphere, the field-occupation ratio needs to be evaluated separately for each energy. The modified best-fit parameters are shown in Figure 8, which again exhibit a single extreme point, but this time in November 2014, nine months later than before. This delay represents the time taken for the solar magnetic field to act on the energy spectrum, namely the time lag.
The time lag has been discussed with different methods in the literature. For example, Fiandrini et al. (2021), in which the weight during the maximum solar activity period was described in a different way from ours, established a relationship between the parameters and the sunspot number (SSN) at the epoch \(t-\Delta T_{lag}\), and found that the curve of \(K_{0}\) versus SSN approaches a single-valued function. The \(\Delta T_{lag}\) given by that method is about 11 months, which is comparable with the 9 months found in our study.
To improve the reliability of the LIS in future work, there are two main steps that should be taken. Firstly, it is important to have an overlapping energy range between the Voyager data and the data near the Earth. Currently, the AMS-02 data are used, but there is no overlapping energy range between the Voyager and AMS-02 data. To address this issue, data from PAMELA can be utilized.
The energy range of the PAMELA data is \(0.088\sim 46.5\) GeV, which covers the energy range of interest. However, PAMELA data are only available during the negative polarity, which limits the ability to obtain the LIS with positive polarity; it is therefore necessary to wait for PAMELA to release new observations. Secondly, to account for the difference in particle travel times, it is important to consider the travel time of particles corresponding to the fitted diffusion parameters rather than the average parameters. By doing so, the reliability of the results can be improved. In summary, the two next steps to improve the reliability of the LIS are to utilize the PAMELA data and to consider the travel time of particles corresponding to the diffusion parameters.
## 6 Acknowledgements
We thank E. Fiandrini and N. Tomassetti for valuable discussions. This work is supported by the National Natural Science Foundation of China (NSFC) grants 12205388, 12005313, 42150105, and 12261141691.
|
2310.04558 | VTON-IT: Virtual Try-On using Image Translation | Virtual Try-On (trying clothes virtually) is a promising application of the
Generative Adversarial Network (GAN). However, it is an arduous task to
transfer the desired clothing item onto the corresponding regions of a human
body because of varying body size, pose, and occlusions like hair and
overlapped clothes. In this paper, we try to produce photo-realistic translated
images through semantic segmentation and a generative adversarial
architecture-based image translation network. We present a novel image-based
Virtual Try-On application VTON-IT that takes an RGB image, segments desired
body part, and overlays target cloth over the segmented body region. Most
state-of-the-art GAN-based Virtual Try-On applications produce unaligned
pixelated synthesis images on real-life test images. However, our approach
generates high-resolution natural images with detailed textures on such variant
images. | Santosh Adhikari, Bishnu Bhusal, Prashant Ghimire, Anil Shrestha | 2023-10-06T19:47:20Z | http://arxiv.org/abs/2310.04558v2 | # VTON-IT: Virtual Try-On using Image Translation
###### Abstract
Virtual Try-On (trying clothes virtually) is a promising application of the Generative Adversarial Network (GAN). However, it is an arduous task to transfer the desired clothing item onto the corresponding regions of a human body because of varying body sizes, poses, and occlusions like hair and overlapped clothes. In this paper, we try to produce photo-realistic translated images through semantic segmentation and a generative adversarial architecture-based image translation network. We present a novel image-based Virtual Try-On application, VTON-IT, that takes an RGB image, segments the desired body part, and overlays the target cloth over the segmented body region. Most state-of-the-art GAN-based Virtual Try-On applications produce unaligned, pixelated synthesized images on real-life test images. However, our approach generates high-resolution natural images with detailed textures on such variant images. 1
Footnote 1: Details of the implementation, algorithms and codes, are publicly available on Github: [https://github.com/shuntos/VITON-IT](https://github.com/shuntos/VITON-IT)
**Keywords: Virtual Try On, Human Part Segmentation, Image Translation, Semantic Segmentation, Generative Adversarial Network** |
2302.03026 | Sampling-Based Accuracy Testing of Posterior Estimators for General
Inference | Parameter inference, i.e. inferring the posterior distribution of the
parameters of a statistical model given some data, is a central problem to many
scientific disciplines. Generative models can be used as an alternative to
Markov Chain Monte Carlo methods for conducting posterior inference, both in
likelihood-based and simulation-based problems. However, assessing the accuracy
of posteriors encoded in generative models is not straightforward. In this
paper, we introduce `Tests of Accuracy with Random Points' (TARP) coverage
testing as a method to estimate coverage probabilities of generative posterior
estimators. Our method differs from previously-existing coverage-based methods,
which require posterior evaluations. We prove that our approach is necessary
and sufficient to show that a posterior estimator is accurate. We demonstrate
the method on a variety of synthetic examples, and show that TARP can be used
to test the results of posterior inference analyses in high-dimensional spaces.
We also show that our method can detect inaccurate inferences in cases where
existing methods fail. | Pablo Lemos, Adam Coogan, Yashar Hezaveh, Laurence Perreault-Levasseur | 2023-02-06T18:59:25Z | http://arxiv.org/abs/2302.03026v2 | # Sampling-Based Accuracy Testing of Posterior Estimators for General Inference
###### Abstract
Parameter inference, i.e. inferring the posterior distribution of the parameters of a statistical model given some data, is a central problem to many scientific disciplines. Posterior inference with generative models is an alternative to methods such as Markov Chain Monte Carlo, both for likelihood-based and simulation-based inference. However, assessing the accuracy of posteriors encoded in generative models is not straightforward. In this paper, we introduce 'distance to random point' (DRP) coverage testing as a method to estimate coverage probabilities of generative posterior estimators. Our method differs from previously-existing coverage-based methods, which require posterior evaluations. We prove that our approach is necessary and sufficient to show that a posterior estimator is optimal. We demonstrate the method on a variety of synthetic examples, and show that DRP can be used to test the results of posterior inference analyses in high-dimensional spaces. We also show that our method can detect non-optimal inferences in cases where existing methods fail.
Machine Learning, Bayesian Inference
Inference, Bayesian, Inference, Bayesian, Inference, Bayesian, Inference, Bayesian, Inference, Bayesian, Inference, Bayesian, Inference, Inference, Bayesian, Bayesian, Inference, Bayesian, Inference, Bayesian, Inference, Bayesian, Inference, Bayesian, Inference, Bayesian, Inference, Bayesian, Inference, Bayesian, Inference, Bayesian, Inference, Bayesian, Inference Inference, Bayesian, Inference, Bayesian, Inference, Bayesian, Inference, Bayesian, Inference, Bayesian, Inference, Bayesian, Inference, Bayesian, Inference, Bayesian, Inference, Bayesian, Inference, Bayesian, Inference Inference, Bayesian Inference, Bayesian, Inference, Bayesian, Inference, Bayesian, Inference, Bayesian Inference, Bayesian, Inference, Bayesian, Inference Inference, Bayesian, Inference, Bayesian, Inference Inference, Bayesian, Inference, Bayesian Inference, Bayesian, Inference Inference, Bayesian Inference, Bayesian Inference, Bayesian Inference, Bayesian Inference, Bayesian Inference, Bayesian Inference, Bayesian Inference, Bayesian Inference Inference, Bayesian Inference, Bayesian Inference, Bayesian Inference Inference, Bayesian Inference, Bayesian Inference Inference, Bayesian Inference, Bayesian Inference, Bayesian Inference, Bayesian Inference, Bayesian Inference, Bayesian Inference Inference, Bayesian Inference Inference Inference, Bayesian Inference Inference Inference Inference Inference, Bayesian
Validation of the estimated posterior is often performed using coverage probabilities (but see also Guo et al. (2017)), relying on the evaluation of the density of the posteriors (Schall, 2012; Prangle et al., 2013; Cranmer et al., 2020; Hermans et al., 2021). Coverage probabilities measure the proportion of the time that a certain interval contains the true parameter value. However, coverage probability calculations based on evaluations of the learned posterior distributions are not applicable to samples obtained from a generative model, where such evaluations are not available. Furthermore, and more importantly, these coverage probability tests are a necessary but not sufficient diagnostic to assess the accuracy of the estimated posterior.

Although other works have suggested alternative validation methods for SBI (Talts et al., 2018; Lueckmann et al., 2021; Dalmasso et al., 2020; Linhart et al., 2022; Deistler et al., 2022), none of these can be applied to generative models in arbitrary dimensions when we do not have access to posterior evaluations.

The goal of this paper is to introduce a framework for testing the accuracy of parameter inference using only samples from the true joint distribution of the data \(x\) and the parameters of interest \(\theta\), \(p(x,\theta)\), and samples from the estimated posterior distribution \(\hat{p}(\theta|x)\). We begin by introducing all necessary notation in § 2. We then introduce our method in § 3. We present our experiments in § 4, and summarize our findings in § 5. An example implementation of our code is available upon request.
## 2 Formalism
In this section, we introduce some basic concepts and build up to our key theoretical result (Theorem 3). The coverage testing procedure introduced in the following section is essentially a practical implementation of this theorem.
### Notation
As stated in the introduction, we are interested in continuous-valued parameters \(\theta\in U\subset\mathbb{R}^{n}\) and observations \(x\in V\subset\mathbb{R}^{m}\) taken from (subsets of) Euclidean space, with joint density \(p(\theta,x)\). We denote our posterior estimator by \(\hat{p}(\theta|x)\) (which could be a neural network or MCMC sampler, for example) and assume we can also use it to generate samples of \(\theta\).
With these preliminaries, we make two basic definitions:
**Definition 1**.: _A posterior estimator \(\hat{p}(\theta|x)\) is_ **optimal** _if_
\[\hat{p}(\theta|x)=p(\theta|x)\quad\forall(x,\theta)\sim p(x,\theta)\,. \tag{1}\]
**Definition 2**.: _A **credible region generator**\(\mathcal{G}:\hat{p},\alpha,x\mapsto W\subset U\) for a given credibility level \(\alpha\) and observation \(x\) is a function satisfying_
\[\int_{\mathcal{G}(\hat{p},\alpha,x)}\mathrm{d}\theta\,\hat{p}(\theta|x)=1- \alpha\,. \tag{2}\]
Note that there are an infinite number of such generators. A commonly-used one is the highest-posterior density region generator, which produces the region with mass \(1-\alpha\) occupying the smallest-possible volume in \(U\)1.
Footnote 1: Note this is ill-defined for the uniform density function.
Next, we introduce two central definitions for this work, adapted from Hermans et al. (2021) (henceforth H21).
**Definition 3**.: _The **coverage probability** for a generator \(\mathcal{G}\), credibility level \(\alpha\) and datum \(x\) is_
\[\mathrm{CP}(\hat{p},\alpha,x,\mathcal{G})=\mathbb{E}_{p(\theta|x)}\left[ \mathds{1}\left(\theta\in\mathcal{G}(\hat{p},\alpha,x)\right)\right]\,. \tag{3}\]
**Definition 4**.: _The **expected coverage probability** for a generator \(\mathcal{G}\) and credibility level \(\alpha\) is the coverage probability averaged over the data distribution:_
\[\mathrm{ECP}(\hat{p},\alpha,\mathcal{G})=\mathbb{E}_{p(x)}\left[\mathrm{CP}( \hat{p},\alpha,x,\mathcal{G})\right]\,. \tag{4}\]
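To make Definitions 3 and 4 concrete, the following is a minimal numerical sketch (not part of the paper's code) that estimates the expected coverage probability by Monte Carlo in a one-dimensional toy problem where the true posterior is available in closed form. The model choices here, a standard-normal prior, a unit-variance Gaussian likelihood, and central credible intervals as the region generator, are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Toy joint model: theta ~ N(0, 1), x | theta ~ N(theta, 1).
# The true posterior is then p(theta | x) = N(x / 2, 1 / 2).
def sample_joint(n):
    theta = rng.normal(0.0, 1.0, n)
    x = rng.normal(theta, 1.0)
    return theta, x

def central_region(alpha, x):
    # Central (1 - alpha) credible interval of the (here, exact) estimator.
    post = norm(loc=x / 2.0, scale=np.sqrt(0.5))
    return post.ppf(alpha / 2.0), post.ppf(1.0 - alpha / 2.0)

def expected_coverage(alpha, n_sims=20_000):
    # ECP = E_{p(theta, x)}[ 1(theta in credible region for x) ], Definition 4.
    theta, x = sample_joint(n_sims)
    lo, hi = central_region(alpha, x)
    return np.mean((theta >= lo) & (theta <= hi))

for alpha in (0.1, 0.32, 0.5):
    print(f"alpha={alpha:.2f}  ECP={expected_coverage(alpha):.3f}  target={1 - alpha:.2f}")
```

Because the estimator used above is the true posterior, the empirical ECP matches \(1-\alpha\), anticipating Theorem 1 below.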
### Coverage probability
We now demonstrate some basic facts about estimators with correct coverage probabilities. We begin with a straightforward result:
**Theorem 1**.: _The posterior has coverage probability \(\mathrm{CP}(p,\alpha,x,\mathcal{G})=1-\alpha\) for all values of \(x\) and any credible region generator \(\mathcal{G}(\hat{p},\alpha,x)\)._
**Proof** Substituting \(\hat{p}(\theta|x)=p(\theta|x)\), the definition of coverage probability becomes:
\[\mathrm{CP}(p,\alpha,x,\mathcal{G}) =\mathbb{E}_{p(\theta|x)}\left[\mathds{1}\left(\theta\in\mathcal{G}(p,\alpha,x)\right)\right] \tag{5}\] \[=\int_{\mathcal{G}(p,\alpha,x)}\mathrm{d}\theta\,p(\theta|x)\] \[=1-\alpha\,,\]
where the last line follows from the definition of a credible region. \(\blacksquare\)
It follows trivially from this that the posterior has \(\mathrm{ECP}(p,\alpha,\mathcal{G})=1-\alpha\) as well.
Next, we prove the more interesting reverse direction of this theorem. To do this we introduce the concept of a _positionable credible region generator_\(\mathcal{P}_{\theta_{r}}(\hat{p},\alpha,x)\) that generates credible regions positioned at \(\theta_{r}\). Technically, this means \(\lim_{\alpha\to 1}\mathcal{P}_{\theta_{r}}(\hat{p},\alpha,x)=\mathds{1}( \theta=\theta_{r})\) for all \(x\) and \(\theta_{r}\). The regions' shapes are not important: they could be, for example, balls or hypercubes. We also define the average of a function \(f(\theta)\) over a credible region \(\Theta\) positioned at \(\theta_{r}\) as
\[\overline{f(\cdot)}(\Theta):=\frac{1}{\mathrm{vol}[\Theta]}\int_{\Theta} \mathrm{d}\theta\,f(\theta)\,. \tag{6}\]
This is itself a probability density function over \(\theta_{r}\).
**Theorem 2**.: _Suppose the coverage probability of a posterior estimator is equal to \(1-\alpha\) for a positionable credible region generator \(\mathcal{P}_{\theta_{r}}\) for all \(\theta_{r}\), \(x\) and \(\alpha\). Further, suppose that \(\hat{p}(\cdot|x)\) has support everywhere in the parameter space for all \(x\). Then \(\hat{p}(\cdot|x)=p(\cdot|x)\)._
**Proof** Define \(\Theta:=\mathcal{P}_{\theta_{r}}(\hat{p},\alpha,x)\) for clarity.
The integral in the definition of the coverage probability can be written as
\[\begin{split}\mathrm{CP}(\hat{p},\alpha,x,\mathcal{P}_{\theta_{ r}})&=1-\alpha\\ &=\int_{\Theta}\mathrm{d}\theta\,p(\theta|x)\\ &=\mathrm{vol}[\Theta]\,\overline{p(\cdot|x)}(\Theta)\,,\end{split} \tag{7}\]
where first equality follows by assumption. Since we've assumed \(\hat{p}(\cdot|x)\) has support everywhere, the volume of the credible region is positive. By the definition of a credible region, we also have
\[1-\alpha=\int_{\Theta}\mathrm{d}\theta\,\hat{p}(\theta|x)=\mathrm{vol}[\Theta ]\,\overline{\hat{p}(\cdot|x)}(\Theta)\,. \tag{8}\]
Setting this equal to the previous expression yields \(\overline{\hat{p}(\cdot|x)}(\Theta)=\overline{p(\cdot|x)}(\Theta)\), which holds for all \(\theta_{r}\) and \(x\) by assumption. Taking \(\alpha\to 1\) (i.e., making \(\Theta\) small) gives the desired result. \(\blacksquare\)
### Expected coverage probability
The previous result is still not very useful, since it is computationally very expensive to calculate the coverage probability of a posterior estimator. Practically, doing so requires producing histograms of the samples from \(p(\theta,x)\) in \(x\), which may be high-dimensional. However, as pointed out in H21, it's much simpler to compute the _expected_ coverage probability.
The next theorem is our main theoretical result: correct expected coverage is enough to verify the posterior estimator is optimal, as long as it is correct for any function \(\theta_{r}(x)\) defining the positions of the credible regions.
**Theorem 3**.: _Suppose the expected coverage probability of \(\hat{p}\) is equal to \(1-\alpha\) for a positionable credible region generator \(\mathcal{P}_{\theta_{r}}\) for all \(\alpha\), \(x\), and \(\theta_{r}(\cdot)\) assigning a position to the credible regions as a function of \(x\). Further suppose that \(\hat{p}(\cdot|x)\) and \(p(\theta,x)\) are nonzero for all \(\theta\) and \(x\). Then \(\hat{p}(\cdot|x)=p(\cdot|x)\)._
**Proof** Again, let \(\Theta:=\mathcal{P}_{\theta_{r}}(\hat{p},\alpha,x)\) for clarity.
First, we leverage the definition of credible regions to find an expression for the volume of \(\Theta\):
\[1-\alpha=\int_{\Theta}\mathrm{d}\theta\,\hat{p}(\theta|x)=\mathrm{vol}[\Theta]\,\overline{\hat{p}(\cdot|x)}(\Theta)\,, \tag{9}\]
which implies
\[\mathrm{vol}[\Theta]=\frac{1-\alpha}{\overline{\hat{p}(\cdot|x)}(\Theta)}\,. \tag{10}\]
This allows us to expand and simplify the expression for the expected coverage:
\[\begin{split}\mathrm{ECP}(\hat{p},\alpha,\mathcal{P}_{\theta_{r}})&=1-\alpha\\ &=\int\mathrm{d}x\,p(x)\int_{\Theta}\mathrm{d}\theta\,p(\theta|x)\\ &=\int\mathrm{d}x\,p(x)\,\mathrm{vol}[\Theta]\,\overline{p(\cdot|x)}(\Theta)\\ &=(1-\alpha)\int\mathrm{d}x\,p(x)\,\frac{\overline{p(\cdot|x)}(\Theta)}{\overline{\hat{p}(\cdot|x)}(\Theta)}\,.\end{split} \tag{11}\]
Canceling the factors of \(1-\alpha\) gives that the integral in the last line is equal to \(1\).
By assumption, this holds for _any_ choice of position function \(\theta_{r}(x)\). We can therefore take the functional derivative of the integral with respect to \(\theta_{r}(x)\). Recalling that the averages in the integrand depend on \(\theta_{r}\), we obtain
\[0 =\frac{\delta}{\delta\theta_{r}(x)}\int\mathrm{d}x\,p(x)\,\frac{\overline{p(\cdot|x)}(\Theta)}{\overline{\hat{p}(\cdot|x)}(\Theta)} \tag{12}\] \[=\int\mathrm{d}x\,\delta\theta_{r,i}(x)\,p(x)\frac{\partial}{\partial\theta_{r,i}}\left(\frac{\overline{p(\cdot|x)}(\Theta)}{\overline{\hat{p}(\cdot|x)}(\Theta)}\right)\] (13) \[=\int\mathrm{d}x\,\delta\theta_{r,i}(x)\,p(x)\,\frac{\overline{p(\cdot|x)}(\Theta)}{\overline{\hat{p}(\cdot|x)}(\Theta)}\] \[\qquad\qquad\times\left[\frac{\partial\log\overline{p(\cdot|x)}(\Theta)}{\partial\theta_{r,i}}-\frac{\partial\log\overline{\hat{p}(\cdot|x)}(\Theta)}{\partial\theta_{r,i}}\right]\,, \tag{14}\]
where the \(i\) subscript indexes the components of \(\theta_{r}\). Since this expression must hold for all variations \(\delta\theta_{r,i}\), the integrand must evaluate to zero (i.e., the Euler-Lagrange equation must be satisfied). By assumption, the factor outside the square brackets in the integrand is nonzero, implying
\[\frac{\partial\log\overline{p(\cdot|x)}(\Theta)}{\partial\theta_{r,i}}=\frac{ \partial\log\overline{\hat{p}(\cdot|x)}(\Theta)}{\partial\theta_{r,i}}\,. \tag{15}\]
This implies \(\log\overline{p(\cdot|x)}(\Theta)=\log\overline{\hat{p}(\cdot|x)}(\Theta)+c(x)\), for some \(x\)-dependent integration constant \(c\). But since the functions inside the logarithms are themselves densities, we have \(c(\cdot)=0\). Taking the limit \(\alpha\to 1\) gives \(\hat{p}(\theta|x)=p(\theta|x)\). \(\blacksquare\)
The coverage testing method we will introduce in the next section is effectively a practical implementation of this theorem.
## 3 Our method
With our main theoretical result proven (cf. Theorem 3), in this section we use it to first explain the blind spots of typical coverage probability calculations and then introduce our new coverage checking procedure.
### High posterior density coverage testing
Before introducing our method, we first discuss HPD coverage.
HPD credible regions are often used to assess coverage (Hermans et al., 2021; Rozet et al., 2021; Miller et al., 2022; Deistler et al., 2022; Tejero-Cantero et al., 2020). Perhaps the most intuitive way of calculating expected coverage probability using HPD regions is to compute such a region for every value of \(\alpha\), then calculate the expected coverage using (3). In practice, however, there is a more efficient calculation of expected coverage probabilities, which is derived from the following:
**Definition 5**.: _A pair \((\theta^{*},x^{*})\) and a posterior estimator \(\hat{p}(\theta|x)\) uniquely define an HPD confidence region as:_
\[\Theta_{\mathrm{HPD}}\left(x^{*},\theta^{*},\hat{p}\right):=\left\{\theta \in U\mid\hat{p}(\theta|x^{*})\geq\hat{p}(\theta^{*}|x^{*})\right\}. \tag{16}\]
_This, in turn, defines a corresponding_ **HPD confidence level** \(1-\tilde{\alpha}_{\mathrm{HPD}}(\hat{p},\theta^{*},x^{*})\). We denote the generator of HPD credible regions as \(\mathcal{H}\)._
We can then rederive an important result for this HPD confidence level:
**Lemma 1**.: _We can calculate the ECP of the \(1-\alpha\) highest posterior density regions as:_
\[\mathrm{ECP}(\hat{p},\alpha)=\mathbb{E}_{p(\theta,x)}\left[\mathds{1}\left( \tilde{\alpha}_{\mathrm{HPD}}(\hat{p},\theta,x)\geq\alpha\right)\right]. \tag{17}\]
**Proof** Firstly, we notice that: \[\theta^{*}\in\Theta_{\mathrm{HPD}}\left(x^{*},\alpha,\hat{p}\right)\Leftrightarrow \tilde{\alpha}_{\mathrm{HPD}}(\hat{p},\theta^{*},x^{*})\geq\alpha.\] (18) This follows from the fact that, if \(\theta^{*}\in\Theta_{\mathrm{HPD}}\left(x^{*},\alpha,\hat{p}\right)\), then the HPD confidence region defined by \((\theta^{*},x^{*})\) is contained in \(\Theta_{\mathrm{HPD}}\left(x^{*},\alpha,\hat{p}\right)\). Then, from (4), it follows that (17) is true. \(\blacksquare\)
This result can be used in practice to calculate the HPD ECP from samples of the true joint distribution \(p(\theta,x)\), as shown in Algorithm 1. As previously discussed, this algorithm requires explicit evaluations of the posterior estimator. We try to provide more intuitive connections between both definitions in § A.
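Algorithm 1 itself is not reproduced in this excerpt; the following is one possible sample-based implementation of Lemma 1, written as a hedged sketch. It assumes the estimator exposes both a sampler and explicit (log-)density evaluations, and the function names and signatures are placeholders rather than the paper's API.

```python
import numpy as np

def hpd_expected_coverage(theta_true, xs, sample_posterior, log_prob, n_samples=1000):
    """Estimate HPD expected coverage (Lemma 1) on a validation set.

    theta_true: (N, D) true parameters; xs: the corresponding observations.
    sample_posterior(x, n) -> (n, D) samples from the estimator.
    log_prob(theta, x) -> estimator log-density (explicit evaluations required).
    Returns (credibility levels 1 - alpha, empirical ECP).
    """
    # 1 - alpha_HPD for each pair: fraction of posterior samples whose density
    # exceeds the density at the true parameter value (Definition 5).
    one_minus_alpha_hpd = []
    for t, x in zip(theta_true, xs):
        samples = sample_posterior(x, n_samples)
        lp_samples = np.array([log_prob(s, x) for s in samples])
        one_minus_alpha_hpd.append(np.mean(lp_samples > log_prob(t, x)))
    one_minus_alpha_hpd = np.array(one_minus_alpha_hpd)

    levels = np.linspace(0.0, 1.0, 101)  # credibility levels 1 - alpha
    ecp = np.array([(one_minus_alpha_hpd <= lv).mean() for lv in levels])
    return levels, ecp
```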
As is well-known in the literature, estimating the ECP with HPD regions is not enough to demonstrate a posterior estimator is optimal. Theorem 3 reveals why: by definition, the HPD region generator is not positionable. Positionability is critical to the proof of the theorem, since it requires varying the position function \(\theta_{r}(x)\).
Figure 1: A graphical illustration of our proposed method for coverage probabilities. _Top left_: we use each simulation from the validation set to generate a number of samples \(n\) from the posterior estimator. _Top right_: We pick a random location in parameter space as our reference \(\theta_{r}\). _Bottom left_: We calculate the distance \(d_{i}\) between each sample and \(\theta_{r}\). _Bottom right_: We calculate the distance \(d_{\mathrm{true}}\) between the true value of the parameters \(\theta_{\mathrm{true}}\) and \(\theta_{r}\).

To concretely demonstrate how considering only HPD coverage can fail, we consider the interesting case discussed in H21 of \(\hat{p}(\theta|x)=p(\theta)\). From the definition of \(\mathrm{ECP}\),
\[\begin{split}\mathrm{ECP}(\hat{p},\alpha,\mathcal{H})&=\mathbb{E}_{p(x,\theta)}[\mathds{1}(\theta\in\mathcal{H}(\hat{p},\alpha))]\\ &=\mathbb{E}_{p(\theta)}[\mathds{1}(\theta\in\mathcal{H}(\hat{p},\alpha))]\\ &=\int_{\mathcal{H}(\hat{p},\alpha)}\mathrm{d}\theta\,p(\theta)\\ &=1-\alpha\,.\end{split} \tag{19}\]
In the second line, we used the fact that the HPD generator is independent of \(x\) in this case. We recognize the third line as the definition of a credible region for the prior, yielding the fourth line. This means that \(\hat{p}(\theta|x)\) has perfect HPD ECP in this case.
We now introduce a coverage testing method that remedies such blind spots.
### Distance to random point coverage testing
Our method generates spherical credible regions around position \(\theta_{r}\):
**Definition 6**.: _A pair \((\theta^{*},x^{*})\) and a posterior estimator \(\hat{p}(\theta|x)\) uniquely define a **distance to random point (DRP)** credible region for a given distance metric \(d\) and reference point \(\theta_{r}\):_
\[\Theta_{\mathrm{DRP}}(x^{*},\theta^{*},\hat{p},\theta_{r})=\{\theta\in U\mid d (\theta,\theta_{r})\leq d(\theta^{*},\theta_{r})\} \tag{20}\]
_This, in turn, defines a corresponding_ **DRP confidence level** \(1-\tilde{\alpha}_{\mathrm{DRP}}(\hat{p},\theta^{*},\theta_{r},x^{*},d)\). We call the generator of DRP regions \(\mathcal{D}_{\theta_{r}}\)._
We can calculate expected coverage similarly to the HPD case:
**Lemma 2**.: _We can calculate the ECP of the \(1-\alpha\) DRP regions as:_
\[\mathrm{ECP}(\hat{p},\alpha)=\mathbb{E}_{p(\theta,x)}\left[\mathds{1}(\tilde{ \alpha}_{\mathrm{DRP}}(\hat{p},\theta^{*},\theta_{r},x^{*},d)\geq\alpha)\right]. \tag{21}\]
**Proof** Let \(\mathcal{D}_{\theta_{r}}(x^{*},\alpha,\hat{p})\) be a ball centered at \(\theta_{r}\) with radius \(R(\alpha)\) and credibility \(1-\alpha\). Similarly, the DRP region defined by \((\theta^{*},x^{*})\) has the same center, radius \(d(\theta^{*},\theta_{r})\), and credibility \(1-\tilde{\alpha}\) for some \(\tilde{\alpha}\). It then follows that:
\[\theta^{*}\in\mathcal{D}_{\theta_{r}}(x^{*},\alpha,\hat{p})\Leftrightarrow d (\theta^{*},\theta_{r})\leq R(\alpha). \tag{22}\]
Since \(R\) is a monotonic function of \(\alpha\) and the regions are centered on the same point, we have
\[d(\theta^{*},\theta_{r})<R(\alpha)\Leftrightarrow\tilde{\alpha}\geq\alpha\,. \tag{23}\]
Then by (4) we have (21).
With this, we have everything we need to formulate our algorithm, which is presented in Algorithm 2. While similar to Algorithm 1, this algorithm has three key differences:
* DRP implements Theorem 3's requirement that coverage holds for all possible ways of choosing the positions of the credible regions by randomly sampling \(\theta_{r}\) from some distribution \(\tilde{p}(\theta_{r}|x)\) that can depend on \(x\).
* DRP probes credible regions of smaller size (i.e., larger \(\alpha\)) as the number of posterior samples, simulations and reference points tested is increased. Following the logic of the proof of Theorem 3, this means it asymptotically tests whether the averages of \(\hat{p}(\theta|x)\) and \(p(\theta|x)\) match on smaller and smaller balls.
* DRP does not require explicit evaluations of the posterior estimator \(\hat{p}\): it only requires calculating distances between parameters sampled from \(\hat{p}\) and \(\theta_{r}\).
In the following section, we test our method in a series of experiments and compare its performance with that of HPD coverage probabilities.
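Algorithm 2 is likewise not reproduced verbatim here; the following is a minimal sketch of the DRP coverage test as described above, using only samples and distances (no density evaluations). It assumes the parameters have already been normalized, uses the Euclidean distance, and by default draws reference points uniformly in the unit hypercube; all names are illustrative.

```python
import numpy as np

def drp_expected_coverage(theta_true, posterior_samples, references=None, seed=0):
    """DRP expected coverage from samples alone (no density evaluations).

    theta_true:        (N, D) true parameters of the validation simulations.
    posterior_samples: (N, n, D) samples from the estimator, n per simulation.
    references:        (N, D) reference points theta_r; drawn uniformly in
                       [0, 1]^D if not given (parameters assumed normalized).
    Returns (credibility levels 1 - alpha, empirical ECP).
    """
    rng = np.random.default_rng(seed)
    n_sims, n_samples, dim = posterior_samples.shape
    if references is None:
        references = rng.uniform(0.0, 1.0, size=(n_sims, dim))

    # Distances from each posterior sample and from the truth to theta_r.
    d_samples = np.linalg.norm(posterior_samples - references[:, None, :], axis=-1)
    d_true = np.linalg.norm(theta_true - references, axis=-1)

    # 1 - alpha_DRP: fraction of samples inside the ball of radius
    # d(theta_true, theta_r) centred on theta_r (Definition 6).
    f = np.mean(d_samples <= d_true[:, None], axis=1)

    levels = np.linspace(0.0, 1.0, 101)
    ecp = np.array([(f <= lv).mean() for lv in levels])
    return levels, ecp
```

For an optimal estimator the per-simulation fractions \(f\) above are uniform on \([0,1]\), so the returned ECP curve lies on the diagonal; systematic departures indicate bias or over-/underconfidence.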
## 4 Experiments
We apply our algorithm, described in Algorithm 2, to three different experiments. For all experiments, we normalize all parameters \(\theta\) to the range \([0,1]\), and unless otherwise specified, we generate reference points uniformly in the \(D\)-dimensional hypercube \(\theta_{r}\in[0,1]^{D}\), where \(D\) is the dimensionality of the parameter space. We use the Euclidean or L2 distance as a metric to calculate DRP regions.
### Gaussian Toy Model
As a first example, we can use a simple Gaussian toy model. In this model, we assume that all the posterior distributions
are Gaussian. Therefore, we can generate samples from the posterior for a validation simulation from the estimated mean and covariance matrix. We first generate 'simulations' by uniformly sampling in our parameter space:
\[\theta_{\mathrm{true}}\sim\mathcal{U}(-5,5). \tag{24}\]
We also randomly generate the diagonal elements of the covariance matrices \(\Sigma\) of our posterior estimates (assumed to have no off-diagonal elements):
\[\log\sigma\sim\mathcal{U}(-5,-1). \tag{25}\]
To validate, we also need to know the mean of the posterior distributions. We consider three cases:
* Firstly, we draw these from a normal distribution \(\mathcal{N}(\theta_{\mathrm{true}},\Sigma)\). This means that the coverage probabilities should show a uniform distribution. We call this the _correct case_.
* Secondly, we instead generate the posterior samples using covariances \(0.5\Sigma\) and \(2\Sigma\). This means that the posterior samples come from a distribution that is too narrow (wide), and are therefore overconfident (underconfident).
* Lastly, we want to build a _biased case_. For this, we pick the means to be equal to: \[\theta_{\mathrm{true}}-\mathrm{sign}(\theta_{\mathrm{true}})\cdot Z\left(1-\frac{|\theta_{\mathrm{true}}|}{5}\right)\cdot\sigma,\] (26) where Z is the inverse survival function. The idea with this example is to create a position-dependent bias: the further the true value is from the origin, the more biased the posterior is. We have specifically designed this bias in a way that HPD coverage probabilities will be blind to it. However, the point of this example is to show that there are biases that HPD can be blind to, but that the random nature of DRP should be able to detect.

Figure 3: An example of one of our lensing simulations. The top panels show the source plane that we are trying to infer, while the bottom panels show the distorted light. From left to right, the plot shows truth, mean and standard deviation of the samples from our posterior estimator (in the case of this figure, the 'exact' estimator), and the residuals. The noise in the observations is set to 1 on the color scales shown here.

Figure 2: Results on the Gaussian toy model for all four cases described in § 4.1. The red line shows the method presented in this paper, while the blue shows the HPD region.
For each of these cases, we want to compare our method to the HPD coverage probability test. Because in this toy model we know the correct posterior, we can easily compute both HPD and DRP coverage probabilities. To pick the DRP reference points, we use the prior (\(\tilde{p}(\theta_{r}|x)=p(\theta_{r})\)).
The results for our Gaussian toy model are shown in Fig. 2. In each panel, the \(x\)-axis shows the credibility level \(1-\alpha\), while the \(y\)-axis shows the expected coverage \(\mathrm{ECP}(\hat{p},\alpha)\). For an optimal posterior estimator, \(\mathrm{ECP}(\hat{p},\alpha)=1-\alpha,\ \forall\alpha\in(0,1)\) as described in § 2, which would then lead to the black dashed diagonal line. We see in the first panel that this is indeed the case for the 'correct' case, which is optimal by construction. We found consistent results amongst all values of \(D\) we tested, going up to \(D=1000\).
The second and third panels show the over- and underconfident cases, respectively. We see how these cases lead to coverage plots that differ from those of the HPD method. This is not entirely unexpected: for underconfident estimators, the DRP regions from randomly selected points are more likely to cover approximately half of the posterior estimator \(\alpha\sim 0.5\), while for overconfident estimators, they are likely to cover either very little \(\alpha\sim 1\) or a lot \(\alpha\sim 0\). We expand this intuition, including some figures, in § B. Finally, in the fourth panel, we see how the biased case cannot be detected by the HPD region but is detectable by DRP. This shows how, as explained in § 2, \(\mathrm{ECP}(\hat{p},\alpha)=1-\alpha\) does not mean the posterior is optimal for HPD regions, but it does for DRP regions.
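As a usage illustration of the 'correct' case described above, the following hedged sketch constructs the toy posteriors and passes them to a DRP routine such as the `drp_expected_coverage` sketch given earlier; the dimensionality and normalization constants used here are assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n_sims, n_samples, dim = 500, 1000, 5

# Toy 'simulations': true parameters drawn uniformly, per-simulation sigmas.
theta_true = rng.uniform(-5.0, 5.0, size=(n_sims, dim))
sigma = np.exp(rng.uniform(-5.0, -1.0, size=(n_sims, dim)))

# Correct case: means drawn from N(theta_true, Sigma), samples from N(mean, Sigma).
means = rng.normal(theta_true, sigma)
samples = rng.normal(means[:, None, :], sigma[:, None, :],
                     size=(n_sims, n_samples, dim))

# Normalize everything to [0, 1] before computing distances, as in the text.
lo, hi = -5.0, 5.0
theta_n = (theta_true - lo) / (hi - lo)
samples_n = (samples - lo) / (hi - lo)

levels, ecp = drp_expected_coverage(theta_n, samples_n)  # from the earlier sketch
print(np.max(np.abs(ecp - levels)))  # small deviation expected for the correct case
```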
### Revealing when estimators are uninformative
As our second benchmark, we consider the case mentioned before in which the learned posterior estimator is equal to the prior \(\hat{p}(\theta|x)=p(\theta)\). The reason why we are interested in this example is that, in that case, the expected coverage probability calculated using HPD regions will be equal to \(1-\alpha\) for any value of \(\alpha\), as previously discussed. However, with DRP we have the ability to avoid this blindspot by sampling reference points in a manner dependent on \(x\).
To make this concrete, we consider a one-dimensional example with a Gaussian prior \(p(\theta)=\mathcal{N}(\theta;\mu_{0},\sigma_{0}^{2})\). Our 'forward model' in this case simply generates a number \(n_{x}\) of data points \(\left\{x_{i}\right\}_{i=1}^{n_{x}}\sim\mathcal{N}(\theta,\sigma_{x}^{2})\). In this conjugate model, we can easily derive the true posterior:
\[p\left(\theta\,\middle|\,\left\{x_{i}\right\}_{i=1}^{n_{x}}\right)=\mathcal{N}(\theta;m,s), \tag{27}\] \[s=\left(\frac{1}{\sigma_{0}^{2}}+\frac{n_{x}}{\sigma_{x}^{2}}\right)^{-1}, \tag{28}\] \[m=s\left(\frac{\mu_{0}}{\sigma_{0}^{2}}+\frac{\sum_{i}x_{i}}{\sigma_{x}^{2}}\right). \tag{29}\]
We fix \(n_{x}=50\), \(\mu_{0}=0\), \(\sigma_{0}=1\) and \(\sigma_{x}=0.1\). We generate \(500\) samples from the forward model, and calculate expected coverage from an 'uninformative estimator' \(\hat{p}(\theta|x)=p(\theta)\) in three ways: 1) using HPD regions, 2) using DRP regions where \(\theta_{r}\) is drawn randomly from the prior, and 3) using DRP regions where \(\theta_{r}=x_{0}+u\), where \(x_{0}\) is the first observation, and \(u\sim\mathcal{U}(-1,1)\). We expect the first two methods to have ECP equal to \(1-\alpha\), but not the third.

Figure 4: Expected coverage vs probability level for the uninformative posterior estimator described in § 4.2. The blue line shows the coverage calculated using HPD regions, while the red lines use DRP regions. The continuous line uses reference points that are independent of \(x\), while the dot-dashed line uses reference points that depend on \(x\).
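A minimal sketch of this check, reusing the earlier `drp_expected_coverage` sketch, is given below; the random seed and array shapes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n_sims, n_samples, n_x = 500, 1000, 50
mu0, sigma0, sigma_x = 0.0, 1.0, 0.1

theta_true = rng.normal(mu0, sigma0, size=(n_sims, 1))
x = rng.normal(np.repeat(theta_true, n_x, axis=1), sigma_x)

# 'Uninformative' estimator: its posterior samples are simply prior draws.
samples = rng.normal(mu0, sigma0, size=(n_sims, n_samples, 1))

# Reference points drawn from the prior (independent of x) ...
refs_prior = rng.normal(mu0, sigma0, size=(n_sims, 1))
# ... versus reference points with a mild x-dependence: theta_r = x_0 + u.
refs_xdep = x[:, :1] + rng.uniform(-1.0, 1.0, size=(n_sims, 1))

levels, ecp_prior = drp_expected_coverage(theta_true, samples, references=refs_prior)
levels, ecp_xdep = drp_expected_coverage(theta_true, samples, references=refs_xdep)
# ecp_prior tracks the diagonal despite the estimator being useless;
# ecp_xdep departs from it, exposing the uninformative estimator.
```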
We show the results in Fig. 4. First, we notice that when we use HPD regions, we get the correct expected coverage, even though the estimator is wrong (validating the theoretical discussion in § 2). This means that, in this case, HPD coverage could fool us into thinking our estimator is optimal when in reality it is completely uninformative. Interestingly, the same happens when we use DRP regions with reference points selected randomly from the prior (red line). This is because, as discussed in § 2, Theorem 3 only holds in both directions when the choice of the region depends on \(x\). Finally, as anticipated, the expected coverage is _not_ \(1-\alpha\) when the sampling distribution for \(\theta_{r}\) has some \(x\)-dependence. Therefore, we see how introducing even a small dependence on \(x\) in \(\tilde{p}(\theta_{r}|x)\) reveals that the posterior estimator is not optimal.
### Gravitational Lensing
To test our algorithm in a more realistic and high-dimensional setting, we consider a simplified astrophysics problem: gravitational lensing source reconstruction. Gravitational lensing occurs in nature when light rays from a distant galaxy move along curved rather than straight paths due to the mass of another intervening galaxy (the 'lens') (Treu, 2010). The result is a highly-distorted, ring-shaped image of the background galaxy. The goal of source reconstruction is to infer from a noisy image what the light from the source galaxy looks like without distortions, assuming the mathematical form of the distortions is perfectly known.
The simulator in this scenario samples the source galaxy's light \(\theta\) from a multivariate-normal distribution that we fit to a dataset of galaxy images (Stone and Courteau, 2019; Stone et al., 2021). A matrix \(A\) encoding the lensing distortions is then applied, and the final observation is produced by adding Gaussian pixel noise of standard deviation \(\sigma_{n}\), so that \(x\sim\mathcal{N}(A\theta,\sigma_{n}^{2})\). For computational convenience, we use \(16\times 16\)-pixel source images and \(32\times 32\)-pixel observations.
As shown in Adam et al. (2022) and reviewed in § C, posterior samples of \(\theta\) can be generated using techniques from diffusion modeling. In general, this approach yields subtly biased posterior samples. However, with our multivariate-normal prior on \(\theta\), it is possible to generate unbiased posterior samples. We refer to samples from these two approaches as 'biased' and 'exact' in our results.
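Because the prior on \(\theta\) is multivariate normal and the forward model is linear with Gaussian noise, the 'exact' posterior is itself Gaussian and can be sampled in closed form. The following is a schematic sketch of that exact estimator (the diffusion-based 'biased' sampler of Adam et al. (2022) is not reproduced); the lensing matrix \(A\), the prior moments, and the noise level are placeholders.

```python
import numpy as np

def simulate(A, prior_mean, prior_cov, sigma_n, rng):
    # Forward model: theta ~ N(prior_mean, prior_cov), x = A theta + noise.
    theta = rng.multivariate_normal(prior_mean, prior_cov)
    x = A @ theta + sigma_n * rng.standard_normal(A.shape[0])
    return theta, x

def exact_posterior_samples(x, A, prior_mean, prior_cov, sigma_n, n, rng):
    # Linear-Gaussian model => Gaussian posterior with
    #   cov  = (prior_cov^-1 + A^T A / sigma_n^2)^-1
    #   mean = cov (prior_cov^-1 prior_mean + A^T x / sigma_n^2)
    prior_prec = np.linalg.inv(prior_cov)
    post_cov = np.linalg.inv(prior_prec + A.T @ A / sigma_n**2)
    post_mean = post_cov @ (prior_prec @ prior_mean + A.T @ x / sigma_n**2)
    return rng.multivariate_normal(post_mean, post_cov, size=n)
```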
Fig. 5 shows the results for both the exact and the biased posterior estimators, using \(100\) simulations and \(1000\) posterior samples per simulation, and sampling \(\theta_{r}\) from the prior. As expected, our method gets the correct coverage for the exact estimator. It is important to stress that generative models are needed for parameter spaces of this dimensionality (256 parameters), and no previously existing methods could calculate ECPs to assess the optimality of such models. The biased estimator, on the other hand, produces a similar curve to that of the bottom right panel of Fig. 2, which indicates that it is indeed biased.
## 5 Conclusions
Testing the accuracy of estimated posteriors is a key element of parameter inference. While there exist well-established methods to test posterior sampling techniques such as MCMC, it is difficult to test the accuracy of posteriors estimated by alternative methods such as deep learning. This is the case for both likelihood-based and simulation-based inference. In this paper, we introduced DRP coverage probabilities as a new technique to test the accuracy of estimated posteriors using posterior samples alone, when explicit posterior evaluations are not available. We have shown that this test is sufficient to prove that the inference is optimal, while other similar tests were necessary but not sufficient.
We applied our test successfully to a variety of inference problems, in particular in cases where alternative methods fail, and have shown that it scales well to high-dimensional posteriors. Therefore, we propose DRP coverage probabilities as a tool to test the accuracy of future posterior inference analyses from generative models.
Figure 5: Coverage probability vs credibility level for our lensing example. We see how, as expected, the exact estimator (blue) produces optimal posteriors.
## Acknowledgements
We would like to thank Bruno Regaldo, Shirley Ho and Nikolay Malkin for their feedback on preliminary versions of the paper.
This research was made possible by a generous donation by Eric and Wendy Schmidt with the recommendation of the Schmidt Futures Foundation. The work is in part supported by computational resources provided by Calcul Quebec and the Digital Research Alliance of Canada. Y.H. and L.P. acknowledge support from the National Sciences and Engineering Council of Canada grant RGPIN-2020-05102, the Fonds de recherche du Quebec grant 2022-NC-301305 and 300397, and the Canada Research Chairs Program. P.L acknowledges support from the Simons Foundation.
|
2308.16316 | Ten Years of Generative Adversarial Nets (GANs): A survey of the
state-of-the-art | Since their inception in 2014, Generative Adversarial Networks (GANs) have
rapidly emerged as powerful tools for generating realistic and diverse data
across various domains, including computer vision and other applied areas.
Consisting of a discriminative network and a generative network engaged in a
Minimax game, GANs have revolutionized the field of generative modeling. In
February 2018, GAN secured the leading spot on the ``Top Ten Global
Breakthrough Technologies List'' issued by the Massachusetts Science and
Technology Review. Over the years, numerous advancements have been proposed,
leading to a rich array of GAN variants, such as conditional GAN, Wasserstein
GAN, CycleGAN, and StyleGAN, among many others. This survey aims to provide a
general overview of GANs, summarizing the latent architecture, validation
metrics, and application areas of the most widely recognized variants. We also
delve into recent theoretical developments, exploring the profound connection
between the adversarial principle underlying GAN and Jensen-Shannon divergence,
while discussing the optimality characteristics of the GAN framework. The
efficiency of GAN variants and their model architectures will be evaluated
along with training obstacles as well as training solutions. In addition, a
detailed discussion will be provided, examining the integration of GANs with
newly developed deep learning frameworks such as Transformers, Physics-Informed
Neural Networks, Large Language models, and Diffusion models. Finally, we
reveal several issues as well as future research outlines in this field. | Tanujit Chakraborty, Ujjwal Reddy K S, Shraddha M. Naik, Madhurima Panja, Bayapureddy Manvitha | 2023-08-30T20:46:45Z | http://arxiv.org/abs/2308.16316v1 | # Ten Years of Generative Adversarial Nets (GANs):
###### Abstract
Since their inception in 2014, Generative Adversarial Networks (GANs) have rapidly emerged as powerful tools for generating realistic and diverse data across various domains, including computer vision and other applied areas. Consisting of a discriminative network and a generative network engaged in a Minimax game, GANs have revolutionized the field of generative modeling. In February 2018, GAN secured the leading spot on the "Top Ten Global Breakthrough Technologies List" issued by the Massachusetts Science and Technology Review. Over the years, numerous advancements have been proposed, leading to a rich array of GAN variants, such as conditional GAN, Wasserstein GAN, CycleGAN, and StyleGAN, among many others. This survey aims to provide a general overview of GANs, summarizing the latent architecture, validation metrics, and application areas of the most widely recognized variants. We also delve into recent theoretical developments, exploring the profound connection between the adversarial principle underlying GAN and Jensen-Shannon divergence, while discussing the optimality characteristics of the GAN framework. The efficiency of GAN variants and their model architectures will be evaluated along with training obstacles as well as training solutions. In addition, a detailed discussion will be provided, examining the integration of GANs with newly developed deep learning frameworks such as Transformers, Physics-Informed Neural Networks, Large Language models, and Diffusion models. Finally, we reveal several issues as well as future research outlines in this field.
Adversarial learning, Image generation, Deep learning, Model evaluation and selection, Generative Adversarial Networks, Generator network, Artificial intelligence.
## I Introduction
Generative Adversarial Networks (GANs) have emerged as a transformative deep learning approach for generating high-quality and diverse data. In GAN, a generator network produces data, while a discriminator network evaluates the authenticity of the generated data. Through an adversarial mechanism, the discriminator learns to distinguish between real and fake data, while the generator aims to produce data that is indistinguishable from real data.
Since their introduction in 2014 by Goodfellow et al. [1], GANs have witnessed remarkable advancements, leading to the development of numerous specialized variants that excel in creating data across diverse fields. Conditional GAN [2] enables the generation of data based on specific conditions or desired qualities, such as synthesizing photos of a particular class. CycleGAN [3] have proven effective in image-to-image translation tasks, even in the absence of paired data. StackGAN [4] has demonstrated the ability to generate high-resolution images from textual descriptions, pushing the boundaries of visual realism. Progressive GAN [5] has achieved exceptional results in producing high-quality images with increasing resolution. StyleGAN [6], known for its versatility, generates images with a wide range of styles and distinctive features. Furthermore, GANs have extended beyond visual domains and shown potential in generating textual [7], musical [8], 3D modeling [9], future cities [10], time series [11] data among many others.
The success of GANs has led to their adoption in various applications, such as image and video synthesis, data augmentation, super-resolution, inpainting, anomaly detection, and image editing. GANs have also been employed to address data scarcity issues in machine learning, where they generate synthetic data to improve the effectiveness of models trained on limited datasets [12]. Additionally, GANs have found utility in creating realistic simulations for video games and virtual reality environments, enhancing user experiences and immersive interactions [13]. To ensure the comprehensiveness of this survey, we conducted an extensive review of the research papers encompassing both theoretical advancements and practical applications of GAN. Our survey draws insights from diverse fields, including computer vision, natural language processing, autonomous vehicles, time series, medical domain, and many others. Notable papers that significantly contributed to our survey include Goodfellow et al. [1] for introducing the GAN framework, Mirza and Osindero [2] for pioneering conditional GAN, Zhu et al. [3] for introducing CycleGAN, Karras et al. [5] for their seminal work on progressive GAN, and Chen et al. [14] for the breakthroughs achieved with InfoGAN, among many others.
Despite their remarkable achievements, GANs face several challenges in practice. One prominent issue is the instability of the training process, which can result in mode collapse or oscillation [15]. Another challenge lies in the evaluation of generated data, as conventional assessment criteria may not adequately capture the diversity and realism of the synthesized samples [16]. Furthermore, GANs have been observed to exhibit biases, particularly concerning gender and race, potentially reflecting the biases present in the training data [17, 18]. To overcome the limitations of GAN various modified
training approaches and hybridization with popular deep learning architectures such as Transformers [19], Physics-Informed Neural Network (PINN) [20], Large language models (LLMs) [21], and Diffusion models [22] have been proposed in the literature. These modified methodologies have shown promise in enhancing the synthetic data generation capabilities of GANs.
Finally, GANs have emerged as an effective tool for producing high-quality and varied data in several disciplines. Notwithstanding the difficulties connected with their use, GANs have shown outstanding results and have the potential to drive innovation in disciplines such as computer vision, machine learning, and virtual reality. This in-depth analysis covers the accomplishments and limitations of GAN, as well as the promise of these approaches for future research and applications. This comprehensive survey aims to explore both the accomplishments and challenges of GAN. The contributions of the article can be summarized as follows:
* **Exploration of Vanilla GAN and their applications:** We offer an elaborate description of the GAN model, encompassing its architectural particulars and the mathematical optimization functions it employs. We summarize the areas where GANs have emerged as a promising tool in efficiently solving real-world problems with their generative capabilities.
* **Evolution of state-of-the-art GAN models across the decade:** Our comprehensive analysis encompasses a wide range of cutting-edge GAN adaptations crafted to address practical challenges across various domains. We delve into their structural designs, practical uses, execution methods, and constraints. To facilitate a lucid understanding of the field's progress, we present an intricate chronological breakdown of GAN model advancements. Furthermore, we evaluate recent field surveys, outlining their pros and cons, while also tackling these aspects within our own survey.
* **Theoretical advancements of GANs:** We give a technical overview of the theoretical developments of GANs by exploring the connections between adversarial training and Jensen-Shannon divergence and discussing their optimality features.
* **Assessment of GAN Models:** We provide a comprehensive breakdown of the essential performance measures utilized to assess both the caliber and range of samples produced by GANs. These metrics notably fluctuate depending on the specific domains of application.
* **Limitations of GANs:** We critically examine the constraints associated with GANs, primarily stemming from issues of learning instability, and discuss various enhancement strategies aimed at alleviating these challenges.
* **Anticipating future trajectories:** In addition to evaluating the pros and cons of current GAN-centric approaches, we illuminate the hybridization of emerging deep learning models such as Transformers, PINNs, LLMs, and Diffusion models with GANs. We outline potential avenues for research within this domain by summarizing several open scientific problems.
This survey is structured in the following manner. Section II digs into related works and recent surveys giving background information and emphasizing the most significant developments in GAN done over the decade. Section III is a concise overview of GAN describing the fundamental components and intricate details of its architecture. In Section IV, we examine the wide range of fields that GANs have influenced, such as computer vision, natural language processing, time series, and audio, among many others. Subsequently, Section V reviews the innovations and applications of popular GAN-based frameworks from various domains along with their implementation software and discusses their limitations. This section also provides a timeline for the GAN models to have a clear vision of the development of this field. Section VI summarizes the recent theoretical developments of GAN and its variants. Section VII reviews the metrics used for evaluating GAN-based models. Section VIII analyzes the limitations of GANs and presents its remedial measures. Section IX discusses the potential and usability of GAN with the development of new deep learning technologies such as Transformers, PINNs, LLMs, and Diffusion models. Section X proposes potential directions for further research in this field. Finally, Section XI concludes the survey by indicating prospective directions for future research projects while also offering a closing assessment of the successes and limits of GANs.
## II Related Works and Recent Surveys
GANs are a promising deep learning framework for generating artificial data that closely resembles real-world data [1]. Early GAN-related research focused on creating realistic visuals. Radford et al. proposed a deep convolutional GAN (DCGAN) in 2015 [23], which utilized convolutional layers, batch normalization, and a specific loss function to generate high-quality images. DCGAN introduced important innovations in image generation. In 2017, Karras et al. [5] introduced progressive growing GAN (ProGAN), which generates higher quality and resolution images compared to vanilla GAN. ProGAN trains multiple generators and discriminators in a stepwise manner, gradually increasing the resolution of the generated images. The results demonstrated the ability of ProGAN to produce images closely resembling genuine photos for various datasets, including the CelebA dataset [24].
GANs have found applications beyond image generation, including video production and text generation. Vondrick et al. proposed a video generation GAN (VGAN) in 2018 [38], capable of producing realistic and diverse videos by learning to track and anticipate object motion. The VGAN architecture consisted of a motion estimation network and a video-generating network, jointly trained to generate high-quality videos. The results showcased VGAN's ability to produce realistic and varied films, enabling applications like video prediction and synthesis. Text generation is another domain where GAN has been utilized. In 2017, Yu et al. introduced SeqGAN, a GAN-based text generation model [39]. SeqGAN achieved realistic and diverse text generation capabilities by maximizing a reinforcement learning goal. The model included a generator responsible for text creation and a discriminator
assessing the quality of the generated text. Through reinforcement learning, the generator was trained to maximize the predicted reward based on the discriminator's evaluation. The findings demonstrated that SeqGAN outperformed previous text generation algorithms, producing text that was more varied and lifelike. These advancements in GAN applications for video and text generation highlight the versatility and potential of GAN frameworks in diverse domains.
Another popular area of research focuses on addressing medical questions using GANs, as highlighted in the recent paper by Tan et al. where a GAN-based scale invariant post-processing approach is proposed for lung segmentation in CT Scans [40]. A similar framework called RescueNet, developed by Nema et al., combines domain-specific segmentation methods and general-purpose adversarial learning for segmenting brain tumors [41]. Their study not only suggests a promising technique for brain tumor segmentation but also advances the development of systems capable of answering complex medical inquiries. Despite the significant breakthroughs, there are still unresolved issues in GAN architectures and applications. One prominent challenge is the instability of GAN training, which can be influenced by various factors such as architecture, loss function, and optimization technique. In 2017, Arjovsky et al. proposed a solution called Wasserstein GAN (WGAN) [15], introducing a novel loss function and optimization algorithm to address stability issues in GAN training. Their approach showed improved stability and performance on datasets like CIFAR-10 [42] and ImageNet [43].
**Related survey.** The existing body of research exploring various analytic tasks with GAN is accompanied by numerous surveys, which predominantly concentrate on specific perspectives within constrained domains, particularly computer vision and natural language processing. For instance, the survey by Jabbar et al. [26] explores applications of GANs in various industries, including computer vision, natural language processing, music, and medicine. To demonstrate the influence and promise of GANs in certain application domains, they also highlight noteworthy academic publications and real-world instances. The study tackles the difficulties and problems related to GAN training in addition to discussing their variations. The authors [26] investigate several training strategies, including minimax optimization, training stability, and assessment measures. They examine the typical challenges that arise during GANs training, such as mode collapse and training instability, and they give numerous solutions that have been suggested by researchers to address these problems. However, it does not specifically concentrate on GAN-based methods for imbalanced, time series, geoscience, and other data types and fails to reflect the most recent advancements in the field. The survey by Xia et al. [33] focuses on two primary categories of techniques for GAN inversion: Optimization-based methods and Reconstruction-based methods. To locate the hidden code that optimally reconstructs the supplied output, optimization-based approaches formulate an optimization issue. Reconstruction-based approaches, on the other hand, use different methods, such as feature matching or autoencoders, to directly estimate the latent code. An in-depth discussion of these strategies' advantages, disadvantages, and trade-offs is provided in the article. The non-convexity of the optimization issue and the lack of ground truth data for assessment are only two of the difficulties faced in GAN inversion that are highlighted in this article. The authors [33] additionally go through specific evaluation standards and measures designed for computer vision tasks. In addition, the study discusses current developments and variants in GAN inversion, such as techniques for managing conditional GAN, detaching latent variables, and dealing with different modalities. Aspect modification, domain adaptability, and unsupervised learning are a few of the applications and potential future directions of GAN inversion that are covered. A recent study by Durgadevi et al. [27] presents a comprehensive overview of numerous GAN variants that have been proposed until 2020. Since its inception, GANs have undergone significant evolutions leading researchers to propose various enhancements and modifications aimed at addressing the prevalent challenges. These alterations encompass diverse aspects such as architectural design, training methods, or a combination of both. In this survey [27] the authors delve into the application and impact of GANs in different domains including image processing, medicine, face detection, and text transferring. The survey by Alom et al. [44] covers various aspects of the deep learning paradigm, such as fundamental ideas, algorithms, architec
tures, and contemporary developments including convolutional neural networks (CNNs), recurrent neural networks (RNNs), deep belief networks (DBNs), generative models, transfer learning, and reinforcement learning. The survey of Nandhini et al. [28] offers a thorough investigation of the application of deep CNNs and deep GANs in computational image analysis driven by visual perception. The designs and methodology used, the outcomes of the experiments, and possible uses for these approaches are covered in the paper. Overall this study provides a retrospective review of the development of GANs for the deep learning-based image analysis community. The survey by Kulkarni et al. [25] presents an overview of various strategies, techniques, and developments used in GAN-based music generation. The survey of Sampath et al. [30] summarizes the current advances in the GAN landscape for computer vision tasks including classification, object detection, and segmentation in the presence of an imbalanced dataset. Another survey by Brophy et al. [37] attempts to review various discrete and continuous GAN models designed for time series-related applications. The study by Xun et al. [34] reviews more than 120 GAN-based models designed for region-specific medical image segmentation that were published until 2021. Another recent survey by Ji et al. [35] summarizes the task-oriented GAN architectures developed for symbolic music generation but other application domains are overlooked. The survey by Wang et al. [29] reviews various architecture-variant and loss-variant GAN frameworks designed for addressing practical challenges relevant to computer vision tasks. Another survey by Gui et al. [31] provides a comprehensive review of task-oriented GAN applications and showcases the theoretical properties of GAN and its variants. The study by Iglesias et al. [36] summarizes the architecture of the latest GAN variants, optimization of the loss functions, and validation metrics in some promising application domains including computer vision, language generation, and data augmentation. The survey by Li et al. [32] reviews the theoretical advancements in GAN and also provides an overview of the mathematical and statistical properties of GAN variants. A detailed comparison between our survey and others is presented in Table I.
Although several papers review GAN architectures and their domain-specific applications, none of them comprehensively and concurrently emphasize applications of GANs to geoscience, urban planning, data privacy, imbalanced learning, and time series problems. Methods developed to deal with these practical problems are underrepresented in past surveys. Moreover, open challenges remain, including the stability of GAN training, the assessment of generated data, and the ethical concerns surrounding GANs; further study in these areas is required to fully exploit the future potential of GANs. To fill this gap, our survey offers a comprehensive and up-to-date review of GANs, encompassing mainstream tasks ranging from audio, video, and image analysis to natural language processing, privacy, geophysics, and many more. Specifically, we first present the main applied areas of GANs and discuss existing works from task- and methodology-oriented perspectives. We then delve into several popular application sectors within existing GAN research, note their limitations, and propose potential future research directions. Our survey is intended for general machine learning practitioners interested in exploring and keeping abreast of the latest advancements in GANs for multi-purpose use. It is also suitable for domain experts applying GANs to new applications or exploring novel possibilities building on recent advancements.
## III Overview of Generative Adversarial Networks
Generative Adversarial Networks (GANs) signify a pivotal advancement in artificial intelligence, offering a robust framework to craft synthetic data that closely resembles real-world information [45]. Consisting of two interconnected neural networks, the Generator and Discriminator, GANs engage in a dynamic adversarial process that is redefining the landscape of deep generative modeling [1, 46]. By orchestrating this interplay, GANs transcend data generation frontiers across various domains, from crafting images to generating language, demonstrating a profound influence on reshaping the way machines comprehend and replicate intricate data distributions. This dynamic is facilitated through the Generator (\(G\)) network, entrusted with producing new data samples based on the input data distribution, while the Discriminator (\(D\)) network is devoted to discerning genuine data from their synthetic counterparts.
From a mathematical viewpoint, the \(G\) network takes a latent vector \(z\) sampled from the noise distribution \(p_{z}\) as input and generates synthetic samples \(G(z)\). Its goal is to generate data that is indistinguishable from real data samples \(x\) drawn from the probability distribution \(p_{\text{data}}\). On the other hand, \(D\) takes both real data samples \(x\) from the actual dataset and fake data samples \(G(z)\) produced by \(G\) as input and classifies whether the input data is real or fake; it essentially acts as a "critic" that evaluates the quality of the generated data. The training process consists of both networks competing in a two-player zero-sum game [36]. While \(G\) aims to produce more realistic outcomes, \(D\) enhances its ability to distinguish between real and fake samples. This dynamic prompts both players to evolve in tandem: if \(G\) generates superior outputs, it becomes harder for \(D\) to discern them; conversely, if \(D\) becomes more accurate, \(G\) faces greater difficulty in deceiving \(D\). The process resembles a minimax game, in which \(D\) strives to maximize its classification accuracy while \(G\) seeks to minimize it [47]. The goal is to find a balance where \(G\) produces increasingly convincing data while \(D\) becomes better at separating real data from fake data. This minimax loss function can be expressed as:
\[\min_{G}\,\max_{D}\,L=\mathbb{E}_{x\sim p_{\text{data}}}\left[\log D(x) \right]+\mathbb{E}_{z\sim p_{z}}\left[\log(1-D(G(z)))\right], \tag{1}\]
where the probability values \(D(x)\) and \(D(G(z))\) represent the discriminator's outputs for real and fake samples, respectively. The first term in Eq. (1) encourages \(D\) to correctly classify real data by maximizing \(\log D(x)\), whereas the second term encourages \(G\) to produce realistic data that \(D\) classifies as real by minimizing \(\log(1-D(G(z)))\). In essence, \(G\) aims to minimize the loss while \(D\) aims to maximize it, leading to a continual back-and-forth training process. Throughout the
training, the generator's performance improves as it learns to generate more realistic data, and the discriminator's performance improves as it becomes better at distinguishing real from fake data. Ideally, this competition results in a generator that produces data that is virtually indistinguishable from real data, as judged by the discriminator. A visual representation of the GAN's architectural details and its primary functions is presented in Fig. 1.
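To make the adversarial objective in Eq. (1) more concrete, a minimal PyTorch training sketch is shown below. It is illustrative only: the fully connected network sizes, latent dimension, optimizer settings, and the assumption of flattened image data are placeholder choices rather than part of any specific published model, and the generator update uses the widely adopted non-saturating heuristic (maximizing \(\log D(G(z))\)) instead of literally minimizing \(\log(1-D(G(z)))\).

```python
# Minimal GAN training sketch (illustrative; sizes and hyperparameters are placeholders).
import torch
import torch.nn as nn

latent_dim, data_dim = 64, 784  # e.g., flattened 28x28 images

G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                  nn.Linear(256, data_dim), nn.Tanh())
D = nn.Sequential(nn.Linear(data_dim, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1), nn.Sigmoid())

opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(real_batch):
    b = real_batch.size(0)
    ones, zeros = torch.ones(b, 1), torch.zeros(b, 1)

    # Discriminator step: maximize log D(x) + log(1 - D(G(z))).
    fake = G(torch.randn(b, latent_dim)).detach()  # detach blocks gradients into G
    loss_D = bce(D(real_batch), ones) + bce(D(fake), zeros)
    opt_D.zero_grad(); loss_D.backward(); opt_D.step()

    # Generator step (non-saturating variant): maximize log D(G(z)).
    loss_G = bce(D(G(torch.randn(b, latent_dim))), ones)
    opt_G.zero_grad(); loss_G.backward(); opt_G.step()
    return loss_D.item(), loss_G.item()
```

Alternating these two updates is the practical realization of the minimax game described above.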
At the inception of GANs in 2014, Goodfellow et al. [1] proved the existence of a unique solution to the minimax loss function. This solution became known as the Nash Equilibrium (NE), the point at which the generator's capacity to produce realistic data matches the discriminator's capacity to distinguish between real and fake data, resulting in high-quality synthetic data that closely resembles the true underlying data distribution [48]. However, recent studies have revealed that attaining the NE in GANs is not guaranteed and can be challenging due to various factors, including architecture choices, hyperparameters, and convergence difficulties [49, 50]. To address these challenges and enhance GAN training stability, researchers have developed various techniques over the past decade, such as different loss functions and architectures [51]. These alterations of the GAN framework include architectural changes, loss function-based modifications, and many others; each variant has unique attributes and applications, driving significant advancements in generative modeling. Fig. 2 visually depicts the timeline of key developments in GAN research.
## IV Applications
As previously noted, GANs have emerged as one of the most prominent advancements in the realm of machine learning over recent years. GAN models have demonstrated their efficacy in domains where prior models fell short, while also substantially enhancing performance in other scenarios. Within this section, we will comprehensively explore the pivotal domains where GAN architectures have been deployed. While much of the recent research has concentrated on employing GANs to generate novel synthesized data, emulating distinct data distributions, our exploration in this section will highlight the broader applications of GANs, extending to areas such as video game development [52], urban planning [10] and others. We also visually showcase the application domains of GAN in Fig. 3.
**Image Generation.** Among the most promising domains harnessing the capabilities of GANs is computer vision. Notably, the generation of realistic images stands as one of the paramount applications of GANs [6, 53]. The capacity of GANs to craft authentic images depicting characters, animals, and objects that lack real-world existence holds immense significance [54]. This capability of GANs finds application in diverse projects, spanning from refining facial recognition algorithms to fabricating immersive virtual environments for video games and commercial campaigns [55]. Moreover, GANs have proven instrumental in generating true-to-life virtual realms, a boon for both the gaming industry and advertising ventures. By crafting synthetic landscapes and structures, GANs empower game designers and developers to construct captivating, realistic virtual worlds, thereby elevating the overall player experience [5]. The deployment of GANs
Fig. 1: Architecture of GANs and its primary functions. In this example, different analytical tasks of GANs are categorized into synthetic data generation, style transfer, data augmentation, and anomaly detection.
in this context offers a swift, cost-effective, and efficient alternative to traditional manual design and modeling approaches, enabling the production of high-quality graphics.
**Video Synthesis.** In addition to generating high-quality images, GANs offer the potential to create synthetic videos, a more complex task due to temporal coherence requirements [56]. GANs, combining generators and discriminators, excel at this challenge [57]. The discriminator learns to differentiate real from synthetic frames, while the generator produces visually authentic video frames. GANs find widespread use in replicating real-world actions, enhancing surveillance and animations [58]. One of the most popular and controversial applications of GANs is the evolution of deepfakes [59]. Deepfakes are AI-generated media that blend a person's likeness with another's context using GANs. While they offer creative potential, deepfakes raise ethical concerns, requiring a holistic approach to detect them [60, 61].
**Augmenting Data.** GANs possess the capability to generate synthetic data, which can be harnessed to bolster actual data and enhance the performance of deep learning models. This approach is instrumental in mitigating concerns related to data scarcity and refining model accuracy [62]. GANs provide an effective avenue for fortifying machine learning and deep learning frameworks with authentic data. Addressing the challenge of limited data availability, GANs enable the creation of larger, more diverse datasets by generating artificial samples that closely emulate real data [63]. GAN-based data augmentation strategies have showcased promising outcomes across various domains, offering the potential to enhance model precision and transcend the constraints posed by insufficient data [64].
**Style Transfer.** GANs are capable of transferring the style of one image to another, resulting in the creation of an entirely new image [65]. This method can be applied to develop novel artistic features or enhance the visual attractiveness of pictures. By facilitating the development of fresh artistic trends and boosting the aesthetic appeal of pictures, GAN-based style transfer approaches have transformed the area of computer vision [3, 66]. These methods have been used in a variety of fields, such as digital art, photography, and graphic design, and they continue to be an inspiration for new developments and studies in the area.
**Natural Language Processing.** Over the past few years, GANs have been adapted to process text data, resulting in groundbreaking advancements within the realm of Natural Language Processing (NLP). One notable application involves text generation, where GANs can create coherent and contextually relevant textual content. For instance, the Text GAN framework utilizes Long Short-Term Memory (LSTM) networks [67] as the generator and a CNN as the discriminator to synthesize novel text using adversarial training [68]. Furthermore, GANs play a role in text style transfer, allowing alterations in writing styles while preserving content and enhancing the adaptability of generated material [69]. In the domain of sentiment analysis, GANs contribute by generating text with specific emotional tones, thereby aiding model training and dataset augmentation for sentiment classification tasks. Additionally, GANs are instrumental in text-to-image synthesis, translating textual descriptions into visual representations, proving valuable in fields like accessibility and multimedia content creation [4]. GANs have also been harnessed to enhance machine translation software, refining translation precision and fluidity [70, 39].
**Music Generation.** GANs are revolutionizing music creation by tapping into existing compositions' patterns and structures [71]. This technology not only fosters original music
Fig. 2: Timeline of the application-based GAN architectures reviewed in this study
composition but also assists musicians in their creative journey. Previous studies have showcased GANs' role in generating music, offering possibilities for both novel compositions and artist support [72, 73]. Beyond composition, GANs empower musicians to explore new styles by generating melodies, harmonies, and rhythms as creative sparks. They also enable style transfer, allowing musicians to reimagine their music in diverse genres and cultural contexts. Moreover, GANs have ventured into musical collaboration, aiding improvisation by responding to musician input with harmonious suggestions. In essence, GANs redefine music creation, from assisting composers in originality to fostering innovative style exploration [74]. This fusion of human creativity and computational ability promises to shape the future of the music industry.
**Medical Domain.** In the dynamic landscape of the medical domain, GANs have emerged as a game-changing technology with multifaceted benefits. The integration of GANs with medical data holds immense potential for enhancing disease diagnosis through the creation of synthetic medical images, thereby alleviating the limited-data problem. The expanded diversity and quantity of data made possible by GANs empower data-driven diagnostic models to deliver more precise and reliable predictions, aiding healthcare practitioners in making accurate diagnoses and ultimately enhancing patient care [75, 76, 77]. Another significant application of GANs is in drug discovery, where they can process and generate molecular structures with desired properties [78, 79]. GAN-driven molecular generation accelerates the identification of potential drug candidates, saving time and resources in the search for novel therapeutic compounds. Moreover, GANs extend their impact to surgical training and planning by producing realistic surgical scenarios and simulations [80] and also aid in generating patient-specific medical images, allowing healthcare practitioners to tailor treatment plans to individual patient characteristics [81].
**Urban Planning.** With rapid urbanization, predicting transportation patterns is essential for sustainable urban planning and traffic management. Recent advancements in GAN-based methods to simulate hyper-realistic urban patterns, including CityGAN [82], Conditional GAN with physical constraints [83], and MetroGAN [84], have become popular in urban science fields. These GANs can generate synthetic urban universes that mimic global urban patterns, and quantifying the landscape structures of these GAN-generated new cities using spatial pattern analysis helps in understanding landscape dynamics and improving sustainable urban planning. In a recent study, a novel RidgeGAN model [10] was proposed that evaluates the sustainability of urban sprawl associated with infrastructure development and transportation systems in medium and small-sized cities.
**Geoscience and Remote Sensing.** In geoscience, there are also recent applications of GANs with novel ways of generating "new" samples that can easily outperform state-of-the-art geostatistical tools. This is very appealing in applications like reservoir modeling, as geologists and reservoir engineers are nowadays usually tasked with working with multiple realizations of the subsurface and providing probabilistic estimates to support the subsequent decision-making process. A few examples of early applications of GANs in geoscience are the reconstruction of three-dimensional porous media [85], the generation of geologically realistic 3D reservoir facies models using deep learning of sedimentary architecture [86], and seismic waveform synthesis for seismic data augmentation with SeismoGen [87].
**Autonomous Vehicles.** Machine learning models for autonomous driving can be trained using synthetic pictures of real-world situations created using GANs. This method helps to mitigate the safety concerns of autonomous cars by overcoming the restrictions of real-world testing [88]. A potential method for training autonomous driving models is the use of GANs to produce synthetic visuals [89]. It makes it possible to investigate a wide range of complex scenarios, improving the performance and safety of the models. Recent studies have illustrated the usefulness and promise of this method for bridging the gap between driving simulations and actual driving situations, ultimately promoting the development of autonomous cars [90, 91].
**Fashion and Design.** GANs find utility in generating fresh patterns and designs for clothing, aiding designers in crafting innovative collections. This technology extends its impact to online shopping experiences by producing images of apparel on virtual models, offering customers a realistic preview of how garments would appear on them during online purchases [92]. Within the realms of fashion and design, GANs have become a valuable asset, empowering designers to stretch their creative boundaries by facilitating the creation of novel patterns and designs [93]. Furthermore, GAN-driven virtual try-on systems enhance the convenience of online shopping, granting shoppers lifelike insights into how clothing would fit and appear on them. Several diverse research efforts in this domain have explored the significant contributions of GANs to the evolution of the fashion and design industry [94, 95].
Fig. 3: Diverse Applications of Generative Adversarial Networks (GANs) in various applied domains.
**Imbalanced Pattern Classification.** A prevalent yet intricate issue encountered in pattern recognition is referred to as "class imbalance", signifying disparities in the frequencies of class labels [96]. To address this challenge, GANs can be used to generate synthetic data for the minority class of various imbalanced datasets as a method of intelligent oversampling [97]. Pioneering approaches such as Balancing GAN (BAGAN) [98] and Classification Enhancement GAN (CEGAN) [99] have been developed to restore balance in the distributions of imbalanced datasets and enhance the precision of the data-driven models.
**Time Series Anomaly Detection.** In recent years, there has been a significant surge in the availability of real-time sensor data across diverse domains, including healthcare systems, power plants, industries, and many others. These vast datasets are often accompanied by several anomalous events, which eventually diminish the modeling capabilities of machine learning and deep learning frameworks. To address this issue, anomaly detection for multivariate time series data has become a critical task for time series analysts [100]. In this context, GANs have become a powerful technology. In recent studies, various GAN-based time series anomaly detection techniques, namely Dilated Convolutional Transformer GAN (DCT GAN) [101], M2GAN [102], Cooperative Network Time Series (CNTS) [103], TADGAN [104], and many others, have been developed that leverage the power of adversarial training to efficiently detect the presence of anomalous data.
**Data Privacy.** GANs offer the possibility of generating synthetic data that retains the statistical characteristics of the original data, all while safeguarding sensitive information. This approach serves as a means to ensure privacy protection for individuals while enabling the secure utilization of data for research and analytical purposes [105]. A recent study by Torfi et al. has demonstrated how GANs can be leveraged to generate synthetic data that mimics the statistical properties of the real dataset, thus preserving data privacy [106]. This development creates new opportunities for private data sharing and analysis, offering insightful information while preserving privacy.
In conclusion, GANs have a wide range of applications across diverse domains, from generating realistic images and videos to aiding in medical diagnosis [1, 6]. By producing synthetic data that closely resembles actual data, they can ease the restrictions of data scarcity and safeguard personal information [107]. As GANs continue to develop, we can expect even more cutting-edge applications to real-world data problems [23]. In summary, GANs offer a wide range of uses across many sectors and have the potential to fundamentally change how we produce and use data [108, 109], with future applications likely to become even more compelling as the technology matures [110].
## V Variants of GAN
In this section, we will have a broad review of some of the GAN models based on their distinct characteristics and practical uses. Additionally, we discuss the mathematical formulation of these GAN variants, using standard notations as discussed in Sec. III and present their implementation software in Table II.
**CGAN.** The conditional GAN (CGAN) is a popular version of GAN that generates data by taking external inputs, such as labels or classes, into account. It was introduced by Mirza and Osindero in 2014 [2] and has since been widely used in computer vision applications, including image synthesis, image-to-image translation, and text-to-image synthesis. Unlike the conventional GAN both \(G\) and \(D\) of the CGAN architecture receive conditional information \(y\) that serves as a guide for \(G\) to produce data that aligns with the specified conditions. The loss function for the CGAN framework is given by:
\[L=\mathbb{E}_{x\sim p_{\text{data}}}\left[\log D(x,y)\right]+\mathbb{E}_{z \sim p_{z}}\left[\log(1-D(G(z,y),y))\right].\]
The CGAN model, as discussed in the literature [2, 111], possesses the following key features:
* CGANs generate customized data that is specific to a given input, e.g., a CGAN trained on animal photos can produce images of a particular animal based on the input.
* Unlike Vanilla GAN, CGAN benefits from additional inputs, resulting in synthetic data of higher quality. It exhibits improved coherence, structure, and aesthetic resemblance to real samples.
* CGANs demonstrate superior noise resistance compared to other artificial neural networks due to the utilization of external input to guide the data generation process.
While the CGAN model is known for its versatility, it is also accompanied by several limitations. It is prone to overfitting with scarce or noisy input data, requires explicit labels or classes in the input dataset, is vulnerable to adversarial attacks, and becomes computationally complex with high-dimensional complex datasets [112]. Considering both the advantages and disadvantages of the CGAN model mentioned above, it proves to be a valuable tool for generating data based on external input [113]. However, it is important to take into account these limitations and drawbacks when applying CGANs to address specific problems. Future research can examine alternative conditioning methods including the use of natural language descriptions or a variety of circumstances [114].
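A minimal sketch of how the conditioning information \(y\) enters both networks is given below; the embedding-plus-concatenation scheme, the 10-class setting, and all layer sizes are illustrative assumptions rather than the only way CGANs are implemented.

```python
# Sketch of CGAN conditioning via label embeddings (illustrative sizes).
import torch
import torch.nn as nn

n_classes, latent_dim, data_dim = 10, 64, 784

class CondGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(n_classes, n_classes)
        self.net = nn.Sequential(nn.Linear(latent_dim + n_classes, 256), nn.ReLU(),
                                 nn.Linear(256, data_dim), nn.Tanh())

    def forward(self, z, y):
        # The label embedding is concatenated to the noise vector, so G(z, y)
        # is steered toward samples consistent with the condition y.
        return self.net(torch.cat([z, self.embed(y)], dim=1))

class CondDiscriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(n_classes, n_classes)
        self.net = nn.Sequential(nn.Linear(data_dim + n_classes, 256), nn.LeakyReLU(0.2),
                                 nn.Linear(256, 1), nn.Sigmoid())

    def forward(self, x, y):
        # D(x, y) judges the realism of x given the same condition y.
        return self.net(torch.cat([x, self.embed(y)], dim=1))
```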
**DCGAN.** Deep Convolutional GAN (DCGAN) introduced by Radford et al. in 2015 [23] marks a significant breakthrough in the realm of generative AI, particularly for image generation. Representing a specialized variation of the GAN architecture, DCGANs seamlessly combine CNN and GAN techniques to yield high-quality, photorealistic images with intricate details. With the ability to autonomously learn and generate images without additional control, DCGANs prove their usefulness in unsupervised learning scenarios. DCGANs stand out for their relatively manageable training process, owing to sophisticated architectural components like strided convolutions, batch normalization, and leaky Rectified Linear Unit (ReLU) activation functions [23]. From the experimental perspective, DCGANs have generated excellent results for large-scale picture datasets like CIFAR-10 and ImageNet, [115]. Nonetheless, it is worth noting that
DCGANs exhibit elevated computational demands, sensitivity to hyperparameters, and susceptibility to challenges such as restricted diversity of generated images and mode collapse [116]. Despite these limitations, DCGANs find successful applications across domains encompassing image synthesis, style transfer, and image super-resolution. Their far-reaching impact on the field of generative modeling continues to inspire advancements and innovation.
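The sketch below illustrates the architectural guidelines mentioned above (strided transposed convolutions instead of pooling, batch normalization, and ReLU activations) for a small DCGAN-style generator; the channel counts and the 32x32 output resolution are illustrative choices, not the configuration of the original paper.

```python
# DCGAN-style generator sketch (channel counts and output size are placeholders).
import torch.nn as nn

def dcgan_generator(latent_dim=100, channels=3):
    return nn.Sequential(
        # (latent_dim x 1 x 1) latent vector -> 4x4 feature map
        nn.ConvTranspose2d(latent_dim, 256, 4, 1, 0, bias=False),
        nn.BatchNorm2d(256), nn.ReLU(True),
        nn.ConvTranspose2d(256, 128, 4, 2, 1, bias=False),      # 8x8
        nn.BatchNorm2d(128), nn.ReLU(True),
        nn.ConvTranspose2d(128, 64, 4, 2, 1, bias=False),       # 16x16
        nn.BatchNorm2d(64), nn.ReLU(True),
        nn.ConvTranspose2d(64, channels, 4, 2, 1, bias=False),  # 32x32 image
        nn.Tanh())
```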
**AAEs.** Adversarial Autoencoder (AAE) framework, proposed by Makhzani et al. in 2015, is a hybridization of autoencoders with adversarial training [117]. This model has garnered significant attention due to its potential for variational inference by aligning the aggregated posterior of the hidden code vector with a chosen prior distribution. This approach ensures that meaningful outcomes emerge from various regions of the prior space. Consequently, the AAE's decoder acquires the capability to learn a sophisticated generative model, effectively mapping the imposed prior to the data distribution. AAEs excel in producing disentangled representations, showcasing noise resistance, and generating high-quality images. The components within the AAE framework offer notable advantages over alternative generative models. Through adversarial training, AAEs excel in capturing complex data distributions and generating detailed, high-quality images. Their ability to learn disentangled representations in separate latent dimensions empowers precise image control, encompassing alterations to object properties. AAEs exhibit resilience to input variations, making them valuable for noisy data scenarios. Their encoder-decoder design supports denoising and surpasses other models in semi-supervised classification [117]. However, like other generative models, AAEs can encounter mode collapse, demand substantial computational resources, and necessitate cautious hyperparameter tuning. Striking the right balance between adversarial training and autoencoder loss poses a challenge. AAEs lack explicit control over generated samples, hindering targeted data traits in fine-grained control contexts [118]. Yet, the application scope of AAEs is notably expanded by the enhanced encoder, decoder, and discriminator networks, even surpassing traditional autoencoders.
**InfoGAN.** Information Maximizing Generative Adversarial Network (InfoGAN), a modification of GAN, is designed to learn disentangled representations of data by maximizing the mutual information between a subset of the generator's input and the generated output. It was introduced by Chen et al. in 2016 [14]. The loss function formulation for the Generator in InfoGAN is as follows:
\[L=\mathbb{E}_{x\sim p_{\text{data}}}\left[\log D(x)\right]+ \mathbb{E}_{z\sim p_{z}}\left[\log(1-D(G(z)))\right]\\ -\lambda\mathcal{I}(c;G(z)),\]
where \(\mathcal{I}(c;G(z))\) is the mutual information between the generator's output \(G(z)\) and the learned latent code \(c\), and \(\lambda\) is a hyperparameter that regulates the trade-off between the adversarial loss and the mutual information term. The information-theoretic approach employed in the InfoGAN framework enhances its ability to learn representations that facilitate data exploration, interpretation, and manipulation tasks. Unlike supervised methods, InfoGAN does not rely on explicit supervision or labeling, making it a flexible and scalable option for unsupervised learning tasks like image generation and data augmentation. However, the InfoGAN framework may struggle to learn meaningful and interpretable representations for high-dimensional complex datasets, and its benefits may not always justify the additional complexity and computational cost. Overall, InfoGAN shows promising results in learning disentangled representations, but its effectiveness depends on specific goals, data characteristics, and available resources [119]. Ongoing research and advancements hold the potential to address limitations and further improve this approach in the future.
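A sketch of the mutual-information term is given below. In the original formulation, \(\mathcal{I}(c;G(z))\) is maximized through a variational lower bound using an auxiliary network \(Q\) that predicts the latent code from the generated sample and typically shares layers with the discriminator; here \(Q\) is written as a separate small network for clarity, and the layer sizes, the 10-way categorical code, and \(\lambda=1\) are illustrative assumptions.

```python
# Sketch of InfoGAN's mutual-information term (illustrative sizes and weights).
import torch
import torch.nn as nn

n_codes, lam = 10, 1.0  # 10-way categorical latent code, weight of the MI term

Q = nn.Sequential(nn.Linear(784, 128), nn.LeakyReLU(0.2),
                  nn.Linear(128, n_codes))  # logits over the code c
ce = nn.CrossEntropyLoss()

def info_term(fake_images, code_labels):
    # Minimizing this cross entropy maximizes a lower bound on I(c; G(z, c));
    # it is added (with weight lam) to the generator's adversarial loss.
    return lam * ce(Q(fake_images), code_labels)
```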
**SAD-GAN.** The Synthetic Autonomous Driving using GANs (SAD-GAN) model, introduced by Ghosh et al. in 2016, is designed to generate synthetic driving scenes using the GAN approach [120]. This model's core concept involves training a controller trainer network using images and keypress data to replicate human learning. To create synthetic driving scenes, the SAD-GAN is trained on labeled data from a racing game, consisting of images portraying a driver's bike and its surroundings. A key press logger software is employed to capture key press data during bike rides. The framework's architecture is inspired by DCGAN [23]. The generator takes a current-time input image and produces the subsequent-time synthetic image. Meanwhile, the discriminator receives the real latest-time image, generates its feature map via convolution, and compares real and synthetic scenes to train the generator through a minimax game. The SAD-GAN framework offers an autonomous driving prediction algorithm suitable for manual driving as a recommendation system. Nevertheless, like DCGAN, it requires substantial computation and is susceptible to mode collapse, limiting its real-time applications.
**LSGAN.** Traditional GAN models typically utilize a discriminator modeled as a classifier with the sigmoid cross entropy loss function. However, this choice of loss function can result in the issue of vanishing gradients during training, resulting in impaired learning of the deep representations. To address this concern, Mao et al. introduced a novel approach called Least Squares GAN (LSGAN) in 2017, which employs the least squares loss function for the discriminator instead [121]. Mathematically, the Generator loss function \((L_{G})\) and the Discriminator loss function \((L_{D})\) of LSGAN model is expressed as follows:
\[L_{G}=\frac{1}{2}\mathbb{E}_{z\sim p_{z}}\left[(D(G(z))-c)^{2}\right],\]
\[L_{D}=\frac{1}{2}\mathbb{E}_{x\sim p_{\text{data}}}\left[(D(x)-b)^{2}\right]+\frac{1}{2}\mathbb{E}_{z\sim p_{z}}\left[(D(G(z))-a)^{2}\right],\]
where the \(a\)-\(b\) encoding scheme represents the labels for fake and real data used by \(D\), and \(c\) denotes the value that \(G\) wants \(D\) to assign to fake data. The LSGAN framework represents a notable advancement over traditional GANs, offering
improved stability and convergence during training while generating higher-quality synthetic data. It has outperformed regular GANs in generating realistic images, as measured by Inception score, across various datasets such as CIFAR-10 [121]. However, LSGANs often produce fuzzy images due to the use of squared loss in the objective function. The generated images often lack sharpness and fine details, as the loss function penalizes large discrepancies between fake and real images but neglects smaller variations. Researchers have addressed this issue by modifying the loss function in subsequent studies, aiming to enhance the sharpness of synthetic images [122, 123]. While LSGANs show promise in generating high-quality images, ongoing research and development are focused on overcoming their limitations in producing crisp and detailed results.
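The least-squares objectives above translate directly into code; the brief sketch below assumes the common coding \(a=0\), \(b=1\), \(c=1\) and raw (unbounded) discriminator outputs.

```python
# LSGAN losses (illustrative; assumes a = 0, b = 1, c = 1).
import torch  # d_real and d_fake are expected to be torch tensors of D outputs

def lsgan_d_loss(d_real, d_fake, a=0.0, b=1.0):
    # Penalize real outputs far from b and fake outputs far from a.
    return 0.5 * ((d_real - b) ** 2).mean() + 0.5 * ((d_fake - a) ** 2).mean()

def lsgan_g_loss(d_fake, c=1.0):
    # Push fake outputs toward the value c that G wants D to assign.
    return 0.5 * ((d_fake - c) ** 2).mean()
```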
**SRGAN.** Super Resolution GAN (SRGAN), introduced by Ledig et al. in 2017, is a GAN-based framework for image super-resolution [124]. It generates high-resolution images from low-resolution inputs with an upscaling factor of 4 using a generator network and a discriminator network. To achieve super-resolution, SRGAN incorporates a perceptual loss function, combining content and adversarial losses. Mathematically, the perceptual loss is expressed as:
\[l^{\text{SR}}=l^{\text{SR}}_{x}+10^{-3}l^{\text{SR}}_{\text{Gen}},\]
where \(l^{\text{SR}}_{x}\) represents the content loss and \(l^{\text{SR}}_{\text{Gen}}\) is the adversarial loss. The content loss used in the SRGAN framework relies on a pre-trained VGG-19 model and it provides the network information regarding the quality and content of the generated image. On the other hand, the adversarial loss is responsible for ensuring the generation of realistic images from the generator network. SRGANs offer the ability to generate high-quality images with enhanced details and textures, resulting in improved overall image quality. They excel in producing visually appealing and realistic images, as confirmed by studies on perceptual quality [65]. SRGANs exhibit noise resistance, enabling them to handle low-quality or noisy input images while still delivering high-quality outputs [125]. Moreover, this model demonstrates flexibility and applicability across various domains, including video processing, medical imaging, and satellite imaging [124]. However, training SRGANs can be computationally expensive, especially for complex models or large datasets. Additionally, like other GANs, the interpretability of SRGANs can be challenging, making it difficult to understand the underlying learning process of the generator. Furthermore, while SRGANs excel in image synthesis, they may not perform as effectively with text or audio inputs, limiting their range of applications.
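A sketch of this perceptual loss is shown below. The use of torchvision's pretrained VGG-19, the particular feature depth, and the expectation of ImageNet-normalized inputs are assumptions following common SRGAN re-implementations; the \(10^{-3}\) weight on the adversarial term matches the expression above.

```python
# Sketch of the SRGAN perceptual loss (illustrative; feature depth is an assumption).
import torch
import torch.nn as nn
from torchvision.models import vgg19

# Frozen VGG-19 feature extractor used for the content loss.
vgg_features = vgg19(weights="IMAGENET1K_V1").features[:36].eval()
for p in vgg_features.parameters():
    p.requires_grad = False
mse = nn.MSELoss()

def perceptual_loss(sr_img, hr_img, d_sr_prob):
    content = mse(vgg_features(sr_img), vgg_features(hr_img))  # VGG content loss
    adversarial = -torch.log(d_sr_prob + 1e-8).mean()          # -log D(G(LR))
    return content + 1e-3 * adversarial
```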
**WGAN.** The Wasserstein GAN (WGAN), introduced by Arjovsky et al. in 2017, is a loss function optimization variant of GAN that improves training stability and mitigates mode collapse [15]. It employs the Wasserstein distance to enhance realistic sample generation and ensure meaningful gradients. By introducing a critic network and weight clipping, WGAN achieves training stability. It finds applications in image synthesis, style transfer, and data generation. The formulation of the WGAN framework utilizes the Wasserstein-1 distance or the Earth Mover distance to measure the distance between real and generated data distributions. Mathematically, the Wasserstein distance for transforming the distribution \(\mathbb{P}\) to distribution \(\mathbb{Q}\) can be expressed as:
\[W(\mathbb{P},\mathbb{Q})=\inf_{\theta\in\pi(\mathbb{P},\mathbb{Q})}\mathbb{E}_{( \tilde{X},\tilde{Y})\sim\theta}\left[\|\tilde{X}-\tilde{Y}\|\right].\]
In the WGAN model, the discriminator function \(D\) is designed as a critic network that estimates the Wasserstein distance between the real data distribution and the generated data distribution instead of probability values as in conventional GAN. These scores reflect the degree of similarity or dissimilarity between the input sample and the real data distribution. The training of the critic in WGAN involves optimizing its parameters to maximize the difference in critic values between real and generated samples. By clipping the discriminator weights, the discriminator loss function in WGAN is adjusted to enforce the Lipschitz continuity requirement, but the fundamental structure of the loss functions is maintained. In general, WGANs have demonstrated improved training stability compared to traditional GANs. They are less sensitive to hyperparameters and more resistant to mode collapse [122]. The use of the Wasserstein distance facilitates smoother optimization and better gradient flow, resulting in faster training and higher-quality samples. However, calculating the Wasserstein distance can be computationally expensive [126]. Although WGANs offer enhanced stability, careful tuning of hyperparameters and network designs is still necessary for satisfactory results. Furthermore, WGANs are primarily suited for generating images and may have limited applicability to other types of data. In summary, WGANs represent a promising advancement in the field of GANs, addressing their limitations and providing insights into distribution distances, but the applicability of WGANs to real-world problems requires careful consideration of its challenges.
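The critic objective and the weight clipping used to crudely enforce the Lipschitz constraint can be sketched as follows; the critic is assumed to output unbounded scores, and the clipping threshold of 0.01 follows the value used in the original paper's experiments.

```python
# WGAN critic/generator losses and weight clipping (illustrative sketch).
import torch

CLIP = 0.01  # weight-clipping threshold

def wgan_critic_loss(critic, real, fake):
    # The critic maximizes E[C(real)] - E[C(fake)]; we minimize the negative.
    return -(critic(real).mean() - critic(fake).mean())

def wgan_generator_loss(critic, fake):
    # The generator tries to raise the critic's score on generated samples.
    return -critic(fake).mean()

def clip_critic_weights(critic):
    # Crude Lipschitz enforcement: clamp every parameter after each critic update.
    for p in critic.parameters():
        p.data.clamp_(-CLIP, CLIP)
```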
**CycleGAN.** Cycle-Consistent GAN (CycleGAN), introduced by Zhu et al. in 2017, is an unsupervised image-to-image translation framework that eliminates the need for paired training data unlike traditional GANs [3]. It relies on cycle consistency, allowing images to be translated between two domains using two generators and two discriminators while preserving coherence. One generator \(G_{XY}\) translates images from the source domain \(X\) to the target domain \(Y\), and the other \(G_{YX}\) performs the reverse. In other words the function \(G_{YX}\) is such that \(G_{YX}(G_{XY}(x))=x\). The discriminators, on the other hand, distinguish between real and translated images generated by the generators. To train this architecture the cycle consistency loss of Cycle GAN plays a crucial role by enforcing consistency between the original and round-trip translated images, the so-called _forward_ and _backward_ consistency. This ensures generators produce meaningful translations, preserving important content and characteristics across domains. Mathematically, the cycle consistency loss
function can be expressed as:
\[\mathcal{L}_{\text{cycle}}(G_{XY},G_{YX}) =\mathbb{E}_{x\sim p_{data}}[\|G_{YX}(G_{XY}(x))-x\|_{1}]\] \[+\mathbb{E}_{y\sim p_{data}}[\|G_{XY}(G_{YX}(y))-y\|_{1}].\]
The main advantage of Cycle GAN lies in its ability to produce high-quality images with remarkable visual fidelity. It excels in various image-to-image translation tasks, including style transfer, colorization, and object transformation. Moreover, its computational efficiency allows training on large datasets. However, CycleGAN often suffers from mode collapse and the increasing amount of parameters reduces its efficiency [127]. Despite its limitations, CycleGAN remains a valuable tool for image translation, and ongoing research for any data translation task aims to address its shortcomings [128]. For example, it shows promising results in medical imaging domain adaptation [129].
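The cycle-consistency term above can be sketched as follows; the L1 penalty and a weight of around 10 follow common CycleGAN implementations, and the two generators are assumed to be any image-to-image networks with matching input and output shapes.

```python
# Cycle-consistency loss sketch for CycleGAN (illustrative; lam ~ 10 is a common choice).
import torch.nn as nn

l1 = nn.L1Loss()

def cycle_consistency_loss(G_XY, G_YX, x, y, lam=10.0):
    forward_cycle = l1(G_YX(G_XY(x)), x)   # x -> Y -> back to X should recover x
    backward_cycle = l1(G_XY(G_YX(y)), y)  # y -> X -> back to Y should recover y
    return lam * (forward_cycle + backward_cycle)
```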
**ProGAN.** In 2017, Karras et al. introduced the Progressive Growing of GAN (ProGAN), addressing the limitations of traditional GANs such as training instability and low-resolution output [5]. ProGAN utilizes a progressive growth technique, gradually increasing the size and complexity of the generator and discriminator networks during training. This incremental approach enables the model to learn coarse characteristics first and subsequently refine them, ultimately producing high-resolution images. By starting with low-resolution image generation and progressively adding layers and details, ProGAN achieves training stability and generates visually realistic images of superior quality. This technique has found successful applications in various domains, including image synthesis, super-resolution, and style transfer. During training, the resolution of the generated images is increased progressively from a low resolution (e.g., 4x4) to a high resolution (e.g., 1024x1024). At each resolution level, the generator and discriminator networks are updated using a combination of loss functions. Progressive updates at increasing resolutions ensure high-quality image synthesis with fine features and textures throughout training, unlike the conventional GAN framework. ProGAN offers better scalability, enabling the generation of images at any resolution. It exhibits improved stability during training, overcoming issues like mode collapse. The flexibility of ProGAN makes it suitable for various image synthesis applications, including satellite imaging, video processing, and medical imaging [5]. However, training ProGAN can be computationally expensive, especially for large datasets or complex models. Interpretability may pose challenges, as with other GANs, making it difficult to discern the learned representations. Additionally, ProGAN's generalization to new or unexplored data may be limited, requiring further fine-tuning or training on fresh datasets [130].
**MidiNet.** MidiNet, proposed by Yang et al. in 2017, attempts to generate melodies or a series of MIDI notes in the symbolic domain [8]. Unlike other music generation frameworks, such as WaveNet [131], and Song from PI [132], the MidiNet model can generate melodies either from scratch or by combining the melodies of previous bars. The architectural configuration of the MidiNet framework is motivated by the DCGAN model [23]. The MidiNet model combines a CNN generator with a conditioner CNN in the first phase of training. While the former CNN is employed to generate synthetic melodies based on the random noise vector, the latter provides the available prior knowledge about other melodies in the form of an encoded vector as an optional input to the generator. Once the melody is generated it is processed with a CNN-based discriminator which consists of a few convolutional layers and a fully connected network. The discriminator is optimized using a cross-entropy loss function to efficiently detect whether the input is a real or a generated one. For training the overall network in MidiNet, the minimax loss function is combined with feature mapping and one-sided label smoothing to ensure learning stability and versatility in the generated content. The MidiNet framework proposes a unique CNN-GAN structure for the generation of symbolic melodies. Its ability to synthesize artificial music in the presence or absence of prior knowledge is very useful in the audio domain. However, due to the use of a CNN-based structure, its computational complexity significantly increases in comparison to the standard GAN model. Further research in this domain is required to understand the capabilities of MidiNet in multi-track music generation while simultaneously reducing its running time.
**SN-GAN.** Spectral Normalization GAN (SN-GAN) is a GAN variant that utilizes spectral normalization to stabilize the training of the generator and discriminator networks [133]. In conventional GANs, training can be unstable due to a powerful discriminator or poor-quality generator samples. SN-GAN addresses this by constraining the Lipschitz constant of the discriminator, preventing it from dominating the training process. Spectral normalization normalizes the discriminator's weight matrices, ensuring a stable maximum value and preventing the amplification of minor input perturbations. SN-GAN produces high-quality samples with improved stability and convergence compared to traditional GANs. The adversarial training process used in the SN-GAN framework, similar to the conventional GAN (as in Eq. 1), encourages \(G\) to produce more realistic samples that can fool \(D\), while \(D\) learns to accurately distinguish between real and generated samples. Several benefits of the SN-GAN model over the standard GAN include increased stability in training the generator and discriminator by constraining the Lipschitz constant of the discriminator. This mitigates issues like gradient explosion and mode collapse, resulting in high-quality examples with fine features and edges. SN-GAN is relatively simple to implement and can be integrated into existing GAN systems. However, the computation of singular values during the normalization process adds to the computational burden, potentially extending training time and requiring more memory. SN-GAN's reliance on the spectral norm assumption of discriminator weights may limit its applicability to specific GAN architectures. While SN-GANs may exhibit slower convergence and reduced sample diversity compared to conventional GANs, they excel in stability and
sample quality.
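In PyTorch, spectral normalization is available as a layer wrapper, so an SN-GAN-style discriminator can be sketched in a few lines; the layer sizes below are illustrative placeholders.

```python
# Discriminator with spectral normalization, as used in SN-GAN (illustrative sizes).
import torch.nn as nn
from torch.nn.utils import spectral_norm

def sn_discriminator(data_dim=784):
    # spectral_norm rescales each weight matrix by its largest singular value
    # at every forward pass, constraining the discriminator's Lipschitz constant.
    return nn.Sequential(
        spectral_norm(nn.Linear(data_dim, 256)), nn.LeakyReLU(0.2),
        spectral_norm(nn.Linear(256, 1)))
```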
**RGAN.** Relativistic GAN (RGAN) introduces a relativistic discriminator to enhance the stability and quality of GAN-generated samples [134]. Unlike traditional GANs, where the discriminator determines if a sample is real or fake, the RGAN discriminator estimates the probability that a genuine sample is more realistic than a fake sample, and vice versa. It compares the likelihood of a true sample being real with the likelihood of a fake sample being real. This approach guides the generator to produce samples that are more realistic than the discriminator's current estimates for both real and fake samples. To ensure this relativistic nature of RGAN, samples are considered from both real and fake data pairs \(\tilde{x}=(x_{R},x_{F})\), where \(x_{R}\sim\mathbb{P}_{\text{Real}}\) represents the real data and \(x_{F}\sim\mathbb{P}_{\text{Fake}}\) symbolize its fake counterpart. Mathematically, the generator and discriminator loss functions of the RGAN framework can be expressed as:
\[L_{G}= \mathbb{E}_{(x_{R},x_{F})\sim\left(\mathbb{P}_{\text{Real}}, \mathbb{P}_{\text{Fake}}\right)}\left[\tilde{g}_{1}\left(C\left(x_{R}\right)-C \left(x_{F}\right)\right)\right]\] \[+\mathbb{E}_{(x_{R},x_{F})\sim\left(\mathbb{P}_{\text{Real}}, \mathbb{P}_{\text{Fake}}\right)}\left[\tilde{g}_{2}\left(C\left(x_{F}\right)-C \left(x_{R}\right)\right)\right]\text{ and }\] \[L_{D}= \mathbb{E}_{(x_{R},x_{F})\sim\left(\mathbb{P}_{\text{Real}}, \mathbb{P}_{\text{Fake}}\right)}\left[\tilde{f}_{1}\left(C\left(x_{R}\right)-C \left(x_{F}\right)\right)\right]\] \[+\mathbb{E}_{(x_{R},x_{F})\sim\left(\mathbb{P}_{\text{Real}}, \mathbb{P}_{\text{Fake}}\right)}\left[\tilde{f}_{2}\left(C\left(x_{F}\right)-C \left(x_{R}\right)\right)\right],\]
where \(C(\cdot)\) is the non-transformed layer and \(\tilde{g}_{1},\tilde{g}_{2},\tilde{f}_{1},\tilde{f}_{2}\) are scalar-to-scalar functions. The term \(\left(C\left(x_{F}\right)-C\left(x_{R}\right)\right)\) of the modified loss function can be interpreted as the likelihood that the given fake data is more realistic than randomly sampled real data. The relativistic discriminator in RGAN enhances stability by mitigating issues like mode collapse and vanishing gradients, commonly observed in conventional GANs [134]. RGAN surpasses regular GANs in generating high-quality samples. It also exhibits improved resilience against adversarial attacks, ensuring sample security. However, these advantages come at the expense of higher computational requirements compared to regular GANs owing to the use of relativistic discriminator [126]. Additionally, RGAN necessitates careful hyperparameter tuning, including learning rate and regularization parameters, for optimal performance [135, 136, 137]. Furthermore, the efficacy of RGAN depends on the specific use, limiting its universal applicability.
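One concrete instance of this formulation, often called the standard relativistic GAN (RSGAN), is obtained by taking \(\tilde{f}_{1}(y)=\tilde{g}_{2}(y)=-\log\big(\operatorname{sigmoid}(y)\big)\) and setting the remaining two terms to zero. The sketch below writes these losses in the numerically stable logits form; it is illustrative rather than a definitive implementation.

```python
# Standard relativistic GAN (RSGAN) losses as one instance of the formulation above.
import torch
import torch.nn.functional as F

def rsgan_d_loss(c_real, c_fake):
    # D is trained to judge real samples as more realistic than fake ones:
    # -log sigmoid(C(x_R) - C(x_F)), written with the stable logits form.
    return F.binary_cross_entropy_with_logits(c_real - c_fake, torch.ones_like(c_real))

def rsgan_g_loss(c_real, c_fake):
    # G is trained to make fake samples look more realistic than real ones.
    return F.binary_cross_entropy_with_logits(c_fake - c_real, torch.ones_like(c_fake))
```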
**StarGAN.** StarGAN, a type of GAN model introduced in the work of Choi et al. [138], is specifically designed for multi-domain image-to-image translation. In contrast to the CycleGAN model [3], which focuses on translating images between two specific domains, StarGAN offers the capability to perform translations across a diverse range of domains using a single generator and discriminator. This model trains the generator network \(G\) to map the input image \(x\) to an output image \(y\) conditioned on the randomly generated target domain label \(c\), i.e., \(G(x,c)\longrightarrow y\). In the case of the discriminator network \(D\), an auxiliary classifier is used so that \(D\) produces probability distributions over both the source (real/fake) and the domain labels, \(D:x\longrightarrow\{D_{src}(x),D_{cls}(x)\}\). To ensure efficient multi-domain image translation, this framework utilizes several loss functions, namely the adversarial loss, the domain classification loss, and the reconstruction loss. The conventional adversarial loss ensures the generation of high-quality realistic images. The domain classification loss of real images optimizes \(D\) to accurately classify \(x\) to its original domain label \(c^{\prime}\), whereas the domain classification loss of fake images optimizes \(G\) to generate images that can be classified as the target domain \(c\). Overall, the domain classification loss ensures coherent multi-domain image translation in the StarGAN model. Furthermore, to ensure that the translated images retain the characteristics of the input image and exclusively modify the domain-related features, a reconstruction loss is used in training the generator network. The overall objective function of the StarGAN model is mathematically expressed as:
\[L_{G}=\mathbb{E}_{x}\left[\log D_{src}(x)\right]+\mathbb{E}_{x,c}\left[\log\left(1-D_{src}(G(x,c))\right)\right]+\lambda_{1}\mathbb{E}_{x,c}\left[-\log D_{cls}(c\mid G(x,c))\right]+\lambda_{2}\mathbb{E}_{x,c,c^{\prime}}\left[\left\|x-G\left(G(x,c),c^{\prime}\right)\right\|_{1}\right]\text{ and}\]
\[L_{D}=-\mathbb{E}_{x}\left[\log D_{src}(x)\right]-\mathbb{E}_{x,c}\left[\log\left(1-D_{src}(G(x,c))\right)\right]+\lambda_{1}\mathbb{E}_{x,c^{\prime}}\left[-\log D_{cls}\left(c^{\prime}\mid x\right)\right],\]
where \(\lambda_{1}\) and \(\lambda_{2}\) are the hyper-parameters that control the effect of the domain classification loss and the reconstruction loss in the StarGAN model, respectively. The training process involves iteratively optimizing the components of the loss functions to achieve high-quality multi-domain image-to-image translations. The StarGAN framework offers several advantages in multi-domain image translation tasks. It utilizes a single generator-discriminator network for all domains, reducing computational complexity. StarGAN can effectively learn domain mappings with limited or unpaired data and preserve the identity of input images in the same target domain. However, it has several drawbacks, including a complex loss function that leads to a time-consuming training process [139, 140]. Additionally, regulating image quality and handling translations between complex domains with significant appearance or structural changes can be challenging in StarGAN [141]. Moreover, this model can be used to manipulate images to a considerable extent which might lead to ethical concerns [142].
**BigGAN.** BigGAN, introduced by Brock et al. in 2018, is an innovative methodology for training GAN on a large scale to achieve a high-quality synthesis of natural images [110]. It aims to address the challenge of generating high-quality images with high resolutions, which traditional GANs struggle to achieve [33]. BigGAN stands out by employing large-scale architecture and a unique truncation technique that allows for the generation of high-fidelity images with intricate details and textures. The model is capable of producing images of various resolutions, reaching up to 512 \(\times\) 512 pixels, and has been trained on a substantial dataset of images. Similar to GAN (as in Eq. 1), during the training of BigGAN model gradient descent techniques are used to update the parameters of \(G\) and \(D\). The discriminator aims to maximize the objective, while the generator aims to minimize it. BigGAN introduces architectural modifications to enhance image quality and diversity. It incorporates class-conditional GANs and self-attention mechanisms. Regularization techniques like
orthogonal regularization and truncation tricks stabilize and control the generator's output. Data augmentation methods, such as progressive resizing and interpolation, are employed to handle high-resolution images effectively. The modified training approach in the BigGAN architecture enables the generation of high-quality images with detailed features and textures, surpassing the capabilities of regular GANs. This enhanced model offers scalability, addresses mode collapse issues, and has broad applications in fields such as video processing, satellite imaging, and medical imaging. However, it is computationally demanding, especially when dealing with large datasets or complex models [143, 144]. Additionally, the generalization of the framework to new, unseen data is limited, requiring further fine-tuning or training on fresh datasets [145].
**MI-GAN.** In the field of deep learning, constrained data sizes within the medical domain pose a significant challenge for supervised learning tasks, elevating concerns about overfitting. To address this, Iqbal et al. introduced Medical Imaging GAN (MI-GAN) in 2018, an innovative GAN framework tailored for Medical Imaging [146]. MI-GAN is specialized in generating synthetic retinal vessel images along with segmented masks based on limited input data. The architecture of the MI-GAN framework's generator network adopts an encoder-decoder structure. Given a random noise vector, the encoder functions as a feature extractor, capturing local and global data representations through its fully connected neural network design. These learned representations are then channeled into the decoder using skip connections, facilitating the generation of segmented images. The generator's enhancements encompass the integration of global standard segmented images and style transfer mechanisms, refining the segmented image generation process. Consequently, the modified MI-GAN generator is trained using a blend of adversarial, segmentation, and style transfer loss functions. In contrast, the discriminator network within the MI-GAN model consists of multiple convolutional layers, and it is trained using adversarial loss functions to effectively distinguish between real and generated images. MI-GAN refines the conditional GAN model for retinal image synthesis and segmentation. Remarkably, despite being trained with a mere ten real examples, this model holds tremendous potential in medical image generation. Nonetheless, this approach relies on spatial alignment to achieve superior outcomes, which can often be scarce [147].
**AttGAN.** AttGAN, also known as Attribute GAN, is a variation of the GAN framework that focuses on generating images with customizable properties such as age, gender, and expression. It was introduced by He et al. in 2019 in their work "AttGAN: Facial Attribute Editing by Only Changing What You Want" [148]. AttGAN aims to allow users to modify specific facial attributes while preserving the overall identity and appearance of the face. By manipulating attribute vectors, users can control the desired changes in the facial attributes, resulting in realistic and visually appealing image transformations. The AttGAN framework combines two sub-networks an encoder \(G_{\text{Enc}}\) and a decoder \(G_{\text{Dec}}\) in place of \(G\) of conventional GAN and it utilizes an attribute classifier \(C\) with the discriminator network. During the training phase, given an input image \(x^{\tilde{a}}\) with a set of \(n\)-dimensional binary attribute \(\tilde{a}\), \(G_{\text{Enc}}\) encodes \(x^{\tilde{a}}\) into a latent vector representation i.e., \(s=G_{\text{Enc}}\left(x^{\tilde{a}}\right)\). Simultaneously, \(G_{\text{Dec}}\) is employed for editing the attributes of \(x^{\tilde{a}}\) to another set of \(n\)-dimensional attributes \(\tilde{b}\) i.e., the edited image \(x^{\tilde{b}}\) is constructed as \(x^{\tilde{b}}=G_{\text{Dec}}\left(s,\tilde{b}\right)\). To perform this unsupervised learning task \(C\) is used with the encoder-decoder pair to constrain \(x^{\tilde{b}}\) to possess the desired qualities. Moreover, the adversarial loss used in the training process ensures realistic image generation. On the other hand, to allow for satisfactory preservation of attribute-excluding details in the network a reconstruction loss is utilized in the framework. This loss ensures that the interaction between the latent vector \(s\) with attribute \(\tilde{b}\) will always produce \(x^{\tilde{b}}\) and the interaction between \(s\) with attribute \(\tilde{a}\) will always produce \(x^{\tilde{a}}\), approximating the input image \(x^{\tilde{a}}\). Thus the overall loss function for the encoder-decoder-based generator of AttGAN can be expressed as:
\[L_{\text{Enc, Dec}}=\lambda_{\text{Rec}}\,\mathbb{E}_{x^{\tilde{a}}}\left[\left\|x^{\tilde{a}}-G_{\text{Dec}}\left(s,\tilde{a}\right)\right\|_{1}\right]+\lambda_{\text{Cl}_{G}}\,\mathbb{E}_{x^{\tilde{a}},\tilde{b}}\left[\text{H}\left(\tilde{b},C(x^{\tilde{b}})\right)\right]-\mathbb{E}_{x^{\tilde{a}},\tilde{b}}\left[D\left(x^{\tilde{b}}\right)\right]\]
and the loss for the classifier and the discriminator is formulated as:
\[L_{\text{D, Cls}}=\lambda_{\text{Cl}_{D}}\mathbb{E}_{x^{\tilde{ a}}}\left[\text{H}\left(\tilde{a},C(x^{\tilde{a}})\right)\right]-\\ \mathbb{E}_{x^{\tilde{a}}}\left[D\left(x^{\tilde{a}}\right) \right]+\mathbb{E}_{x^{\tilde{a}},\tilde{b}}\left[D\left(x^{\tilde{b}}\right) \right],\]
where \(\text{H}\) is the cross entropy loss, and \(\lambda_{\text{Rec}},\lambda_{\text{Cl}_{G}},\lambda_{\text{Cl}_{D}}\) are hyperparameters for balancing the losses. AttGAN offers several benefits in the image generation domain including precise control over the attributes of generated images, allowing users to modify age, gender, expression, and other qualities. It provides flexibility by adapting to multiple domains and tasks, enabling customization and flexibility in image synthesis applications. The model produces realistic images that approximate the desired attributes while maintaining the visual aspects of the original image. However, ethical considerations regarding representation, identity, and privacy must be addressed when using AttGAN or similar models [149, 17]. The computational complexity of AttGAN requires significant resources and may pose challenges for deployment in production settings or on resource-limited devices. Additionally, AttGAN relies on labeled data with attribute annotations, which may not always be readily available, and the performance and generalizability of the model can be influenced by the quantity and quality of the attribute annotations [150]. The distribution and diversity of the training data can also impact the model's performance and ability to handle uncommon or out-of-distribution features [151]. In conclusion, AttGAN provides precise attribute control, flexibility, and realistic image generation capabilities, but careful ethical considerations, resource requirements, and data dependencies should be taken into account when utilizing the model in practical applications.
**DM-GAN.** The Dynamic Memory GAN (DM-GAN) introduced by Zhu et al. in 2019 combines the power of GANs with a memory-augmented neural network design to overcome the limitations of conventional GANs [152, 153]. By addressing issues like mode collapse and lack of fine-grained control, DM-GAN aims to improve the image synthesis process. This deep learning model focuses on generating realistic images from text descriptions, tackling two main challenges in existing methods. Firstly, it addresses the impact of initial image quality on the refinement process, ensuring satisfactory results. Secondly, DM-GAN considers the importance of each word in conveying image content by incorporating a dynamic memory module. In the first stage of the two-stage training of the DM-GAN framework, a text encoder transforms the textual description into an internal representation, and a deep generative model produces an initial image from the encoded text and random noise. In the subsequent dynamic memory-based image refinement stage, the generated fuzzy image is processed using a memory writing gate, which selects relevant text information based on the initial image content, and a response gate, which fuses information from memories and image features. These advancements enable DM-GAN to generate high-quality images from text descriptions accurately. The dynamic memory module of DM-GAN enhances image generation by capturing long-range relationships and maintaining global context, resulting in persuasive and visually appealing images. It provides fine-grained control over attribute-guided synthesis and increases diversity by addressing mode collapse. However, DM-GAN's computational complexity and memory management pose challenges, and it relies on labeled data [154, 155]. The model's interpretability is limited due to the complexity of the memory module [156, 157]. In conclusion, DM-GAN offers enhanced image generation capabilities with control, diversity, and robustness, while considerations such as computational resources, data availability, and interpretability should be taken into account.
**SinGAN.** Single-Image GAN (SinGAN) is an unconditional generative model introduced by Shaham, et al. in 2019 for learning the internal statistics from a single image without the need for additional training data [158]. SinGAN allows for a wide range of image synthesis and manipulation tasks, including animation, editing, harmonization, and super-resolution, among many others. The key innovation of SinGAN is the use of a multi-scale pyramid of GANs, where each GAN is responsible for generating images at a different scale. This hierarchical structure enables SinGAN to capture both the global and local characteristics of the input image, resulting in high-quality and coherent output images. By training on a single image, SinGAN eliminates the need for a large dataset, making it a versatile and practical tool for image generation tasks. During the training phase of SinGAN, a hierarchical structure called the multi-scale pyramid is utilized. This pyramid consists of a series of generators denoted as \(\{G_{0},G_{1},\ldots,G_{N}\}\). The generators take input patches of the image at different downsampled levels, represented as \(\{x_{0},x_{1},\ldots,x_{N}\}\), where each level is downsampled by a factor of \(r^{n}\) (\(r>1\)). The generators, along with their corresponding discriminators \(D_{n}\), are trained using adversarial training. The goal is to generate realistic samples that cannot be distinguished from the downsampled image \(x_{n}\). The SinGAN architecture consists of 5 convolutional blocks in both \(G_{n}\) and \(D_{n}\) networks. Each block consists of a 3\(\times\)3 convolutional layer with 32 kernels, followed by batch normalization and LeakyReLU activation. The patch size for the discriminator remains fixed at 11\(\times\)11 across all pyramid levels. During training, the generator and discriminator networks are iteratively updated to optimize a combination of adversarial loss and reconstruction loss. As the training progresses to higher pyramid levels, the generator incorporates the output from the previous level, enabling it to capture finer details and generate more realistic images. To enhance the model's ability to handle diverse variations, noise injection is introduced during training, where random noise patterns are added to the input image at each scale. This helps in generating diverse outputs. The training process continues until convergence, where the generator is capable of synthesizing images that closely resemble the training image at all scales of the pyramid.
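As a concrete illustration of one pyramid level, the sketch below implements a single-scale SinGAN-style generator in PyTorch: five convolutional blocks operating on the noise-injected, upsampled output of the coarser scale, with a residual connection. The channel counts, padding, and residual formulation are simplifications for illustration, not an exact reproduction of the published design.

```python
import torch
import torch.nn as nn

class SinGANScaleGenerator(nn.Module):
    """Sketch of one pyramid-level generator G_n: five 3x3 conv blocks
    (Conv -> BatchNorm -> LeakyReLU, with a Conv + Tanh tail) applied to
    injected noise plus the upsampled output of the coarser scale."""

    def __init__(self, channels=32):
        super().__init__()
        def block(in_c, out_c):
            return nn.Sequential(
                nn.Conv2d(in_c, out_c, kernel_size=3, padding=1),
                nn.BatchNorm2d(out_c),
                nn.LeakyReLU(0.2),
            )
        self.body = nn.Sequential(
            block(3, channels),
            block(channels, channels),
            block(channels, channels),
            block(channels, channels),
            nn.Conv2d(channels, 3, kernel_size=3, padding=1),
            nn.Tanh(),
        )

    def forward(self, noise, prev_upsampled):
        # noise injection: perturb the coarser scale's upsampled output
        out = self.body(noise + prev_upsampled)
        return out + prev_upsampled   # residual refinement of the coarser image
```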
SinGAN offers numerous advantages in image manipulation tasks, requiring minimal data. It enables controlled alteration, synthesis, and modification of images, allowing users to adjust lighting, colors, textures, and objects. The model produces aesthetically realistic and visually consistent results that align with the input image. Its multi-stage training process captures global and local characteristics, resulting in high-quality outputs. However, SinGAN lacks explicit control over specific image traits and quality depends on input image quality and quantity [159]. Ethical considerations should be addressed, and the model is computationally complex with limited interpretability [160]. Nevertheless, SinGAN's multi-stage training has gained popularity due to its versatility and the powerful image generation capabilities it offers.
**PATE-GAN.** In our data-centric world, safeguarding data privacy holds paramount importance, ensuring the protection of individual rights, ethical data handling, and the establishment of a reliable digital environment. It ensures a harmonious blend of leveraging the benefits of data-driven technologies while respecting individuals' autonomy and rights. To address these concerns and to enable the ethical usage of real-world data in various machine-learning frameworks, Jordan et al. in 2019 proposed the Private Aggregation of Teacher Ensembles Generative Adversarial Network (PATE-GAN) framework [161]. Combining the differential privacy principles of Private Aggregation of Teacher Ensembles (PATE) with the generative prowess of GANs, PATE-GAN generates synthetic data for training algorithms while aiming for a positive societal impact. Similar to the conventional GAN model, PATE-GAN comprises a generator network that receives a latent vector as input and provides generated data as an output. However, in the discriminator aspect, PATE-GAN innovatively integrates the PATE mechanism involving multiple teacher discriminators and a single student discriminator. The teacher discriminators classify real and generated samples within their dataset
segments, while the student discriminator employs the labels aggregated from the teacher discriminators to classify generated samples. The framework's training employs an asymmetric adversarial process, where teachers aim to enhance their loss relative to the generator, the generator targets the student's loss, and the student seeks to optimize its loss against the teachers. This arrangement with the student discriminator ensures differential privacy concerning the original dataset.
**POLY-GAN.** Introduced by Pandey et al. in 2020, Poly-GAN is a novel conditional GAN architecture aimed at fashion synthesis [95]. This architecture is designed to automatically dress human model images in diverse poses with different clothing items. Poly-GAN employs an encoder-decoder structure with skip connections for tasks like image alignment, stitching, and inpainting. The training procedure of the Poly-GAN framework consists of four steps. This model takes input images, including a reference garment and a model image for clothing placement. Initially, pre-processing involves using a pre-trained LCR-Net\(++\) pose estimator [162] to extract the model's pose skeleton and a U-Net\(++\) segmentation network [125, 163] to obtain the segmented mask of the old garment from the model image. The Poly-GAN pipeline begins by passing the reference garment and generated RGB pose skeleton through the generator to create a garment image that aligns with the skeleton's shape. The architecture of \(G\) follows an encoder-decoder structure. The encoder incorporates three components: a Conv module for propagating pose skeleton information at each layer, a ResNet module for generating a feature vector [164], and a Conv-norm module with two convolutional layers to process the other two modules' outputs. On the other hand, the decoder learns to produce the desired garment image based on pose condition embedding sent by the encoder using skip connections. The transformed garment image and segmented pose skeleton are sent as inputs to the second stage of the network for image stitching, yielding an image of the pose skeleton with the reference attire. In the third stage, the model performs inpainting to eliminate any irregularities in the generated model image. The discriminator, similar in structure to SR-GAN [124], is employed during these stages to differentiate real from fake images. Finally, in the fourth stage, post-processing is applied, stitching the model's head to the image to produce the final output. The Poly-GAN framework utilizes adversarial, GAN, and identity losses for training, ensuring high image quality and minimizing texture and color discrepancies from real images. Poly-GAN presents an advancement in fashion synthesis compared to other models [165], as it operates with multiple conditional inputs and achieves satisfactory fitting results without requiring 3D model information [166]. However, the generated images can exhibit texture deformation and body part loss, affecting the fitting outcomes [167]. Further research is needed to address these issues in this domain.
**MIEGAN.** Mobile Image Enhancement GAN (MIEGAN), introduced by Pan et al. in 2021, is a novel approach within the realm of GAN-based architectures, with the primary objective of elevating the visual caliber of images taken via mobile devices [168]. This endeavor involves several modifications to the conventional GAN architecture. In the MIEGAN model, a multi-module cascade generative network is utilized, which combines an autoencoder and a feature transformer. The encoder of this modified generator comprises two streams, with the second stream responsible for enhancing the regions with low luminance - a common issue in mobile photography leading to reduced clarity. In the feature transformative module, the local and global information of the image is further captured using a dual network structure. Furthermore, to enhance the generative network's ability to produce images of superior visual quality, an adaptive multi-scale discriminator is employed in lieu of a standard single discriminator in the MIEGAN model. This multi-scale discriminator serves to differentiate between real and fake images on both global and local scales. To harmonize the evaluations from the global and local discriminators, an adaptable weight allocation strategy is utilized in the discriminator. Additionally, this model is trained based on a contrast loss mechanism and a mixed loss function, which further enhances the visual quality of the generated images. Despite the image quality enhancement capabilities of the MIEGAN framework, its high computational complexity poses a significant challenge for real-time application in mobile photography.
**VQGAN.** Vector Quantized GAN (VQGAN) introduces a novel methodology that merges the capabilities of GAN with vector quantization techniques to generate high-quality images [169]. This approach effectively leverages the synergies between the localized interactions of CNN and the extended interactions of Transformers [19] in tasks involving the conditional synthesis of data. The distinctive architecture of VQGAN not only yields images of exceptional quality but also empowers a degree of creative influence, enabling the manipulation of various attributes within the generated content. The training process of the VQGAN architecture unfolds in two pivotal phases. Initially, a variational autoencoder and decoder are trained, as opposed to the conventional GAN generator network. This training aims to reconstruct the image by utilizing a discrete latent vector representation derived from the input image. This intermediate representation is subsequently linked to a codebook, efficiently capturing the underlying semantic information. To augment the fidelity of the reconstructed image, a discriminator is incorporated into the autoencoder structure. The training of the autoencoder model, the codebook, and the discriminator involves optimizing a fusion of adversarial loss and perceptual loss functions. In the subsequent phase, the codebook indices, constituting the intermediate image representations, are fed into Transformers. These Transformers are trained through a transformer loss mechanism, guiding them to predict the succeeding indices within the encoded sequence, resulting in an improved codebook representation. Finally, the information from the codebook is utilized by the decoder to generate images of
higher resolutions. The unique aspect of VQGAN lies in its ability to allow users to manipulate generated images in creative ways. By modifying the quantized codes, users can control specific features of the generated content, thereby unlocking a spectrum of artistic potentials. Nonetheless, the caliber of the images generated by VQGAN depends largely on its input data, necessitating expansive datasets and substantial computational resources to produce images of exceptional excellence [170]. Consequently, this restricts its immediate applicability in real-time case studies. Moreover, the codebook representation used in the vector quantization process can significantly reduce the variation in the generated images [171].
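The codebook lookup that links the convolutional autoencoder to the Transformer stage can be sketched as follows. This is a simplified nearest-neighbour quantizer with a straight-through gradient; the codebook size, commitment weight, and loss bookkeeping are assumptions for illustration, not the exact VQGAN recipe.

```python
import torch
import torch.nn as nn

class VectorQuantizer(nn.Module):
    """Sketch of a VQGAN-style codebook: each encoder feature vector is
    replaced by its nearest codebook entry; gradients pass through via the
    straight-through estimator."""

    def __init__(self, num_codes=1024, code_dim=256, beta=0.25):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, code_dim)
        self.beta = beta

    def forward(self, z):                                    # z: (B, C, H, W)
        b, c, h, w = z.shape
        flat = z.permute(0, 2, 3, 1).reshape(-1, c)          # (B*H*W, C)
        dist = torch.cdist(flat, self.codebook.weight)       # distances to all codes
        idx = dist.argmin(dim=1)                             # nearest code index
        z_q = self.codebook(idx).view(b, h, w, c).permute(0, 3, 1, 2)

        # commitment + codebook terms keep encoder outputs and codes aligned
        vq_loss = self.beta * ((z_q.detach() - z) ** 2).mean() \
                  + ((z_q - z.detach()) ** 2).mean()
        z_q = z + (z_q - z).detach()                         # straight-through gradient
        return z_q, idx.view(b, h, w), vq_loss
```

The returned code indices are the discrete sequence that the Transformer stage is then trained to predict autoregressively.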
**DALL-E.** DALL-E is an advanced text-to-image generative framework created by OpenAI that utilizes a two-stage process to generate images from textual prompts [172, 173]. It combines the concepts of GANs and Transformers to generate highly realistic and coherent images from textual descriptions. What sets DALL-E apart is its ability to generate realistic art and images from textual descriptions that may describe completely novel concepts or objects. The working principle of the pre-trained DALL-E model comprises two phases. The first stage involves a prior model that generates a Contrastive Language-Image Pretraining (CLIP) [174] image embedding, capturing the essential gist of the image based on the provided caption. In the second stage, a decoder model known as GLIDE takes the image embedding and reconstructs the image itself, gradually removing noise and generating a realistic and visually coherent image. The CLIP model, consisting of a text encoder and an image encoder, is trained using contrastive training to learn the relationship between images and their corresponding captions. This allows the model to generate the CLIP text embedding from the input caption. Further, the prior model of DALL-E processes this text representation to generate the CLIP image embedding. In the case of the decoder, DALL-E utilizes a Diffusion model [22], which generates the image by using the CLIP image embedding and the CLIP text embedding as an additional input. DALL-E's two-stage process offers advantages in prioritizing high-level semantics and enabling intuitive transformations. It excels in generating creative and imaginative images based on textual descriptions, making it valuable for creative tasks. However, training DALL-E requires substantial computational resources and presents challenges in fine-tuning and attribute control. Ethical concerns and biases surrounding AI-generated content also arise [175, 176]. Moreover, the lack of interpretability and explainability of this framework restricts its applications in legal, medical, or safety-sensitive domains [177]. Nevertheless, DALL-E represents a significant advancement in image synthesis and has garnered attention for its creative potential. Ongoing research, such as DALL-E 2 [178], continues to push the boundaries of this field and attempts to mitigate the explainability concerns [179].
**CEGAN.** Class imbalance is a prevalent challenge across many real-world datasets. In the context of classification tasks, this skewed distribution of classes leads to a significant bias favoring the majority class. Previous studies have suggested oversampling approaches, involving the artificial generation of samples from the minority class, as an efficient mechanism to mitigate this issue. Classification Enhancement GAN (CEGAN) model introduces a solution to address the class imbalance issue through the utilization of a GAN-based framework, as outlined in the work by Suh et al. [99]. This model particularly focuses on enhancing the quality of data generated from the minority class, thereby mitigating the classifier's bias towards the distribution of the majority class. Differing from the conventional GAN model, the CEGAN framework combines three distinct networks - a generator, a discriminator, and a classifier. The training process of the CEGAN model involves a two-step sequence. In the initial phase, the generator generates synthetic data using input noise and real class labels. Simultaneously, the discriminator distinguishes between real and synthetic data, while the classifier assigns class labels to input samples. The subsequent stage involves the integration of the generated samples with the original training data, creating an augmented dataset for training the classifier. The CEGAN framework serves as an efficient methodology that incorporates techniques such as data augmentation, noise reduction, and ambiguity reduction to effectively tackle class imbalance problems. Notably, this approach overcomes the limitations associated with traditional resampling techniques, as it avoids the need to modify the original dataset.
**SeismoGen.** SeismoGen is a seismic waveform synthesis technique that utilizes GAN for seismic data augmentation [87]. The motivation behind SeismoGen arises from the need for abundant labeled data for accurate earthquake detection models. To overcome the scarcity of seismic waveform datasets, Wang et al. introduced the SeismoGen framework, employing GAN to generate realistic multi-labeled waveform data based on limited real seismic datasets. Incorporating this additional dataset enhances the training of machine learning-based seismic analysis models, leading to more robust predictions for out-of-sample datasets. The mathematical formulation of the SeismoGen framework follows the Wasserstein GAN [109] framework and can be expressed as:
\[L_{G}= -\mathop{\mathbb{E}}_{z\sim\mathrm{N}(0,1)}D(G(z)),\] \[L_{D}= \mathop{\mathbb{E}}_{z\sim\mathrm{N}(0,1)}D(G(z))-\mathop{\mathbb{E}}_{x\sim P_{\text{data}}}D(x)+\lambda\mathop{\mathbb{E}}_{z\sim\mathrm{N}(0,1)}\left[\left(\left\|\nabla D(G(z))\right\|_{2}-1\right)^{2}\right],\]
where the noise \(z\) is a standard normal variable and \(\lambda\) is a hyperparameter. The primary objective is to minimize the difference between the true seismic waveforms and the synthetic waveforms generated by SeismoGen. This is achieved by iteratively optimizing \(L_{G}\) and \(L_{D}\) to find an equilibrium between the generator and discriminator networks. SeismoGen has demonstrated its ability to generate highly realistic seismic waveforms, making it valuable for seismic waveform analysis and data augmentation. Its conditional generation feature allows users to produce waveforms labeled with specific categories, enhancing its versatility for various
applications. SeismoGen is scalable and capable of generating large databases of artificial waveforms, which is beneficial for tasks requiring extensive training data. However, SeismoGen's effectiveness is influenced by the quality and distribution of the training data. It does not model the expected waveform move-out, which is relevant in various seismic research. Additionally, due to imbalanced real seismic waveform datasets, SeismoGen struggles to generate data with rare characteristics. Moreover, the computational cost of training and using SeismoGen may be a limiting factor, especially for real-time seismic hazard assessment applications. As a relatively new technology, there might be some potential for unexpected behavior when using SeismoGen, as its full capabilities and limitations are yet to be fully explored.
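A minimal PyTorch sketch of the two losses above is given below. The gradient penalty is evaluated at the generated waveforms to mirror the expression as written (standard WGAN-GP evaluates it at interpolates between real and fake samples), and the generator/discriminator interfaces are assumed for illustration.

```python
import torch

def seismogen_style_losses(G, D, real, z, lam=10.0):
    """Sketch of WGAN-style generator/discriminator losses with a
    gradient penalty, following the SeismoGen-style formulation above."""
    fake = G(z)
    g_loss = -D(fake).mean()                                  # L_G = -E[D(G(z))]

    fake_pen = fake.detach().requires_grad_(True)             # penalty point
    grad = torch.autograd.grad(D(fake_pen).sum(), fake_pen, create_graph=True)[0]
    penalty = ((grad.flatten(1).norm(2, dim=1) - 1.0) ** 2).mean()

    d_loss = D(fake.detach()).mean() - D(real).mean() + lam * penalty
    return g_loss, d_loss
```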
**MetroGAN.** Zhang et al. introduced Metropolitan GAN (MetroGAN) as a geographically informed generative deep learning model for urban morphology simulation [84]. MetroGAN incorporates a progressive growing structure to learn urban features at various scales and leverages physical geography constraints through geographical loss to ensure that urban areas are not generated on water bodies. The generation of cities with MetroGAN involves a global city dataset comprising three layers: terrain (digital elevation model), water, and nighttime lights, effectively capturing the physical geography characteristics and socioeconomic development of cities. The model detects and represents over 10,000 cities worldwide as 100km \(\times\) 100km images. The mathematical formulation of the MetroGAN framework is a modified version of the LSGAN model [121], which can be expressed as follows:
\[L^{*} =\arg\min_{G}\max_{D}\frac{1}{2}\mathbb{E}_{x,y}\left[\left(D(x,y)-1\right)^{2}\right]+\frac{1}{2}\mathbb{E}_{x,z}\left[\left(D(x,G(x,z))\right)^{2}\right]\] \[+\lambda_{L1}L_{L1}(G)-\lambda_{\text{Geo}}\mathbb{E}_{x,z}\left[x_{\text{water}}\odot G(x,z)\right],\]
where images \(x\) with corresponding labels \(y\) and a random vector \(z\) in the latent space are fed into \(G\) to produce simulated images \(G(x,z)\). Both real input pairs \((x,y)\) and simulated pairs \((x,G(x,z))\) are then presented to \(D\) to distinguish real images from fake ones and also to assess if the input pairs match. The objective loss function comprises different terms, including least square adversarial loss (from the first two expectation terms), \(L1\) loss denoted as \(L_{L1}\), and a geographical loss with hyperparameters \(\lambda_{L1}\) and \(\lambda_{\text{Geo}}\), respectively. The geographical loss (last term) utilizes Hadamard product \(\odot\) to filter out pixels that generate urban areas on water area \(x_{\text{water}}\). MetroGAN, a robust urban morphology simulation model, has several notable advantages and limitations. On the positive side, it incorporates geographical knowledge, resulting in enhanced performance. Its progressive growing structure allows for stable learning at different scales, while multi-layer input ensures precise city layout generation. The model's evaluation framework covers various aspects, ensuring the quality of its output. Furthermore, MetroGAN finds wide applications in urban science and data augmentation. However, these strengths come with challenges, including high computational costs due to extensive data requirements and dependence on data quality, which may hinder its performance with noisy or missing data. Additionally, the model lacks interpretability, making it difficult to understand the reasoning behind its predictions, and it may struggle to represent all intricate features of complex urban systems effectively.
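The geographical constraint is the most distinctive term and is simple to express in code: a Hadamard product between the water layer and the generated urban map, added to the generator's loss so that urban pixels generated on water are penalized. The tensor shapes and sign convention in the sketch below are illustrative assumptions, not the paper's exact implementation.

```python
import torch

def geographical_penalty(fake_city, water_mask, lam_geo=1.0):
    """Sketch of MetroGAN's geographical loss term.

    fake_city : (B, 1, H, W) generated urban-area map G(x, z)
    water_mask: (B, 1, H, W) binary water layer x_water (1 = water pixel)
    Returns a penalty that grows when urban intensity falls on water.
    """
    return lam_geo * (water_mask * fake_city).mean()   # E[x_water ⊙ G(x, z)]
```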
**M3GAN.** Anomaly detection in multi-dimensional time series data has received tremendous attention in the fields of medicine, fault diagnosis, network intrusion, and climate change. In [102], the authors proposed M2GAN (a GAN framework based on a masking strategy for multi-dimensional anomaly detection) and M3GAN (M2GAN with a mutable filter) to improve the robustness and accuracy of GAN-based anomaly detection methods. M2GAN generates sufficiently realistic fake samples by directly reconstructing real samples. This is done by extracting diverse information from the original data through the masking method, which improves the robustness of the model. M3GAN fuses the fast Fourier transform (FFT) [180] and wavelet decomposition [181] to obtain a mutable filter to process the raw data so that the model can learn various types of anomalies. The architecture of the M2GAN framework utilizes the AAE [117] in place of the generator of the conventional GAN model for generating realistic fake data. A masking strategy of the AAE enhances the variability within the original time series and overcomes the mode collapse problem. For the discriminator network, this framework employs an AnoGAN [182] architecture that distinguishes between normal data and anomalous data using DCGAN [23]. The M3GAN model combines a dynamic switch-based adaptive filter selection mechanism with the multidimensional anomaly detection capabilities of the M2GAN model. This approach allows one to select the most suitable filter for the given data that better exploits the complex characteristics of the series, leading to improved accuracy in anomaly detection. Both M2GAN and M3GAN architectures excel in spotting anomalies in multi-dimensional time series data, offering adaptability for dynamic settings. Their capacity to generate synthetic data aids tasks like diverse model training. However, their high computational complexity leads to extended processing times. Moreover, their limited interpretability also poses a significant challenge in understanding the marked anomalies. Further research is needed in this domain to address these issues and provide support for adaptive filter parameters in M3GAN.
**CNTS.** Cooperative Network for Time Series (CNTS), introduced by Yang et al. in 2023, is a reconstruction-based unsupervised anomaly detection technique for time series data [103]. This model aims to overcome the limitations of the previous generative methods that were sensitive to outliers and showed sub-optimal anomaly detection performance due to their emphasis on time series reconstruction. The CNTS framework consists of two FEDformer [183] networks, namely a reconstructor (\(R\)) and a detector (\(D\)). The reconstructor aims to regenerate the series that closely matches the known data distribution (without anomalies) i.e., data reconstruction. On the other hand, the detector focuses on identifying the
values that deviate from the fitted data distribution, effectively detecting anomalies. Despite having different purposes, these two networks are trained using a cooperative mode, enabling them to leverage mutual information. During the training phase, the reconstruction error of \(R\) serves as a labeling mechanism for \(D\), while \(D\) provides crucial information to \(R\) regarding the presence of anomalies, enhancing the robustness to outliers. Thus the multi-objective function of the CNTS model can be expressed as:
\[\left[\begin{array}{l}\min_{\theta_{D}}\sum_{i=1}^{n}L_{D}\left(D(x_{i},\theta_{D}),L_{R}\left(x_{i},R(x_{i},\theta_{R})\right)\right)\\ \min_{\theta_{R}}\sum_{i=1}^{n}\left(1-\hat{y}_{i}(x_{i},\theta_{D})\right)L_{R}\left(x_{i},R(x_{i},\theta_{R})\right)\end{array}\right],\]
where \(x_{i}\) is the value for the \(i^{th},i=1,2,\ldots,n\) time stamp of the input series, \(\theta_{D}\) and \(\theta_{R}\) denotes the parameters of \(D\) and \(R\), while \(L_{D}\) and \(L_{R}\) represent their corresponding loss functions, respectively. The categorical label \(\hat{y}_{i}\) indicates the presence of anomalies as identified by \(D\) and helps to remove data with high anomaly scores, thereby reducing their impact on the training of \(R\). The cooperative training approach employed by CNTS allows it to model complex temporal patterns present in real-world time series data, thus significantly enhancing its performance in various anomaly detection tasks. The flexibility and adaptability of the CNTS model make it robust to the presence of outliers in the series. However, the presence of the dual-network architecture of the CNTS model increases its computational complexity, hindering its real-time applicability. Moreover, the lack of interpretability of the model poses a significant challenge to its potential use cases. Furthermore, the success of the CNTS model is contingent on the availability of representative and diverse time series datasets and the choice of sub-networks. Further research in this domain is required to comment on the performance of the model for diverse datasets and appropriate sub-network choices.
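The cooperative coupling between \(R\) and \(D\) can be illustrated as a single training step, as in the sketch below. The per-timestep anomaly scores from `D`, the mean-based pseudo-labeling rule, and the thresholding are simplifying assumptions; the paper's FEDformer sub-networks and exact label construction are not reproduced here.

```python
import torch
import torch.nn.functional as F

def cnts_cooperative_step(R, D, x, opt_R, opt_D, threshold=0.5):
    """Sketch of one cooperative CNTS update.
    R(x): reconstruction of the series; D(x): per-timestep anomaly score in [0, 1].
    Reconstruction errors give pseudo-labels for D; D's scores down-weight
    suspected anomalies in R's reconstruction loss."""
    # Update the detector D with pseudo-labels from R's reconstruction error.
    with torch.no_grad():
        rec_err = (R(x) - x) ** 2
        pseudo = (rec_err > rec_err.mean()).float()     # crude anomaly labels
    d_loss = F.binary_cross_entropy(D(x), pseudo)
    opt_D.zero_grad(); d_loss.backward(); opt_D.step()

    # Update the reconstructor R, masking out points D flags as anomalous.
    with torch.no_grad():
        y_hat = (D(x) > threshold).float()              # \hat{y}_i
    r_loss = ((1.0 - y_hat) * (R(x) - x) ** 2).mean()
    opt_R.zero_grad(); r_loss.backward(); opt_R.step()
    return d_loss.item(), r_loss.item()
```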
**RidgeGAN.** RidgeGAN, introduced by Thottolil et al. in 2023, is a hybridization of the nonlinear kernel ridge regression (KRR) [184, 185] and the generative CityGAN model [10]. This framework aims to predict the transportation network of future small and medium-sized cities of India by analyzing the spatial indicators of human settlement patterns. This prediction is crucial for facilitating sustainable urban planning and traffic management systems. The RidgeGAN framework operates in three steps. Firstly, it generates an urban universe for India based on spatial patterns by learning urban morphology using the CityGAN model [82]. Secondly, it utilizes KRR to study the relationship between the human settlement indices (HSI) and the transportation indices (TI) of 503 real small and medium-sized cities in India. Finally, the KRR model's regression framework is applied to the synthetic hyper-realistic samples of future cities to predict their TI. The RidgeGAN framework has applications in diverse areas, such as analyzing urban land patterns, forecasting essential urban infrastructure, and assisting policymakers in achieving a more inclusive and effective planning process. Moreover, this model is especially valuable when designing the transportation network of developing nations with limited or partial real data, as the model can produce data that closely resembles actual urban morphology and helps in data augmentation. However, the framework does not evaluate its performance on the generated human settlements, which is crucial in the urban planning procedure. Further studies in this domain are indeed required to understand the suitability of the framework for large cities as well.
## VI Recent Theoretical Advancements of GAN
Empirical studies have shown great success of GANs and their variants in producing state-of-the-art results in diverse domains ranging from image, video, and text generation to automatic vehicles, time series, and drug discovery, among many others. The mathematical reasoning of GANs is to approximate the unknown distribution of given data by optimizing an objective function through an adversarial game between a family of generators and a family of discriminators. Biau et al. [192] analyzed the mathematical and statistical properties of GANs by establishing connections between adversarial principles and Jensen-Shannon (JS) divergence. Their work provides the large sample properties for the parameters of the estimated distribution and a result towards the central limit theorem. Another cousin approach of GAN called WGAN has more stable training dynamics than typical GANs. Biau et al. [193] studied the convergence of empirical WGANs as the sample size approaches infinity. More recently, the rate of convergence for density estimation with GANs has been studied in [194]. In particular, they studied the non-asymptotic properties of the vanilla GAN and derived a theoretical guarantee of the density estimation with GANs under a proper choice of deep neural network classes representing generators and discriminators. It suggests that the resulting estimates converge to the true density (\(p^{*}\)) in terms of the JS divergence at the rate of \((\log n/n)^{2\beta/(2\beta+d)}\), where \(n\) is the sample size, \(\beta\) determines the smoothness of \(p^{*}\), and \(d\) is the data dimension. By Theorem 2 of [194], if \(G\) and \(D\) are chosen to be classes of neural networks with rectified quadratic unit (ReQU) activation functions, the estimate \(p_{\tilde{g}}\) satisfies the following inequality for its JS divergence from the true density \(p^{*}\), with probability at least \(1-\delta\):
\[\mathrm{JS}\left(p_{\tilde{g}},p^{*}\right)\lesssim\left(\frac{\log n}{n} \right)^{\frac{2\beta}{2\beta+d}}+\frac{\log\left(1/\delta\right)}{n}.\]
The above mathematical result suggests that the convergence rate of vanilla GAN's density estimate in the JS divergence is faster than \(n^{-1/2}\) when \(\beta>\frac{d}{2}\); therefore, the obtained rate is minimax optimal for the considered class of densities. Meitz et al. [195] studied statistical inference for GAN by addressing two critical issues for the generator and discriminator's parameters, namely consistent estimation and confidence sets. Mbacke et al. [196] studied PAC-Bayesian generalization bound for WGANs based on Wasserstein distance and Total variational distance. The generalization properties of GANs try to answer the following question: How to certify that the learned distribution \(p_{\tilde{g}}\) is "close" to the true one \(p^{*}\)? This question is pivotal since the true distribution \(p^{*}\) is unknown in real problems and generative models can only access its
empirical counterpart. Liu et al. [197] studied how well GAN can approximate the target distribution under various notions of distributional convergence. Lin et al. [198] showed that under certain conditions GAN-generated samples inherently satisfy some (weak) privacy guarantees. Another study offers a theoretical perspective on why GANs sometimes fail for certain generation tasks, in particular, sequential tasks such as natural language generation [199]. Further research on the comparative theoretical aspects, both pros and cons, of different generative approaches will enhance support for the wide applications of GANs and address their limitations.
## VII Evaluation Measures
In contrast to conventional deep learning architectures that employ convergence-based optimization of the objective function, generative models like GANs utilize a minimax loss function, trained iteratively to establish equilibrium between the generator and discriminator networks [1]. The absence of an objective loss function for GAN training restricts the ability of loss measurements to assess training progress or model performance. To address this challenge, a mix of qualitative and quantitative GAN evaluation approaches has been developed [200]. These evaluation measures particularly vary based on the quality and diversity of the generated synthetic data, as well as the potential applications of the generated data [201].
Owing to the lack of consensus amongst the researchers on the use of a universal metric to gauge the performance of the deep generative models, different metrics have been developed in the last decade with their unique strengths and particular applicability [47]. In this section, we will briefly overview the popular evaluation measures used in different applications.
### _Inception Score_
The Inception Score (IS) is a widely used metric to assess the quality and diversity of GAN-generated samples [202]. It leverages a pre-trained neural network classifier called Inception v3 [203], which was initially trained on the Imagenet [204] dataset containing a diverse range of real-world images categorized into 1,000 classes. The IS measures the quality of generated samples based on their classification probabilities predicted by Inception v3. Essentially, higher-quality samples are expected to be strongly classified into specific classes, implying low entropy. In general, the IS value ranges between 1 and the number of classes in the classifier, reflecting the diversity of the generated samples, with higher scores indicating better performance. Nevertheless, the Inception Score does come with a number of limitations. It encounters challenges when dealing with instances of mode collapse, wherein the generated samples by GANs are extremely similar, causing artificially inflated IS values that don't accurately represent diversity. Additionally, it relies on the performance of the Inception v3 model, which might not always align with human perception of image quality. To mitigate these drawbacks of IS, several modified versions have been proposed in the literature. For example, the modified Inception Score (m-IS) attempts to address the mode collapse problem in GAN by evaluating the diversity of images with the same category [205]. Other modification of IS includes the Mode Score (MS) which
evaluates the quality and diversity of the generated data by considering the prior data distribution of the labels [206].
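Given the class probabilities that a pre-trained classifier (e.g., Inception v3) assigns to a batch of generated images, the score reduces to a few lines of NumPy. The split-free, single-batch form below is a simplified sketch of the usual implementation.

```python
import numpy as np

def inception_score(probs, eps=1e-12):
    """Sketch of IS = exp( E_x[ KL( p(y|x) || p(y) ) ] ).

    probs: (N, K) softmax outputs of a pre-trained classifier for N
           generated images over K classes (assumed pre-computed).
    """
    p_y = probs.mean(axis=0, keepdims=True)                        # marginal p(y)
    kl = (probs * (np.log(probs + eps) - np.log(p_y + eps))).sum(axis=1)
    return float(np.exp(kl.mean()))
```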
### _Frechet Inception Distance_
The Frechet Inception Distance (FID) is a widely used evaluation metric that measures the quality and diversity of GAN-generated images [49]. It calculates the similarities and differences between the distributions of real and generated images using the Frechet distance, which is a form of the Wasserstein-2 distance. The FID metric calculates the mean and covariance of both the real and generated images and then computes the distance between their distributions. Mathematically the FID is expressed as:
\[\mathrm{FID}=\left\|\mu-\mu_{w}\right\|_{2}^{2}+\mathrm{tr}\left(\Sigma+\Sigma_{w}-2\left( \Sigma\Sigma_{w}\right)^{1/2}\right),\]
where (\(\mu\), \(\Sigma\)) and (\(\mu_{w}\), \(\Sigma_{w}\)) represent the mean and covariance pair for the real images and the generated images respectively.
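Given pre-extracted Inception activations for the real and generated sets, the distance above can be computed as in the following sketch; feature extraction itself and numerical safeguards for the matrix square root are assumed to be handled elsewhere.

```python
import numpy as np
from scipy import linalg

def frechet_inception_distance(feat_real, feat_fake):
    """Sketch of FID from (N, d) feature matrices of real and generated images."""
    mu_r, mu_f = feat_real.mean(axis=0), feat_fake.mean(axis=0)
    cov_r = np.cov(feat_real, rowvar=False)
    cov_f = np.cov(feat_fake, rowvar=False)

    covmean, _ = linalg.sqrtm(cov_r @ cov_f, disp=False)   # matrix square root
    covmean = covmean.real                                  # drop tiny imaginary parts

    diff = mu_r - mu_f
    return float(diff @ diff + np.trace(cov_r + cov_f - 2.0 * covmean))
```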
The strength of FID lies in its ability to account for various forms of contamination, such as Gaussian noise, Gaussian blur, black rectangles, and swirls, among others. FID's incorporation of these factors contributes to a more robust evaluation of GAN-generated images. As a widely accepted and utilized metric, FID offers a common ground for comparing results across different GAN architectures, promoting a standardized approach for assessing image quality [5, 6, 207].
### _Multi-Scale Structural Similarity_
The Multi-Scale Structural Similarity metric (MS-SSIM), an extension of the traditional Structural Similarity Index (SSIM), serves as an effective measure for evaluating the quality of GAN-generated images [208]. MS-SSIM focuses on comparing image structures, including luminance and contrast, across different scales. This metric provides a comprehensive evaluation of the similarity between the real and synthesized datasets, considering their structural and geometric aspects. Moreover, the ability of MS-SSIM to account for strong dependencies between closely correlated pixels enhances its sensitivity to perceptual quality.
### _Classifier Two-Sample Test_
Classifier Two-Sample Test (C2ST) is a classification-based approach that evaluates the generalization capabilities of GAN for any synthetic data generation task [209]. This metric utilizes a classifier (for example, 1-Nearest Neighbour [210]) to distinguish between the real and generated samples. The performance of this classifier is then used as a metric to determine the quality of the generated samples. The C2ST metric provides an essential tool for measuring the performance of GAN-based architectures for any applied domains, since the classifier is not restricted to a specific data type. Moreover, it focuses on the discriminative aspect of the generated data quality and complements other evaluation metrics that focus on the distributional and perceptual aspects of the generated data.
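A minimal version of the test with a 1-nearest-neighbour classifier is sketched below using scikit-learn; the train/test split ratio and the accuracy-based reading (close to 0.5 means the generated samples are indistinguishable from real data) are illustrative choices.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

def c2st_accuracy(real, fake, seed=0):
    """Sketch of a classifier two-sample test: train a 1-NN classifier to
    separate real from generated samples and report held-out accuracy."""
    X = np.vstack([real, fake])
    y = np.concatenate([np.ones(len(real)), np.zeros(len(fake))])
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.3, stratify=y, random_state=seed)
    clf = KNeighborsClassifier(n_neighbors=1).fit(X_tr, y_tr)
    return clf.score(X_te, y_te)   # ~0.5 is ideal; ~1.0 signals poor generation
```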
### _Music Evaluation Metric_
Evaluating the quality of music generated by GANs presents unique challenges due to the subjective nature of musical perception. Traditional quantitative metrics like those used for image evaluation may not fully capture the richness and complexity of musical content. However, several methods have been developed to assess the quality and coherence of GAN-generated music. Certain objective evaluation metrics encompass factors such as musical characteristics, structure, style, uniqueness, and tonality, drawing from statistical representations [35]. Amid these, subjective listening is the most reliable metric for evaluating GAN-generated music. This approach encompasses dimensions like melody, harmony, rhythm, and emotional resonance, thereby furnishing insightful glimpses into the musical caliber.
### _Maximum Mean Discrepancy_
Maximum Mean Discrepancy (MMD) is a statistical measure that quantifies the dissimilarity between two probability distributions. In the context of GAN evaluation, MMD is employed to assess the quality of generated samples by comparing them with real data distributions based on their mean values in a high-dimensional space [211]. A lower MMD score indicates that the difference between the two data distributions is relatively smaller, hence the synthetic data is similar to the original data.
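With a Gaussian (RBF) kernel, the biased squared MMD between a real and a generated sample set reduces to three kernel averages, as in the sketch below; the kernel bandwidth is an assumed hyperparameter.

```python
import numpy as np

def rbf_mmd2(X, Y, sigma=1.0):
    """Sketch of the biased squared MMD with an RBF kernel.
    X: (N, d) real samples, Y: (M, d) generated samples."""
    def kernel(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)   # pairwise sq. distances
        return np.exp(-d2 / (2.0 * sigma ** 2))
    return kernel(X, X).mean() + kernel(Y, Y).mean() - 2.0 * kernel(X, Y).mean()
```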
### _Time Series Evaluation Metric_
Assessing time series GAN models presents a notable challenge due to the temporal dependencies inherent in the data. Traditional evaluation metrics tailored to static image datasets struggle to capture the intricate patterns found in sequential data. As a result, a combined approach of qualitative and quantitative measures is employed for evaluation purposes [37]. Qualitative assessment relies primarily on human visual judgment when examining the generated samples. However, these methods lack objectivity. To address this limitation, a range of quantitative evaluation techniques is employed within GAN-based time series evaluation. These encompass metrics such as root mean square error, Wasserstein-1 distance, dynamic time warping, and Pearson correlation coefficient, among others.
### _Uncertainty Quantification in GANs_
Uncertainty Quantification (UQ) plays a vital role in characterizing and estimating the uncertainties in both computation and real-world applications. Because the analysis of physical processes based on computer models is riddled with uncertainty, this uncertainty has to be addressed to perform 'trustworthy' model-based inference [212]. Oberdiek et al. presented a method to quantify uncertainties of deep neural networks in image classification based on GANs. By employing GANs to generate out-of-distribution (OoD) samples, their methodology enables the classifier to effectively gauge uncertainties for both OoD examples and minor positives [213]. He et al. presented a survey on UQ models for deep
neural networks based on two types of uncertainty sources, namely data uncertainty and model uncertainty [214]. They highlighted that GAN-based models can capture the structure of data uncertainty, however, they are hard to train. Another survey [215] highlighted various measures to quantify uncertainties in deep neural networks. However, it still remains difficult to validate existing methods due to the lack of uncertain ground truths.
## VIII Limitations and scope for improvement
Although GANs have brought a transformative shift in generative modeling, it's crucial to address the substantial challenges embedded within their training process that demand careful consideration [202]. Various architectural modifications of GAN (as discussed in Section V) aim to address specific GAN-related issues and optimize their overall performance. In this section, we summarize the different obstacles in GAN and discuss their potential remedies.
### _Mode Collapse_
The foremost challenge during GANs training is mode collapse (MC), a phenomenon where the generator's output becomes constrained, yielding repetitive samples that lack the comprehensive range of the target data distribution [173]. MC arises when the generator doesn't explore the full spectrum of potential outputs and instead generates identical outputs for distinct inputs from the latent space. This issue can manifest due to an overpowering discriminator or insufficient feedback for the generator to diversify its outputs [216]. Partial and complete mode collapse are its two variants, with the former leading to a limited diversity in generated data and the latter resulting in entirely uniform patterns across generated samples. While partial mode collapse is common, complete mode collapse is relatively rare [47].
Many efforts have been made to tackle the mode collapse problem [217, 218]. Some of these approaches include the application of Unrolled GAN [219] where the generator network is updated by unrolling the discriminator's update steps, unlike the conventional GAN, where \(D\) is first updated while \(G\) is kept fixed and \(G\) is updated based on the updated \(D\). Moreover, mini-batch discrimination is often used to mitigate the MC problem [202]. In this approach, instead of modeling each data example independently, \(D\) processes multiple data examples in mini-batches. The use of modified loss functions, for example, Least-Square GAN [121], Wasserstein GAN [109], Cycle consistency GAN [3] also reduces the mode collapse problem.
### _Vanishing Gradients_
The vanishing gradients problem is another significant challenge encountered during the training phase of GANs. This issue emerges due to the complex architecture of GANs, where both \(G\) and \(D\) need to maintain a balance and learn collaboratively [220]. During the training process, as gradients are backpropagated through the layers of the network, they can diminish drastically, leading to stagnancy in learning. This circumstance can occur when the discriminator becomes very accurate, such as when \(D(G(z))=0\) and \(D(x)=1\), or when \(D\) is inadequately trained and fails to differentiate between real and generated data. Consequently, the loss function might approach zero, hindering constructive feedback to the generator and restricting the generation of high-quality data. Several strategies have been proposed to address vanishing gradients in GANs. One approach is to use a modified loss function, such as the Least-Square GAN [121], which mitigates the vanishing gradient problem to a considerable extent. Furthermore, advanced optimization algorithms, alternative activation functions, and batch normalization strategies are often adopted to reduce the effect of vanishing gradients during GANs training.
### _Learning Instability and Nash Equilibrium_
The architectural characteristics of GAN involve a complex interplay between the two deep neural networks in an adversarial manner. Their training happens in a cooperative yet competitive way using a zero-sum game strategy where both \(G\) and \(D\) aim to optimize their respective objective functions to achieve the Nash equilibrium i.e., a state beyond which they can not improve their performance unilaterally [48]. While this cooperative architecture aims to optimize a global loss function, the optimization problems faced by the individual networks are fundamentally opposing. Due to this complexity in the loss function, there can be situations where some minor adjustments in one network can trigger substantial modifications in the other. Moreover, when both the networks aim to independently optimize their loss functions without coordination, attaining the Nash equilibrium can be hard. Such instances of desynchronization between the networks can lead to instability in the overall learning process and substantially increase the computation time [221]. To counter this challenge, recent advancements in GAN architectures have been focusing on enhancing training stability. The feature matching technique improves the stability of the GAN framework by introducing an alternative cost function for \(G\) combining the output of the discriminator [202]. Additionally, historical averaging of the parameters [202], unrolled GAN [219], and gradient penalty [122] strategies mitigate learning instability and promote convergence of the model.
### _Stopping Problem_
During GANs training, determining the appropriate time at which the networks are fully optimized is crucial for addressing the problems related to overfitting and underfitting. However, in GANs, due to the minimax objective function, determining the state of the networks based on their respective loss functions is impossible. To address this issue related to the GANs stopping criterion, researchers often employ an early stopping approach, where training halts based on a predefined threshold or the lack of improvement in evaluation metrics.
### _Internal Distributional Shift_
The internal distributional shift often called internal covariate shift refers to the changing distribution in the network
activations of the current layer with respect to the previous layer. In the context of GAN, when the generator's parameters are updated, the distribution of its output may change, leading to internal distributional shifts in subsequent layers and causing the discriminator's learning to lag behind. This phenomenon affects the convergence of the GAN training process, and the computational complexity of the network significantly increases to counter the shifts. To address this issue, the batch normalization technique is widely adopted in various applications of GAN [222].
## IX Discussion
Over the past decade, GANs have emerged as the foremost and pivotal generative architecture within the areas of computer vision, natural language processing, and related fields. To enhance the performance of GAN architecture, numerous studies have focused on the following: (i) the generation of high-quality samples, (ii) diversity in the simulated samples, and (iii) stabilizing the training algorithm. Constant efforts and improvements of the GAN model have resulted in plausible sample generation, text/image-to-image translations, data augmentation, style transfer, anomaly detection, and other applied domains.
Recent advancements in machine learning with the help of Diffusion models [223, 224, 222], also known as score-based generative models, have made a strong impression on a variety of tasks including image denoising, image inpainting, image super-resolution, and image generation. The primary goal of Diffusion models is to learn the latent structure of the dataset by modeling the way in which data points diffuse through the latent space. [225] has shown that Diffusion models outperform GANs on image synthesis due to their better stability and the absence of mode collapse. However, the cost of synthesizing new samples and the computational time needed to produce realistic images are shortcomings when applied to real-time applications [226, 227]. Because GANs need fine-tuning of their hyperparameters, Transformers [19], which can adopt self-attention layers, have been used to enhance the results of GANs. This helps in designing larger models and replacing the neural network models of \(G\) and \(D\) within the GAN structure. TransGAN [228] introduces a GAN architecture without convolutions by using Transformers in both \(G\) and \(D\) of the GAN, resulting in improved high-resolution image generation. [229] presented an intersection of GANs and Transformers to predict pedestrian paths. Although Transformers and their variants have several advantages, they suffer from high computational (time and resource) complexity [230]. More recently, physics-informed neural networks (PINN) [20] were introduced as a universal function approximator that can incorporate knowledge of physical laws to govern the data in the learning process. PINNs overcome the low data availability issue [231], a setting in which GANs and Transformers lack robustness and become ineffective. A GAN framework based on a physics-informed (PI) discriminator for uncertainty quantification is used to inject knowledge of the physics during the learning of both the \(G\) and \(D\) models. The Physics-informed Discriminator GAN (PID-GAN) [232] does not suffer from an imbalance of generator gradients arising from multiple losses. Another architecture, namely the Physics-informed GAN (PI-GAN) [233], tackles the problem of sequence generation with limited data. It integrates a transition module in the generator part that can iteratively construct the sequence with only one initial point as input. Solving differential equations using GANs to learn the loss function was presented in the Differential Equation GAN (DEQ-GAN) model [234]. Combining GANs with PINNs achieved solution accuracies that are competitive with popularly used numerical methods.
Large language models (LLMs) [21] became a very popular choice for their ability to understand and generate human language. LLMs are neural networks that are trained on massive text datasets to understand the relationship between words and phrases. This enables LLMs to generate text that is both coherent and grammatically correct. Recently, LLMs and their cousin ChatGPT revolutionized the field of natural language processing, question-answering, and creative writing. Additionally, LLMs and their variants are used to create creative content such as poems, scripts, and codes. GANs and LLMs are two powerful co-existing models where the former is used to generate realistic images. Mega-TTS [235] adopt a VQGAN [169] based acoustic model and a latent-code language model called Prosody-LLM (P-LLM) [236] to solve zero-shot text-to-speech at scale with intrinsic inductive bias. Future works in the hybridization of GANs with several other architectures will be a promising field of future research.
## X Future Research Direction
Despite the substantial advancements achieved by GAN-based frameworks over the past decade, there remain a number of challenges spanning both theoretical and practical aspects that require further exploration in future research. In this section, we identify these gaps that necessitate deeper investigation to enhance our comprehension of GANs. The summary is presented below:
**Fundamental questions on the theory of GANs.** Recent advancements in the theory of GAN by [192, 193, 197] explored the role of the discriminator family in terms of JS divergence and some large sample properties (convergence and asymptotic normality) of the parameter describing the empirically selected generator. However, the fundamental question of how well GANs can approximate the target distribution \(p^{*}\) remains largely unanswered. From the theoretical perspective, there is still a mystery about the role and impact of the discriminator on the quality of the approximation. The universal consistency and the rate of convergence of GANs and their variants still remain open problems.
**Improvement of training stability and diversity.** Achieving the Nash equilibrium in GAN frameworks, which is essential for the generator to learn the actual sample distribution, requires stable training mechanisms [237, 238]. However, attaining this optimal balance between the generator and discriminator remains challenging. Various approaches have been explored, such as WGAN [109], SN-GAN [133], One-sided Label Smoothing [203], and WGAN with gradient penalty (WGAN-GP) [122], to enhance training stability. Additionally, addressing mode collapse, a common GAN issue
that leads to limited sample diversity, has prompted strategies like WGAN [109], Unrolled GAN [219], generator regulating GAN (GRGAN) [239], and Adaptive GAN [240]. Future research could focus on devising techniques to stabilize GAN training and alleviate problems like mode collapse through regularization methods, alternative loss functions, and optimized hyperparameters. Incorporating methods like multi-modal GANs, designed to generate diverse outputs from a single input, might contribute to enhancing sample diversity [239].
**Data scarcity in GAN.** Addressing the issue of data scarcity in GANs stands as a crucial research trajectory. To expand GAN applications, forthcoming investigations could focus on devising training strategies for scenarios with limited data. Approaches such as few-shot GANs, transfer learning, and domain adaptation offer the potential to enhance GAN performance when data is scarce [241, 242]. This challenge becomes especially pertinent when acquiring substantial datasets poses difficulties. Additionally, refining training algorithms for maximal data utility could be pursued. Bolstering GAN effectiveness in low-data situations holds pivotal significance for broader adoption across various industries and domains.
**Ethics and privacy.** Since its inception in 2014, GAN development has yielded substantial benefits in research and real-world applications. However, the inappropriate utilization of GANs can give rise to latent societal issues such as producing deceptive content, malicious images, fabricated news, deepfakes, prejudiced portrayals, and compromising individual safety [243]. To tackle these issues, the establishment of ethical guidelines and regulations is imperative [244]. Future research avenues might center on developing robust techniques to detect and alleviate ethical concerns associated with GANs, while also advocating their ethical and responsible deployment in diverse fields. Essential to this effort is the creation of forgery detection methods capable of effectively identifying AI-generated content, including images produced through GANs. Furthermore, GANs can be susceptible to adversarial attacks, wherein minor modifications to input data result in visually convincing yet incorrect outputs [116, 245]. Future investigations could prioritize the development of robust GANs that can withstand such attacks, alongside methods for identifying and countering them. Ensuring the integrity and reliability of GANs is of utmost importance, particularly in contexts like authentication, content verification, and cybersecurity [246, 216].
**Real-time implementation and scalability.** While GANs have shown immense potential, their resource-intensive nature hinders real-time usage and scalability. Recent GAN variants like ProGAN [5] and Att-GAN [148] aim to address this complexity. Future efforts might focus on crafting efficient GAN architectures capable of generating high-quality samples in real time, which is vital for constrained platforms like mobile devices and edge computing. Integrating GANs with reinforcement learning, transfer learning, and supervised learning, as seen in RidgeGAN [10], opens opportunities for hybrid models with expanded capabilities. Research should delve into hybrid approaches, leveraging GANs alongside other techniques for enhanced generative potential. Additionally, exploring multimodal GANs that produce diverse outputs from multiple modalities can unlock novel avenues for creating complex data [247].
**Human-centric GANs.** GANs have the potential to enable human-machine creative cooperation [248]. Future research could emphasize human-centric GANs, integrating human feedback, preferences, and creativity into the generative process. This direction might pave the way for interactive and co-creative GANs, enabling the production of outputs aligned with human preferences and needs, while also involving users as active participants in the generation process.
**Other innovative applications and industry usage.** Initially designed for generating realistic images, GANs have exhibited impressive performance in computer vision. While their application has extended to domains like time series generation [102, 103], audio synthesis [8], and autonomous vehicles [120], their use outside computer vision remains somewhat constrained. The divergent nature of image and non-image data introduces challenges, particularly in non-image contexts like NLP, where discrete values such as words and characters predominate [199]. Future research can aim to overcome these challenges and enhance GANs' capabilities in discrete data scenarios. Furthermore, exploring unique applications of GANs in fields like finance, education, and entertainment offers the potential to introduce new possibilities and positively impact various industries [249]. Collaborative efforts across disciplines could also harness diverse expertise, fostering synergies to enhance GANs' adaptability across a broad spectrum of applications [250].
## XI Conclusion
In this article, we presented a survey of GANs, their variants, and a detailed analysis of the wide range of GAN applications in several applied domains. In addition, we reviewed recent theoretical developments in the GAN literature and the most common evaluation metrics. Beyond these, one of the core contributions of this survey is a discussion of the obstacles faced by various GAN architectures and their potential solutions for future research. Overall, we discussed GANs' potential to facilitate practical applications not only in image, audio, and text but also in relatively uncommon areas such as time series analysis, geospatial data analysis, and imbalanced learning. In the discussion section, apart from GANs' significant successes, we detailed their shortcomings due to time complexity and unstable training. Although GANs have been phenomenal for the generation of hyper-realistic data, recent progress in deep learning tells a more nuanced story. Recently developed architectures such as diffusion models have demonstrated significant success and outperformed GANs on image synthesis. On the other hand, Transformers, a deep learning architecture based on a multi-head attention mechanism, have been used within GAN architectures to enhance their performance. Furthermore, large language models, widely used deep learning systems designed for comprehending and producing natural language, have been incorporated into GAN architectures to bolster their effectiveness. The hybridization of PINNs and GANs, namely PI-GAN, can solve inverse and mixed stochastic problems based on a limited number of scattered measurements. Moreover, whereas GANs ordinarily rely on large amounts of training data, embedding physical laws inside GANs in the form of stochastic differential equations can mitigate the limited-data problem. Several hybrid approaches combining GANs with other powerful deep learners are showing great merit and success, as discussed in the discussion section. Finally, several applications of GANs over the last decade are summarized and critically assessed throughout the article.
|
2302.04991 | A Graph-Based Modeling Framework for Tracing Hydrological Pollutant
Transport in Surface Waters | Anthropogenic pollution of hydrological systems affects diverse communities
and ecosystems around the world. Data analytics and modeling tools play a key
role in fighting this challenge, as they can help identify key sources as well
as trace transport and quantify impact within complex hydrological systems.
Several tools exist for simulating and tracing pollutant transport throughout
surface waters using detailed physical models; these tools are powerful, but
can be computationally intensive, require significant amounts of data to be
developed, and require expert knowledge for their use (ultimately limiting
application scope). In this work, we present a graph modeling framework --
which we call ${\tt HydroGraphs}$ -- for understanding pollutant transport and
fate across waterbodies, rivers, and watersheds. This framework uses a
simplified representation of hydrological systems that can be constructed based
purely on open-source data (National Hydrography Dataset and Watershed Boundary
Dataset). The graph representation provides a flexible intuitive approach for
capturing connectivity and for identifying upstream pollutant sources and for
tracing downstream impacts within small and large hydrological systems.
Moreover, the graph representation can facilitate the use of advanced
algorithms and tools of graph theory, topology, optimization, and machine
learning to aid data analytics and decision-making. We demonstrate the
capabilities of our framework by using case studies in the State of Wisconsin;
here, we aim to identify upstream nutrient pollutant sources that arise from
agricultural practices and trace downstream impacts to waterbodies, rivers, and
streams. Our tool ultimately seeks to help stakeholders design effective
pollution prevention/mitigation practices and evaluate how surface waters
respond to such practices. | David L. Cole, Gerardo J. Ruiz-Mercado, Victor M. Zavala | 2023-02-10T00:30:38Z | http://arxiv.org/abs/2302.04991v3 | # A Graph-Based Modeling Framework for Tracing
###### Abstract
Anthropogenic pollution of hydrological systems affects diverse communities and ecosystems around the world. Data analytics and modeling tools play a key role in fighting this challenge, as they can help identify key sources as well as trace transport and quantify impact within complex hydrological systems. Several tools exist for simulating and tracing pollutant transport throughout surface waters using detailed physical models; these tools are powerful, but can be computationally intensive, require significant amounts of data to be developed, and require expert knowledge for their use (ultimately limiting application scope). In this work, we present a graph modeling framework--which we call HydroGraphs--for understanding pollutant transport and fate across waterbodies, rivers, and watersheds. This framework uses a simplified representation of hydrological systems that can be constructed based purely on open-source data (National Hydrography Dataset and Watershed Boundary Dataset). The graph representation provides a flexible intuitive approach for capturing connectivity and for identifying upstream pollutant sources and for tracing downstream impacts within small and large hydrological systems. Moreover, the graph representation can facilitate the use of advanced algorithms and tools of graph theory, topology, optimization, and machine learning to aid data analytics and decision-making. We demonstrate the capabilities of our framework by using case studies in the State of Wisconsin; here, we aim to identify upstream nutrient pollutant sources that arise from agricultural practices and trace downstream impacts to waterbodies, rivers, and streams. Our tool ultimately seeks to help stakeholders design effective pollution prevention/mitigation practices and evaluate how surface waters respond to such practices.
**Keywords**: graph theory, hydrology, connectivity, pollutants, nutrients, watersheds, lakes, rivers.
## Highlights
* We present a general framework for representing hydrological systems as graphs
* The graph-based framework can help identify pollutant transport and fate across waterbodies, rivers, and watersheds
* Representing these systems as graphs enables advanced metrics and algorithms to aid data analytics and decision-making
* We apply our framework to case studies in Wisconsin, USA to highlight applications
## 1 Introduction
Anthropogenic pollution in hydrological systems has significant impacts on communities and ecosystems around the globe. These pollutants arise from diverse sources and come in many forms; common pollutants include nutrients such as nitrogen- and phosphorus-based fertilizers (Carpenter et al., 1998), emerging contaminants (ECs) (Wilkinson et al., 2017), microplastics (Haddout et al., 2022), heavy metals (Ciazela et al., 2018), and microbes (Nawab et al., 2016). These contaminants can move through surface waters (e.g., lakes, rivers, streams) and groundwaters by following complex network pathways, leading to impacts near the contaminant release as well as to downstream impacts that span thousands of miles. Studies have shown that many of these pollutants can be toxic to humans and animals, and many of these impacts are still not fully understood. For instance, ECs include chemicals such as pharmaceuticals, personal care products, and per- and poly-fluoroalkyl substances (PFAS) (Tong et al., 2022). Risk assessment studies on some ECs suggest that they can impact the immune system or cause cancer, while other ECs have little published health information (Bonato et al., 2020; Cousins et al., 2020). Other pollutants, such as heavy metals or microbes, can likewise lead to cancer or waterborne diseases, such as dysentery or diarrhea (Khan et al., 2013; Lim et al., 2008).
Pollutants can also cause significant environmental and economic problems; for instance, nutrient pollution is a major driver of harmful algae blooms (HABs) in both freshwater and saline waterbodies (Nie et al., 2018; Shortle and Horan, 2017; Committee on the Causes and Management of Coastal Eutrophication et al., 2000). Nutrient pollution and HABs can destroy marine wildlife through anoxia, poisoning, and other mechanisms (Bauman et al., 2010; Brusle, 1994; Rabotyagov et al., 2020), can cause human health impact, and they can decrease property values and hurt recreational and fishing operations on the order of billions of US dollars (Dodds et al., 2009; Sampat et al., 2021). Many pollutants--such as ECs, microplastics, and heavy metals--have also been shown to bioaccumulate in wildlife, leading to long-term effects (Copat et al., 2012; Zhang et al., 2019; Wilkinson et al., 2017). Furthermore, the fate of some pollutants such as plastics is not fully understood; many of these contaminants have uncertain environmental and human health impacts and can degrade into smaller compounds with unknown properties (Gogoi et al., 2018; Wilkinson et al., 2017). These pollutants also tend to travel (via hydrological systems) over long distances and find their way to oceans (Ho et al., 2019; Rabotyagov et al., 2020). Moreover, such pollutants may disproportionately impact vulnerable communities, such as those in rural areas and developing countries (Ashbolt, 2004).
To better understand and combat the environmental, economic, and health impacts of hydrological pollutants, there is a need for intuitive and easy-to-use tools that can help navigate complexity and answer questions of interest. For instance, for a given pollutant release, it is important to understand which parts of a hydrological system will likely be impacted or how far a pollutant can travel. Similarly, if a contaminant is discovered in a given river or lake, we might be interested in identifying the possible upstream sources from which it originated. These types of questions are challenging to address because hydrological systems involve large and highly interconnected networks. For instance, the continental United States contains more than 85,000 lakes (King et al., 2021) and over one million kilometers of rivers and streams (U.S. EPA, 2020). Moreover, interconnections between waterbodies span multiple spatial scales and are often non-intuitive. For instance, the Mississippi River is connected to waterbodies in Pennsylvania but not to waterbodies in Michigan. As a result, pollutant transport can also be complex, as many pollutants can originate from point and non-point sources (Carpenter et al., 1998; Nie et al., 2018; Xue et al., 2022), can come from far upstream (Saul et al., 2019), can involve significant spatial/temporal scales (i.e., legacy pollutants) (Motew et al., 2017; Li et al., 2018; Sharpley et al., 2013), and can be dependent on many factors such as weather, topography, soil type, or land cover (Sharpley et al., 1993; Van Es et al., 2004; Zhu et al., 2021).
Diverse tools exist for understanding pollutant transport using detailed physical models (Costa et al., 2021; Lindim et al., 2016; Mispan et al., 2015; Tong et al., 2022; Wellen et al., 2015; Yuan et al., 2020); these tools are powerful, but are computationally intensive and may require significant expertise and data to be used (ultimately limiting application scope). A modeling approach that can help trace pollutant pathways in a more simplified manner consists of representing hydrological systems as graphs (networks). Graphs are mathematical representations (models) that are comprised of sets of nodes and edges; nodes represent different objects (e.g., lakes), and edges are links placed between nodes (e.g., rivers and streams connecting lakes). A wide range of applications of graph representations have been explored in science and engineering (from cosmology to social networks and infrastructure networks). The success of graph modeling tools in such applications has been due to the availability of diverse analysis/visualization tools and of underlying algorithms that enable scalable analysis. In addition, graphs provide an intuitive and flexible approach for analyzing connectivity, which can be an important factor in understanding pollutant impacts and transport within hydrological systems (Carpenter and Lathrop, 2014; Cheruvelil et al., 2022; Soranno et al., 2015).
Graph models have been used in different studies to model hydrological systems. Recently, King and co-workers (King et al., 2021) used graphs for analyzing connectivity of lakes in the US; however, this work did not target hydrological pollutant tracing and the graph representation used did not capture rivers and streams as nodes within the graph. Other works have used graphs to represent river systems, but these did not include explicit connectivity to waterbodies or analyzed pollutant transport to waterbodies (Abed-Elmdoust et al., 2017; Heckmann et al., 2015; Schmidt et al., 2020; Tejedor et al., 2015; Tejedor et al., 2015; Zaliapin et al., 2010). In addition, there are other datasets that capture connectivity of some hydrological systems, such as the Watershed Index Online (WSIO; captures connectivity of HUC12 watersheds) (U.S. EPA, 2022) or the Wisconsin Department of Natural Resources (DNR) 24K Hydrography Geodatabase (U.S. EPA, 2014, Wisconsin DNR, 2017). These sources do not explicitly represent the waterbody-river system as a graph, but they provide important data that can be incorporated into a graph model of rivers and waterbodies. As will be shown later, a graph model can provide an easy way to incorporate pollutant data, and it can be applied to areas not covered by WSIO or the Wisconsin DNR 24K Hydrography Geodatabase.
In this work, we present a graph modeling framework--which we call HydroGraphs--for capturing watershed-river-waterbody connectivity in hydrological systems. The proposed framework was implemented in Python and was developed with the goal of analyzing pollutant transport (Figure 1). A unique aspect of our tool is that it allows the user to trace pollutant transport in surface waters from a given watershed or pollutant source through downstream waterbodies and watersheds. Here, we provide a detailed description of the methods used for building a graph representation starting from public and open-source databases such as the National Hydrography Dataset (NHDPlusV2) (McKay et al., 2012) and Watershed Boundary Dataset (WBD) (USGS, 2022) by using GeoPandas (Jordahl et al., 2020) and NetworkX(Hagberg et al., 2008).
We demonstrate the capabilities of HydroGraphs by providing case studies in the State of Wisconsin, where our resulting graph contains over 45,000 nodes, representing more than 3,000 km\({}^{2}\) of waterbodies and
79,000 km of rivers and streams. We also provide case studies showing how the graph framework can be applied to analyze upstream sources and downstream impacts of anthropogenic pollution. In these studies, we analyze nutrient pollution in Wisconsin, a challenge that has lasted for decades and that originates from intensive agricultural practices and other anthropogenic sources. We emphasize with these studies how HydroGraphs can easily incorporate data for point and non-point pollutant sources as well as impact data; in particular, we use data for hundreds of concentrated animal feeding operations (CAFOs) as point sources within the graph and include over 93,000 km\({}^{2}\) of agricultural land as non-point sources of pollution and we show how to link such source data to impact data (chlorophyll-a concentration in lakes to monitor the onset of HABs). With this, we aim to show how HydroGraphs can be a valuable tool for researchers and decision-makers to conduct quick assessments of pollutant impacts on the environment. Moreover, we discuss how the framework can be used in conjunction with supply chain optimization models to understand how changes in agricultural practices or in infrastructure can increase (or decrease) the quality of hydrological systems.
## 2 Graph Representation of Hydrological Systems
The focus of our work was to create a graph modeling tool, HydroGraphs, to link upstream pollutant sources and downstream pollutant impacts. Specifically, we aim to develop a tool that helps us answer questions such as: What waterbodies will be affected by a pollutant release in a specific watershed? What upstream pollutant releases may be impacting a given waterbody? What waterbodies may be "storing" pollutants along a given pathway? To answer such questions, it is necessary to model the connectivity between the objects of interest (e.g., pollutant sources or waterbodies) and to create simple and intuitive ways of analyzing these interconnections. Graphs provide a natural mathematical representation to achieve these goals. In this section, we provide an overview of graph representations, outline the data needed to build such graphs, and outline the specific steps to express the graph connectivity. Our framework can be used to capture diverse hydrological systems in the United States (or elsewhere in the world, provided that data is available in the required format). In the next section, we illustrate these general capabilities by building graph representations to capture interconnectivity of
Figure 1: An overview of HydroGraphs, a graph-based modeling framework for incorporating GIS data and pollutant data to trace anthropogenic pollution transport through surface waters. The framework uses open data from the National Hydrography Dataset (NHDPlusV2) (McKay et al., 2012) and the Watershed Boundary Dataset (USGS, 2022), and uses GeoPandas (Jordahl et al., 2020) and NetworkX (Hagberg et al., 2008) for building and analyzing the resulting graph representation. Total phosphorus and chlorophyll-a data shown in the top right are from the Wisconsin Department of Natural Resources for Lake Winnebago, WI [Wisconsin DNR, c].
surface waters in the State of Wisconsin and we explore pollutant tracing applications.
An illustration of the methodology followed by our framework is provided in Figure 2. We start with river segments (line features) and waterbodies (polygon features) within watersheds (panel a). From existing data in the NHDPlusV2, we create a graph of the river network (panel b), which does not include any waterbodies. We then determine the connectivity of the waterbodies and add them to the river graph (panel c).
### Graph Theory Overview
Graphs are modeling abstractions that are comprised of a set of nodes and edges. Nodes are used to represent diverse objects/elements of a system while edges are used to model connectivity between nodes. As a graph is a mathematical model, there are many different ways in which nodes and edges can be defined and a given selection is often driven by the insights needed from the model and from the data available. To build the watershed-river-waterbody system of interest as a graph, we chose to represent river segments and waterbodies as nodes. Edges are then placed between river segments and waterbodies that flow into one another through surface waters (e.g., by rivers or streams).
Nodes and edges of graphs can contain attributes (data) that are useful in manipulating and visualizing a graph. For example, an attribute that we used in our model is the watershed in which the node resides (the node encodes spatial/geographical context); this attribute can be used to filter out nodes that lie on a specific watershed. Our representation uses a directed (rather than undirected) graph. In undirected graphs, edges only capture connectivity (with no notion of directionality); while, in directed graphs, edges capture directionality. In our context, we are interested in tracing nutrient pollution, and we thus need to capture flow directionality.
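To make these conventions concrete, the following minimal NetworkX sketch builds a toy directed graph with the node types and attributes described above; the COMIDs and HUC12 codes shown are placeholders rather than real NHDPlusV2 values.

```python
import networkx as nx

# A minimal sketch of the node/edge conventions described above. The COMIDs
# and HUC12 codes are placeholders, not values taken from NHDPlusV2.
G = nx.DiGraph()

# River-segment and waterbody nodes are keyed by COMID and carry attributes
# recording the object type and the watershed (HUC12) that contains them.
G.add_node(1001, kind="river", huc12="070900020501")
G.add_node(1002, kind="waterbody", huc12="070900020501")
G.add_node(1003, kind="river", huc12="070900020502")

# Directed edges follow the direction of surface-water flow.
G.add_edge(1001, 1002)
G.add_edge(1002, 1003)

# Node attributes allow simple filtering, e.g. all nodes in one watershed.
in_watershed = [n for n, d in G.nodes(data=True) if d["huc12"] == "070900020501"]
print(in_watershed)  # [1001, 1002]
```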
A key benefit of using graph representations is that there are a wide range of theory and computational
Figure 2: Visualization of methods used for building a graph from geospatial data. We start with river segments and waterbodies from the NHDPlusV2 dataset (panel a) and form a directed graph of the river systems (panel b). We then identify lake connectivity to the river graph and add waterbody nodes to the directed graph (panel c). Watersheds shown are the HUC10 watersheds 0709000205, 0709000206, and 0709000207, primarily in Dane County, Wisconsin.
techniques for analyzing large-scale graphs; for instance, one can use algorithms to identify a set of nodes that is connected to a given node by using pathway analysis. Moreover, it is possible to visualize, aggregate, and partition graphs to gain insights into the connectivity and properties of a graph. In addition, it is possible to compute statistics of a given graph object, such as the number of nodes in a given pathway, the fractal dimension of a graph (e.g., a measure of complexity), or the node degree distribution (e.g., number of connections of a node). Another key benefit of using graph representations is that there are a wide range of open-source tools that can be leveraged for building and visualizing graphs.
### Data Overview
The graph representation was constructed directly using GIS data from the NHDPlusV2 and the WBD datasets. While the data within these datasets are for the United States, similar methods could be applied to other geographical areas provided that the data is in similar formats as what is discussed below. We used GeoPandas (Jordahl et al., 2020) in Python to work with the GIS data, and we use NetworkX(Hagberg et al., 2008) for building and managing the graph. The code for our framework is available at [https://github.com/zavalab/ML/tree/master/HydroGraphs](https://github.com/zavalab/ML/tree/master/HydroGraphs).
In our representation there are three main types of objects used to build the graph (see Figure 1(a)): rivers (NHDFlowline from NHDPlusV2; represented by line features), waterbodies (NHDWaterbody from NHDPlusV2; represented by polygon features), and watersheds (from WBD; represented by polygon features). Each of these lines or polygons has a geographic location (e.g., edges of the polygons are represented by specific geographic coordinates) and has a unique identifier. The NHDPlusV2 dataset uses a unique common identifier (COMID) for every individual river segment or waterbody, while the WBD uses a unique hydrologic unit code (HUC) for individual watersheds. In addition, the WBD has different hierarchical levels, with 8-digit, 10-digit, and 12-digit codes depending on the size of the watersheds, where the higher-digit codes are partitions of the lower-digit codes (e.g., each HUC10 watershed is made up of multiple, smaller HUC12 watersheds). We will use these identifiers when referring to their corresponding objects; for example, we will form nodes out of each river segment or waterbody, and we will identify these nodes by their corresponding COMID.
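A sketch of how these layers might be loaded with GeoPandas is shown below; the file paths and layer organization are assumptions, since the NHDPlusV2 and WBD downloads are distributed by region and the exact layout can differ.

```python
import geopandas as gpd

# Hypothetical file paths; adjust to the actual NHDPlusV2/WBD downloads.
rivers = gpd.read_file("NHDPlusV2/NHDFlowline.shp")        # line features keyed by COMID
waterbodies = gpd.read_file("NHDPlusV2/NHDWaterbody.shp")  # polygon features keyed by COMID
huc12 = gpd.read_file("WBD/WBDHU12.shp")                   # polygon features keyed by HUC12

# Bring all layers into a common coordinate reference system before testing
# for intersections.
waterbodies = waterbodies.to_crs(rivers.crs)
huc12 = huc12.to_crs(rivers.crs)
```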
### Expressing Connectivity
One of the key technical challenges in building the graph representation is identifying the connectivity between the rivers, waterbodies, and watersheds. Conveniently, the NHDPlusV2 dataset provides connectivity between river segments by giving directed pairs of river segments identified by their COMID (i.e., these are given as pairs of "from" COMIDs and their corresponding "to" COMIDs). This is essentially a list of directed edges of a graph and could be used to build a graph of the river system. In this case, each river segment corresponds to a node, where the node is identified by the river segment's COMID. The graph formed by this list of directed edges (which we will refer to as the "river graph"; see Figure 1(b)) was a basis for building our overall graph. Note, however, that the river graph does not include specific nodes that correspond to waterbodies ("waterbody nodes" will be added in a later process). The NHDPlusV2 data overlaps river segments with waterbody polygons, and many polygons may overlap with several river segments. Furthermore, some river segments may overlap with multiple waterbodies, making it difficult to identify which waterbodies flow into one another. Our methods for identifying which river segments overlapped with waterbodies and for adding waterbodies to the given river segment graph are outlined in this subsection.
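A sketch of building this river graph is given below. The flow-table file name and the FROMCOMID/TOCOMID column names are assumptions about how the directed COMID pairs are stored; adjust them to the actual NHDPlusV2 table layout.

```python
import pandas as pd
import networkx as nx

# Assumed layout: one row per directed pair of river segments (hypothetical
# export of the NHDPlusV2 flow table).
flow = pd.read_csv("PlusFlow.csv")

# A COMID of 0 is commonly used as a placeholder for "no upstream/downstream
# segment"; drop such rows so they do not become spurious nodes.
flow = flow[(flow["FROMCOMID"] != 0) & (flow["TOCOMID"] != 0)]

river_graph = nx.from_pandas_edgelist(
    flow, source="FROMCOMID", target="TOCOMID", create_using=nx.DiGraph
)
print(river_graph.number_of_nodes(), river_graph.number_of_edges())
```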
The first step in adding the waterbodies to the graph was identifying the connectivity of waterbodies and rivers. We first built empty lists for every river segment and waterbody polygon. These lists would contain
the COMIDs of all intersecting rivers (for waterbodies) or all intersecting waterbodies (for river segments). We then tested every river segment against every waterbody to see if the river segment intersected the waterbody polygon. This was performed within two loops using the GeoPandas' intersects function. If a river segment intersected a waterbody, the river segment COMID was added to the waterbody's list of intersecting rivers, and the waterbody COMID was added to the river segment's list of intersecting lakes. These lists could then be used to add the respective lakes to the graph.
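The sketch below shows one way to obtain the same intersection lists; a spatial join with the "intersects" predicate is a vectorized equivalent of the nested loop over GeoPandas' intersects calls. The column names follow the COMID convention described above, and the renaming is only to keep the two COMID columns distinct after the join.

```python
import geopandas as gpd

# Load the layers as in the earlier sketch (paths are hypothetical).
rivers = gpd.read_file("NHDPlusV2/NHDFlowline.shp")
waterbodies = gpd.read_file("NHDPlusV2/NHDWaterbody.shp").to_crs(rivers.crs)

rivers = rivers.rename(columns={"COMID": "river_comid"})
waterbodies = waterbodies.rename(columns={"COMID": "wb_comid"})

# Every (river segment, waterbody) pair whose geometries intersect.
pairs = gpd.sjoin(
    rivers[["river_comid", "geometry"]],
    waterbodies[["wb_comid", "geometry"]],
    how="inner",
    predicate="intersects",
)

# Lists of intersecting waterbodies per river segment, and vice versa.
wb_per_river = pairs.groupby("river_comid")["wb_comid"].apply(list).to_dict()
rivers_per_wb = pairs.groupby("wb_comid")["river_comid"].apply(list).to_dict()
```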
After identifying all intersections between rivers and waterbodies, we added waterbodies to the river graph by replacing river COMIDs with waterbody COMIDs (i.e., replacing river nodes with waterbody nodes) and adding edges between the resulting waterbody nodes and other river nodes (See Figure 2c). We first looped through every river segment; if the list of intersecting waterbodies for a given river segment contained only one waterbody, we replaced the river segment COMID in the river graph's edge list with the intersecting waterbody COMID. This added several waterbodies to the river graph. After completing this loop, we looped over every river segment again. If the river segment intersected multiple waterbodies, then the river segment was replaced by all waterbodies which it intersected. This meant that more nodes were added to the graph. A visualization of this process is given in Figure 3. Note that this results in waterbodies that intersect the same river segment not necessarily being directly connected (e.g., waterbody B in Figure 3 should connect to waterbody C, but it does not). This was a simplification that had to be made because individual river segments do not have an inherent flow direction to them, meaning that we cannot tell which waterbody is intersected first by the river segment. We believe that this is a reasonable simplification because the general connectivity of the full graph is still maintained; moreover, this highlights how the graph representation used is inherently
Figure 3: Visualization of methods used for representing the river-waterbody system as a graph. The red lines represent river segments and are identified by numbers while the blue polygons represent waterbodies and are identified with letters. River segment 1 only intersects waterbody A, so it is replaced by a single node corresponding to waterbody A. River segment 4 intersects two waterbodies, B and C, so it is replaced by two nodes within the graph corresponding to waterbodies B and C.
limited by the availability of data. The affected waterbodies still have the same upstream and downstream connections with the minor exception of not being connected to waterbody(ies) that intersect their same line segment. For example, waterbody A still flows into waterbodies B and C in Figure 3 even though waterbodies B and C are not directly connected to each other. In other words, waterbody A's downstream graph (and river node 5's upstream graph) includes the same set of nodes as it would if this simplification was not applied. For the State of Wisconsin, this latter simplification applied to less than 10% of the waterbodies in the graph. This simplification is very localized (average river segment length is \(<2\) km, and the impacts are only on waterbodies connected by the same river segment), and thus we believe this will produce minimal error. Further details about this method can be found in the supporting information. The resulting graph obtained for Wisconsin is presented in Figure 4.
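The two-pass replacement just described can be sketched as follows; `river_graph` and `wb_per_river` are assumed to come from the earlier sketches, and the function is an illustrative outline rather than the exact HydroGraphs implementation.

```python
import networkx as nx

def add_waterbodies(river_graph, wb_per_river):
    """Fold waterbody COMIDs into the river graph (illustrative sketch)."""
    G = river_graph.copy()

    # Pass 1: a river segment that intersects exactly one waterbody is
    # relabeled to that waterbody's COMID (river nodes mapped to the same
    # waterbody are merged).
    relabel = {r: wbs[0] for r, wbs in wb_per_river.items()
               if len(wbs) == 1 and r in G}
    G = nx.relabel_nodes(G, relabel)

    # Pass 2: a river segment that intersects several waterbodies is replaced
    # by all of them; each waterbody inherits the segment's upstream and
    # downstream connections.
    for r, wbs in wb_per_river.items():
        if len(wbs) > 1 and r in G:
            preds = list(G.predecessors(r))
            succs = list(G.successors(r))
            G.remove_node(r)
            for wb in wbs:
                G.add_node(wb)
                G.add_edges_from((p, wb) for p in preds)
                G.add_edges_from((wb, s) for s in succs)
    return G
```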
Figure 4: Wisconsin hydrological system (a), representation as a directed graph (b), and aggregated form of the directed graph (c). The full directed graph contains over 45,000 nodes and 47,000 edges, representing more than 3,000 km\({}^{2}\) of waterbodies and 79,000 km of rivers or streams.
We make a few additional remarks about our data and methods. First, we chose to exclude a few waterbodies from our graph. We excluded Lake Michigan and Lake Superior from the graph to make the visualizations simpler. Ultimately, for the Wisconsin area explored in the next section, everything that does not flow into the Mississippi River flows into these two lakes (i.e., all nodes in the 04 HUC2 watershed are connected to the Great Lakes), so the connectivity to these lakes is established by nodes being within the 04 HUC2 watershed. Representing either of these lakes by a single node makes the visualizations more difficult to follow, and was thus undesirable for our needs. However, these lakes could be included in our methods simply by not removing their COMIDs from the original dataset.
In addition, while swamps and marshes are included in the NHDWaterbody dataset, we removed these objects to simplify the analysis. Swamps and marshes impact nutrient transport differently than some other waterbodies, so we did not want to include them in the same category with lakes and reservoirs. However, we note that swamps and marshes could be influential in nutrient transport (in fact they can be used to control nutrient pollution) (Dolph et al., 2019; Fisher and Acreman, 2004; Verhoeven et al., 2006; Walton et al., 2020). Thus, these will be a subject of future research, but they are outside the scope of this study. Wetlands like swamps and marshes can introduce significant complexity because the pollutant transport can be dependent on soil type and vegetative processes (as is the case for nutrient pollution (Fisher and Acreman, 2004; Walton et al., 2020)), and they could significantly impact the time scales of the pollutant transport. This study does not focus on the temporal aspect of hydrological pollutant transport, but rather on the pathways and fate of the pollutants in hydrological systems.
We also note that much of the connectivity we give here could be elucidated from the LAGOS-US NETWORKS v1 data set (King et al., 2021). The LAGOS-US NETWORKS v1 dataset includes information on the lakes that are connected to the river graph edge list (see nets_flow_medres_csv(King et al., 2021)) by indicating whether an edge of the river graph also goes to or from a lake in the dataset. However, we chose to build our connectivity list from scratch because we wanted to include several waterbodies that were not included in the LAGOS-US NETWORKS v1 dataset. For example, they omit lakes that are \(<1\) hectare in size, and they do not include reservoirs (NHDWaterbodies attribute FTYPE equal to "Reservoir"). Both of these sets of waterbodies could be areas that anthropogenic pollutants accumulate and could be a focus of pollutant studies (see for example (Wang et al., 2018; Oliver et al., 2019)). Excluding these waterbodies could thus lead to incorrect results when seeking to identify areas of pollutant impacts.
We recognize that the above methods are only focused on waterbodies that are connected through surface waters. The resulting graph outlined above does not include every waterbody in a geographic area because many waterbodies are isolated and not connected by surface waters to other objects in the graph. Furthermore, this graph is specific to surface water and does not include transport through other means such as groundwater, which can be important factors (Meinikmann et al., 2015; Valiela et al., 1990; Wang and Baerenklau, 2015). Including other transport mechanisms greatly impacts the complexity and will be explored in future work.
### Aggregating River Nodes
The above methods for building this graph result in several intermediate river nodes upstream or downstream of waterbodies. In many cases, it may be desirable to aggregate nodes to simplify the model representation, either to make the visualizations simpler or to reduce the number of nodes involved when analyzing the graph with varying degrees of spatial resolution. Our framework provides capabilities for automating aggregation; details of this aggregation procedure are included in the supporting information.
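One plausible aggregation scheme is sketched below; it contracts chains of intermediate river nodes and is given only for illustration, since the actual procedure used by HydroGraphs is detailed in the supporting information.

```python
import networkx as nx

def aggregate_rivers(G):
    """Contract river nodes with one inflow and one outflow (sketch).

    Assumes the "kind" node attribute from the earlier sketches; long chains
    of intermediate river segments collapse into single edges between the
    remaining nodes (e.g., waterbodies and junctions).
    """
    H = G.copy()
    for n in list(H.nodes):
        if H.nodes[n].get("kind") != "river":
            continue
        if H.in_degree(n) == 1 and H.out_degree(n) == 1:
            p = next(iter(H.predecessors(n)))
            s = next(iter(H.successors(n)))
            if p != s:
                H.add_edge(p, s)
            H.remove_node(n)
    return H
```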
## 3 Wisconsin Case Studies
We highlight how HydroGraphs can be used to identify pollutant sources and their potential destinations; we do this by developing some specific case studies. Case studies focus on nutrient pollution in Wisconsin, a challenge that has existed for decades due in part to the large amount of agricultural land and CAFOs throughout the state that result in nitrogen (N) and phosphorus (P) flowing into nearby waterways. Nutrient losses from these sources frequently lead to HABs, which can have negative health, economic, and environmental impacts for local communities.
The first case study compares a couple of lakes in Wisconsin with differing total phosphorus (TP) concentrations and looks at their upstream graphs and likely P sources that contribute to these differences. The second case study looks more generally at several hundred lakes for which we have TP and chlorophyll-a data and compares connectivity attributes of the graph between polluted and clean lakes. The final case study looks at how our framework can be used to identify impacts that a potential pollutant source could have. This is done by inspecting the nodes in the graph that are downstream of the source.
The case studies presented herein are intended as examples of ways that this graph framework could be applied. While these case studies do incorporate real data, they are not rigorous studies intended to give exact causation or to make policy recommendations. Rather, their purpose is to present how the graph could be applied by experts and researchers in this field. Further, they highlight how HydroGraphs could be used in helping decision-makers approach complex problems involving pollutants in hydrological systems. For the code to replicate these case studies, see [https://github.com/zavalab/ML/tree/master/HydroGraphs](https://github.com/zavalab/ML/tree/master/HydroGraphs).
### Case Study I: Identifying Upstream Sources
In this case study, we look at how this graph can enable identifying upstream influences to a given waterbody. We focus in this case on P pollution in waterbodies, but the principles in this case study could easily be applied to other pollutants, such as ECs, microplastics, or heavy metals. Here, we build the upstream graphs for two lakes in Wisconsin--Lake Altoona and Mohawksin Lake--and identify possible pollutant sources that contribute to these lakes. We choose these lakes because we have TP concentration measurements from the Wisconsin DNR for each lake [Wisconsin DNR, a, Wisconsin DNR, b]. Based on the measured TP concentrations and reported lake perception, Lake Altoona has poorer water quality (average measured TP of 103 mg/m\({}^{3}\)) and has noticeably worse problems with algae than Mohawksin Lake (average measured TP of 40 mg/m\({}^{3}\)). Data for these lakes and their reported perception measurements are available in the supporting information.
There are several upstream factors that can impact pollutant transport to waterbodies. We look at three specific factors that may influence the TP concentrations within the waterbodies. The first factor is the waterbodies that are upstream to a given waterbody. Upstream waterbodies could accumulate pollutants and could impact how much of a pollutant reaches a downstream waterbody and when it reaches it [Carpenter and Lathrop, 2014, Motew et al., 2017, Jones, 2010]. These waterbodies are naturally a part of the upstream graph from a specific waterbody. The second factor we consider are CAFOs; these can be a source of pollutants in Wisconsin because of their large production of manure, and many sources show that these can be significant contributors of P to waterbodies [Burkholder et al., 2007, Long et al., 2018, Parry, 1998]. For our analysis here, we use data from Hu and co-workers[Hu et al., 2018] to identify locations for more than 200 CAFOs. We add the CAFOs from
this dataset to the directed graph outlined above by adding a directed edge from the location of the CAFO to the closest node in the same HUC12 watershed as the given CAFO. The other source we consider is agricultural land; agricultural land is a significant non-point source of N/P and can be an indicator of surface water pollutant concentrations (Carpenter et al., 1998; Le et al., 2010; Motew et al., 2019; Robertson et al., 2006). This source can be closely related to CAFOs because the manure from CAFOs is often applied as a nutrient source for crops. We use the shapefile from (James and Tomer, 2020) to identify agricultural land, and we only look at agricultural land that shares a HUC12 watershed with the given waterbody or its upstream nodes. For this dataset of agricultural land, we only use the polygons that are labeled as agricultural land that exclude pasture class (where feature isAG equals 1).
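A sketch of attaching point sources such as CAFOs to the graph is given below. It assumes a `cafos` GeoDataFrame with a point geometry, an identifier, and a precomputed HUC12 column, and a `node_points` GeoDataFrame giving a representative point, COMID, and HUC12 for each graph node; these names and columns are illustrative rather than part of the released code.

```python
def attach_point_sources(G, cafos, node_points):
    """Link each CAFO to the nearest graph node in its HUC12 (sketch).

    `cafos` and `node_points` are GeoDataFrames with point geometries and
    hypothetical columns ("cafo_id", "huc12", "comid"); adjust as needed.
    """
    for _, cafo in cafos.iterrows():
        # Restrict candidates to graph nodes in the same HUC12 watershed.
        candidates = node_points[node_points["huc12"] == cafo["huc12"]]
        if candidates.empty:
            continue
        # Index label of the geometrically nearest candidate node.
        nearest = candidates.distance(cafo.geometry).idxmin()
        cafo_node = f"CAFO_{cafo['cafo_id']}"
        G.add_node(cafo_node, kind="cafo")
        G.add_edge(cafo_node, candidates.loc[nearest, "comid"])
    return G
```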
Because we are using a directed graph representation, we can easily form the upstream graphs for both Altoona Lake and Lake Mohawksin. This allows us to frame potential hypotheses of why Altoona Lake has much higher TP concentrations than Lake Mohawksin. These upstream graphs for these lakes can be seen in Figures 5 and 6. Panel a) shows all upstream nodes (including CAFOs) overlayed on the HUC12 watersheds and where these watersheds lie in Wisconsin. Panel b) includes the agricultural land polygons. These figures also include the upstream waterbody polygons to give an idea of the size of the waterbodies to which waterbody nodes correspond.
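Building an upstream graph reduces to an ancestor query on the directed graph, as in the toy sketch below (the node names are placeholders, not real COMIDs).

```python
import networkx as nx

# Toy directed graph standing in for the full waterbody-river graph.
G = nx.DiGraph([("cafo", "r1"), ("r1", "lake_upstream"),
                ("lake_upstream", "r2"), ("r2", "lake_of_interest"),
                ("lake_of_interest", "r3")])

# All nodes from which the target lake can be reached, i.e. its upstream graph.
target = "lake_of_interest"
upstream_nodes = nx.ancestors(G, target)
upstream_graph = G.subgraph(upstream_nodes | {target}).copy()

print(sorted(upstream_nodes))  # ['cafo', 'lake_upstream', 'r1', 'r2']
```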
From Figures 5 and 6, it is clear that both Altoona Lake and Lake Mohawksin have large upstream graphs that include several waterbodies. Altoona Lake includes 34 upstream waterbodies that cover an area of more than 7 km\({}^{2}\), while Lake Mohawksin includes 265 upstream waterbodies covering more than 345 km\({}^{2}\). Altoona Lake also has upstream connections to two CAFOs while Mohawksin Lake has no CAFOs in its upstream watersheds. The figures also show that there is significantly more agricultural land in Altoona Lake's upstream graph. The agricultural land makes up 20.6% of Altoona Lake upstream watersheds while it makes up only 0.7% of Lake Mohawksin's upstream watersheds. The addition of CAFOs and the higher fraction of agricultural land can thus be likely contributors to the high TP concentrations of Altoona Lake.
The above analysis gives some examples of how HydroGraphs could be used in practice. It is very possible that the high amount of agricultural land (the agricultural land fraction is more than ten times higher for Lake Altoona than for Mohawksin Lake) and the upstream CAFOs (two CAFOs for Lake Altoona compared to zero CAFOs for Mohawksin Lake) are contributors to the high TP concentrations within Altoona Lake. Further, the high number of upstream waterbodies for Lake Mohawksin may influence the transfer of pollutants to Lake Mohawksin (e.g., through the accumulation of nutrient pollution in upstream waterbody sediments) (Carpenter and Lathrop, 2014; Leavitt et al., 2006; Soranno et al., 1999). Formulating these systems as a graph enables the above visualizations and simplifies analysis. It makes it easier to identify upstream point sources that could contribute to pollutant concentrations in waterbodies. In addition, just as CAFOs were added to the graph, other pollutant sources (such as wastewater treatment plants (Brooker et al., 2018; Makarewicz et al., 2012) or landfills (Hu et al., 2018)) could also be added. This could be useful in the event that a pollutant, such as an EC, is discovered in a given lake or stream. Building the upstream graph would allow researchers and decision-makers to identify where this pollutant may be coming from, and to identify other upstream waterbodies or rivers that may need to be tested to see if they are likewise contaminated. As seen from the CAFOs in Figure 5, some of these upstream pollutant sources could be far upstream but are more easily identifiable by building the graph.
Figure 5: Upstream graph of Altoona Lake in Wisconsin. Panel a) shows all upstream river and waterbody nodes along with CAFOS connected to the closest node within their HUC12 watershed. Panel b) includes agricultural land polygons in green.
Figure 6: Upstream graph of Lake Mohawksin in Wisconsin. Panel a) shows all upstream river and waterbody nodes. Panel b) includes agricultural land polygons in green.
### Case Study II: Graph Connectivity Metrics
In this case study, we study upstream graph metrics for waterbodies for which we have TP and chlorophyll-a (a measure that relates to the level of algae in the lake) data from the DNR [Wisconsin DNR, c]. The DNR provides water quality data collected by volunteers for hundreds of waterbodies in Wisconsin. We compiled their data for more than 700 unique waterbodies. Data for each waterbody could vary in terms of frequency of measurements and type of measurements taken. To ensure that the lakes considered had significant data, we only studied waterbodies for which there were at least 50 measurements for both TP and chlorophyll-a which resulted in a set of 241 waterbodies. We chose the cutoff of 50 to ensure that we had several data of both TP and chlorophyll-a while maintaining a reasonable subset of lakes to analyze (e.g., choosing a higher cutoff such as 100 resulted in too few lakes, while choosing a smaller cutoff like 10 could result in too little data for the lakes). For more details on how this data was compiled and what was included, see the Supporting Information.
Based on this data for TP and chlorophyll-a, we studied five waterbodies that had the highest and lowest levels of TP and chlorophyll-a within our graph. The times at which lakes were sampled varied between waterbodies; as such, we averaged all reported measurements of TP and chlorophyll-a. This is a primary reason why we required that lakes have at least 50 data points for both TP and chlorophyll-a as we assumed that this would reduce or eliminate any differences between waterbodies caused by different sampling times. We then looked at the five waterbodies that were in our graph that had the highest average TP measurement and an average chlorophyll-a measurement of at least 30 mg/m\({}^{3}\), and we looked at the five waterbodies with the lowest average TP measurement and an average chlorophyll-a measurement of no more than 5 mg/m\({}^{3}\). The cutoff values for chlorophyll-a were chosen to ensure that the given waterbodies were at the high or low extremes of both TP and chlorophyll-a within our dataset (for further analysis of and explanation for these cutoff values, please see the Supporting Information). After determining these five most and least polluted waterbodies for which we have data, we built the upstream graphs using similar methods as discussed in Case Study I, where we added CAFOs and agricultural land to the visualizations. The results can be seen in Figure 7.
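The selection logic just described can be sketched with pandas as follows; the file name and the long-format column layout ("comid", "parameter", "value") are assumptions about how the compiled DNR measurements are stored, and the additional requirement that the lake be in the graph is omitted for brevity.

```python
import pandas as pd

# Hypothetical long-format table: one row per measurement, with "parameter"
# equal to "TP" or "chla" and "value" in mg/m^3.
dnr = pd.read_csv("wi_dnr_lake_samples.csv")

counts = dnr.pivot_table(index="comid", columns="parameter",
                         values="value", aggfunc="count")
means = dnr.pivot_table(index="comid", columns="parameter",
                        values="value", aggfunc="mean")

# Lakes with at least 50 TP and 50 chlorophyll-a measurements.
well_sampled = means[(counts["TP"] >= 50) & (counts["chla"] >= 50)]

# Five most polluted (highest average TP, average chla >= 30 mg/m^3) and five
# cleanest (lowest average TP, average chla <= 5 mg/m^3) lakes.
polluted = well_sampled[well_sampled["chla"] >= 30].nlargest(5, "TP")
clean = well_sampled[well_sampled["chla"] <= 5].nsmallest(5, "TP")
```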
Figure 7: Comparison of upstream graphs for five polluted waterbodies and five clean waterbodies, as determined by data from the Wisconsin DNR [Wisconsin DNR, c]. “TP avg” and “chla avg” are the average TP and chlorophyll-a measurement, and “ag land” is the agricultural land fraction in the HUC12 watersheds comprising the graph.
By creating the graphs in Figure 7, we are able to observe some trends between the polluted and clean waterbodies. The polluted waterbodies all had agricultural land fractions higher than the clean waterbodies, and no polluted waterbody had an agricultural land fraction lower than 24%. Both Lake Tomah and Puckaway Lake (polluted waterbodies) had upstream CAFOs while no clean waterbody had an upstream CAFO. The polluted waterbodies also had three waterbodies with significant upstream connections. In addition, there were two waterbodies in the polluted waterbodies (Carstens Lake and Little Green Lake) and one in the clean waterbodies (Maiden Lake) that had no upstream connections. This is because they are in the graph, but only have downstream connections.
The analysis of Figure 7 can also be expanded to include more waterbodies. To look at how well these metrics apply to other waterbodies, we took a bigger subset of waterbodies by taking as "clean" all waterbodies with TP concentrations of \(<15\) mg/m\({}^{3}\) and chlorophyll-a concentrations of \(<5\) mg/m\({}^{3}\) which totaled 60 waterbodies. For the "polluted", we used all waterbodies with TP concentrations of \(>60\) mg/m\({}^{3}\) and chlorophyll-a concentrations of \(>15\) mg/m\({}^{3}\) which totaled 18 waterbodies (for further analysis of and explanation for these cutoff values, please see the Supporting Information). We then looked at how many of these waterbodies are connected to CAFOs upstream (or, in the case of waterbodies without an upstream graph, if there is a CAFO in their HUC12 watershed), what is their agricultural land fraction, how many waterbodies are not in the graph (i.e., how many are not connected to a river or stream in the graph), and how many upstream nodes these waterbodies have that were in the graph. The results of this analysis are shown in Table 1. In addition, we also found that the average agricultural land fraction for the polluted waterbodies (26.7%) was three times higher than the average for the clean waterbodies (8.0%).
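These per-lake connectivity metrics are straightforward to compute from the graph; a sketch is given below, assuming the node conventions ("kind" attributes and CAFO_* identifiers) introduced in the earlier sketches.

```python
import networkx as nx

def lake_metrics(G, lake_comid):
    """Connectivity metrics for one waterbody (illustrative sketch)."""
    if lake_comid not in G:
        return {"in_graph": False}
    upstream = nx.ancestors(G, lake_comid)
    return {
        "in_graph": True,
        "headwater": len(upstream) == 0,  # in the graph, but nothing upstream
        "n_upstream": len(upstream),
        "upstream_cafo": any(str(n).startswith("CAFO_") for n in upstream),
    }
```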
Overall, this analysis suggests that some graph metrics may be feasible indicators that help identify polluted lakes. One third of the polluted waterbodies were connected to an upstream CAFO, and the polluted waterbodies were generally in watersheds with much higher agricultural land fractions. Further, the polluted waterbodies often exhibited much higher upstream connectivity than the clean waterbodies. These metrics are all relatively easy to compute using the graph representation of the hydrological system, and the results could potentially be extrapolated to other lakes for which we do not have data compiled. Building these systems as a graph provides new tools for pollutant fate and transport. We would like to highlight that the results found in this study are only speculative and do not aim to provide a final recommendation on the origins of Lake nutrient pollution (which can be the result of many factors).
| | | Total | CAFO | In Graph | Headwater | >20% ag frac | 10+ nodes |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Polluted Waterbodies | Number | 18 | 6 | 15 | 2 | 13 | 9 |
| Polluted Waterbodies | Fraction | | 0.33 | 0.83 | 0.11 | 0.72 | 0.50 |
| Clean Waterbodies | Number | 60 | 4 | 24 | 10 | 9 | 3 |
| Clean Waterbodies | Fraction | | 0.07 | 0.40 | 0.17 | 0.17 | 0.05 |

Table 1: Metrics for polluted and clean waterbodies using HydroGraphs. Metrics indicate the number of waterbodies that are connected upstream to a CAFO (CAFO); the number of waterbodies that are in the graph (In Graph); the number of waterbodies that could be considered headwaters, as they are in the graph but have no upstream nodes (Headwater); the number of waterbodies that have an agricultural land fraction of more than 0.2 in their watershed (>20% ag frac); and the number of waterbodies that have at least 10 nodes in their upstream graph (10+ nodes).
### Case Study III: Downstream Impacts
For this final case study, we consider a hypothetical example of placing a potential pollutant source (e.g., a new CAFO or wastewater treatment plant) and identifying its downstream impacts. In determining where to place this new pollutant source, we need to consider the potential downstream impacts this could have. We look at a couple of potential locations for its placement within the same area. Interestingly, the locations are just within five kilometers of each other, but they have drastically different downstream destinations (because they lie in different watersheds). These pollutant sources are added to the graph by placing a directed edge from the pollutant source to the nearest node within the pollutant source's HUC12 watershed. The downstream, aggregated graphs for these two pollutant sources are shown in Figure 8.
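Tracing downstream impacts is the mirror image of the upstream analysis: a descendant query on the directed graph, as in the toy sketch below (node names are placeholders, not real COMIDs).

```python
import networkx as nx

# Toy graph standing in for the full waterbody-river graph.
G = nx.DiGraph([("source", "r1"), ("r1", "lake_A"), ("lake_A", "r2"),
                ("r2", "lake_B"), ("r2", "lake_C")])

# All nodes reachable from the candidate pollutant source.
downstream = nx.descendants(G, "source")
impacted_lakes = sorted(n for n in downstream if n.startswith("lake"))
print(impacted_lakes)  # ['lake_A', 'lake_B', 'lake_C']
```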
Location 1 (Figure 8a) results in the pollutant passing through 15 waterbodies and going into the Mississippi River (the western border of Wisconsin), while location 2 (Figure 8b) shows the pollutant also passing through 15 waterbodies and moving into Lake Michigan. This analysis can be useful in identifying waterbodies potentially impacted by new pollutant releases. In the event of a contaminant release, this graph can help identify waterbodies that may be impacted by such a release, which can help develop mitigation/response strategies. Furthermore, the graph can also be useful for decision-makers in identifying where to place potential pollutant sources (such as building a new wastewater treatment plant) to minimize that source's impacts on the environment and communities. The two locations shown in Figure 8 lead to pathways through completely different waterbodies, and these waterbodies may be at different stages of eutrophication and may be of varying importance/priority (e.g., if a waterbody serves as drinking water for a locality, it may be more important than another waterbody). Thus, building the downstream graph enables researchers or decision-makers to see
Figure 8: Downstream aggregated graphs for two potential pollutant sources. Source locations are separated by less than 5 km
potential impacts of a newly introduced pollutant source.
## 4 Conclusions and Future Work
HydroGraphs provides a framework for analyzing hydrological pollution pathways to identify upstream sources and downstream impacts. The above case studies have shown how HydroGraphs can be used with point and non-point pollution sources to identify upstream sources and link attributes within the graph to pollutant data. Further, it can also help identify potential downstream impacts from a given pollutant source. While the case studies in this paper focused on anthropogenic nutrient pollution, similar methods could be applied for other pollutants such as ECs or microplastics. Point or non-point sources of these contaminants could be added to the graph following a similar analysis as that done with CAFOs and agricultural land. Building these hydrological systems as a graph ultimately provides simple visualization and rapid analysis.
There are two areas we would like to address in the future using HydroGraphs. First, there is additional data that we can incorporate into the graph. For example, the NHDPlusV2 dataset includes attributes such as average stream flowrates that could be added to the graph as edge weights. As waterbodies often have multiple streams flowing into them, these flowrates could show which upstream sources have a stronger impact on the waterbody (e.g., the upstream sources connected through the larger stream may have a stronger impact). Second, we would like to incorporate HydroGraphs into additional decision-making models, such as into supply chain optimization. For example, Tominac et al. (Tominac et al., 2020) included environmental policy-makers as stakeholders within their supply chain model. One of the challenges in doing so is quantifying the environmental or social impacts within the supply chain. HydroGraphs could provide a tool for quantifying these impacts, such as quantifying the number of lakes that would be impacted by the introduction of a new pollutant source. This would allow for these decision-making models to highlight the environmental, economic, and social impacts of many pollutant sources.
## Supporting Information
Additional methodological details on graph construction and aggregation, details of the DNR data presented herein, and an overview of functionality for working with the graph representation are provided in the SI.
## Acknowledgments
We acknowledge support from the U.S. EPA (contract number EP-18-C-000016). We thank Eric Booth for helpful feedback on an early version of this manuscript.
The views expressed in this article are those of the authors and do not necessarily reflect the views or policies of the U.S. Environmental Protection Agency. Mention of trade names, products, or services does not convey, and should not be interpreted as conveying, official U.S. EPA approval, endorsement, or recommendation.
## References
* [Abed-Elmdoust et al., 2017] Abed-Elmdoust, A., Singh, A., and Yang, Z.-L. (2017). Emergent spectral properties of river network topology: An optimal channel network approach. _Scientific reports_, 7(1):1-9.
* [Ashbolt, 2004] Ashbolt, N. J. (2004). Microbial contamination of drinking water and disease outcomes in developing regions. _Toxicology_, 198(1-3):229-238.
* [Bauman et al., 2010] Bauman, A. G., Burt, J. A., Feary, D. A., Marquis, E., and Usseglio, P. (2010). Tropical harmful algal blooms: An emerging threat to coral reef communities? _Marine pollution bulletin_, 60(11):2117-2122.
* [Bonato et al., 2020] Bonato, M., Corra, F., Bellio, M., Guidolin, L., Tallandini, L., Irato, P., and Santovito, G. (2020). Pfas environmental pollution and antioxidant responses: an overview of the impact on human field. _International journal of environmental research and public health_, 17(21):8020.
* [Brooker et al., 2018] Brooker, M., Longnecker, K., Kujawinski, E., Evert, M., and Mouser, P. (2018). Discrete organic phosphorus signatures are evident in pollutant sources within a lake erie tributary. _Environmental science & technology_, 52(12):6771-6779.
* [Brusle, 1994] Brusle, J. (1994). The impact of harmful algal blooms on finfish. mortality, pathology and toxicology. _Repres Oceans_.
* [Burkholder et al., 2007] Burkholder, J., Libra, B., Weyer, P., Heathcote, S., Kolpin, D., Thorne, P. S., and Wichman, M. (2007). Impacts of waste from concentrated animal feeding operations on water quality. _Environmental health perspectives_, 115(2):308-312.
* [Carpenter et al., 1998] Carpenter, S. R., Caraco, N. F., Correll, D. L., Howarth, R. W., Sharpley, A. N., and Smith, V. H. (1998). Nonpoint pollution of surface waters with phosphorus and nitrogen. _Ecological applications_, 8(3):559-568.
* [Carpenter and Lathrop, 2014] Carpenter, S. R. and Lathrop, R. C. (2014). Phosphorus loading, transport and concentrations in a lake chain: a probabilistic model to compare management options. _Aquatic Sciences_, 76:145-154.
* [Cheruvelil et al., 2022] Cheruvelil, K. S., Webster, K. E., King, K., Poisson, A. C., and Wagner, T. (2022). Taking a macroscale perspective to improve understanding of shallow lake total phosphorus and chlorophyll a. _Hydrobiologia_, pages 1-15.
* [Ciazela et al., 2018] Ciazela, J., Siepak, M., and Wojtowicz, P. (2018). Tracking heavy metal contamination in a complex river-oxbow lake system: Middle oda valley, germany/poland. _Science of the Total Environment_, 616:996-1006.
* [Committee on the Causes and Management of Coastal Eutrophication et al., 2000] Committee on the Causes and Management of Coastal Eutrophication, Ocean Studies Board, Water Science and Technology Board, Commission on Geosciences, Environment, and Resources, and National Research Council (2000). _Clean coastal waters: understanding and reducing the effects of nutrient pollution_. National Academies Press.
* [Copat et al., 2012] Copat, C., Bella, F., Castaing, M., Fallico, R., Sciacca, S., and Ferrante, M. (2012). Heavy metals concentrations in fish from sicily (mediterranean sea) and evaluation of possible health risks to consumers. _Bulletin of Environmental Contamination and Toxicology_, 88(1):78-83.
* [Costa et al., 2021] Costa, C. M. d. S. B., Leite, I. R., Almeida, A. K., and de Almeida, I. K. (2021). Choosing an appropriate water quality model--a review. _Environmental Monitoring and Assessment_, 193(1):1-15.
* Cousins et al. (2020) Cousins, I. T., DeWitt, J. C., Gluge, J., Goldenman, G., Herzke, D., Lohmann, R., Miller, M., Ng, C. A., Scheringer, M., Vierke, L., and Wang, Z. (2020). Strategies for grouping per-and polyfluoroalkyl substances (pfas) to protect human and environmental health. _Environmental Science: Processes & Impacts_, 22(7):1444-1460.
* Dodds et al. (2009) Dodds, W. K., Bouska, W. W., Eitzmann, J. L., Pilger, T. J., Pitts, K. L., Riley, A. J., Schloesser, J. T., and Thornbrugh, D. J. (2009). Eutrophication of us freshwaters: Analysis of potential economic damages. _Environmental Science and Technology_, 43:12-19.
* Dolph et al. (2019) Dolph, C. L., Boardman, E., Danesh-Yazdi, M., Finlay, J. C., Hansen, A. T., Baker, A. C., and Dalzell, B. (2019). Phosphorus transport in intensively managed watersheds. _Water Resources Research_, 55(11):9148-9172.
* Fisher and Acreman (2004) Fisher, J. and Acreman, M. (2004). Wetland nutrient removal: a review of the evidence. _Hydrology and Earth system sciences_, 8(4):673-685.
* Gogoi et al. (2018) Gogoi, A., Mazumder, P., Tyagi, V. K., Chaminda, G. T., An, A. K., and Kumar, M. (2018). Occurrence and fate of emerging contaminants in water environment: A review. _Groundwater for Sustainable Development_, 6:169-180.
* Haddout et al. (2022) Haddout, S., Gimiliani, G., Priya, K., Hoguane, A., Casila, J. C. C., and Ljubenkov, I. (2022). Microplastics in surface waters and sediments in the sebou estuary and atlantic coast, morocco. _Analytical Letters_, 55(2):256-268.
* [Hagberg et al., 2008] Hagberg, A. A., Schult, D. A., and Swart, P. J. (2008). Exploring network structure, dynamics, and function using networkx. In _Proceedings of the 7th Python in Science Conference (SciPy2008)_, pages 11-15, Pasadena, CA USA.
* Heckmann et al. (2015) Heckmann, T., Schwanghart, W., and Phillips, J. D. (2015). Graph theory--recent developments of its application in geomorphology. _Geomorphology_, 243:130-146.
* Ho et al. (2019) Ho, J. C., Michalak, A. M., and Pahlevan, N. (2019). Widespread global increase in intense lake phytoplankton blooms since the 1980s. _Nature_, 574(7780):667-670.
* Hu et al. (2018) Hu, Y., Scarborough, M., Aguirre-Villegas, H., Larson, R. A., Noguera, D. R., and Zavala, V. M. (2018). A supply chain framework for the analysis of the recovery of biogas and fatty acids from organic waste. _ACS Sustainable Chemistry & Engineering_, 6(5):6211-6222.
* James and Tomer (2020) James, D. and Tomer, M. (2020). Agricultural land use by field: Wisconsin 2010-2019. [https://doi.org/10.15482/USDA.ADC/1520625](https://doi.org/10.15482/USDA.ADC/1520625). Ag Data Commons. Accessed 08-26-2021.
* Jones (2010) Jones, N. E. (2010). Incorporating lakes within the river discontinuum: longitudinal changes in ecological characteristics in stream-lake networks. _Canadian Journal of Fisheries and Aquatic Sciences_, 67(8):1350-1362.
* Jordahl et al. (2020) Jordahl, K., den Bossche, J. V., Fleischmann, M., Wasserman, J., McBride, J., Gerard, J., Tratner, J., Perry, M., Badaracco, A. G., Farmer, C., Hjelle, G. A., Snow, A. D., Cochran, M., Gillies, S., Culbertson, L., Bartos, M., Eubank, N., maxalbert, Bilogur, A., Rey, S., Ren, C., Arribas-Bel, D., Wasser, L., Wolf, L. J., Journois, M., Wilson, J., Greenhall, A., Holdgraf, C., Filipe, and Leblanc, F. (2020). geopandas/geopandas-vx0.8.1. Accessed 11-07-2022.
* Khan et al. (2013) Khan, S., Shahnaz, M., Jehan, N., Rehman, S., Shah, M. T., and Din, I. (2013). Drinking water quality and human health risk in charsadda district, pakistan. _Journal of cleaner production_, 60:93-101.
* [King et al., 2021a] King, K. B., Wang, Q., Rodriguez, L. K., and Cheruveil, K. S. (2021a). Lake networks and connectivity metrics for the conterminous us (lagos-us networks v1). _Limnology and Oceanography Letters_, 6(5):293-307.
* [King et al., 2021b] King, K. B., Wang, Q., Rodriguez, L. K., Haite, M., Danila, L., Tan, P.-N., Zhou, J., and Cheruveil, K. S. (2021b). Lagos-us networks v1.0: Data module of surface water networks characterizing connections among lakes, streams, and rivers in the conterminous u.s. [https://doi.org/10.6073/pasta/98c9f11df55958065985c3e84a4fe995](https://doi.org/10.6073/pasta/98c9f11df55958065985c3e84a4fe995). Accessed 05-02-2022.
* [Le et al., 2010] Le, C., Zha, Y., Li, Y., Sun, D., Lu, H., and Yin, B. (2010). Eutrophication of lake waters in china: cost, causes, and control. _Environmental management_, 45(4):662-668.
* [Leavitt et al., 2006] Leavitt, P. R., Brock, C. S., Ebel, C., and Patoine, A. (2006). Landscape-scale effects of urban nitrogen on a chain of freshwater lakes in central north america. _Limnology and Oceanography_, 51(5):2262-2277.
* [Li et al., 2018] Li, A., Guo, J., Li, Z., Lin, T., Zhou, S., He, H., Ranansinghe, P., Sturchio, N. C., Rockne, K. J., and Giesy, J. P. (2018). Legacy polychlorinated organic pollutants in the sediment of the great lakes. _Journal of Great Lakes Research_, 44(4):682-692.
* [Lim et al., 2008] Lim, H.-S., Lee, J.-S., Chon, H.-T., and Sager, M. (2008). Heavy metal contamination and health risk assessment in the vicinity of the abandoned songcheon au-ag mine in korea. _Journal of Geochemical Exploration_, 96(2-3):223-230.
* [Lindim et al., 2016] Lindim, C., Van Gils, J., and Cousins, I. T. (2016). A large-scale model for simulating the fate & transport of organic contaminants in river basins. _Chemosphere_, 144:803-810.
* [Long et al., 2018] Long, C. M., Muenich, R. L., Kalcic, M. M., and Scavia, D. (2018). Use of manure nutrients from concentrated animal feeding operations. _Journal of Great Lakes Research_, 44(2):245-252.
* [Makarewicz et al., 2012] Makarewicz, J. C., Booty, W. G., and Bowen, G. S. (2012). Tributary phosphorus loading to lake ontario. _Journal of Great Lakes Research_, 38:14-20.
* [McKay et al., 2012] McKay, L., Bondelid, T., Dewald, T., Johnston, C., Moore, R., and Rea, A. (2012). _NHDPlus Version 2: User Guide_. [https://edap-ow-data-commons.s3.amazonaws.com/NHDPlusV21/Documentation/NHDPlusV2_User_Guide.pdf](https://edap-ow-data-commons.s3.amazonaws.com/NHDPlusV21/Documentation/NHDPlusV2_User_Guide.pdf). Accessed on 09-21-2022.
* [Meinikmann et al., 2015] Meinikmann, K., Hupfer, M., and Lewandowski, J. (2015). Phosphorus in groundwater discharge-a potential source for lake eutrophication. _Journal of Hydrology_, 524:214-226.
* [Mispan et al., 2015] Mispan, M. R., Abd Rahman, N. F., Khalid, K., Haron, S., Abdul Rasid, M., and Mohd, M. (2015). Nutrient transport modeling: A review on models capabilities. _Int. J. Innov. Sci. Eng. Technol_, 2:908-914.
* [Motew et al., 2017] Motew, M., Chen, X., Booth, E. G., Carpenter, S. R., Pinkas, P., Zipper, S. C., Loheide, S. P., Donner, S. D., Tsuruta, K., Vadas, P. A., and Kucharik, C. J. (2017). The influence of legacy p on lake water quality in a midwestern agricultural watershed. _Ecosystems_, 20(8):1468-1482.
* [Motew et al., 2019] Motew, M., Chen, X., Carpenter, S. R., Booth, E. G., Seifert, J., Qiu, J., Loheide II, S. P., Turner, M. G., Zipper, S. C., and Kucharik, C. J. (2019). Comparing the effects of climate and land use on surface water quality using future watershed scenarios. _Science of the Total Environment_, 693:133484.
* [Nawab et al., 2016] Nawab, J., Khan, S., Ali, S., Sher, H., Rahman, Z., Khan, K., Tang, J., and Ahmad, A. (2016). Health risk assessment of heavy metals and bacterial contamination in drinking water sources: a case study of malakand agency, pakistan. _Environmental monitoring and assessment_, 188(5):1-12.
* [Nie et al., 2018] Nie, J., Feng, H., Witherell, B. B., Alebus, M., Mahajan, M. D., Zhang, W., and Yu, L. (2018). Causes, assessment, and treatment of nutrient (n and p) pollution in rivers, estuaries, and coastal waters. _Current pollution reports_, 4(2):154-161.
* [Oliver et al., 2019] Oliver, S., Corburn, J., and Ribeiro, H. (2019). Challenges regarding water quality of eutrophic reservoirs in urban landscapes: a mapping literature review. _International Journal of Environmental Research and Public Health_, 16(1):40.
* [Parry, 1998] Parry, R. (1998). Agricultural phosphorus and water quality: A us environmental protection agency perspective. _Journal of Environmental Quality_, 27(2):258-261.
* [Rabotyagov et al., 2020] Rabotyagov, S. S., Kling, C. L., Gassman, P. W., Rabalais, N. N., and Turner, R. E. (2020). The economics of dead zones: Causes, impacts, policy challenges, and a model of the gulf of mexico hypoxic zone. _Review of Environmental Economics and Policy_.
* [Robertson et al., 2006] Robertson, D. M., Graczyk, D. J., Garrison, P. J., Wang, L., LaLiberte, G., and Bannerman, R. (2006). Nutrient concentrations and their relations to the biotic integrity of wadeable streams in wisconsin. Professional Paper 1722.
* [Sampat et al., 2021] Sampat, A. M., Hicks, A., Ruiz-Mercado, G. J., and Zavala, V. M. (2021). Valuing economic impact reductions of nutrient pollution from livestock waste. _Resources, Conservation and Recycling_, 164:105199.
* [Saul et al., 2019] Saul, B. C., Hudgens, M. G., and Mallin, M. A. (2019). Downstream effects of upstream causes. _Journal of the American Statistical Association_, 114(528):1493-1504.
* [Schmidt et al., 2020] Schmidt, C., Kumar, R., Yang, S., and Buttner, O. (2020). Microplastic particle emission from wastewater treatment plant effluents into river networks in germany: Loads, spatial patterns of concentrations and potential toxicity. _Science of the Total Environment_, 737:139544.
* [Sharpley et al., 2013] Sharpley, A., Jarvie, H. P., Buda, A., May, L., Spears, B., and Kleinman, P. (2013). Phosphorus legacy: Overcoming the effects of past management practices to mitigate future water quality impairment. _Journal of environmental quality_, 42(5):1308-1326.
* [Sharpley et al., 1993] Sharpley, A. N., Daniel, T., and Edwards, D. (1993). Phosphorus movement in the landscape. _Journal of Production Agriculture_, 6(4):492-500.
* [Shortle and Horan, 2017] Shortle, J. and Horan, R. D. (2017). Nutrient pollution: A wicked challenge for economic instruments. _Water Economics and Policy_, 3(02):1650033.
* [Soranno et al., 2015] Soranno, P. A., Cheruvelil, K. S., Wagner, T., Webster, K. E., and Bremigan, M. T. (2015). Effects of land use on lake nutrients: The importance of scale, hydrologic connectivity, and region. _PloS one_, 10(8):e0135454.
* [Soranno et al., 1999] Soranno, P. A., Webster, K. E., Riera, J. L., Kratz, T. K., Baron, J. S., Bukaveckas, P. A., Kling, G. W., White, D. S., Caine, N., Lathrop, R. C., et al. (1999). Spatial variation among lakes within landscapes: ecological organization along lake chains. _Ecosystems_, 2(5):395-410.
* [Tejedor et al., 2015a] Tejedor, A., Longjas, A., Zaliapin, I., and Foufoula-Georgiou, E. (2015a). Delta channel networks: 1. a graph-theoretic approach for studying connectivity and steady state transport on deltaic surfaces. _Water Resources Research_, 51(6):3998-4018.
* [Tejedor et al., 2015b] Tejedor, A., Longjas, A., Zaliapin, I., and Foufoula-Georgiou, E. (2015b). Delta channel networks: 2. metrics of topologic and dynamic complexity for delta comparison, physical inference, and vulnerability assessment. _Water Resources Research_, 51(6):4019-4045.
* [Tominac et al., 2020] Tominac, P., Aguirre-Villegas, H., Sanford, J., Larson, R., and Zavala, V. (2020). Evaluating landfill diversion strategies for municipal organic waste management using environmental and economic factors. _ACS Sustainable Chemistry & Engineering_, 9(1):489-498.
* [Tong et al., 2022] Tong, X., Mohapatra, S., Zhang, J., Tran, N. H., You, L., He, Y., and Gin, K. Y.-H. (2022). Source, fate, transport and modelling of selected emerging contaminants in the aquatic environment: Current status and future perspectives. _Water Research_, page 118418.
* [U.S. EPA, 2014] U.S. EPA (2014). Wisconsin integrated assessment of watershed health. Technical report. EPA 841-R-14-001.
* [U.S. EPA, 2020] U.S. EPA (2020). National rivers and streams assessment 2013-2014: a collaborative survey. Technical report, U.S. Environmental Protection Agency, Washington, DC. EPA 841-R-19-001.
* [U.S. EPA, 2022] U.S. EPA (2022). Watershed index online. [https://www.epa.gov/wsio](https://www.epa.gov/wsio). Accessed 09-21-2022 (dataset).
* [USGS, 2022] USGS (2022). Watershed boundary dataset. [https://apps.nationalmap.gov/downloader/#/](https://apps.nationalmap.gov/downloader/#/). Accessed on 04-20-2022 (dataset).
* [Valiela et al., 1990] Valiela, I., Costa, J., Foreman, K., Teal, J. M., Howes, B., and Aubrey, D. (1990). Transport of groundwater-borne nutrients from watersheds and their effects on coastal waters. _Biogeochemistry_, 10(3):177-197.
* [Van Es et al., 2004] Van Es, H., Schindelbeck, R., and Jokela, W. (2004). Effect of manure application timing, crop, and soil type on phosphorus leaching. _Journal of Environmental Quality_, 33(3):1070-1080.
* [Verhoeven et al., 2006] Verhoeven, J. T., Arheimer, B., Yin, C., and Hefting, M. M. (2006). Regional and global concerns over wetlands and water quality. _Trends in ecology & evolution_, 21(2):96-103.
* [Walton et al., 2020] Walton, C. R., Zak, D., Audet, J., Petersen, R. J., Lange, J., Oehmke, C., Wichtmann, W., Kreyling, J., Grygoruk, M., Jablonska, E., et al. (2020). Wetland buffer zones for nitrogen and phosphorus retention: Impacts of soil type, hydrology and vegetation. _Science of the Total Environment_, 727:138709.
* [Wang and Baerenklau, 2015] Wang, J. and Baerenklau, K. A. (2015). How inefficient are nutrient application limits? a dynamic analysis of groundwater nitrate pollution from concentrated animal feeding operations. _Applied Economic Perspectives and Policy_, 37(1):130-150.
* [Wang et al., 2018] Wang, X., Zhang, L., Zhao, Z., and Cai, Y. (2018). Heavy metal pollution in reservoirs in the hilly area of southern china: Distribution, source apportionment and health risk assessment. _Science of the Total Environment_, 634:158-169.
* [Wellen et al., 2015] Wellen, C., Kamran-Disfani, A.-R., and Arhonditsis, G. B. (2015). Evaluation of the current state of distributed watershed nutrient water quality modeling. _Environmental science & technology_, 49(6):3278-3290.
* [Wilkinson et al., 2017] Wilkinson, J., Hooda, P. S., Barker, J., Barton, S., and Swinden, J. (2017). Occurrence, fate and transformation of emerging contaminants in water: An overarching review of the field. _Environmental Pollution_, 231:954-970.
* [Wisconsin DNR, a] Wisconsin DNR. Center of lake water quality data. [https://dnr.wi.gov/lakes/waterquality/Station.aspx?id=183082](https://dnr.wi.gov/lakes/waterquality/Station.aspx?id=183082). Accessed 05-16-2022 (dataset).
* [Wisconsin DNR, b] Wisconsin DNR. Deep hole water quality data. [https://dnr.wi.gov/lakes/waterquality/Station.aspx?id=353089](https://dnr.wi.gov/lakes/waterquality/Station.aspx?id=353089). Accessed 05-16-2022 (dataset).
* [Wisconsin DNR, c] Wisconsin DNR. Water quality data. [https://dnr.wi.gov/lakes/waterquality/](https://dnr.wi.gov/lakes/waterquality/). Accessed 08-10-2021 (dataset).
* [Wisconsin DNR, 2017] Wisconsin DNR (2017). 24k hydro geodatabase. [https://www.arcgis.com/home/item.html?id=cb1c7f75d14f42ee819a46894fd2e771](https://www.arcgis.com/home/item.html?id=cb1c7f75d14f42ee819a46894fd2e771). Accessed 09-21-2022 (dataset).
* [Xue et al., 2022] Xue, J., Wang, Q., and Zhang, M. (2022). A review of non-point source water pollution modeling for the urban-rural transitional areas of china: Research status and prospect. _Science of The Total Environment_, page 154146.
* [Yuan et al., 2020] Yuan, L., Sinshaw, T., and Forshay, K. J. (2020). Review of watershed-scale water quality and nonpoint source pollution models. _Geosciences_, 10(1):25.
* [Zaliapin et al., 2010] Zaliapin, I., Foufoula-Georgiou, E., and Ghil, M. (2010). Transport on river networks: A dynamic tree approach. _Journal of Geophysical Research: Earth Surface_, 115(F2).
* [Zhang et al., 2019] Zhang, S., Ding, J., Razanajatovo, R. M., Jiang, H., Zou, H., and Zhu, W. (2019). Interactive effects of polystyrene microplastics and roxithromycin on bioaccumulation and biochemical status in the freshwater fish red tilapia (oreochromis niloticus). _Science of the total environment_, 648:1431-1439.
* [Zhu et al., 2021] Zhu, L., Jiang, C., Panthi, S., Allard, S. M., Sapkota, A. R., and Sapkota, A. (2021). Impact of high precipitation and temperature events on the distribution of emerging contaminants in surface water in the mid-atlantic, united states. _Science of The Total Environment_, 755:142552.
**Supplementary Information**
A Graph-Based Modeling Framework for Tracing Hydrological Pollutant Transport in Surface Waters
David L. Cole\({}^{a}\), Gerardo J. Ruiz-Mercado\({}^{bc}\), and Victor M. Zavala\({}^{a}\)*
Footnote *: Corresponding Author: [email protected]
\({}^{a}\)Department of Chemical and Biological Engineering,
University of Wisconsin-Madison, Madison, WI 53706
\({}^{b}\)Office of Research and Development,
U.S. Environmental Protection Agency, Cincinnati, OH 45268, USA,
and Chemical Engineering Graduate Program,
Universidad del Atlantico, Puerto Colombia 080007, Colombia
## 1 Graph Construction
In this section, we outline how we constructed the graph. The scripts and data referenced below are available at [https://github.com/zavalab/ML/tree/master/HydroGraphs/graph_construction](https://github.com/zavalab/ML/tree/master/HydroGraphs/graph_construction).
We began by constructing a boundary polygon file for the state of Wisconsin. We did this by starting with a shapefile of all Wisconsin counties [Wisconsin Department of Natural Resources Data Curator, ] and dissolving these into a single polygon. The resulting polygon was then used to get the lakes, rivers, and watersheds that applied to our area. All of these starting files can be found in the folder "graph_construction/lakes_rivers".
After assembling this initial data, we built shapefiles containing the geographic area of interest for all of the watersheds, rivers, and waterbodies. This was done by overlaying the polygon of Wisconsin with all other shapefiles. The resulting shapefiles were saved to the folder "WIgeodataframes". The methods for doing this can be found within the file "build_base_dataframes.py", where we convert all shapefiles to the same CRS code and overlay them using GeoPandas. In addition, we add HUC8 codes to all HUC10 shapefiles and we add HUC8 and HUC10 codes to all HUC12 shapefiles.
After building the shapefiles for our area of interest, we added further attributes to our waterbody and river GeoDataFrames. We first add the HUC8, HUC10, and HUC12 codes to all rivers and waterbodies. This was done by getting the centroid of the river or waterbody object and finding which HUC polygon contains that centroid. After completing this (see "add_hucs_to_lakes_rivers.py"), we then looped through these objects and added the list of intersecting rivers (for the waterbody objects) or the list of intersecting waterbodies (for the river objects). These lists were later used in adding the waterbodies to the river graph. This process was performed within the script "add_river_lake_nodes.py".
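For illustration, the overlay and centroid-based HUC assignment described above can be expressed with GeoPandas roughly as follows. This is a minimal sketch rather than the exact code in the repository; the file paths and the `HUC12` column name are assumptions made for the example.

```python
import geopandas as gpd

# Illustrative paths; the real inputs follow the folder layout described above.
wi = gpd.read_file("graph_construction/lakes_rivers/wi_counties.shp").dissolve()
lakes = gpd.read_file("graph_construction/lakes_rivers/waterbodies.shp")
huc12 = gpd.read_file("graph_construction/lakes_rivers/huc12.shp")

# Put everything in the same CRS and clip to the Wisconsin polygon.
lakes = lakes.to_crs(wi.crs)
huc12 = huc12.to_crs(wi.crs)
lakes_wi = gpd.overlay(lakes, wi, how="intersection")

# Assign each waterbody the HUC12 code of the polygon that contains its centroid.
centroids = lakes_wi.copy()
centroids["geometry"] = centroids.geometry.centroid
joined = gpd.sjoin(centroids, huc12[["HUC12", "geometry"]], how="left", predicate="within")
lakes_wi["HUC12"] = joined.groupby(joined.index)["HUC12"].first()
```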
After adding the river and waterbody intersection columns to our dataframes, we then built the list of edges for our graph. We do this by starting with the PlusFlow.csv list containing the FROMCOMID and TOCOMID pairs for the rivers. We then add waterbodies to this list using the methods described in our manuscript. This was done within "add_to_comid_list.py". Finally, we include an optional aggregation step, "aggregate_graph.py", which we outline in the next section. The above process results in a list of edges (FROMCOMID and TOCOMID lists) that can be converted to a graph.
## 2 Graph Aggregation
In this section, we present the algorithm we used to aggregate our graph. "Aggregation" involves combining nodes (or sets of nodes) into a single node. Aggregating the graph can be useful to simplify visualizations and to decrease the number of nodes in the graph. This latter point can be helpful in decreasing computation time for different graph algorithms because there are fewer nodes to iterate through. Our algorithm aggregates some river nodes together and it aggregates some river nodes into waterbody nodes. Aggregation only occurs if certain conditions are met and if two nodes are in the same HUC12 watershed. Within our algorithm, aggregation never merges two waterbody nodes together, thus ensuring that the total number of waterbody nodes in the graph remains the same before and after aggregation. This also ensures that the connectivity of those waterbody nodes does not change.
Our algorithm for performing this aggregation is an iterative process. It iterates through the set of all edges in the graph (by iterating through the list of TOCOMIDs and FROMCOMIDs), tests certain conditions, and merges nodes if those conditions are met. After all edges have been passed through, the process repeats until no more nodes can be merged. The code for performing this aggregation can be found in "aggregate_graph.py" and "WI_graph_functions.py" at the Github link in the above section.
The algorithm iterates through the list of TOCOMIDs and FROMCOMIDs (i.e., the edges of the graph) and goes as follows (a minimal sketch of the merge operation itself is given after the list):
* If both the nodes of the edge are waterbodies, then do nothing to the edge.
* If the FROMCOMID node is a waterbody and the TOCOMID node is a river, test if the TOCOMID river node has any immediate upstream connections besides the FROMCOMID waterbody node.
* If there are no other immediate upstream connections, test if the FROMCOMID waterbody node and the TOCOMID river node are in the same HUC12 watershed.
* If they are in the same HUC12 watershed, merge the TOCOMID river node with the FROMCOMID waterbody node. Replace all instances of the merged river COMID in the list of TOCOMIDs and FROMCOMIDs with the waterbody COMID.
* If they are not in the same HUC12 watershed, do nothing to the edge and move on to the next edge in the list.
* If there are other immediate upstream connections besides the FROMCOMID waterbody node, do nothing to the edge and move on to the next edge in the list.
* If the FROMCOMID node is a river and the TOCOMID node is a waterbody, test if the FROMCOMID river node has any immediate downstream connections besides the TOCOMID waterbody node.
* If there are no other immediate downstream connections, test if the FROMCOMID river node and the TOCOMID waterbody node are in the same HUC12 watershed.
* If they are in the same HUC12 watershed, merge the FROMCOMID river node with the TOCOMID waterbody node. Replace all instances of the merged river COMID in the list of TOCOMIDs and FROMCOMIDs with the waterbody COMID.
* If they are not in the same HUC12 watershed, do nothing to the edge and move on to the next edge in the list.
* If there are other immediate downstream connections, do nothing to the edge and move on to the next edge in the list.
* If both the FROMCOMID and TOCOMID nodes are rivers, test if the TOCOMID river node has any immediate upstream node(s) besides the FROMCOMID river node.
* If there is another immediate upstream node(s), test if the upstream nodes are connected immediately downstream to any nodes other than the TOCOMID river node. If they are connected to other nodes, do nothing to the edge and move on to the next edge in the list.
* If there is another immediate upstream node(s), test if any of them are waterbodies.
* If any of the immediate upstream nodes are waterbodies, do nothing to the edge and move on to the next edge in the list.
* If the immediate upstream nodes are not waterbodies, test if all upstream nodes and the TOCOMID node are in the same HUC12 watershed. If they are, then replace all instances of the FROMCOMID river node in the list of TOCOMIDs and FROMCOMIDs with the TOCOMID node. If they are not in the same HUC12 watershed, do nothing to the edge and move on to the next edge in the list.
* If there is not another immediate upstream node to the TOCOMID node, then test if the FROMCOMID and TOCOMID node are in the same HUC12 watershed.
* If they are in the same HUC12 watershed, replace all instances of the FROMCOMID river node in the list of TOCOMIDs and FROMCOMIDs with the TOCOMID node.
* If they are not in the same HUC12 watershed, do nothing to the edge and move on to the next edge in the list.
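As noted in the list above, the merge operation itself only rewrites the edge list: every occurrence of the absorbed COMID is replaced by the COMID of the node it is merged into. A minimal sketch of this step (with illustrative column names following the NHDPlusV2 convention) is:

```python
import pandas as pd

def merge_node(edges: pd.DataFrame, old_comid: int, new_comid: int) -> pd.DataFrame:
    """Absorb old_comid into new_comid by rewriting the FROMCOMID/TOCOMID lists."""
    edges = edges.replace({"FROMCOMID": {old_comid: new_comid},
                           "TOCOMID": {old_comid: new_comid}})
    # Drop self-loops and duplicate edges created by the merge.
    edges = edges[edges["FROMCOMID"] != edges["TOCOMID"]]
    return edges.drop_duplicates(subset=["FROMCOMID", "TOCOMID"])
```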
This algorithm is repeated continuously until it ceases to change the set of TOCOMIDs and FROMCOMIDs. In the case of the graph of Wisconsin, this method reduced our graph from 45,997 nodes to 8,526 nodes. The results of this aggregation procedure can be seen in Figure 1.
We make two notes on this algorithm. First, in several steps of the algorithm, we are concerned with the immediate upstream or downstream connections to a given node. This is because we have to be careful not to merge these connections as they would change the connectivity of the graph. For example, imagine a waterbody with an upstream node that flows both into the waterbody and into a separate river node. If we merge that upstream node into the waterbody, the waterbody would now also flow into the separate river node. This would not maintain the overall connectivity of the original graph. Second, this algorithm also highlights one of the reasons that we added the HUC watershed codes as attributes of the waterbodies shapefile since we use the HUC12 codes to only merge nodes in the same watershed. These attributes (such as the watershed codes) can be important in working with these objects.
In aggregating our graph, we wanted to confirm that the connectivity of the original graph was maintained. In particular, it is important that if two waterbodies are connected in the original graph, they should also be connected in the aggregated graph. To confirm this was the case, we looped through every waterbody within the aggregated graph in an outer loop, and then looped through the same set of waterbodies within an inner loop. Inside the inner loop, we used NetworkX's has_path function to test if the waterbody of the outer loop and the waterbody of the inner loop are connected within a given graph. We ran this test for both the original and aggregated graph.
After running this function, we tested whether the original graph and the aggregated graph yielded the same results. This test can be found at the end of "Visualizations.ipynb". It looped over 3,199 waterbodies, and the connectivity was the same between the original and aggregated graphs for all waterbodies.
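A sketch of this pairwise check is shown below. It assumes the original and aggregated graphs are NetworkX directed graphs and that `waterbodies` is the list of waterbody COMIDs; it is a brute-force test that issues one path query per ordered pair, which is what the validation described above amounts to.

```python
import networkx as nx

def connectivity_matrix(G, waterbodies):
    """Map each ordered pair of waterbodies to whether a directed path exists in G."""
    nodes = [w for w in waterbodies if w in G]
    return {(u, v): nx.has_path(G, u, v) for u in nodes for v in nodes}

def aggregation_preserves_connectivity(G_original, G_aggregated, waterbodies):
    """True if every waterbody pair has the same reachability before and after aggregation."""
    return (connectivity_matrix(G_original, waterbodies)
            == connectivity_matrix(G_aggregated, waterbodies))
```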
## 3 Case Study I: Lake Altoona and Mohawksin Lake Data
In this section, we report the data used for Lake Mohawksin and Altoona Lake as discussed in case study 1 of our manuscript. We report the individual measurements for total phosphorus (TP) and the lake perception (Tables 1 - 4) taken from the Wisconsin Department of Natural Resources [Wisconsin Department of Natural Resources, a, Wisconsin Department of Natural Resources, c]. This latter measurement is a 1-5 scale of algae levels on the lake, with 1 ("beautiful, could not be nicer") representing cleaner levels and 5 ("swimming and aesthetic enjoyment of lake substantially reduced because of algae levels") representing poorer water quality. The lake perception can give a general idea of water quality levels, but it is subjective (based on the opinion of the data collector) and has no quantitative measure [Wisconsin Department of Natural Resources, b]. Overall, Altoona Lake had much higher TP measurements, and there were also higher perception levels reported. Lake Mohawksin never had a perception level above 2 while Altoona Lake had several reported levels of 3 and 4. The data strongly suggests that Altoona Lake has poorer water quality than Lake Mohawksin.
Figure 1: A visualization of the aggregation of our graph. Nodes of the original graph (panel a) are iteratively merged to form the aggregated graph (panel b). Watersheds shown are the HUC10 watersheds 0709000205, 0709000206, and 0709000207, primarily in Dane County, Wisconsin.
\begin{table}
\begin{tabular}{|l|l|l|l|} \hline Date & TP (mg/m3) & Date & TP (mg/m3) \\ \hline
5/15/2001 & 79 & 6/27/2015 & 85.9 \\
7/31/2001 & 75 & 7/24/2015 & 95.5 \\
8/8/2011 & 142 & 8/29/2015 & 87.9 \\
8/29/2011 & 72 & 5/9/2016 & 64.1 \\
5/24/2013 & 127 & 6/25/2016 & 129 \\
7/6/2013 & 138 & 7/23/2016 & 52.2 \\
7/29/2013 & 80.3 & 9/6/2016 & 160 \\
8/24/2013 & 92.7 & 6/26/2017 & 142 \\
5/10/2014 & 71.7 & 5/7/2018 & 113 \\
6/27/2014 & 123 & 6/28/2018 & 123 \\
7/26/2014 & 80.2 & 7/29/2018 & 117 \\
8/30/2014 & 94.6 & 8/23/2018 & 166 \\
4/18/2015 & 67.4 & 6/22/2019 & 98.3 \\ \hline \end{tabular}
\end{table}
Table 1: Total phosphorus measurements for Altoona Lake, Wisconsin
\begin{table}
\begin{tabular}{|l|l|l|l|} \hline Date & Perception & Date & Perception \\ \hline
5/12/2011 & 3-Enjoyment somewhat impaired (algae) & 7/16/2019 & 2-Very minor aesthetic problems \\
8/29/2011 & 4-Would not swim but boating OK (algae) & 7/24/2019 & 2-Very minor aesthetic problems \\
9/15/2011 & 4-Would not swim but boating OK (algae) & 8/1/2019 & 2-Very minor aesthetic problems \\
5/19/2018 & 2-Very minor aesthetic problems & 8/27/2019 & 1-Beautiful \\
6/12/2018 & 3-Enjoyment somewhat impaired (algae) & 4/11/2020 & 1-Beautiful \\
6/20/2018 & 2-Very minor aesthetic problems & 4/27/2020 & 1-Beautiful \\
6/28/2018 & 2-Very minor aesthetic problems & 5/24/2020 & 1-Beautiful \\
7/14/2018 & 2-Very minor aesthetic problems & 5/31/2020 & 1-Beautiful \\
7/22/2018 & 4-Would not swim but boating OK (algae) & 6/6/2020 & 1-Beautiful \\
7/29/2018 & 4-Would not swim but boating OK (algae) & 6/8/2020 & 1-Beautiful \\
8/14/2018 & 4-Would not swim but boating OK (algae) & 6/17/2020 & 1-Beautiful \\
8/23/2018 & 4-Would not swim but boating OK (algae) & 6/24/2020 & 2-Very minor aesthetic problems \\
8/31/2018 & 4-Would not swim but boating OK (algae) & 7/5/2020 & 2-Very minor aesthetic problems \\
9/8/2018 & 2-Very minor aesthetic problems & 7/11/2020 & 2-Very minor aesthetic problems \\
9/16/2018 & 2-Very minor aesthetic problems & 7/19/2020 & 2-Very minor aesthetic problems \\
9/23/2018 & 2-Very minor aesthetic problems & 7/27/2020 & 2-Very minor aesthetic problems \\
4/14/2019 & 1-Beautiful & 8/9/2020 & 3-Enjoyment somewhat impaired (algae) \\
4/23/2019 & 2-Very minor aesthetic problems & 8/11/2020 & 3-Enjoyment somewhat impaired (algae) \\
5/4/2019 & 1-Beautiful & 8/20/2020 & 4-Would not swim but boating OK (algae) \\
5/16/2019 & 2-Very minor aesthetic problems & 8/27/2020 & 4-Would not swim but boating OK (algae) \\
6/6/2019 & 2-Very minor aesthetic problems & 9/13/2020 & 2-Very minor aesthetic problems \\
6/14/2019 & 2-Very minor aesthetic problems & 9/21/2020 & 2-Very minor aesthetic problems \\
6/22/2019 & 2-Very minor aesthetic problems & 9/30/2020 & 2-Very minor aesthetic problems \\
7/2/2019 & 2-Very minor aesthetic problems & 10/7/2020 & 2-Very minor aesthetic problems \\
7/8/2019 & 2-Very minor aesthetic problems & & \multicolumn{1}{c}{} \\ \end{tabular}
\end{table}
Table 2: Perception measurements for Altoona Lake, Wisconsin
## 4 Case Study II: DNR Data
### Data Compilation
The data used in Case Study 2 of our manuscript was taken from the Wisconsin Department of Natural Resources' (DNR) website [Wisconsin Department of Natural Resources, d] on 10 August 2021. We wrote a script to webscrape and download the data of interest. Many of the measurements reported by the DNR were taken at different points within a given lake. For consistency, we only compiled data where the measurements were taken near the deepest point of the lake (i.e., only where the title of the data included the words "Deep Hole", "Max Depth", "Deepest", or "Maximum Depth"). The resulting reports were reformatted so that the TP and chlorophyll-a data were more accessible. The scripts for completing this process are available at [https://github.com/zavalab/ML/tree/master/HydroGraphs/DNR_data](https://github.com/zavalab/ML/tree/master/HydroGraphs/DNR_data). The original lake reports that we downloaded ("original_lake_reports/") and the cleaned, reformatted reports ("Lakes/") are also available at this link. This folder ("Lakes/") is indexed by "lake_index_WBIC_COMID.csv".
In addition, one of the challenges of using this data is matching the data to the correct waterbody within the NHDPlusV2. The DNR data uses a Waterbody ID (WBIC) while the NHDPlusV2 uses a COMID. To match these, we needed a set of shapefiles that identified the lakes by the WBIC. We use the Wisconsin DNR's 24k Hydro Waterbodies dataset [WI DNR Data Curator, ], which uses a WBIC to identify all of the waterbody polygons. To match the NHDPlusV2 with the 24k Hydro Waterbodies, we first took all 24k Hydro Waterbodies which had a WBIC that belonged to any of the DNR data which we compiled. This resulted in a set of about 800 waterbodies. We then iterated through this set of waterbodies and tested whether the centroid of any of these HYDROLake polygons was within a NHDPlusV2 waterbody. If they were, we saved the COMID of that NHDPlusV2 waterbody so that it corresponded to the WBIC of the HYDROLake. Because of the shape of some of the waterbodies, some centroids lie outside of the given waterbody. Consequently, for all of those waterbodies for which the centroid did not lie within a NHDPlusV2 waterbody, we tested whether the HYDROLake polygon intersected any NHDPlusV2 waterbody. If it only intersected a single NHDPlusV2 waterbody, we also saved the COMID of the NHDPlusV2 waterbody so that it corresponded to the WBIC of the intersecting HYDROLake. The code for performing this task can be found in the above Github link, along with the HYDROLakes shapefiles that we used.
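The matching loop described above can be sketched as follows. The WBIC and COMID column names are taken from the respective datasets, and the exact implementation in the repository may differ.

```python
import geopandas as gpd

def match_wbic_to_comid(hydro_lakes: gpd.GeoDataFrame, nhd_lakes: gpd.GeoDataFrame) -> dict:
    """Map DNR WBICs to NHDPlusV2 COMIDs by centroid containment, with an
    intersection fallback that is accepted only when it is unambiguous."""
    mapping = {}
    for _, lake in hydro_lakes.iterrows():
        # First test: does the centroid of the DNR polygon fall inside an NHD waterbody?
        hits = nhd_lakes[nhd_lakes.contains(lake["geometry"].centroid)]
        if hits.empty:
            # Fallback: accept the match only if exactly one NHD waterbody intersects.
            hits = nhd_lakes[nhd_lakes.intersects(lake["geometry"])]
            if len(hits) != 1:
                continue
        mapping[lake["WBIC"]] = hits.iloc[0]["COMID"]
    return mapping
```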
### Cutoff Values for Analysis
In analyzing the waterbodies with the highest and lowest pollutant concentrations, we used cutoff values to help ensure that both the TP and chlorophyll-a data suggested that the waterbodies were polluted. Carlson [Carlson, 1977] introduced a Trophic State Index (TSI) for helping to identify the eutrophication level (strongly related to the water quality) of a waterbody. The TSI of a waterbody can be calculated from the secchi depth, total phosphorus level, or chlorophyll-a level. Carlson also points out that the calculated TSI value should be the same regardless of what initial indicator is used. Consequently, we chose the cutoffs to ensure that both values would be above a certain threshold. In
\begin{table}
\begin{tabular}{|l|l|l|l|} \hline Date & TP (mg/m3) & Date & TP (mg/m3) \\ \hline
[MISSING_PAGE_POST]
\\ \hline \end{tabular}
\end{table}
Table 3: Total phosphorus measurements for Lake Mohawksin, Wisconsin
\begin{table}
\begin{tabular}{|l|l|l|l|} \hline Date & Perception & Date & Perception \\ \hline
7/17/2006 & 2-Very minor aesthetic problems & 5/17/2014 & 1-Beautiful \\
7/31/2006 & 2-Very minor aesthetic problems & 6/26/2014 & 1-Beautiful \\
8/15/2006 & 2-Very minor aesthetic problems & 7/24/2014 & 2-Very minor aesthetic problems \\
9/4/2006 & 2-Very minor aesthetic problems & 8/28/2014 & 1-Beautiful \\
[MISSING_PAGE_POST]
\\ \hline \end{tabular}
\end{table}
Table 4: Perception measurements for Lake Mohawksin, Wisconsin
addition, Carlson and Simpson (Carlson and Simpson, 1996) suggest general ranges for classifying a lake as oligotrophic or eutrophic (essentially good or poor water quality, respectively).
We calculated the TSI for the cutoffs used in this study, and the resulting values suggest that the chosen cutoffs are reasonable. For identifying the five most and least polluted lakes, the cutoffs for the least polluted lakes require that the TSI be below 47 in both TP and chlorophyll-a (based on Carlson and Simpson (1996), this is below the classical eutrophy range). For the most polluted lakes, the cutoffs require the TSI to be above 63. However, we note that for the lakes chosen, all of the least polluted lakes had a TSI (based on average TP and chlorophyll-a values) below 40. The most polluted lakes, in contrast, had a TSI of no lower than 65 (well into the eutrophic range reported by Carlson and Simpson), and most lakes had a TSI of over 70 (likely making them hypereutrophic). For the cutoffs used to form a larger subset of lakes, the values of \(<15\) mg/m\({}^{3}\) TP and \(<5\) mg/m\({}^{3}\) chlorophyll-a correspond to a TSI no larger than 47 for the less polluted lakes. For the more polluted lakes, the cutoffs of \(>60\) mg/m\({}^{3}\) TP and \(>15\) mg/m\({}^{3}\) chlorophyll-a correspond to a TSI of no smaller than 57.
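For reference, the arithmetic behind these cutoff statements can be reproduced with Carlson's standard index equations; the sketch below assumes the usual TSI formulas from Carlson (1977) for total phosphorus and chlorophyll-a.

```python
import math

def tsi_tp(tp_mg_per_m3: float) -> float:
    """Carlson trophic state index from total phosphorus (mg/m^3)."""
    return 14.42 * math.log(tp_mg_per_m3) + 4.15

def tsi_chla(chla_mg_per_m3: float) -> float:
    """Carlson trophic state index from chlorophyll-a (mg/m^3)."""
    return 9.81 * math.log(chla_mg_per_m3) + 30.6

# Cutoffs for the larger subsets of lakes discussed above:
print(tsi_tp(15), tsi_chla(5))    # ~43.2 and ~46.4, i.e., TSI no larger than ~47
print(tsi_tp(60), tsi_chla(15))   # ~63.2 and ~57.2, i.e., TSI no smaller than ~57
```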
## 5 Additional Functions for working with Graphs
In this section, we give some of the functions that we have defined in "HydroGraph_functions.py" at [https://github.com/zavalab/ML/tree/master/HydroGraphs/graph_construction](https://github.com/zavalab/ML/tree/master/HydroGraphs/graph_construction) to make managing and building the graphs easier. See the NetworkX documentation for further details on how to use specific functions within their package.
* build_graph(toforms)-This function takes the dataframe that contains the list of TOCOMID and FROMCOMID values for a given graph and converts it into a directed graph within NetworkX. It does this by adding edges to the graph from the TOCOMID and FROMCOMID values.
* get_pos_dict(G, lake_gdf, river_gdf)-This function takes a graph and the GeoDataFrames for all waterbodies and rivers within the graph and returns a dictionary of positions (based on the centroid of the objects in the GeoDataFrames), a list of node colors (with a default of blue for waterbodies and red for rivers), and a list of node sizes. This information is useful in plotting within NetworkX's draw() function.
* get_upstream_graph(G, node)-This function returns the upstream graph for a specified node.
* get_downstream_graph(G, node)-This function returns the downstream graph for a specified node.
* get_upstream_graph_and_cols(G, node, lake_gdf, river_gdf)-This function returns the upstream graph for a specified node, and it returns a list of colors that can be used for plotting the upstream graph.
* get_downstream_graph_and_cols(G, node, lake_gdf, river_gdf) -This function returns the downstream graph for a specified node, and it returns a list of colors that can be used for plotting the downstream graph.
For specific details of how these functions operate, please see the comments within the file containing these functions.
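For illustration, a minimal NetworkX sketch of the first of these helpers and of the upstream/downstream helpers is given below. This is not the exact implementation in "HydroGraph_functions.py", and it assumes the edge dataframe uses the FROMCOMID and TOCOMID column names.

```python
import networkx as nx
import pandas as pd

def build_graph(toforms: pd.DataFrame) -> nx.DiGraph:
    """Directed graph with one edge per FROMCOMID -> TOCOMID pair."""
    G = nx.DiGraph()
    G.add_edges_from(zip(toforms["FROMCOMID"], toforms["TOCOMID"]))
    return G

def get_upstream_graph(G: nx.DiGraph, node) -> nx.DiGraph:
    """Subgraph induced by the node and every node that can reach it."""
    return G.subgraph(nx.ancestors(G, node) | {node}).copy()

def get_downstream_graph(G: nx.DiGraph, node) -> nx.DiGraph:
    """Subgraph induced by the node and every node reachable from it."""
    return G.subgraph(nx.descendants(G, node) | {node}).copy()
```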
|
2310.15455 | UI Layout Generation with LLMs Guided by UI Grammar | The recent advances in Large Language Models (LLMs) have stimulated interest
among researchers and industry professionals, particularly in their application
to tasks concerning mobile user interfaces (UIs). This position paper
investigates the use of LLMs for UI layout generation. Central to our
exploration is the introduction of UI grammar -- a novel approach we proposed
to represent the hierarchical structure inherent in UI screens. The aim of this
approach is to guide the generative capacities of LLMs more effectively and
improve the explainability and controllability of the process. Initial
experiments conducted with GPT-4 showed the promising capability of LLMs to
produce high-quality user interfaces via in-context learning. Furthermore, our
preliminary comparative study suggested the potential of the grammar-based
approach in improving the quality of generative results in specific aspects. | Yuwen Lu, Ziang Tong, Qinyi Zhao, Chengzhi Zhang, Toby Jia-Jun Li | 2023-10-24T02:00:12Z | http://arxiv.org/abs/2310.15455v1 | # Exploring Mobile UI Layout Generation
###### Abstract
The recent advances in Large Language Models (LLMs) have stimulated interest among researchers and industry professionals, particularly in their application to tasks concerning mobile user interfaces (UIs). This position paper investigates the use of LLMs for UI layout generation. Central to our exploration is the introduction of _UI grammar_ -- a novel approach we proposed to represent the hierarchical structure inherent in UI screens. The aim of this approach is to guide the generative capacities of LLMs more effectively and improve the explainability and controllability of the process. Initial experiments conducted with GPT-4 showed the promising capability of LLMs to produce high-quality user interfaces via in-context learning. Furthermore, our preliminary comparative study suggested the potential of the grammar-based approach in improving the quality of generative results in specific aspects.
## 1 Introduction
### Mobile UI Layout Generation
Layout generation for User interfaces (UIs), or Graphical User Interfaces (GUIs), has been explored by researchers across AI and Human-Computer Interaction (HCI). From a machine-learning perspective, the inherent multi-modal characteristics of UIs pose interesting research challenges for effective UI modeling, understanding, and generation (Jiang et al., 2023, 2022); from an HCI perspective, UIs have been intensively studied as a medium for good user experience (UX). Various needfinding (Dow et al., 2005; Zimmerman & Forlizzi, 2017; Martelaro & Ju, 2017) and usability study (Nielsen, 1994, 2005) methodologies have been developed both in academia and industry to improve the usability, functionality, and user-friendliness of UIs. Solving these challenges is seen as an early step to improving user experience at scale and reducing the workload for UI/UX designers (Lu et al., 2022; Knearem et al., 2023).
Following the release of the large-scale mobile UI dataset RICO (Deka et al., 2017), several AI model architectures for mobile UI layout generation have been proposed. These architectures include, but are not limited to, Generative Adversarial Network (GAN) (Li et al., 2019; Kikuchi et al., 2021), Variational Autoencoder (VAE) (Arroyo et al., 2021; Jing et al., 2023), Diffusion Model (Cheng et al., 2023; Hui et al., 2023), Graph Neural Network (GNN) (Lee et al., 2020), and other Transformer-based neural networks (Li et al., 2020; Gupta et al., 2021; Huang et al., 2021; Kong et al., 2022; Sobolevsky et al., 2023).
### Large Language Models for UI Tasks
Recent research work has explored some of LLMs' abilities on various UI-related tasks. Wang et al. (2023) utilized Large Language Models (LLMs) to conduct 4 UI modeling tasks through in-context learning and chain-of-thought prompting. Liu et al. (2023) conducted automated GUI testing by simulating human-like interactions with GUIs using LLMs. Kargaran et al. (2023) explored user interface menu design with LLMs through natural language descriptions of designers' intentions and design goals. These efforts have demonstrated LLMs' capabilities to effectively work with UIs with careful interaction design and prompting techniques. Some experiments also exhibited competitive performance on UI task evaluation metrics, without the need for large-scale datasets or extensive training processes.
### Research problem and objectives
In this work, we seek to explore LLMs' potential for generating mobile UI layouts. Specifically, we set out to determine how the in-context learning capabilities of LLMs can be harnessed in a one-shot learning scenario to generate high-quality UI layouts. A key challenge here involves the representation and integration of the hierarchical structure inherent in UI elements into the generation process.
In response to this problem, we propose _UI grammar_--a novel approach that accurately represents the hierarchical relationships between UI elements. This approach serves to guide the generation process of LLMs, thereby making the generation more structured and contextually appropriate. From a human-centered perspective, we discuss how the inclusion of _UI grammar_ provides an intermediary layer of representation that could potentially improve the _explainability_ and _controllability_ of LLMs in the generation process. Users can better _understand_ and _steer_ LLMs' internal generation mechanisms by reviewing and editing the grammar used for coming up with the final result.
Our objectives here are twofold. First, we aim to evaluate the performance of LLMs in generating UI layouts. Second, we set out to evaluate the impact of our proposed _UI grammar_ on LLMs' generation process. We comparatively assessed the generation quality with/without the integration of _UI grammar_ in prompts against 3 common metrics for layout generation tasks: Maximum intersection over union (MaxIoU), Alignment, and Overlap. Our preliminary experiment results demonstrated LLMs' ability to generate high-quality mobile UI layouts through in-context learning and showcased the usefulness of _UI grammar_ in improving certain aspects of generation quality.
## 2 LLM Prompting for UI Layout Generation
Wang et al. (2023) have discussed a few key aspects in constructing LLM prompts for mobile UI tasks, including _screen representation_, _UI element properties_, and _class mappings_. While prompting LLMs remains an open research area, we continue this line of discussion here by reviewing techniques from recent work on adapting UI data for authoring LLM prompts. We then propose our own prompt strategy, specifically designed for UI layout generation, and provide our rationales.
### UI Representation for In-Context Learning
LLMs have showcased an impressive capacity for in-context learning, which involves adapting to a limited number of user-provided examples while maintaining competitive performance across a variety of tasks (Brown et al., 2020). This ability, confirmed to be an emergent ability as language models' sizes scale up (Wei et al., 2022), offers a more streamlined alternative to the process of fine-tuning pre-trained models with large datasets for domain adaptation.
UI data is inherently multi-modal and can often be represented in a variety of data formats. These include, but are not limited to, screenshots, Android view hierarchies, code implementations, and natural language descriptions (e.g., _"a welcome page for a comics reading app"_). Within existing UI datasets, such as RICO (Deka et al., 2017), each UI screen is typically offered in multiple data formats. This approach serves to capture visual, structural, and contextual information of UI screens.
This creates challenges in providing UI exemplars to LLMs for in-context learning, especially given the limited context window and text-only input/output modality of existing LLMs. Here, we review how recent work has adapted UI input for LLM prompting:
* Wang et al. (2023) parsed UI into **HTML** files to feed into PALM for 4 mobile UI tasks, i.e. screen question-generation, screen summarization, screen question-answering, and mapping instruction to UI action. They used the class, text, source_id, content_desc attributes to include detailed information of screen widgets.
* Liu et al. (2023) investigated using GPT-3 for automatic GUI testing through natural language conversations. They extracted static contexts using attributes of the app and screen widgets from the corresponding _AndroidManifest.xml_ file, including AppName, ActivityName, WidgetText, and WidgetID, and constructed **natural language sentences** describing the UI state with these attributes.
* While Feng et al. (2023) did not directly work with UI data, they used GPT-3.5 for a 2D image layout generation, a task sharing many similarities with mobile UI layout generation. They parsed the position of image elements into **CSS** (short for Cascading Style Sheets) snippets with normalized position values as GPT input.
### Hierarchical Structures as UI Grammar
UI elements within a screen have hierarchical relationships (Li et al., 2021, 2018), which can be reflected in the atomic design principle (Frost, 2016), the grouping feature of UI design tools like Figma, and the Android view hierarchies. Some previous work (Huang et al., 2021) flattened the hierarchical structures of UI elements and reduced the layout generation task into predicting a flattened sequence of elements and the accompanying bounding boxes. However, our assumption is that preserving the hierarchical relationship between UI elements and using them to implicitly guide the generation process can improve the generation quality.
To include such hierarchical information into our prompt to guide LLMs in generation, here we take inspiration from previous work (Kong et al., 2008; Talton et al., 2012) and define **UI Grammar** as one possible way to represent the hierarchical relationship between UI elements.
**UI Grammar** is defined as a set of production rules for describing the parent-children relationships between UI elements within a given screen hierarchical tree structure. Each
production rule is of the form A \(\rightarrow\) B, where A represents a parent UI element and B represents a sequence of one or more child elements. The definition resembles context-free grammar in syntax analysis (Earley, 1970), hence the name _UI Grammar_.
For example, for the simple UI structure visualized in Figure 1, we can parse out the following UI grammar based on the parent-children relationships: \(\texttt{Root}\rightarrow\texttt{Container}\) Button, \(\texttt{Container}\rightarrow\texttt{Pictogram}\) Text.
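To make the parsing step concrete, the following is a minimal sketch of how such production rules could be extracted from a view-hierarchy tree; the node schema (a `label` string plus an optional `children` list) is a simplification assumed for illustration rather than the exact Clay format.

```python
# Sketch: extracting UI grammar production rules from a view-hierarchy tree.
# Assumed node schema: {"label": str, "children": [node, ...]} -- a simplification
# of the Clay/RICO view hierarchy, used here only for illustration.

def parse_ui_grammar(node, rules=None):
    """Collect production rules of the form (parent_label, (child_labels, ...))."""
    if rules is None:
        rules = set()
    children = node.get("children", [])
    if children:
        rules.add((node["label"], tuple(child["label"] for child in children)))
        for child in children:
            parse_ui_grammar(child, rules)
    return rules

# Toy screen mirroring Figure 1: Root -> Container Button, Container -> Pictogram Text
screen = {
    "label": "Root",
    "children": [
        {"label": "Container",
         "children": [{"label": "Pictogram"}, {"label": "Text"}]},
        {"label": "Button"},
    ],
}

for lhs, rhs in sorted(parse_ui_grammar(screen)):
    print(f"{lhs} -> {' '.join(rhs)}")
# Container -> Pictogram Text
# Root -> Container Button
```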
In this work, we conduct an initial comparative study between UI layout generation using LLMs _with_ and _without_ the guidance of UI grammar as part of the prompt.
### Problem Definition and Prompt Design
We define our UI layout generation task as follows:
Given a natural language summary of a mobile UI screen \(S\), we use LLM to generate a target hierarchical sequence of UI elements \(T=\{o_{j}|j=1,2,\ldots,n_{u}\}\) where \(o_{j}\) denotes a tuple of \((\texttt{label},\texttt{bounding}\texttt{box})\) for UI element \(j\). These two fields in the tuple represent the _type_ and _position_ of the UI element on the screen.
With this problem definition, we design our prompt with the following objectives:
1. Using a UI format easy for LLMs to understand and generate layouts
2. Encapsulating hierarchical relationship between UI elements through UI grammar
3. Removing redundant non-visual information that is non-essential for layout generation
Based on these objectives, we chose to use JSON as the data format to represent UIs in UI layout generation. We selected JSON due to its advantages in the following aspects:
* _Compatibility:_ JSON is ideal and commonly used for data with hierarchically structured relationships. Also, given that many LLMs use programming code in training data and prompts falling within the training data distribution tend to perform better (Wang et al., 2023), JSON is a compatible data format on both ends for our task.
* _Flexibility:_ JSON supports multiple types of attributes for each element, suitable for representing the string label and the list of integer coordinates for the bounding box of each element.
* _Processing Simplicity:_ UI datasets such as RICO already use JSON to represent UI view hierarchy, reducing processing efforts.
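As an illustration of this choice, below is a minimal sketch of a hierarchical, JSON-style screen representation restricted to the fields needed for layout generation; the field names and coordinate convention are assumptions for illustration, not the exact prompt schema.

```python
import json

# Sketch of a JSON-style screen representation keeping only "label" and
# "bounding_box" for layout generation. Field names and the
# [x_min, y_min, x_max, y_max] pixel convention are illustrative assumptions.
example_screen = {
    "label": "Root",
    "bounding_box": [0, 0, 1440, 2560],
    "children": [
        {
            "label": "Toolbar",
            "bounding_box": [0, 0, 1440, 200],
            "children": [{"label": "Text", "bounding_box": [40, 50, 600, 150]}],
        },
        {"label": "Button", "bounding_box": [400, 2300, 1040, 2460]},
    ],
}

# Serialized form of the kind that could be embedded in an LLM prompt.
print(json.dumps(example_screen, indent=2))
```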
In order to compare the efficacy of UI layout generation _with_ and _without_ the guidance from UI grammar, we have created two analogous pipelines for the generation process (Fig 2, Fig 3). The main differentiation between these pipelines lies in the inclusion of UI grammar in our prompts for LLMs. We will first discuss the pipeline that operates without the UI grammar, then introduce how we integrated UI grammar into the prompt to steer LLMs in generating UI layouts.
Rather than directly working with the RICO dataset containing approximately 66k unique UI screens, we used Clay (Li et al., 2022), an improved dataset based on RICO. Clay removed noise from RICO UI data by detecting mismatches between UI element types and their visual representations and assigning semantically meaningful types to each node. It contains 59k human-annotated UI screen layouts with less-noisy visual layout data. We also utilized the Screen2Words dataset (Wang et al., 2021), which contains natural language summaries of the UI screens in RICO, to construct our prompt.
#### 2.3.1 Prompt Without UI Grammar
For generating layouts without involving UI grammar (Fig. 2), we first randomly select a screen from Clay to use as an example for our in-context learning (i.e. 1-shot prompting) and exclude it from generation to prevent data leakage. For each UI screen in Clay, we retrieve the corresponding natural language description from Screen2Words as the description for our generation target. To control the generation result and only receive layouts with meaningful UI elements, we used the semantically meaningful list of 25 UI element labels defined in Clay and included that in our prompt as a constraint. We also controlled LLM's API
Figure 1: Example UI hierarchy structure. Here we can parse out UI grammar \(\texttt{Root}\rightarrow\texttt{Container}\) Button and \(\texttt{Container}\rightarrow\texttt{Pictogram}\) Text
response format for easier parsing of the generation result, as shown in Fig 2.
#### 2.3.2 Prompt With UI Grammar
For our second pipeline and prompt design (Fig. 3), we introduce UI grammar as an intermediary step in UI layout generation with an architecture similar to neuro-symbolic models (Sarker et al., 2021). Instead of asking LLMs to directly generate the final screen layout, in the 1-shot example, we describe UI layout generation as a 2-step process: first, we introduce the list of UI grammar in the screen, then explain how we can generate the example UI layout using the provided UI grammar.
An important step in constructing the prompt with UI grammar is selecting which screens from the Clay dataset to parse grammar from. When generating a layout using descriptions of screen \(S\) from the original Clay dataset, if we also input grammars parsed from \(S\) into the prompt, data leakage occurs as screen \(S\) can be reconstructed from its own grammars in a straightforward manner. To avoid this, we conduct a 20/80 random split of the Clay dataset and use grammars parsed from the 20% grammar set to guide the generation of the 80% generation set.
In addition, from our observation, many screens from the same app packages in Clay share similar layout structures. Consequently, we split the dataset by apps to avoid the data leakage caused by having screens from the same app package in both sets.
## 3 Initial Experiments
In May 2023, we used OpenAI's GPT-4 API to conduct a preliminary experiment comparing the 2 proposed pipelines for UI layout generation. We used the gpt-4-0314 version of GPT-4 with a max_token of \(2,000\) and temperature of \(0.7\).
**Dataset** For both prompt designs, we pre-process the UI view hierarchy files from Clay by removing all attributes of UI elements except label and bounds, as the others are not necessary for our layout generation task. To further ensure the generation quality, we work with a subset of the top \(10k\) UI screens from Clay with an app review rating higher than \(4.3\) and more than \(10k\) downloads in the Google Play Store. These two thresholds serve as
Figure 2: Prompt 1 design for generation without UI grammar
quality filters and are manually defined to balance the need for a sufficiently large sample size against the desire for high-quality app representation.
Given OpenAI's API response rate and call limits, it is hard to quickly generate a large number of results. In this work-in-progress, we have conducted an initial experiment on a batch of \(192\) UI screens from the top apps in Clay and report the preliminary evaluation results as follows. Visualization of example generation results is shown in Figure 4.
## 4 Preliminary Evaluations
Here we report preliminary evaluations of our UI layout generation results against 3 metrics commonly used in this domain: Maximum Intersection Over Union (MaxIoU), Alignment, and Overlap. 1 The MaxIoU value is calculated between the generated screen \(S^{\prime}\) and the original screen \(S\) from Clay whose screen summary was provided as part of the prompt. Alignment and Overlap are both calculated over the generated result \(S^{\prime}\) only.
Footnote 1: Refer to Jing et al. (2023) for definitions of these metrics.
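As a rough illustration of the geometric building block behind MaxIoU (full metric definitions are in Jing et al. (2023)), a bounding-box intersection-over-union can be computed as sketched below; the label-wise matching and maximization over assignments that MaxIoU additionally performs are omitted.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as [x_min, y_min, x_max, y_max]."""
    ix_min, iy_min = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix_max, iy_max = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix_max - ix_min) * max(0, iy_max - iy_min)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

print(iou([0, 0, 100, 100], [50, 50, 150, 150]))  # ~0.143
```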
Please note that in order to more accurately evaluate the visual quality of the generated UI layouts, we removed 5 types of UI elements that are commonly invisible on screens 2 from the results before evaluation.
Footnote 2: Namely: Root, Background, List_item, Card_view, and Container.
**Results** In Table 1, we can see that in our initial experiment, GPT-4 performed well on _overlap_ without grammar, and on _alignment_ both with and without grammar, achieving metric performance close to or even better than real data. The _alignment_ result for both prompt designs achieved \(0.00\), meaning every element aligns with at least 1 other element on the screen. This is consistent with the visual appearance of the generation results. While we did not specifically mention the need to align UI elements or avoid element overlap in our prompt, GPT-4 was able to generate high-quality results against these metrics. In addition, introducing UI grammar to guide GPT-4's layout generation process slightly increased the MaxIoU performance. On this metric, GPT-4 with grammar
Figure 3: Prompt 2 design for generation with UI grammar
is comparable with some general layout generation models trained on large datasets as reported in (Jing et al., 2023), demonstrating LLMs' in-context learning ability in mobile UI layout generation.
While we did not explicitly restrict GPT-4 on using only the provided grammar set 3, \(83.8\%\) of the rules GPT-4 reported to be using for generation came from the provided grammar. This showed that GPT-4 was not entirely restricted by the grammar we provided, demonstrating the flexibility of the model and our approach.
Footnote 3: Specifically, we used the wording “Here is a list of UI grammar rules to base your generation on. Using each rule multiple times is expected”.
## 5 Discussion and Future Work
Our experimentation with LLMs for UI layout generation has demonstrated LLMs' promising ability on this task. However, we believe LLMs like GPT-4 also have the potential capability to generate content along with layouts to create mid-fi to high-fi prototypes. The potential to combine LLMs with existing UI templates or design systems such as Google Material Design will enable more automated, customized, and efficient UI prototyping techniques.
We argue that besides improving LLMs' generation quality on metrics like MaxIoU, by introducing UI grammar as an intermediary representation in the generation process, our approach could increase the explainability and users' controllability of black-box pre-trained LLMs:
* _Explainability:_ By reviewing the UI grammar employed in LLMs' generation processes, users could gain a better understanding of LLMs' internal generation mechanisms. Our approach differs from a post-hoc explanation request for LLMs, in that our approach can be more easily verified through an easy comparison between the grammars employed and the final UI structure. On the other hand, post-hoc explanation requests (e.g. a follow-up question such as _"explain why you generated this result"_), while similar to how humans provide justifications, do not necessarily reflect the actual generation mechanism.
* _Controllability:_ With UI grammar as an intermediary representation in the generation process, users can obtain higher control of the generation results if enabled to modify or replace the grammar provided to the LLMs in prompts. Future applications can build upon such model architecture and provide users with more ways to interact with UI grammar in the prompts (e.g. directly selecting which apps to extract grammar from) to improve the controllability of LLMs in similar generation tasks.
In Section 2.3.2 we discussed the potential for data leakage when using the _natural language description_ and _UI grammar_ derived from the same screen. On the other hand, since UI grammar represents different ways of organizing
Figure 4: Visualizing the generation results. In each 4-screens group, the left 2 images are the original image and its parsed bounding boxes, while the right 2 are GPT-4 generated results. The original image’s description from Screen2Words was used in the prompt for generating the right 2 layouts.
and designing UI elements on a screen, we could potentially use UI grammar as a proxy to control characteristics of generation results. One possible use case is generating certain styles of UI, by extracting grammar specifically from screens in compliance with a company's design guidelines.
Continuing this initial study, we have planned the following agenda for our follow-up work:
1. Making improvements to our pipeline and prompt structure by extending _UI grammar_ with each rule's occurrence probability;
2. Integrating reasoning steps for the target user, information to display, and supported actions of a UI through Chain-of-Thought prompting (Wei et al., 2022), a workflow resembling the one of human UI designers;
3. Conducting multi-faceted layout generation assessments involving human evaluators and more quantitative metrics (e.g. Frechet inception distance) at a larger scale, to ensure the robustness and applicability of our models;
4. Experimenting with the feasibility of generating high-fidelity UI prototypes with LLMs, as discussed above, and potentially building interactive design-support tools to speed up UI prototyping.
## 6 Conclusion
In this work, we explored Large Language Models' ability to generate mobile user interface layouts through 1-shot, in-context learning. We proposed _UI grammar_, a novel approach to represent the hierarchical relationship between UI elements, and incorporated it into our prompts to steer UI layout generation. Our preliminary results demonstrated LLMs' capabilities to generate high-quality UI layouts with competitive performance, as well as the usefulness of UI grammar in improving certain aspects of generation qualities. We conclude by discussing the implications of using LLMs and UI grammar for future research.
|
2309.07132 | Fundamental Antisymmetric Mode Acoustic Resonator in Periodically Poled
Piezoelectric Film Lithium Niobate | Radio frequency (RF) acoustic resonators have long been used for signal
processing and sensing. Devices that integrate acoustic resonators benefit from
their slow phase velocity (vp), in the order of 3 to 10 km/s, which allows
miniaturization of the device. Regarding the subject of small form factor,
acoustic resonators that operate at the so-called fundamental antisymmetric
mode (A0), feature even slower vp (1 to 3 km/s), which allows for smaller
devices. This work reports the design and fabrication of A0 mode resonators
leveraging the advantages of periodically poled piezoelectricity (P3F) lithium
niobate, which includes a pair of piezoelectric layers with opposite
polarizations to mitigate the charge cancellation arising from opposite stress
of A0 in the top and bottom piezoelectric layers. The fabricated device shows a
quality factor (Q) of 800 and an electromechanical coupling (k2) of 3.29,
resulting in a high figure of merit (FoM, Q times k2) of 26.3 at the resonant
frequency of 294 MHz, demonstrating the first efficient A0 device in P3F
platforms. The proposed A0 platform could enable miniature signal processing,
sensing, and ultrasound transducer applications upon optimization. | Omar Barrera, Jack Kramer, Ryan Tetro, Sinwoo Cho, Vakhtang Chulukhadze, Luca Colombo, Ruochen Lu | 2023-08-27T17:42:08Z | http://arxiv.org/abs/2309.07132v1 | # Fundamental Antisymmetric Mode Acoustic Resonator in Periodically Poled Piezoelectric Film Lithium Niobate
###### Abstract
Radio frequency (RF) acoustic resonators have long been used for signal processing and sensing. Devices that integrate acoustic resonators benefit from their slow phase velocity (\(v_{p}\)), in the order of 3 to 10 km/s, which allows miniaturization of the device. Regarding the subject of small form factor, acoustic resonators that operate at the so-called fundamental antisymmetric mode (A0) feature even slower \(v_{p}\) (1 to 3 km/s), which allows for smaller devices. This work reports the design and fabrication of A0 mode resonators leveraging the advantages of periodically poled piezoelectricity (P3F) lithium niobate, which includes a pair of piezoelectric layers with opposite polarizations to mitigate the charge cancellation arising from the opposite stress of A0 in the top and bottom piezoelectric layers. The fabricated device shows a quality factor (\(Q\)) of 800 and an electromechanical coupling (\(k^{2}\)) of 3.29%, resulting in a high figure of merit (FoM, \(Q\cdot k^{2}\)) of 26.3 at the resonant frequency of 294 MHz, demonstrating the first efficient A0 device in P3F platforms. The proposed A0 platform could enable miniature signal processing, sensing, and ultrasound transducer applications upon optimization.
Piezoelectric resonators, piezoelectric devices, lithium niobate, microelectromechanical systems, laterally vibrating resonators, fundamental antisymmetric mode
## I Introduction
There is a never-ending demand for smaller devices in the radio-frequency (RF) commercial electronics industry. Not only are small form factor devices required, but the performance of the components is expected to remain comparable. To this end, acoustic wave resonators have achieved great success in applications such as miniature sensors, high quality factor (\(Q\)) resonators and front-end filters for mobile applications [1, 2]. Acoustic resonators utilize the principle of piezoelectricity to transduce energy back and forth between electrical signals and mechanical oscillations. Therefore, it is possible to perform the sensing or signal processing in the mechanical domain, which has the advantage of slow phase velocities (\(v_{p}\)) in the order of 3 to 10 km/s, several orders of magnitude lower than the electrical counterparts [3, 4]. Among different types of acoustic vibrations, resonators working at the fundamental symmetric mode (S0) [5, 6, 7, 8], the fundamental shear horizontal mode (SH0) [9, 10, 11, 12, 13], first-order antisymmetric (A1) [14, 15, 16, 17, 18, 19], and first-order symmetric (S1) [20, 21, 22, 23, 24, 25, 26, 27] have been widely studied. Such devices have demonstrated strong potential and are still heavily investigated. However, it is not easy to further tune down the \(v_{p}\) of these modes below 3 km/s. This factor limits the range of device size reduction.
Another mode, namely the fundamental antisymmetric mode (A0), features even slower \(v_{p}\) of less than 2 km/s [28]. Thus, exciting this mode opens the possibility of further downsizing the device dimensions. However, an efficient excitation of the A0 mode with high figures of merit is not trivial. In the case of a single film, due to the nature of the motion, stress antinodes with opposite signs develop at the top and bottom of the piezoelectric layer. This effect leads to charge cancellation, adversely affecting the achievable electromechanical coupling (\(k^{2}\)) and thus lowering the figure of merit (FoM, \(Q\cdot k^{2}\)). Adding a passivation silicon dioxide (SiO\({}_{2}\)) layer can partially mitigate this issue, but this process itself also brings about degraded \(k^{2}\) and \(Q\) [29, 30]. Recently, resonators leveraging the periodically poled piezoelectric (P3F) effect have been proposed to exploit over-moded lithium-niobate (LiNbO\({}_{3}\)) resonators [31, 32, 33]. Over
Fig. 1: (a) LiNbO\({}_{3}\) dual layer resonator stack (b) Simulated A0 displacement and stress profiles.
Fig. 2: (a) Stress antinode formation in single layer A0 mode leading to charge cancellation (b) Fully harnessing piezoelectric conversion using P3F dual layer stack.
moded resonators also suffer from charge cancellation effects, and P3F films have the potential to help mitigate the issues. However, studies on enhancing the performance of A0 mode devices with P3F LiNbO\({}_{3}\) have not been reported.
In this work, we implemented the first A0 mode resonators in a P3F thin film. The devices are built on a dual-layer X-cut LiNbO\({}_{3}\) film, achieving a \(Q\) of 800, a \(k^{2}\) of 3.29%, and a high FoM of 26.3 at 294 MHz, while maintaining a \(v_{p}\) of 1800 m/s, 2 to 5 times slower than that of S0 modes. The proposed A0 platform could enable miniature signal processing, sensing, and ultrasound transducer applications upon optimization.
## II Design and Simulation
The device structure is formed by 2 layers of X-cut LiNbO\({}_{3}\), each layer with a thickness of 550 nm and the y-orientation in opposite direction [Fig. 1(a)]. The displacement and stress profiles for a canonical A0 shape motion are shown in Fig. 1(b). The displacement is purely vertical and it has the characteristic flexural bending associated with the fundamental mode. The lateral stress antinodes, as expected, are at the top and bottom of the thin film and have opposite signs. In conventional single-layer piezoelectric materials, it is hard to harness the generated charge with a single pair of top electrodes, as the sign of charges in the upper and lower sections will perfectly cancel [Fig. 2(a)]. However, in P3F LiNbO\({}_{3}\), due to the different signs in the piezoelectric coefficient (\(e_{l}\)), we will fully harness the piezoelectricity [Fig. 2(b)].
The resonator is excited by aluminum (Al) interdigitated electrodes (IDT) sitting on top of the stack, the wavelength (\(\lambda\)) is chosen as 6 \(\upmu\)m, and a constant duty cycle of 50% is maintained. The metal thickness is selected as 350 nm. A single unit cell of the proposed design was simulated using COMSOL finite element analysis (FEA) to verify the mode shape and the effect of the P3F film stack, the results are shown in Fig. 3. The device exhibits an electromechanical coupling (\(k^{2}\)) of 6.34% at the resonant frequency of 283.4 MHz.
A comparison of \(k^{2}\) and \(v_{p}\) against changes in \(\lambda\) is plotted in Fig. 4. It can be observed that at 6 \(\upmu\)m the \(v_{p}\) is 1,800 m/s, thus confirming the potential of P3F resonators to realize miniaturized devices. Additionally, the intersection between \(k^{2}\) and \(v_{p}\) also validates the selection of \(\lambda\), as it offers a good tradeoff between these 2 parameters.
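As a back-of-envelope sanity check on these values (assuming the lateral-wave relation \(f\approx v_{p}/\lambda\) and FoM \(=Q\cdot k^{2}\); these are standard approximations rather than results from this work):

```python
# Back-of-envelope check of the reported design and measurement values.
# Assumed relations: resonant frequency f ~ v_p / lambda, FoM = Q * k^2.
v_p = 1800.0   # phase velocity, m/s (simulated)
lam = 6e-6     # electrode wavelength, m
Q = 800        # measured quality factor
k2 = 0.0329    # measured electromechanical coupling (3.29%)

f = v_p / lam
print(f"f   ~ {f / 1e6:.0f} MHz")  # ~300 MHz, close to the measured 294 MHz
print(f"FoM = {Q * k2:.1f}")       # ~26.3
```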
## III Fabrication and Measurement
The fabrication process starts with a dual-layer X-cut LiNbO\({}_{3}\) thin film, with a total thickness of 1.1 \(\upmu\)m, transferred on top of a silicon (Si) carrier wafer. The material stack is provided by NGK
Fig. 4: Simulated k\({}^{2}\) and \(v_{p}\) dispersion curves against changes in the wavelength of a unit cell A0 resonator.
Fig. 5: (a) Step-by-step fabrication process diagram of the A0 resonators (b) Optical image of the full fabricated device.
Fig. 3: Simulated admittance of the unit cell resonator and A0 flexural displacement mode shape.
Insulators, Inc. First, opening windows for device release are patterned on top using traditional lithography techniques. Next, ion milling is used to etch through the LiNbO\({}_{3}\) stack deep into the silicon wafer. Afterwards, the features for metal interconnects are patterned using electron beam lithography, and e-beam evaporation is used to deposit 350 nm of Al for the buslines and electrodes. Finally, the resonators are released using xenon difluoride (XeF\({}_{2}\)) for silicon etching. The step-by-step fabrication process is depicted in Fig. 5(a). An optical image of the final suspended device is shown in Fig. 5(b); the large lateral etch windows perform most of the isotropic release process, while the small etch windows between electrodes help confine acoustic energy in the active area.
The fabricated device is measured using a Keysight vector network analyzer (VNA), and the measured data is then fitted using a modified Butterworth-Van Dyke model. The main resonance is located at 294 MHz, in good agreement with the expected behaviour from the simulation. The device exhibits a \(Q\) of 800 and a \(k^{2}\) of 3.29%, resulting in a high FoM of 26.3. The reduced \(k^{2}\) is due to crystal orientation misalignment of the top and bottom layers during the bonding, which could be improved in future works. \(Q\) could be potentially enhanced by minimizing lattice damage during thin film transfer. A possible approach to maintaining good lattice properties could involve using an intermediate amorphous silicon (a-Si) layer between the LiNbO\({}_{3}\) stack and the carrier substrate [34]. The measurements, fitting, and extracted main parameters are plotted in Fig. 6 (a) and (b), respectively.
## IV Conclusion
In this work, we report the first A0 mode resonator leveraging a P3F X-cut LiNbO\({}_{3}\) bi-layer stack. The device shows a \(Q\) of 800 and a \(k^{2}\) of 3.29% at the resonant frequency of 294 MHz. The resulting FoM of 26.3 with a slow \(v_{p}\) of 1,800 m/s could enable miniature scale resonators for signal processing, sensing, and ultrasound transducer applications.
## Acknowledgment
The authors would like to thank the funding support from the DARPA COFFEE program and Dr. Ben Griffith for the helpful discussion.
|
2308.08128 | How to Mask in Error Correction Code Transformer: Systematic and Double
Masking | In communication and storage systems, error correction codes (ECCs) are
pivotal in ensuring data reliability. As deep learning's applicability has
broadened across diverse domains, there is a growing research focus on neural
network-based decoders that outperform traditional decoding algorithms. Among
these neural decoders, Error Correction Code Transformer (ECCT) has achieved
the state-of-the-art performance, outperforming other methods by large margins.
To further enhance the performance of ECCT, we propose two novel methods.
First, leveraging the systematic encoding technique of ECCs, we introduce a new
masking matrix for ECCT, aiming to improve the performance and reduce the
computational complexity. Second, we propose a novel transformer architecture
of ECCT called a double-masked ECCT. This architecture employs two different
mask matrices in a parallel manner to learn more diverse features of the
relationship between codeword bits in the masked self-attention blocks.
Extensive simulation results show that the proposed double-masked ECCT
outperforms the conventional ECCT, achieving the state-of-the-art decoding
performance with significant margins. | Seong-Joon Park, Hee-Youl Kwak, Sang-Hyo Kim, Sunghwan Kim, Yongjune Kim, Jong-Seon No | 2023-08-16T03:35:52Z | http://arxiv.org/abs/2308.08128v2 | # How to Mask in Error Correction Code Transformer: Systematic and Double Masking
###### Abstract
In communication and storage systems, error correction codes (ECCs) are pivotal in ensuring data reliability. As deep learning's applicability has broadened across diverse domains, there is a growing research focus on neural network-based decoders that outperform traditional decoding algorithms. Among these neural decoders, Error Correction Code Transformer (ECCT) has achieved the state-of-the-art performance among neural network-based decoders, outperforming other methods by large margins. To further enhance the performance of ECCT, we propose two novel methods. First, leveraging the systematic encoding technique of ECCs, we introduce a new masking matrix for ECCT, aiming to improve the performance and reduce the computational complexity. Second, we propose a novel transformer architecture of ECCT called a double-masked ECCT. This architecture employs two different mask matrices in a parallel manner to learn more diverse features of the relationship between codeword bits in the masked self-attention blocks. Extensive simulation results show that the proposed double-masked ECCT outperforms the conventional ECCT, achieving the state-of-the-art decoding performance among neural network-based decoders with significant margins.
## Introduction
Over recent years, deep learning methods have experienced rapid advancements and achieved phenomenal success in various tasks, such as natural language processing (NLP), image classification, object detection, semantic segmentation, etc. Among the various deep learning architectures available, the transformer architecture [23] has consistently demonstrated the state-of-the-art results in most tasks. After breaking through in NLP, the application of transformer has expanded to include computer vision tasks [17, 14], and again demonstrated outstanding performances compared to the conventional neural network architecture. The versatility of transformer has now extended to the field of error correction codes (ECCs) [16].
ECCs have played a pivotal role in ensuring reliability in wireless communication and storage systems by serving as a key technology to correct errors in noisy environments. The ECC research based on deep learning architectures has primarily focused on decoders that offer superior error correction performance. Stimulated by the advancement of deep learning techniques, a new type of decoders based on neural networks has emerged [15, 14, 16]. These neural network-based decoders have overcome the limitations of traditional algorithm-based decoders. Notably, among them, the transformer-based ECC decoder achieves the state-of-the-art performance.
The transformer based ECC decoder, also called Error correction Code Transformer (ECCT) [16], applied a mask matrix in the self-attention block to learn the noise in the communication channel. Since not all bits in a codeword are equally related, ECCT can improve the performance by using a mask matrix that facilitates learning of the relevance between codeword bits. In the conventional ECCT work, the mask matrix is derived from the parity check matrix (PCM) whose parity check equations determine a direct relationship between codeword bits.
However, the problem is that numerous PCMs exist for the same codebook, which raises the following question:
"Which one of those PCMs is optimal to aid the self-attention mechanism in ECCT?"
We find that selecting a different PCM (i.e., a different mask matrix) has a crucial impact on the performance of ECCT. Hence, identifying the optimal PCM for constructing a mask matrix is an important problem for ECCT, which has not been investigated yet.
In this work, we first introduce a novel mask matrix specifically tailored for ECCT, which we term a _systematic mask matrix_ due to its construction based on a systematic PCM. The systematic mask matrix has more masking positions than the mask matrix used in the conventional ECCT. This property of the systematic mask matrix makes the self-attention map _sparser_, and enables more compact and concentrated learning of the relationship between codeword bits. It is a surprising observation given that systematic form of the matrix is typically utilized for efficient encoding rather than decoding [15]. Yet, in ECCT, they play a pivotal role in enhancing decoding performance, which is an interesting finding.
Next, we propose a novel transformer architecture called
a _double-masked_ (DM) _ECCT_. This architecture consists of two parallel masked self-attention blocks, each employing a distinct mask matrix. Utilizing two different mask matrices enables the DM ECCT to capture diverse features of the bit relationship.
We apply the proposed methods to two representative ECCs: Bose-Chaudhuri-Hocquenghem (BCH) and polar codes as in [14]. Through extensive simulations across diverse code parameters, we demonstrate that _systematic mask matrices_ improve decoding performance, while reducing decoding complexity. Furthermore, our DM ECCT, integrating both systematic and conventional mask matrices, achieves the state-of-the-art performance among neural network-based decoders for both BCH and polar codes. To the best of our knowledge, this is the first work to propose a new mask matrix structure tailored for ECCT. Additionally, our proposed DM ECCT is a novel transformer architecture for ECC to utilize multiple mask matrices, adding diversity to the ECCT architecture.
## Related Works
In this section, we briefly review the deep learning approaches to ECC applications, such as the neural network-based ECC decoders. There are mainly two approaches: The model-based approach and the model-free approach.
### Model-Based Approach
The first approach is to implement a conventional decoding methods (e.g., belief propagation (BP) decoder and min-sum (MS) decoder) on a neural network. These neural decoders unfold the iterative decoding operation on the Tanner graph into a deep neural network. nachmani2018deep proposed a neural decoder based on the recurrent neural network for BCH codes and achieved the performance improvement over the standard BP decoder. dai2021deep modified the neural MS decoder for protograph low-density parity-check (LDPC) codes. A parameter sharing mechanism was proposed for training scalability to long codes, which also reduces the training complexity and memory cost. Furthermore, a number of studies exhibited that neural network-based BP and MS decoders outperform the traditional decoders [13, 14, 15]. However, these model-based neural decoders might face restrictive performance limits due to architectural constraints.
### Model-Free Approach
The second approach is a model-free approach, which employs neural network architectures with no prior knowledge of decoding algorithms. This approach is not restricted to the conventional decoding models but initially encounters a significant challenge: the model-free approach faces an overfitting problem, since it is impractical to train on all codewords in a codebook. However, Bennatan et al. (2018) proposed a preprocessing that enables a black-box model decoder to overcome the overfitting problem. They utilized the syndrome to learn the noise only with the all-zero codeword. They also combined a recurrent neural network architecture with the preprocessing. ECCT is another work that implemented the transformer without the overfitting problem using the same preprocessing and achieved excellent decoding performance through the masked self-attention mechanism. However, in the previous ECCT research, the mask matrix was directly derived from the PCM of the conventional decoding algorithm, rather than being adjusted for ECCT.
## Background
In this section, we briefly summarize some background on the ECCs and the preprocessing and postprocessing of the conventional ECCT [14].
### Error Correction Codes
Let \(C\) be a linear code. A codeword \(x\in C\subset\{0,1\}^{n}\) can be defined by a generator matrix \(G\) of size \(k\times n\) and a PCM \(H\) of size \((n-k)\times n\), which satisfies \(GH^{T}=0\) over \(\{0,1\}\) with modulo \(2\) addition. In other words, a codeword \(x\) can be determined by the constraint \(Hx=0\). Let \(x_{s}\) be a binary phase shift keying modulation of \(x\) (i.e., \(x_{s}=+1\) if \(x=0\) and \(x_{s}=-1\) if \(x=1\)) and let \(y\) be a channel output of \(x_{s}\) after passing the additive white Gaussian noise channel (\(y=x_{s}+z\), where \(z\) denotes Gaussian random noise, i.e., \(z\sim N(0,\sigma^{2})\)).
The objective of the decoder (\(f:\mathbb{R}^{n}\rightarrow\mathbb{R}^{n}\)) is to recover the original transmitted codeword \(x\) by correcting errors. When \(y\) is received, the decoder first determines if the received signal is corrupted by checking the syndrome \(s(y)=Hy_{b}\), where \(y_{b}=\text{bin}(\text{sign}(y))\). Here, \(\text{sign}(a)\) represents \(+1\) if \(a\geq 0\) and \(-1\) otherwise and \(\text{bin}(-1)=1\), \(\text{bin}(+1)=0\). If \(s(y)\) is non-zero vector, it is detected that \(y\) is distorted during the transmission, and the decoder initiates the error correction process. ECCT [14] employs the transformer architecture to approximate the role of the ECC decoder.
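A minimal NumPy sketch of this hard-decision syndrome check is given below; the (7,4) Hamming PCM and channel values are toy assumptions chosen only to illustrate the notation.

```python
import numpy as np

# Toy (7,4) Hamming parity-check matrix, used only to illustrate the notation.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

# All-zero codeword sent (x_s = +1 everywhere); one sign flip at position 5.
y = np.array([0.9, 1.2, 1.1, 0.3, -0.8, 1.4, 1.0])

y_b = (np.sign(y) < 0).astype(int)   # bin(sign(y)): +1 -> 0, -1 -> 1
syndrome = (H @ y_b) % 2             # s(y) = H y_b over GF(2)
print(syndrome)                      # [1 0 1]: non-zero, so an error is detected
```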
### Preprocessing and Postprocessing
By employing the preprocessing method in [1], the all-zero codeword is enough for training, which makes the ECCT robust to overfitting problems. The preprocessing conducts input embedding as \(\tilde{y}=[|y|,s(y)]\), where \(|y|\) is the magnitude of \(y\), and \([\cdot,\cdot]\) denotes the concatenation of two vectors. The objective of the ECCT is to estimate the multiplicative noise \(\tilde{z}_{s}\), which is defined by
\[y=x_{s}+z=x_{s}\tilde{z}_{s}. \tag{1}\]
Since the ECCT estimates the multiplicative noise, \(f(y)=\hat{z}_{s}\) and the estimation of \(x\) is \(\hat{x}=\text{bin}(\text{sign}(yf(y)))\). If the multiplicative noise is correctly estimated, then \(\text{sign}(\tilde{z}_{s})=\text{sign}(\hat{z}_{s})\) and \(\text{sign}(\tilde{z_{s}}\hat{z}_{s})=1\). In this case, \(\hat{x}\) is obtained by
\[\hat{x}=\text{bin}(\text{sign}(yf(y)))=\text{bin}(\text{sign}(x_{s}\tilde{z}_{s}\hat{z}_{s}))=\text{bin}(\text{sign}(x_{s}))=x.\]
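Continuing in the same spirit, a short sketch of the preprocessing and postprocessing steps around a (here hypothetical, oracle) noise estimate is shown below; it mirrors the equations above but is not the authors' implementation.

```python
import numpy as np

def preprocess(y, H):
    """ECCT input embedding y_tilde = [|y|, s(y)], a length-(2n - k) vector."""
    y_b = (np.sign(y) < 0).astype(int)           # hard decision bin(sign(y))
    return np.concatenate([np.abs(y), (H @ y_b) % 2])

def postprocess(y, z_hat):
    """Codeword estimate x_hat = bin(sign(y * z_hat)) from a noise estimate z_hat."""
    return (np.sign(y * z_hat) < 0).astype(int)

# Oracle check: if z_hat carries the correct signs of the multiplicative noise
# z_tilde = y * x_s, then postprocess() recovers the transmitted codeword.
x_s = np.ones(7)                                    # all-zero codeword, BPSK
y = np.array([0.9, 1.2, 1.1, 0.3, -0.8, 1.4, 1.0])  # one sign flip at position 5
print(postprocess(y, y * x_s))                      # [0 0 0 0 0 0 0] == x
```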
## Proposed Methods
We propose the systematic mask matrix for the ECCT architecture and the DM ECCT with two different mask matrices. The proposed systematic mask matrix and DM ECCT architecture are shown in Figures 1 and 2, respectively.
### Systematic Mask
In the ECCT, the mask matrix constructed from the PCM \(H\) facilitates the learning. Since the mask matrix is uniquely determined by the PCM, there is a one-to-one correspondence between the PCM and the mask matrix. Unlike the ECCT in Choukroun and Wolf (2022), we construct the systematic mask matrix from a specific PCM, which we define as the systematic PCM. To construct the systematic mask matrix, we first transform the PCM into a systematic form (i.e., reduced row echelon form) by the Gaussian elimination technique. The systematically formed PCM \(H_{\text{sys}}\) is expressed as \(H_{\text{sys}}=[I_{n-k}~{}P]\), where \(I_{n-k}\) is the identity matrix of size \((n-k)\times(n-k)\) and \(P\) is a matrix of size \((n-k)\times k\). Then, we can construct the systematic mask matrix from \(H_{\text{sys}}\) by Algorithm 1, which is slightly modified from Choukroun and Wolf (2022). The conventional mask matrix (Choukroun and Wolf 2022) and the proposed systematic mask matrix for the BCH code \((31,11)\) are compared in Figure 1.
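A sketch of the Gaussian-elimination step over GF(2) is given below; column permutations (needed when the leading columns are dependent) and the subsequent mask construction of Algorithm 1 are omitted, so this is only an illustration of how \(H_{\text{sys}}=[I_{n-k}~P]\) can be obtained.

```python
import numpy as np

def to_systematic(H):
    """Row-reduce a binary PCM to [I_{n-k} P] via Gaussian elimination over GF(2).

    Assumes the first n-k columns are linearly independent; the column
    permutation needed otherwise is not handled in this sketch."""
    H = H % 2                                        # work on a fresh copy over GF(2)
    m, _ = H.shape
    for col in range(m):
        pivot = col + int(np.argmax(H[col:, col]))   # a row with a 1 in this column
        if H[pivot, col] == 0:
            raise ValueError("column permutation needed; not handled here")
        H[[col, pivot]] = H[[pivot, col]]            # move the pivot row up
        for row in range(m):                         # clear the column elsewhere
            if row != col and H[row, col] == 1:
                H[row] ^= H[col]
    return H

H = np.array([[1, 1, 0, 0, 1, 1, 0],
              [0, 1, 1, 1, 1, 0, 0],
              [1, 1, 1, 0, 0, 0, 1]])
print(to_systematic(H))   # left 3x3 block becomes the identity
```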
As shown in Figure 1(b), the first \((n-k)\times(n-k)\) submatrix of the systematic mask matrix is the identity matrix \(I_{n-k}\) of \(H_{\text{sys}}\), which is for efficient encoding. Also, the systematic mask matrix has more masking positions compared to the conventional mask matrix. In other words, employing the systematic mask matrix makes the masked self-attention map sparser than the conventional mask matrix. As the masked self-attention map becomes sparse, the model tends to focus more on the unmasked locations. Since not all positions in the codeword are equally related, it is important to focus on the highly related positions in the codeword during the self-attention mechanism. Therefore, employing an "appropriate" mask matrix leads to better training and decoding for ECCT. Furthermore, since only unmask positions in the self-attention map participate in the training, the mask matrix with a large portion of masking positions leads to a lower training complexity. Due to these reasons, the proposed ECCT using a systematic mask matrix, which we call a systematic-masked (SM) ECCT from now on, is expected to have a better complexity-decoding performance tradeoff than the conventional ECCT.
We propose to use the systematic mask matrix for the masked self-attention mechanism instead of the conventional mask matrix. Changing the conventional mask matrix to the systematic mask matrix leads to the enhancement in both error correction performance and computational complexity of ECCT. This improvement is not limited to specific code lengths or code rates but to a wide range of coding parameters for various ECCs.
### Double-Masked ECCT
To further improve the decoding performance, we propose a novel architecture called a DM ECCT. In the DM ECCT, we utilize the two different mask matrices for two input embedded vectors in the masked self-attention module. The overall
Figure 1: Two different types of mask matrices: (a) The conventional mask is determined by the conventional PCM Choukroun and Wolf (202), (b) the systematic mask matrix is determined by the systematically transformed PCM. The non-zero entries in PCMs and masking positions in the mask matrix are depicted in colored boxes. The proposed systematic mask matrix masks more values than the conventional mask matrix, resulting in sparser self-attention maps.
architecture of the DM ECCT is illustrated in Figure 2.
During the initial embedding of the DM ECCT, the received codeword \(y\) is converted to \(\tilde{y}_{1}=[|y|,s_{1}(y)]\) and \(\tilde{y}_{2}=[|y|,s_{2}(y)]\), where \(s_{1}(y)=H_{1}y_{b}\) and \(s_{2}(y)=H_{2}y_{b}\). \(H_{1}\) and \(H_{2}\) can be any PCMs that have the same codebook. Then, \(\tilde{y}_{1}\) and \(\tilde{y}_{2}\) with \(2n-k\) bits are projected to \(d\) dimensional embedding. After passing the initial embedding layer, the decoder is defined as a concatenation of \(N\) decoder layers. The decoder layer consists of one masked self-attention, followed by one feed-forward neural network (FFNN). For each step, a normalization layer precedes and a residual connection is established. Finally, the output layer takes the concatenated vector of two output vectors of the decoder layer. It consists of three fully connected (FC) layers, applied after a normalization layer. The first FC layer reduces \(2\times(2n-k)\) dimension vector to \(2n-k\) dimension, the second reduces \(d\) dimensional embedding to a one-dimensional \(2n-k\) vector, and the third reduces \(2n-k\) to an \(n\) dimensional vector. The output represents an estimate of soft multiplicative noise \(f(y)=\hat{z}_{s}\) with which the decoding is completed by bit flipping.
A key feature of the DM ECCT is that we employ two different mask matrices for the two input vectors. The input vector determined by \(\tilde{y}_{i}\) is masked with \(M_{i}\), since both \(\tilde{y}_{i}\) and \(M_{i}\) are determined by \(H_{i}\), for \(i=1,2\). As mentioned above, the relevance between codeword bits is encoded in the PCM, and numerous PCMs exist for the same codebook. Rather than utilizing a single PCM (or a single mask matrix), utilizing a pair of PCMs can provide diversity to the decoder. In the decoder layer, the proposed DM ECCT architecture captures distinct features generated by the two different PCMs. These features are subsequently fused using concatenation and processed by the FC layers in the output layer. Such a design allows the DM ECCT to adeptly discern the relationship between codeword bits, ultimately achieving state-of-the-art decoding performance among neural network-based decoders.
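A condensed PyTorch sketch of this two-branch decoder is given below. It follows the textual description (pre-norm masked self-attention and FFNN per layer, concatenation of the two branches, then three FC layers), but the head count, hidden sizes, and mask handling are assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class MaskedDecoderLayer(nn.Module):
    """Pre-norm masked self-attention followed by an FFNN, each with a residual."""
    def __init__(self, d, n_heads=8):
        super().__init__()
        self.norm1, self.norm2 = nn.LayerNorm(d), nn.LayerNorm(d)
        self.attn = nn.MultiheadAttention(d, n_heads, batch_first=True)
        self.ffnn = nn.Sequential(nn.Linear(d, 4 * d), nn.GELU(), nn.Linear(4 * d, d))

    def forward(self, h, attn_mask):
        x = self.norm1(h)
        h = h + self.attn(x, x, x, attn_mask=attn_mask)[0]
        return h + self.ffnn(self.norm2(h))

class DoubleMaskedDecoder(nn.Module):
    """Two parallel stacks of masked layers (one per mask), fused by the output layer."""
    def __init__(self, n, k, d, N):
        super().__init__()
        L = 2 * n - k
        self.branch1 = nn.ModuleList([MaskedDecoderLayer(d) for _ in range(N)])
        self.branch2 = nn.ModuleList([MaskedDecoderLayer(d) for _ in range(N)])
        self.out_norm = nn.LayerNorm(d)
        self.fc1 = nn.Linear(2 * L, L)   # fuse the concatenated branches (sequence axis)
        self.fc2 = nn.Linear(d, 1)       # collapse the embedding dimension
        self.fc3 = nn.Linear(L, n)       # map to an n-dimensional soft noise estimate

    def forward(self, h1, h2, mask1, mask2):
        for layer1, layer2 in zip(self.branch1, self.branch2):
            h1, h2 = layer1(h1, mask1), layer2(h2, mask2)
        h = self.out_norm(torch.cat([h1, h2], dim=1))      # (B, 2L, d)
        h = self.fc1(h.transpose(1, 2)).transpose(1, 2)    # (B, L, d)
        return self.fc3(self.fc2(h).squeeze(-1))           # (B, n)
```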
## Training
The goal of the proposed decoder is to learn the multiplicative noise \(\tilde{z}_{s}\) in (1) and recover an original transmitted signal \(x\). We can obtain the multiplicative noise by \(\tilde{z}_{s}=\tilde{z}_{s}x_{s}^{2}=yx_{s}\). Then, the target multiplicative noise for binary cross-entropy loss function is defined by \(\tilde{z}=\text{bin}(\text{sign}(yx_{s}))\). Finally, the cross-entropy loss function for a received codeword \(y\) is defined as
\[\mathcal{L}=-\sum_{i=1}^{n}\tilde{z}_{i}\log(\sigma(f(y)_{i}))+(1-\tilde{z}_{i})\log(1-\sigma(f(y)_{i})).\]
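Assuming a PyTorch setup, the training target and loss can be sketched as follows (the decoder output \(f(y)\) is treated as logits; tensor shapes are placeholders):

```python
import torch
import torch.nn.functional as F

def ecct_loss(f_y, y, x_s):
    """BCE against the target multiplicative noise z_tilde = bin(sign(y * x_s)).

    f_y : (B, n) soft noise estimate from the decoder, treated as logits
    y   : (B, n) received channel values
    x_s : (B, n) transmitted BPSK symbols (all +1 when training on the zero codeword)
    """
    z_tilde = (torch.sign(y * x_s) < 0).float()
    return F.binary_cross_entropy_with_logits(f_y, z_tilde)
```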
To compare with the conventional ECCT fairly, we adopt the same training setup as used in the previous work. We use the Adam optimizer [10] and conduct 1000 epochs. Each epoch consists of 1000 minibatches, where each minibatch is composed of 128 samples.
## Experiments
To evaluate the efficacy of our proposed methods, we assess the bit error rate (BER) performance of both SM ECCT and DM ECCT for BCH codes and polar codes, and then compare with the conventional ECCT [12]. The implementation of the conventional ECCT is taken from [12]. For the testing, we collect at least 500 frame errors at each signal-to-noise ratio (SNR) value for at least \(10^{5}\) random codewords.
The DM ECCT utilizes the systematic and conventional mask matrices for the two self-attention blocks. For polar codes, however, we utilize the systematic mask matrix and a mask matrix constructed from a modified conventional PCM. As discussed in the following section, the upper \(n\times n\) submatrix in the conventional mask matrix of polar codes is always unmasked, which gives the ECCT a dense self-attention map. The dense self-attention map hinders the ECCT from effectively learning the relationship between codeword bits. Thus, we modify the conventional PCM of the polar codes by row reduction: starting from the first row, when a row contains all the positions of ones of the next row, we eliminate those ones from the row. Utilizing
Figure 2: Architecture of the DM ECCT.
both systematic and modified conventional mask matrices provides diversity gains to the DM ECCT.
Tables 1 and 2 show the BER performances of the conventional ECCT, the proposed SM ECCT, and the proposed DM ECCT for \(N=\{2,6\}\) and \(d=\{32,64,128\}\). The first, second, and third rows of each code correspond to the BER performances at SNRs of \(4\) dB, \(5\) dB, and \(6\) dB, respectively. For the same \(N\) and SNR, the proposed DM ECCT achieves the best BER in most cases, except for BCH code \((63,30)\) when \(N=6\) and \(d=128\). The SM ECCT outperforms the conventional ECCT despite its lower computational complexity. Figures 3 and 4 show the decoding performance of BCH codes and polar codes for \(N=2\), \(d=128\) and \(N=6\), \(d=128\), respectively. The DM ECCT, notably at low code rates, significantly outperforms the conventional ECCT by more than 1 dB at high SNR. Furthermore, the BER for both SM ECCT and DM ECCT continues to decrease, even when the ECCT curve starts to flatten,
\begin{table}
\begin{tabular}{c|c|c c|c c|c c c} \hline \hline \multicolumn{1}{c}{\(d\)} & \multicolumn{4}{c|}{\(32\)} & \multicolumn{2}{c|}{\(64\)} & \multicolumn{2}{c}{\(128\)} \\ \hline \multirow{2}{*}{Code} & \multirow{2}{*}{SNR} & \multirow{2}{*}{ECCT} & \multicolumn{2}{c|}{SM} & \multirow{2}{*}{ECCT} & \multicolumn{2}{c|}{SM} & \multirow{2}{*}{ECCT} & \multicolumn{2}{c}{SM} & \multirow{2}{*}{DM} \\ & & & ECCT & & & ECCT & & ECCT & & ECCT \\ \hline \hline \multirow{2}{*}{BCH} & \(4\) dB & \(3.43e-2\) & \(1.77e-2\) & \(2.74e-2\) & \(1.17e-2\) & \(1.97e-2\) & \(9.06e-3\) & \(\mathbf{6.11e-3}\) \\ & \(5\) dB & \(1.47e-2\) & \(6.12e-3\) & \(1.06e-2\) & \(3.46e-3\) & \(6.60e-3\) & \(2.36e-3\) & \(\mathbf{1.38e-3}\) \\ \cline{2-8} & \(6\) dB & \(5.02e-3\) & \(1.70e-3\) & \(3.05e-3\) & \(7.12e-4\) & \(1.66e-3\) & \(4.43e-4\) & \(\mathbf{1.78e-4}\) \\ \hline \multirow{2}{*}{BCH} & \(4\) dB & \(1.46e-2\) & \(1.06e-2\) & \(1.20e-2\) & \(8.22e-3\) & \(8.38e-3\) & \(5.63e-3\) & \(\mathbf{4.07e-3}\) \\ \cline{2-8} & \(5\) dB & \(4.40e-3\) & \(3.05e-3\) & \(3.39e-3\) & \(2.09e-3\) & \(1.95e-3\) & \(1.17e-3\) & \(\mathbf{7.38e-4}\) \\ \cline{2-8} & \(6\) dB & \(9.17e-4\) & \(6.20e-4\) & \(6.32e-4\) & \(3.32e-4\) & \(2.99e-4\) & \(1.72e-4\) & \(\mathbf{7.65e-5}\) \\ \hline \multirow{2}{*}{BCH} & \(4\) dB & \(4.32e-2\) & \(3.23e-2\) & \(4.07e-2\) & \(2.49e-2\) & \(3.63e-2\) & \(2.15e-2\) & \(\mathbf{1.81e-2}\) \\ \cline{2-8} & \(5\) dB & \(1.88e-2\) & \(1.28e-2\) & \(1.72e-2\) & \(8.60e-3\) & \(1.41e-2\) & \(6.99e-3\) & \(\mathbf{4.80e-3}\) \\ \cline{2-8} & \(6\) dB & \(5.70e-3\) & \(3.72e-3\) & \(4.96e-3\) & \(2.11e-3\) & \(3.53e-3\) & \(1.45e-3\) & \(\mathbf{8.61e-4}\) \\ \hline \multirow{2}{*}{BCH} & \(4\) dB & \(1.58e-2\) & \(1.16e-2\) & \(1.27e-2\) & \(9.63e-3\) & \(1.14e-2\) & \(8.34e-3\) & \(\mathbf{6.41e-3}\) \\ \cline{2-8} & \(4\) dB & \(4.41e-3\) & \(2.87e-3\) & \(3.16e-3\) & \(2.21e-3\) & \(2.64e-3\) & \(1.72e-3\) & \(\mathbf{1.05e-3}\) \\ \cline{2-8} & \(6\) dB & \(7.57e-4\) & \(4.13e-4\) & \(4.35e-4\) & \(2.97e-4\) & \(3.38e-4\) & \(2.00e-4\) & \(\mathbf{1.02e-4}\) \\ \hline \multirow{2}{*}{Polar} & \(4\) dB & \(4.03e-2\) & \(3.50e-2\) & \(2.76e-2\) & \(1.74e-2\) & \(1.44e-2\) & \(1.11e-2\) & \(\mathbf{8.59e-3}\) \\ \cline{2-8} & \(5\) dB & \(1.66e-2\) & \(1.33e-2\) & \(9.34e-3\) & \(4.51e-3\) & \(3.91e-3\) & \(2.30e-3\) & \(\mathbf{1.66e-3}\) \\ \cline{2-8} & \(6\) dB & \(5.17e-3\) & \(3.42e-3\) & \(2.34e-3\) & \(7.05e-4\) & \(1.08e-3\) & \(2.73e-4\) & \(\mathbf{1.52e-4}\) \\ \hline \multirow{2}{*}{Polar} & \(4\) dB & \(1.74e-2\) & \(1.29e-2\) & \(1.20e-2\) & \(1.01e-2\) & \(9.04e-3\) & \(7.84e-3\) & \(\mathbf{5.62e-3}\) \\ \cline{2-8} & \(5\) dB & \(5.84e-3\) & \(3.99e-3\) & \(3.33e-3\) & \(2.79e-3\) & \(2.39e-3\) & \(1.97e-3\) & \(\mathbf{1.23e-3}\) \\ \cline{2-8} & \(6\) dB & \(1.39e-3\) & \(8.63e-4\) & \(6.22e-4\) & \(5.17e-4\) & \(4.35e-4\) & \(3.48e-4\) & \(\mathbf{1.60e-4}\) \\ \hline \multirow{2}{*}{Polar} & \(4\) dB & \(6.75e-3\) & \(7.44e-3\) & \(5.37e-3\) & \(1.69e-1\) & \(4.17e-3\) & \(4.77e-3\) & \(\mathbf{3.43e-3}\) \\ \cline{2-8} & \(5\) dB & \(1.42e-3\) & \(1.62e-3\) & \(1.02e-3\) & \(1.13e-3\) & \(6.95e-4\) & \(8.04e-4\) & \(\mathbf{4.95e-4}\) \\ \cline{2-8} & \(6\) dB & \(1.83e-4\) & \(2.15e-4\) & \(1.24e-4\) & \(1.42e-4\) & \(7.47e-5\) & \(8.58e-5\) & \(\mathbf{4.27e-5}\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: A comparison of BER for three different methods at three different SNR values (4 dB, \(5\) dB, \(6\) dB). This table shows the BER performance for \(N=2\) and \(d=\{32,64,128\}\). Best results are in **bold**.
Figure 3: The BER performance of BCH and polar codes for \(N=2\) and \(d=128\).
indicating the potential of the proposed ECCTs to perform well at high SNR points. Given that the conventional ECCT outperforms previous neural network-based decoding algorithms as noted by Choukroun and Wolf [13], our work can be recognized as the new state-of-the art solution for ECC decoding.
## Discussion
### Complexity Analysis
Figure 5(a) shows the sparsity of the self-attention map employing conventional and systematic masks with respect to the full self-attention map without masking. Utilizing the systematic mask, sparsity levels rise from 72% to 74% for BCH code \((31,11)\), from 56% to 67% for BCH code \((63,30)\), and dramatically from 52% to 82% for polar code \((64,22)\). As depicted in Figures 5(b) and 5(c), the systematic
\begin{table}
\begin{tabular}{c|c|c c|c c|c c} \hline \hline \multicolumn{1}{c}{} & \multicolumn{6}{c|}{\(N=6\)} \\ \hline \hline \multicolumn{1}{c}{\(d\)} & \multicolumn{2}{c|}{32} & \multicolumn{2}{c|}{64} & \multicolumn{2}{c}{128} \\ \hline \multirow{2}{*}{Code} & \multirow{2}{*}{SNR} & \multirow{2}{*}{ECCT} & SM & \multirow{2}{*}{ECCT} & SM & \multirow{2}{*}{ECCT} & SM & DM \\ & & & ECCT & & ECCT & & ECCT & ECCT \\ \hline \hline \multirow{2}{*}{BCH} & \(4\,\text{dB}\) & \(1.22e-2\) & \(5.22e-3\) & \(8.35e-3\) & \(2.48e-3\) & \(4.37e-3\) & \(1.72e-3\) & \(\mathbf{8.70e-4}\) \\ \cline{2-7} & \(5\,\text{dB}\) & \(3.28e-3\) & \(1.27e-3\) & \(1.92e-3\) & \(5.91e-4\) & \(9.69e-4\) & \(3.22e-4\) & \(\mathbf{8.57e-5}\) \\ \cline{2-7} & \(6\,\text{dB}\) & \(4.94e-4\) & \(2.10e-4\) & \(2.37e-4\) & \(8.23e-5\) & \(1.10e-4\) & \(4.89e-5\) & \(\mathbf{6.70e-6}\) \\ \hline \multirow{2}{*}{BCH} & \(4\,\text{dB}\) & \(5.55e-3\) & \(2.79e-3\) & \(3.50e-3\) & \(3.11e-3\) & \(2.97e-3\) & \(1.71e-3\) & \(\mathbf{1.07e-3}\) \\ \cline{2-7} & \(5\,\text{dB}\) & \(1.02e-3\) & \(4.43e-4\) & \(6.50e-4\) & \(5.24e-4\) & \(4.66e-4\) & \(2.52e-4\) & \(\mathbf{8.35e-5}\) \\ \cline{2-7} & \(6\,\text{dB}\) & \(1.12e-4\) & \(3.11e-5\) & \(6.65e-5\) & \(5.66e-5\) & \(4.76e-5\) & \(2.35e-5\) & \(\mathbf{5.75e-6}\) \\ \hline \multirow{2}{*}{BCH} & \(4\,\text{dB}\) & \(8.41e-3\) & \(5.63e-3\) & \(7.36e-3\) & \(4.41e-3\) & \(1.87e-2\) & \(9.00e-3\) & \(\mathbf{8.04e-3}\) \\ \cline{2-7} & \(5\,\text{dB}\) & \(1.37e-3\) & \(8.33e-4\) & \(1.06e-3\) & \(5.24e-4\) & \(4.41e-3\) & \(1.96e-3\) & \(\mathbf{1.18e-3}\) \\ \cline{2-7} & \(6\,\text{dB}\) & \(1.10e-4\) & \(5.09e-5\) & \(7.02e-5\) & \(2.68e-5\) & \(5.28e-4\) & \(2.51e-4\) & \(\mathbf{7.61e-5}\) \\ \hline \multirow{2}{*}{BCH} & \(4\,\text{dB}\) & \(1.00e-2\) & \(5.03e-3\) & \(6.12e-3\) & \(2.05e-3\) & \(4.53e-3\) & \(3.76e-3\) & \(2.74e-3\) \\ \cline{2-7} & \(5\,\text{dB}\) & \(2.49e-3\) & \(8.95e-4\) & \(1.59e-3\) & \(2.49e-4\) & \(5.58e-4\) & \(4.32e-4\) & \(\mathbf{2.62e-4}\) \\ \cline{2-7} & \(6\,\text{dB}\) & \(6.83e-4\) & \(1.07e-4\) & \(5.42e-4\) & \(1.82e-5\) & \(3.15e-5\) & \(1.61e-5\) & \(\mathbf{9.31e-6}\) \\ \hline \multirow{2}{*}{Polar} & \(4\,\text{dB}\) & \(5.22e-3\) & \(3.32e-3\) & \(2.52e-3\) & \(1.59e-3\) & \(2.16e-3\) & \(5.84e-4\) & \(\mathbf{4.82e-4}\) \\ \cline{2-7} & \(5\,\text{dB}\) & \(9.64e-4\) & \(5.01e-4\) & \(3.50e-4\) & \(1.76e-4\) & \(1.76e-4\) & \(3.28e-5\) & \(\mathbf{2.94e-5}\) \\ \cline{2-7} & \(6\,\text{dB}\) & \(1.01e-4\) & \(4.44e-5\) & \(2.66e-5\) & \(1.39e-5\) & \(1.22e-5\) & \(1.37e-6\) & \(\mathbf{6.39e-7}\) \\ \hline \multirow{2}{*}{Polar} & \(4\,\text{dB}\) & \(3.59e-3\) & \(3.06e-3\) & \(2.30e-3\) & \(2.09e-3\) & \(1.25e-3\) & \(9.10e-4\) & \(\mathbf{9.03e-4}\) \\ \cline{2-7} & \(5\,\text{dB}\) & \(5.21e-4\) & \(4.22e-4\) & \(2.94e-4\) & \(2.66e-4\) & \(1.25e-4\) & \(8.28e-5\) & \(\mathbf{8.07e-5}\) \\ \cline{2-7} & \(6\,\text{dB}\) & \(4.56e-5\) & \(3.10e-5\) & \(1.98e-5\) & \(1.46e-5\) & \(6.57e-6\) & \(4.47e-6\) & \(\mathbf{3.70e-6}\) \\ \hline \multirow{2}{*}{Polar} & \(4\,\text{dB}\) & \(6.75e-3\) & \(7.44e-3\) & \(5.37e-3\) & \(1.69e-1\) & \(2.05e-3\) & \(\mathbf{1.68e-3}\) & \(1.70e-3\) \\ \cline{2-7} & \(5\,\text{dB}\) & \(1.42e-3\) & \(1.62e-3\) & \(1.02e-3\) & \(1.13e-3\) & \(2.50e-4\) & \(2.12e-4\) & \(\mathbf{2.11e-4}\) \\ \cline{2-7} & \(6\,\text{dB}\) & \(1.83e-4\) & \(2.15e-4\) & \(1.24e-4\) & \(1.42e-4\) & \(1.77e-5\) & \(1.83e-5\) & \(\mathbf{1.73e-5}\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: A comparison of BER for three different methods at three different SNR values (4 dB, \(5\) dB, \(6\) dB). This table shows the BER performance for \(N=6\) and \(d=\{32,64,128\}\). Best results are in **bold**.
Figure 4: The BER performance of BCH and polar codes for \(N=6\) and \(d=128\).
mask matrix shows a notably larger portion of masking positions than the conventional mask matrix, especially for polar codes. This is attributed to the fact that the first row in the conventional PCM of the polar codes is the all-ones vector. In such a case, the upper \(n\times n\) submatrix of the conventional mask matrix is always unmasked according to Algorithm 1. However, the first row of the systematic PCM is not the all-ones vector, and a large portion of the upper \(n\times n\) submatrix is masked. This sparsity improvement in the self-attention maps leads to a reduction in computational complexity.
In terms of computational complexity, the use of the proposed systematic mask matrix also contributes a reduction. The DM ECCT, on the other hand, requires twice the complexity in the decoder layer, as its two input embedded vectors pass through all blocks in the decoder layer. However, the architecture of the DM ECCT facilitates parallel operations in the decoder layers; hence, the DM ECCT can improve decoding performance while maintaining the decoding latency.
## Conclusion
In this paper, we aimed to improve the performance of ECC decoding using a novel architecture of the ECCT. We first proposed the systematic mask matrix, which is more suitable for the ECCT than the conventional mask matrix. Additionally, we proposed the novel architecture of DM ECCT by employing two mutually complementary mask matrices.
Through extensive simulations, we demonstrated that the proposed methods outperform the conventional ECCT. We first achieved improvements in both decoding performance and computational complexity with the systematic mask matrix. The sparsity induced by the systematic mask prompts the ECCT to focus on the more important positions, enhancing decoding performance. In particular, more pronounced improvements are observed for low-rate codes. Traditionally, the systematic form of the matrix has been employed for efficient encoding in conventional decoders (e.g., BP and MS decoders). However, our results highlight its critical importance in the decoding process of the ECCT as well.
We also showed that the proposed DM ECCT architecture further improves decoding performance. It utilizes two different mask matrices, the systematic and conventional masks, in a parallel manner, which provides diversity gains to the decoder. The DM ECCT notably enhances decoding performance over the conventional ECCT for both BCH and polar codes, achieving state-of-the-art decoding performance among neural network-based decoders by considerable margins.
|
2310.14993 | Understanding the Inner Workings of Language Models Through
Representation Dissimilarity | As language models are applied to an increasing number of real-world
applications, understanding their inner workings has become an important issue
in model trust, interpretability, and transparency. In this work we show that
representation dissimilarity measures, which are functions that measure the
extent to which two model's internal representations differ, can be a valuable
tool for gaining insight into the mechanics of language models. Among our
insights are: (i) an apparent asymmetry in the internal representations of
models using SoLU and GeLU activation functions, (ii) evidence that
dissimilarity measures can identify and locate generalization properties of
models that are invisible via in-distribution test set performance, and (iii)
new evaluations of how language model features vary as width and depth are
increased. Our results suggest that dissimilarity measures are a promising set
of tools for shedding light on the inner workings of language models. | Davis Brown, Charles Godfrey, Nicholas Konz, Jonathan Tu, Henry Kvinge | 2023-10-23T14:46:20Z | http://arxiv.org/abs/2310.14993v1 | # Understanding the Inner Workings of Language Models Through Representation Dissimilarity
###### Abstract
As language models are applied to an increasing number of real-world applications, understanding their inner workings has become an important issue in model trust, interpretability, and transparency. In this work we show that representation dissimilarity measures, which are functions that measure the extent to which two models' internal representations differ, can be a valuable tool for gaining insight into the mechanics of language models. Among our insights are: (i) an apparent asymmetry in the internal representations of models using SoLU and GeLU activation functions, (ii) evidence that dissimilarity measures can identify and locate generalization properties of models that are invisible via in-distribution test set performance, and (iii) new evaluations of how language model features vary as width and depth are increased. Our results suggest that dissimilarity measures are a promising set of tools for shedding light on the inner workings of language models.
## 1 Introduction
The defining feature of deep neural networks is their capability of learning useful feature representations from data in an end-to-end fashion. Perhaps ironically, one of the most pressing scientific challenges in deep learning is _understanding_ the features that these models learn. This challenge is not merely philosophical: learned features are known to impact model interpretability/explainability, generalization, and transferability to downstream tasks, and it can be the case that none of these effects are visible from the point of view of model performance on even carefully selected validation sets.
When studying hidden representations in deep learning, a fundamental question is whether the internal representations of a given pair of models are similar or not. Dissimilarity measures (Klabunde et al., 2023) are a class of functions that seek to address this by measuring the difference between (potentially high-dimensional) representations. In this paper, we focus on two such functions that, while popular in computer vision, have seen limited application to language models: model stitching (Lenc and Vedaldi, 2014; Bansal et al., 2021) and centered kernel alignment (CKA) (Kornblith et al., 2019). _Model stitching_ extracts features from earlier layers of model \(f\) and plugs them into the later layers of model \(g\) (possibly mediated by a small, learnable, connecting layer), and evaluates downstream performance of the resulting "stitched" model. Stitching takes a task-centric view towards representations, operating under the assumption that if two models have similar representations then these representations should be reconcilable by a simple transformation to such an extent that the downstream task can still be solved. On the other hand, CKA compares the statistical structure of the representations obtained in two different models/layers from a fixed set of input datapoints, ignoring any relationship to performance on the task for which the models were trained.
In this paper we make the case that dissimilarity measures are a tool that has been underutilized in the study of language models. We support this claim through experiments that shed light on the inner workings of language models: **(i)** We show that stitching can be used to better understand the changes to representations that result from using different nonlinear activations in a model. In particular, we find evidence that feeding Gaussian error linear unit (GeLU) model representations into a softmax linear unit (SoLU) model incurs a smaller penalty in loss compared to feeding SoLU activations into a GeLU model, suggesting that models with GeLU activations may form representations that contain strictly more useful information for the training task than the representations of models using SoLU activations. **(ii)** We show that dissimilarity measures can localize differences in models
that are invisible via test set performance. Following the experimental set-up of (Juneja et al., 2022), we show that both stitching and CKA detect the difference between models which generalize to an out-of-distribution test set and models that do not generalize. **(iii)** Finally, we apply CKA to the Pythia networks (Biderman et al., 2023), a sequence of generative transformers of increasing width and depth, finding a high degree of similarity between early layer features even as scale varies from 70 million to 1 billion parameters, an emergent "block structure" previously observed in CKA analysis of image classifiers such as ResNets and Vision Transformers, and showing that CKA identifies a Pythia model (pythia-2.8b-deduped) exhibiting remarkably low levels of feature similarity when compared with the remaining models in the family (as well as inconsistent architectural characteristics). This last finding is perhaps surprising considering the consistent trend towards lower perplexity and higher performance on benchmarks with increasing model scale seen in the original Pythia evaluations (Biderman et al., 2023) -- one might have expected to see an underlying consistent evolution of hidden features from the perspective of CKA.
## 2 Background
In this section we review the two model dissimilarity measures appearing in this paper.
**Model stitching (Bansal et al., 2021):** Informally, model stitching asks how well the representation extracted by the early layers of one model can be used by the later layers of another model to solve a specific task. Let \(f\) be a neural network and for a layer \(l\) of \(f\) let \(f_{\leq l}\) (respectively \(f_{\geq l}\)) be the composition of the first \(l\) layers of \(f\) (respectively the layers \(m\) of \(f\) with \(m\geq l\)). Given another network \(g\) the model obtained by stitching layer \(l\) of \(f\) to layer \(m\) of \(g\) with stitching layer \(\varphi\) is \(g_{>m}\circ\varphi\circ f_{\leq l}\). The performance of this stitched network measures the similarity of representations of \(f\) at layer \(l\) and representations of \(g\) at layer \(m\).
**Centered kernel alignment (CKA) (Kornblith et al., 2019):** Let \(D=\{x_{1},\dots,x_{d}\}\) be a set of model inputs. For models \(f\) and \(g\) with layers \(l\) and \(m\) respectively, let \(A_{f,l,g,m}\) be the covariance matrix of \(f_{\leq l}(D)\) and \(g_{\leq m}(D)\). Then the CKA score for models \(f\) and \(g\) at layers \(l\) and \(m\) respectively and evaluated at \(D\) is
\[\frac{||A_{f,l,g,m}||_{F}^{2}}{||A_{f,l,f,l}||_{F}||A_{g,m,g,m}||_{F}}, \tag{1}\]
where \(||\cdot||_{F}\) is the Frobenius norm. Higher CKA scores indicate more structural similarity between representations. In our experiments we use an unbiased estimator of eq. (1) to calculate CKA in batches (see appendix C for details).
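As a concrete illustration, below is a minimal sketch of the linear CKA score in eq. (1). The variable names and the simple column-centering are our own choices, and the batched unbiased estimator of appendix C is not reproduced here.

```python
import numpy as np

def linear_cka(feats_f: np.ndarray, feats_g: np.ndarray) -> float:
    """Linear CKA between two sets of hidden representations.

    feats_f: (d, p) activations of model f at layer l on d inputs.
    feats_g: (d, q) activations of model g at layer m on the same d inputs.
    """
    # Center each feature dimension so that Gram products become covariances.
    X = feats_f - feats_f.mean(axis=0, keepdims=True)
    Y = feats_g - feats_g.mean(axis=0, keepdims=True)

    # Cross- and self-covariance Frobenius norms (the 1/d factors cancel).
    cross = np.linalg.norm(Y.T @ X, ord="fro") ** 2
    self_x = np.linalg.norm(X.T @ X, ord="fro")
    self_y = np.linalg.norm(Y.T @ Y, ord="fro")
    return float(cross / (self_x * self_y))

# Example: two random feature matrices for 512 inputs.
rng = np.random.default_rng(0)
print(linear_cka(rng.normal(size=(512, 64)), rng.normal(size=(512, 128))))
```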
## 3 Model stitching reveals an asymmetry between GeLU and interpretable-by-design SoLU
The design of more interpretable models is an area of active research. Interpretable-by-design models often achieve comparable performance on downstream tasks to their non-interpretable counterparts (Rudin, 2019). However, these models (almost by definition) have differences in their hidden layer representations that can impact downstream performance.
Many contemporary transformers use the Gaussian error linear unit activation function, approximately \(\mathrm{GeLU}(x)=x*\mathrm{sigmoid}(1.7x)\). The softmax linear unit \(\mathrm{SoLU}(x)=x\cdot\mathrm{softmax}(x)\) is an activation function introduced in (Elhage et al., 2022) in an attempt to reduce neuron polysemanticity: the softmax has the effect of shrinking small and amplifying large neuron outputs. One of the findings of (Elhage et al., 2022) was that SoLU transformers yield comparable performance to their GeLU counterparts on a range of downstream tasks. However, those tests involved zero-shot evaluation or fine-tuning of the full transformer, and as such they do not shed much light on intermediate hidden feature representations. We use stitching to do this.
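Written out in code, the two activations are as follows (using the approximate forms quoted above; the extra LayerNorm that some SoLU implementations apply afterwards is omitted, so this is illustrative rather than the exact activation used by the pretrained checkpoints):

```python
import torch

def gelu_approx(x: torch.Tensor) -> torch.Tensor:
    # Sigmoid approximation quoted in the text: x * sigmoid(1.7 * x).
    return x * torch.sigmoid(1.7 * x)

def solu(x: torch.Tensor) -> torch.Tensor:
    # Softmax linear unit: x * softmax(x), softmax over the feature dimension.
    return x * torch.softmax(x, dim=-1)

x = torch.randn(2, 8)
print(gelu_approx(x).shape, solu(x).shape)
```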
Using the notation from Section 2, we set our stitching layer \(\varphi\) to be a learnable linear layer (all other parameters of the stitched model are frozen) between the residual streams following a layer \(l\). We stitch small 3-layer (9.4M parameter) and 4-layer (13M parameter) SoLU and GeLU models trained by (Nanda, 2022), using the Pile validation set (Gao et al., 2020) to optimize \(\varphi\).
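A minimal sketch of such a stitched model is shown below. The toy blocks stand in for the actual transformer layers and residual-stream widths of the 3- and 4-layer checkpoints; only the linear stitching layer `phi` is trainable.

```python
import torch
from torch import nn

class StitchedModel(nn.Module):
    """g_{>m} ∘ phi ∘ f_{<=l}: run f up to layer l, map with a learnable
    linear layer, then run the remaining layers of g. Only phi is trained."""

    def __init__(self, f_blocks, g_blocks, l, m, d_f, d_g):
        super().__init__()
        self.f_head = nn.ModuleList(f_blocks[:l])   # frozen bottom of f
        self.g_tail = nn.ModuleList(g_blocks[m:])   # frozen top of g
        self.phi = nn.Linear(d_f, d_g)              # trainable stitching layer
        for p in list(self.f_head.parameters()) + list(self.g_tail.parameters()):
            p.requires_grad_(False)

    def forward(self, x):
        for blk in self.f_head:
            x = blk(x)
        x = self.phi(x)
        for blk in self.g_tail:
            x = blk(x)
        return x

# Toy usage with stand-in blocks (real experiments use transformer blocks).
f_blocks = [nn.Sequential(nn.Linear(64, 64), nn.ReLU()) for _ in range(3)]
g_blocks = [nn.Sequential(nn.Linear(96, 96), nn.ReLU()) for _ in range(3)]
stitched = StitchedModel(f_blocks, g_blocks, l=2, m=2, d_f=64, d_g=96)
print(stitched(torch.randn(4, 64)).shape)  # torch.Size([4, 96])
```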
Figure 1 displays the resulting stitching losses calculated on the Pile validation set. For both the 3-layer and 4-layer models, and at every layer considered, we see that when \(f\) is a SoLU model for the stitched model \(g_{>m}\circ\varphi\circ f_{\leq l}\) -- the "SoLU-into-GeLU" cases -- larger penalties are incurred than when \(f\) uses GeLUs, i.e. the "GeLU-into-SoLU"
cases. We conjecture that this outcome results from the fact that SoLU activations effectively reduce capacity of hidden feature layers; there may be an analogy between the experiments of fig. 1 and those of (Bansal et al., 2021, Fig. 2(c)) stitching vision models of different widths.
We also measure stitching performance between pairs of identical models: the SoLU-into-SoLU and GeLU-into-GeLU baselines. This is meant to separate two factors that influence the recorded stitching penalties: the ease of optimizing the stitching layer and the interplay between hidden features of possibly different architectures. We seek to measure differences between hidden features, but the former is an unavoidable additional factor inherent in model stitching experiments. The identical SoLU-into-SoLU and GeLU-into-GeLU baselines serve as a proxy for stitching optimization success, since in principle the stitching layer should be able to learn the identity matrix and incur a stitching penalty of 0. The fact that SoLU-into-SoLU stitches better than GeLU-into-GeLU thus gives evidence that SoLU models are easier to stitch with than GeLU models from an optimization perspective. That both SoLU-into-GeLU and GeLU-into-SoLU incur higher stitching penalties than GeLU-into-GeLU suggests that the penalties cannot be solely the result of stitching-layer optimization issues.
As pointed out in (Bansal et al., 2021) such an analysis is not possible using CKA, which only detects distance between distributions of hidden features (displayed for our GeLU and SoLU models in figs. 4 and 5), not their usefulness for a machine learning task. Further, the differences in layer expressiveness between GeLU and SoLU models are not easily elicited by evaluation on downstream tasks, but can be seen through linear stitching.
## 4 Locating generalization failures
Starting with a single pretrained BERT model and fine-tuning 100 models (differing only in random initialization of their classifier heads and stochasticity of batching in fine-tuning optimization) on the Multi-Genre Natural Language Inference (MNLI) dataset (Williams et al., 2018), the authors of (McCoy et al., 2020) observed a striking phenomenon: despite the fine-tuned BERTs' near-identical performance on MNLI, their performance on Heuristic Analysis for NLI Systems (HANS), an out-of-distribution variant of MNLI (McCoy et al., 2019), was highly variable. On a specific subset of HANS called "lexical overlap" (HANS-LO), one subset of fine-tuned BERTs (the _generalizing_ models) was relatively successful and used syntactic features; models in its complement (the _heuristic_ models) failed catastrophically and used a strategy akin to
Figure 1: **Comparing stitching loss** between 3-layer and 4-layer GeLU and SoLU models on the Pile validation set. Stitching for “solu\(\rightarrow\)gelu,” i.e. stitching with a SoLU ‘head’ and GeLU ‘tail,’ incurs systematic penalties (larger loss is worse) compared to the “gelu\(\rightarrow\)solu” stitching. Baselines for stitching identical models (e.g., “gelu-3l\(\rightarrow\)gelu-3l”) are given to account for potential inherent differences in learning the stitching layer \(\varphi\) between the activation functions.
bag-of-words.1 Recent work of (Juneja et al., 2022) found that partitioning the 100 fine-tuned BERTs on the basis of their HANS-LO performance is essentially equivalent to partitioning them on the basis of _mode connectivity_: that is, the generalizing and heuristic models lie in separate loss-landscape basins. We refer the interested reader to (Juneja et al., 2022) for further details.
Footnote 1: By “relatively successful” we mean “Achieving up to 50% accuracy on an adversarially-designed test set.” While variable OOD performance with respect to fine-tuning seed was also seen on other subsets of HANS, we follow (McCoy et al., 2020; Juneja et al., 2022) in focusing on HANS-LO where such variance was most pronounced.
But at what layer(s) do the hidden features of the generalizing and heuristic models diverge? For example, are these two subpopulations of models distinct because their earlier layers are different or because their later layers are different? Or both? Neither analysis of OOD performance nor mode connectivity analysis can provide an answer, but both stitching and CKA reveal that the difference between the features of the generalizing and heuristic models is concentrated in later layers of the BERT encoder. It is worth highlighting that conceptualizing and building relevant datasets for distribution shifts is often expensive and time consuming. That identity stitching and CKA _using in-distribution MNLI data alone_ differentiate between heuristic and generalizing behavior on HANS-LO suggests that they are useful tools for model error analysis and debugging.
Figure 1(a) displays the performance of pairs of BERT models stitched with the _identity function_ \(\varphi=\mathrm{id}\), i.e. models of the form \(g_{>m}\circ f_{\leq l}\), on the MNLI finetuning task.2 We see that identity stitching at early layers of the models incurs almost no penalty, but at later layers (when the fraction of layers taken from the bottom model exceeds 75%) stitching _between_ generalizing and heuristic models incurs a significant accuracy drop (high stitching penalty), whereas stitching _within_ the generalizing and heuristic groups incurs almost no penalty.
Footnote 2: The motivation for stitching with a constant identity function comes from evidence that stitching-type algorithms perform symmetry correction (accounting for the fact that features of model A could differ from those of model B by an architecture symmetry e.g. permuting neurons, see e.g. (Ainsworth et al., 2023)), but that networks obtained from multiple finetuning runs from a fixed pretrained model seem to not differ by such symmetries (see for example (Wortsman et al., 2022)).
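The identity-stitching construction itself is simple model surgery, sketched below. We assume both classifiers are `BertForSequenceClassification` models fine-tuned from the same pretrained checkpoint (so their layer modules are interchangeable); the generic `bert-base-uncased` weights here are only a runnable stand-in for two MNLI fine-tunes.

```python
import copy
from transformers import BertForSequenceClassification

def identity_stitch(model_f, model_g, l: int):
    """Return a model using layers 1..l of f and the remaining layers of g (phi = id)."""
    stitched = copy.deepcopy(model_g)
    for i in range(l):
        stitched.bert.encoder.layer[i] = copy.deepcopy(model_f.bert.encoder.layer[i])
    # The embeddings come from f as well, since they feed the bottom of the stitch.
    stitched.bert.embeddings = copy.deepcopy(model_f.bert.embeddings)
    return stitched

# Stand-ins for a "generalizing" and a "heuristic" MNLI fine-tune.
f = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=3)
g = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=3)
stitched = identity_stitch(f, g, l=9)  # bottom 9 layers from f, rest from g
```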
A similar picture emerges in fig. 1(b), where we plot CKA values between features of fine-tuned BERTs on the MNLI dataset. At early layers, CKA is insensitive to the generalizing or heuristic nature of model A and B. At later layers (in the same range where we saw identity stitching penalties appear), the CKA measure _between_ generalizing and heuristic models significantly exceeds its value _within_ the generalizing and heuristic groups.
Together, the results of fig. 2 paint the following picture: the NLI models of (McCoy et al., 2020) seem to all build up generally useful features in their early layers, and only decide to memorize (i.e., use a lexical overlap heuristic) or generalize (use syntactic features) in their final layers.
## 5 Representation dissimilarity in scaling families
Enormous quantities of research and engineering energy have been devoted to the design, training and evaluation of _scaling families_ of language models. By a "scaling family" we mean a collection of neural network architectures with similar components, but with variable width and/or depth, resulting in a sequence of models of increasing size (as measured by number of parameters). Pioneering work in computer vision used CKA to discover interesting properties of hidden features that vary with network width and depth (Nguyen et al., 2021), but to the best of our knowledge no similar studies have appeared in the natural language domain.
We take a first step in this direction by computing CKA measures of features within and between the models of the Pythia family (Biderman et al., 2023), up to the second-largest model with 6.9 billion parameters. In all experiments we use the "deduped" models (trained on the deduplicated version of the Pile (Gao et al., 2020)). In the case of intra-model CKA measurements on pythia-6.9b we see in fig. 3 (bottom right) that similarity between layers \(l\) and \(m\) gradually decreases as \(|m-l|\) increases. We also see a pattern reminiscent of the "block structure" analyzed in (Nguyen et al., 2021), where CKA values are relatively high when comparing two layers _within_ one of the following three groups: **(i)** early layers (the first 3), **(ii)** late layers (in this case only the final layer) and **(iii)** the remaining intermediate layers, while CKA values are relatively low when comparing two layers _between_ these three groups.
Using CKA to compare features _between_ pythia-{1b,1.4b,2.8b} and pythia-6.9b we observe high similarity between features at the beginning
of the model. A plausible reason for this early layer similarity is the common task of "detokenization" (Elhage et al., 2022), where early neurons in models may change unnatural tokenizations to more useful representations, for example responding strongly to compound words (e.g., birthday party). We also continue to see "block structure" even when comparing between two models, and in the case of the pairs (pythia-1b, pythia-6.9b) and (pythia-1.4b, pythia-6.9b) substantial inter-model feature similarity in intermediate layers.
The low CKA values obtained when comparing intermediate layers of (pythia-2.8b, pythia-6.9b) break this trend: in fact, as illustrated in fig. 9, pythia-2.8b exhibits such trend-breaking intermediate layer feature dissimilarity with every Pythia model with 1 billion or more parameters.3 Upon close inspection, we see that while the pythia-{1b,1.4b,6.9b} models all have an early layer "block" consisting of 3 layers with relatively high CKA similarity, in pythia-2.8b this block consists of only 2 layers, suggesting the features of pythia-2.8b diverge from the rest of the family very early in the model. In table 1 we point out that some aspects of pythia-2.8b's architecture are inconsistent with the general scaling trend of the Pythia family as a whole.
Footnote 3: Except itself, of course.
Figure 3: CKA between {pythia-1b,pythia-1.4b,pythia-2.8b,pythia-6.9b} and pythia-6.9b evaluated on the Pile dataset (Gao et al., 2020) (higher means more similar).
Figure 2: **Left:** Identity stitching on the MNLI dataset between pairs of the top 10 (“generalizing”) and bottom 10 (“heuristic”) performing models of (McCoy et al., 2020) on the lexical overlap subset of HANS (an out-of-distribution NLI dataset). **Right:** Corresponding CKA values. **Both:** Confidence intervals are obtained by evaluating on all distinct pairs of generalizing and heuristic models, for a total of \(2\cdot{10\choose 2}+10^{2}=190\) model comparisons.
### Limitations
In the current work we examine relatively small language models. Larger models have qualitatively different features, and it is not obvious if our experiments on the differences in layer expressiveness between GeLU and SoLU models will scale to significantly larger models. While we show that identity stitching and CKA distinguish the heuristic and generalizing BERT models, we did not attempt to use stitching/CKA to automatically cluster the models (as was done with mode connectivity in (Juneja et al., 2022)).
## Ethics Statement
In this paper we present new applications of representation analysis to language model hidden features. Large language models have the potential to impact human society in ways that we are only beginning to glimpse. A deeper understanding of the features they learn could be exploited for both positive and negative effect. Positive use cases include methods for enhancing language models to be more safe, generalizable and robust (one such approach hinted at in the experiments of section 4), methods for explaining the decisions of language models and identifying model components responsible for unwanted behavior. Unfortunately, these tools can at times be repurposed to do harm, for example extracting information from model training data and inducing specific undesirable model predictions.
## Acknowledgements
This research was supported by the Mathematics for Artificial Reasoning in Science (MARS) initiative via the Laboratory Directed Research and Development (LDRD) investments at Pacific Northwest National Laboratory (PNNL). PNNL is a multi-program national laboratory operated for the U.S. Department of Energy (DOE) by Battelle Memorial Institute under Contract No. DE-AC05-76RL0-1830.
|
2303.06277 | SPOTR: Spatio-temporal Pose Transformers for Human Motion Prediction | 3D human motion prediction is a research area of high significance and a
challenge in computer vision. It is useful for the design of many applications
including robotics and autonomous driving. Traditionally, autoregressive
models have been used to predict human motion. However, these models have high
computation needs and error accumulation that make it difficult to use them for
realtime applications. In this paper, we present a non-autoregressive model for
human motion prediction. We focus on learning spatio-temporal representations
non-autoregressively for generation of plausible future motions. We propose a
novel architecture that leverages the recently proposed Transformers. Human
motion involves complex spatio-temporal dynamics with joints affecting the
position and rotation of each other even though they are not connected
directly. The proposed model extracts these dynamics using both convolutions
and the self-attention mechanism. Using specialized spatial and temporal
self-attention to augment the features extracted through convolution allows our
model to generate spatio-temporally coherent predictions in parallel
independent of the activity. Our contributions are threefold: (i) we frame
human motion prediction as a sequence-to-sequence problem and propose a
non-autoregressive Transformer to forecast a sequence of poses in parallel;
(ii) our method is activity agnostic; (iii) we show that despite its
simplicity, our approach is able to make accurate predictions, achieving better
or comparable results compared to the state-of-the-art on two public datasets,
with far fewer parameters and much faster inference. | Avinash Ajit Nargund, Misha Sra | 2023-03-11T01:44:29Z | http://arxiv.org/abs/2303.06277v1 | # SPOTR: Spatio-temporal Pose Transformers for Human Motion Prediction
###### Abstract
3D human motion prediction is a research area of high significance and a challenge in computer vision. It is useful for the design of many applications including robotics and autonomous driving. Traditionally, autoregressive models have been used to predict human motion. However, these models have high computation needs and error accumulation that make it difficult to use them for realtime applications. In this paper, we present a non-autoregressive model for human motion prediction. We focus on learning spatio-temporal representations non-autoregressively for generation of plausible future motions. We propose a novel architecture that leverages the recently proposed Transformers. Human motion involves complex spatio-temporal dynamics with joints affecting the position and rotation of each other even though they are not connected directly. The proposed model extracts these dynamics using both convolutions and the self-attention mechanism. Using specialized spatial and temporal self-attention to augment the features extracted through convolution allows our model to generate spatio-temporally coherent predictions in parallel independent of the activity. Our contributions are threefold: (i) we frame human motion prediction as a sequence-to-sequence problem and propose a non-autoregressive Transformer to forecast a sequence of poses in parallel; (ii) our method is activity agnostic; (iii) we show that despite its simplicity, our approach is able to make accurate predictions, achieving better or comparable results compared to the state-of-the-art on two public datasets, with far fewer parameters and much faster inference.
## 1 Introduction
Human motion prediction is the task of forecasting human poses conditioned on a sequence of observed poses. It is valuable in many applications including autonomous driving, animation, robotics, mixed reality, and healthcare. While humans can predict motions of other humans relatively easily to help them perform different tasks such as navigate through a crowd or play sports, the same is not true for algorithms.
Over the last decade, several attempts have been made to advance human motion forecasting. While earlier solutions have used Recurrent Neural Networks (RNNs) [23, 30, 26] and Convolutional Neural Networks (CNNs) [18], models based on Graph Convolutional Networks (GCNs) [7, 19, 9] and Transformers [1, 24] have become increasingly popular. Conventional methods relying on recurrent neural networks (RNNs) used stacks of LSTM or GRU modules and solve the task with autoregressive decoding, generating predictions sequentially, conditioned on previous predictions [26, 8, 1]. Autoregressive methods have two main shortcomings. First, the models are prone to accumulation of errors in prediction over time. This is because predictions are conditioned on previous predictions that already contain some error. Attempting to minimize these cumulative errors can eventually cause the predictions to collapse to a non-plausible static pose [18, 17]. Second, autoregressive models are not parallelizable which makes it difficult to deploy these computationally intensive models in real-time interactive user-centered scenarios. Other motion prediction methods have included generative adversarial networks (GANs) [10], long short-term memory (LSTMs) [8], and Markovian dynamics [16]. Most prior approaches have
Figure 1: Proposed approach for non-autoregressive motion prediction with a Transformer. Spatio-temporal feature extraction is followed by a temporal convolution in the encoder. Features generated by the encoder are passed through the decoder. To ensure the predicted quaternions represent valid rotations, we explicitly normalize them to be of unit length. All poses are predicted in parallel.
largely been replaced by deep learning methods fueled by the availability of large-scale human motion datasets. However, 3D motion prediction remains a challenging task: even though body joints and their motions are highly correlated, the spatial relations and temporal evolution are difficult to model.
In this work, we present a non-autoregressive model architecture which explicitly considers the spatio-temporal aspects of human motion data for the 3D motion modeling task. Our approach is motivated by the recent success of Transformer models [29] on tasks such as machine translation [29, 5], music [11], animation [11], image captioning [6], and image animation [28]. The original Transformer was designed for one-dimensional sequences of words using self-attention [29, 2]. To use the self-attention mechanism for our inherently spatio-temporal 3D task of predicting human motion, we propose to decouple the temporal and spatial dimensions. Our model has an encoder-decoder architecture, with the encoder extracting spatio-temporal features using self-attention augmented convolutions. The predictions are produced in a single pass by the decoder, which is composed of multiple interleaved graph convolution and temporal convolution layers.
In the proposed approach, we extract two sets of spatio-temporal features of input motion. One set of features is learned using a block of graph convolutions followed by convolution along the temporal dimension. The second set is obtained by adding the features obtained from the spatial and temporal attention blocks. In each input frame, temporal attention extracts the correlations between the positions of the same joint in the past frames while the spatial attention identifies the dependencies between the joints in the same frame. We concatenate the convolutional and self-attention features allowing the decoder to access and determine which segments of information are relevant for generating structurally and temporally coherent predictions.
We evaluate our proposed model on the H3.6M [12] and CMU Mocap datasets in the short-term prediction setting. Our model matches the performance of the state-of-the-art non-autoregressive model [19] for short-term predictions using input over a much shorter time horizon (400ms vs 2000ms) compared to prior work [17, 24]. Our approach allows for increased inference speed for time-sensitive applications, overcoming some of the limitations of autoregressive models. It is able to produce more accurate predictions with a relatively small number of parameters. Furthermore, in contrast with prior work [25], our method is activity-agnostic.
## 2 Related Work
Human pose forecasting is usually formulated as a sequence-to-sequence problem. A sequence of seed poses are used to extract features of the motion using an encoder and then the future poses are predicted, usually autoregressively, using a decoder. While earlier solutions were based on Recurrent Neural Networks (RNNs) [23, 26] or Convolutional Neural Networks (CNNs) [18, 10], models based on Graph Convolutional Networks (GCNs) [14] have become increasingly popular.
### Autoregressive Models
Aksan et. al. [1] proposed a Transformer-based architecture for the generative modelling of 3D human motion. They learn spatio-temporal dynamics autoregressively by using decoupled spatial and temporal self-attention. This enables their model to learn both structural and temporal dependencies explicitly and make accurate short-term predictions and also, generate possible future poses over long horizons. While the decoupled attention mechanisms are effective in learning the motion features, based on [3] we hypothesize that augmenting the attention features with spatio-temporal feature maps will improve the model performance.
Multiscale GCNs are a popular choice of architecture for modelling human motion. GCNs operating on clusters of joints are used to learn the dynamics of the various joints. In [7] the GCNs are used to extract features from fine to coarse scale and then from coarse to fine. The extracted multiscale features are then combined and decoded using a GCN decoder to obtain residuals between the input and the target pose sequence. [19] fuse the extracted multiscale features across scales and feed them into a novel Graph Gated Recurrent Unit (G-GRU) to generate the predictions autoregressively.
Hermes et al. [9] propose a low-complexity forecasting model based on the Graph-WaveNet. Each Graph-WaveNet block performs spatio-temporal convolutions followed by purely spatial graph convolutions. However, unlike the original Graph-WaveNet, they replace the full undirected skeletal graph with three directed graphs based on the joint kinematic tree. They use quaternions to represent the joints, which must be of unit length to be valid rotations. However, the authors do not mention how the validity of the predicted quaternions is ensured.
Autoregressive models generate predictions one time step at a time, which makes them slow and unsuitable for real-time applications. They also suffer from error accumulation and drift in the predictions [24]. To avoid these pitfalls we propose a lightweight non-autoregressive model which generates the predictions in parallel.
### Non-Autoregressive Models
Development of non-autoregressive models can overcome some of the limitations of autoregressive models, as the goal of producing structurally and temporally coherent poses for realtime applications is crucial to many domains, from mixed reality to pedestrian movements in autonomous driving scenarios.
The Discrete Cosine Transform (DCT) is a popular choice for encoding the temporal features of the motion [22, 21, 4]. Mao et al. [21] use GCNs coupled with attention to predict the DCT coefficients, which are then converted back into the time domain. The idea of modeling the trajectories in the frequency domain is combined with the transformer architecture in [4]. However, they differ from traditional transformer decoders by generating the predictions corresponding to a set of seed joints and then progressively estimating the locations of the other joints based on the kinematic chains of body skeletons. Models using the DCT to encode the temporal features are trained to predict both the input and target DCT coefficients to ensure continuity of the coefficients. This additional computational burden might make them unsuitable for real-time applications.
Martinez-Gonzalez et al. [24] propose Pose Transformers to predict future poses. The joints are represented using Euler angles, and the input and prediction sequences are projected from and to 3D pose vectors using GCNs. The projected input sequence is processed by a transformer encoder. The encoder output and a query sequence are used by the transformer decoder to compute the cross-attention, which is used to generate the predictions in one pass. A multi-task non-autoregressive motion prediction model is proposed in [17]. They represent the joints using quaternions and encode the seed skeleton sequence using a series of temporal and graph convolutional networks. The encoded contextual features are used both to predict the action and to forecast the future motion. The future motion is predicted by combining them with positional embeddings and passing them through a decoder composed of GCN-TCN blocks. Both of these methods, however, rely on the high-level action to guide the low-level predictions.
Different from prior work, our proposed approach operates on the joints in the temporal domain and learns the spatio-temporal dynamics from the sequence of input poses. Further, we generate the predictions using a shallow decoder independent of the action class of the motion.
## 3 Background
To make our manuscript self-contained, we briefly introduce Spatial Temporal Graph Convolutional Networks (ST-GCNs) [31] and Transformer self-attention [29].
### Spatial Temporal Graph Convolutional Networks
ST-GCNs, proposed for skeleton-based action recognition, are composed of a spatial convolution module followed by a temporal convolution module. The input to the ST-GCN consists of the joint quaternions placed on the graph nodes. Multiple layers of graph and temporal convolution are applied to gradually generate higher-level feature maps. The spatial features are computed for each partition of the adjacency matrix using the graph convolutional network (GCN) proposed in [14]. Equation 1 describes the operation of the GCN:
\[F_{out}=\sum_{i=1}^{K}F_{in}A_{i}W_{i} \tag{1}\]
where \(K\) is the number of disjoint groups of the spatial edges \(E_{S}\) defined in Section 4.1, \(A_{i}\) is the adjacency matrix of the \(i\)-th partition, and \(F_{in}\) and \(F_{out}\) are the input and output feature maps.
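Equation 1 translates directly into code. The sketch below assumes the \(K\) partitioned adjacency matrices are pre-normalized and stacked in a tensor of shape \((K, J, J)\); the dense per-partition weights play the role of \(W_{i}\).

```python
import torch
from torch import nn

class PartitionedGCN(nn.Module):
    """F_out = sum_i (neighbor aggregation with A_i) @ W_i over K partitions (Eq. 1)."""

    def __init__(self, in_ch: int, out_ch: int, A: torch.Tensor):
        super().__init__()
        self.register_buffer("A", A)                                  # (K, J, J), normalized
        self.W = nn.Parameter(torch.randn(A.size(0), in_ch, out_ch) * 0.01)

    def forward(self, F_in: torch.Tensor) -> torch.Tensor:
        # F_in: (batch, frames, joints, in_ch) -> (batch, frames, joints, out_ch)
        out = 0
        for i in range(self.A.size(0)):
            out = out + torch.einsum("btjc,jk,co->btko", F_in, self.A[i], self.W[i])
        return out

J, K = 21, 3
gcn = PartitionedGCN(4, 16, torch.rand(K, J, J))
print(gcn(torch.randn(2, 10, J, 4)).shape)  # torch.Size([2, 10, 21, 16])
```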
### Transformer Self-Attention
The original Transformer self-attention was proposed in [29] for Natural Language Processing (NLP) tasks. The self-attention mechanism is a sequence-to-sequence operation meant to augment the embedding of each word using the embeddings of the surrounding context. For each embedding \(e_{i}\in E=\{e_{1},e_{2},\dots,e_{n}\}\), a query \(q\in\mathbf{R}^{d_{q}}\), a key \(k\in\mathbf{R}^{d_{k}}\) and value \(v\in\mathbf{R}^{d_{v}}\) is computed. The output
Figure 2: **Proposed Non-autoregressive Architecture. Given an input motion sequence \(X_{1:N}\) in quaternions we project it into an embedding space and add position encoding. The spatio-temporal features of the embeddings are extracted by the, (i) GCN-TCN block, and (ii) decoupled spatial (SSA) and temporal attention (TSA) blocks. We concatenate the convolutional features with the sum of the attention features and pass them to the next layer. This concatenation allows the model to choose the most relevant features of the input motion. Finally, we generate the predictions in parallel using a 3-layer GCN-TCN decoder.**
embedding is computed as a weighted average of the values with the weights being determined with a dot-product between the query and keys.
\[Attention(Q,K,V)=softmax\left(\frac{QK^{T}}{\sqrt{d_{k}}}\right)V \tag{2}\]
where \(Q,K,V\) are matrices containing the query, key and value vectors. In practice, a mechanism called the multi-head attention is used where the scaled dot-product attention is computed many times in parallel with different parameterized matrices and then combined to obtain the final embedding. This gives self-attention greater power of discrimination where different inputs can influence the output in different ways not possible in a single self-attention operation.
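For reference, eq. (2) for a single head in code form (multi-head attention runs several such maps in parallel on learned projections and concatenates the results):

```python
import torch

def scaled_dot_product_attention(Q, K, V):
    # Q, K: (n, d_k), V: (n, d_v); returns (n, d_v).
    d_k = K.size(-1)
    scores = Q @ K.transpose(-2, -1) / d_k ** 0.5
    return torch.softmax(scores, dim=-1) @ V

Q, K, V = torch.randn(5, 8), torch.randn(5, 8), torch.randn(5, 16)
print(scaled_dot_product_attention(Q, K, V).shape)  # torch.Size([5, 16])
```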
## 4 Method
This section describes our proposed approach. For an overview please refer to Figure 2. Our model consists of an attention augmented encoder and a simple decoder. The encoder effectively summarizes the spatio-temporal dynamics of the input motion by combining the convolutional and attention feature maps. The decoder uses only convolutions to generate the predictions in parallel.
### Problem Formulation
Given a sequence of \(N\) consecutive skeleton poses of a single human \(X=\{x_{1},x_{2},\dots,x_{N}\}\), we predict the next \(M\) poses \(X^{\prime}=\{x_{N+1},x_{N+2},\dots,x_{N+M}\}\). Each pose \(x_{i}\in\mathbf{R}^{J\times 4}\), where \(J\) is the number of joints and each joint is parameterized using quaternions. We represent the entire sequence of historical poses using a spatio-temporal graph [31, 27], \(G=(V,E)\). Here, \(V=\{\nu_{ni}|n=1,2,\dots,N;i=1,2,\dots,J\}\) is the set of all nodes. Each node corresponds to a joint in a particular frame, meaning that \(|V|=N\times J\). \(E\) is the set of all connections between the nodes and consists of two subsets, illustrated in code after the list:
1. Spatial Edges, \(E_{S}=\{(\nu_{ni},\nu_{nj})|i,j=1,2,\dots,J;n=1,2,\dots,N\}\) is the set of connections between pairs of joints \((i,j)\) at time \(t\). This subset is further divided into \(K\) disjoint groups using the partition strategies proposed in [31].
2. Temporal Edges, \(E_{T}=\{(\nu_{ni},\nu_{(n+1)i})|i=1,2,\dots,J;n=1,2,\dots,N\}\) is the set of connections between consecutive time steps of a single joint \(i\).
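A small index-construction sketch for \(E_{S}\) and \(E_{T}\) is given below; the bone list defining the spatial edges is a placeholder for the actual kinematic tree.

```python
def build_st_graph(num_frames: int, num_joints: int, bones):
    """Node (n, i) is flattened to index n * num_joints + i; `bones` lists
    joint pairs (i, j) from the kinematic tree (placeholder here)."""
    idx = lambda n, i: n * num_joints + i
    spatial = [(idx(n, i), idx(n, j)) for n in range(num_frames) for (i, j) in bones]
    temporal = [(idx(n, i), idx(n + 1, i))
                for n in range(num_frames - 1) for i in range(num_joints)]
    return spatial, temporal

# Toy 3-joint chain over 4 frames.
E_S, E_T = build_st_graph(num_frames=4, num_joints=3, bones=[(0, 1), (1, 2)])
print(len(E_S), len(E_T))  # 8 9
```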
### Spatio-temporal Pose Transformer
Our proposed solution is shown in Figure 2. The components of our model are: (i) **GCN-TCN-Unit**: the spatial and temporal features of the input motion history are extracted using graph convolution followed by temporal convolution; (ii) **Spatial and Temporal Attention Modules**: similar to other human motion analysis works [1, 27], we extract the attention feature maps of the motion using spatial and temporal attention modules; (iii) **Decoder**: the decoder is made of a sequence of GCN-TCN units and generates the predictions non-autoregressively.
The input motion sequence is first projected into an embedding space using a \(1\times 1\) convolution layer.
### GCN-TCN-Unit
This is the spatio-temporal feature extraction block of our proposed model. The spatial features are extracted using the GCN proposed in [13]. However, instead of using a single adjacency matrix to represent the connections between the joints of the body, we partition the adjacency matrix using the partitioning strategies introduced in [31]. The spatial feature map is then processed by a temporal convolution network (TCN). Dilated convolutions with kernels of size \(1\times T_{s}\) are used to extract the temporal features from the spatial feature maps.
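A compact sketch of one GCN-TCN unit is given below; the kernel size, padding and activation are illustrative choices rather than the exact configuration used in our experiments.

```python
import torch
from torch import nn

class GCNTCNUnit(nn.Module):
    """Graph convolution over joints followed by a dilated temporal convolution.
    Input/output tensors have shape (batch, channels, frames, joints)."""

    def __init__(self, in_ch, out_ch, A, t_kernel=9, dilation=1):
        super().__init__()
        self.register_buffer("A", A)                        # (K, J, J), partitioned adjacency
        self.gcn = nn.Conv2d(in_ch, out_ch * A.size(0), kernel_size=1)
        pad = dilation * (t_kernel - 1) // 2
        self.tcn = nn.Conv2d(out_ch, out_ch, kernel_size=(t_kernel, 1),
                             padding=(pad, 0), dilation=(dilation, 1))
        self.relu = nn.ReLU()

    def forward(self, x):
        b, c, t, j = x.shape
        k = self.A.size(0)
        y = self.gcn(x).view(b, k, -1, t, j)                # (B, K, C_out, T, J)
        y = torch.einsum("bkctj,kjw->bctw", y, self.A)      # aggregate over joint neighbors
        return self.relu(self.tcn(y))

J, K = 21, 3
unit = GCNTCNUnit(4, 64, torch.rand(K, J, J), dilation=2)
print(unit(torch.randn(2, 4, 10, J)).shape)  # torch.Size([2, 64, 10, 21])
```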
### Spatial and Temporal Attention
The self-attention mechanism is the prevalent choice for modeling long range dependencies in vision and sequence modeling tasks. Our model uses the traditional attention blocks but computes the self-attention along the spatial and temporal dimensions separately.
Spatial AttentionThe spatial self-attention, as shown in Figure 3, extracts the correlations between each pair of
Figure 3: Spatial Self-Attention Module
points in each frame of the input sequence. Let \(f_{n}^{i}\in\mathbb{R}^{D_{in}}\) denote the GCN-TCN unit features corresponding to a particular joint \(\nu_{ni}\) in frame \(n\). The spatial summary of each joint as a function of the other joints is computed using multi-head attention with \(H\) heads. For each \(\nu_{ni}\), a query vector \(q_{i}^{n}\in\mathbb{R}^{d_{q}}\), a key vector \(k_{i}^{n}\in\mathbb{R}^{d_{k}}\) and a value vector \(v_{i}^{n}\in\mathbb{R}^{d_{v}}\) are computed using linear transformations parameterized by trainable weights \(\mathbf{W}_{q}\in\mathbb{R}^{D_{in}\times d_{q}}\), \(\mathbf{W}_{k}\in\mathbb{R}^{D_{in}\times d_{k}}\) and \(\mathbf{W}_{v}\in\mathbb{R}^{D_{in}\times d_{v}}\) that are shared across all the joints. Scaled dot-product attention \(\alpha_{ij}^{n}=q_{i}^{n}{k_{j}^{n}}^{T}\) is used to compute the new feature vector \(a_{i}^{n}\in\mathbb{R}^{D_{out}}\),
\[a_{i}^{n}=\sum_{j}softmax_{j}\left(\frac{\alpha_{ij}^{n}}{\sqrt{d_{k}}}\right) v_{j}^{n} \tag{3}\]
\(H\) such new feature vectors are computed and concatenated to obtain the output feature embedding for each \(\nu_{ni}\), \(\hat{f}_{n}^{i}=concat\left(a_{1}^{n},\dots,a_{H}^{n}\right)\).
Temporal AttentionThe temporal self-attention extracts the dependencies of each joint across all the frames. It treats the features associated with each joint independently and captures the correlations between frames with respect to a single joint. For the same joint \(\nu_{i}\) from different frames \(m\) and \(n\), the query vector \(q_{m}^{i}\in\mathbb{R}^{d_{q}}\) associated with \(\nu_{mi}\), and the key and value vectors \(k_{n}^{i}\in\mathbb{R}^{d_{k}}\) and \(v_{n}^{i}\in\mathbb{R}^{d_{v}}\) associated with \(\nu_{ni}\), are computed using trainable linear transformations similar to those of the spatial attention. With the correlation \(\alpha_{mn}^{i}=q_{m}^{i}{k_{n}^{i}}^{T}\) the new feature vector is computed as
\[a_{i}^{m}=\sum_{n}softmax_{n}\left(\frac{\alpha_{mn}^{i}}{\sqrt{d_{k}}}\right) v_{i}^{n} \tag{4}\]
The resultant joint feature vector \(a_{i}^{m}\in\mathbb{R}^{D_{out}}\) is concatenated with the vectors from the other attention heads to obtain the output feature embedding.
We pass the spatial and temporal attention features through a \(1\times 1\) convolution layer to ensure that the attention maps can be concatenated with the convolutional features along the feature dimension. The outputs of the spatial and temporal attention blocks are normalized and summed to obtain the final attention features, which are then augmented by concatenating the spatio-temporal convolutional features and passed to the next layer.
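The decoupling can be implemented by reshaping before a standard multi-head attention, as sketched below: spatial attention treats the joints of one frame as the sequence, and temporal attention treats the frames of one joint as the sequence. We use `nn.MultiheadAttention` for brevity; the actual modules use their own projections, relative position encoding and output convolutions.

```python
import torch
from torch import nn

class DecoupledAttention(nn.Module):
    """Spatial + temporal self-attention on features of shape (B, T, J, D)."""

    def __init__(self, dim, heads=4):
        super().__init__()
        self.spatial = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.temporal = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm_s = nn.LayerNorm(dim)
        self.norm_t = nn.LayerNorm(dim)

    def forward(self, x):
        B, T, J, D = x.shape
        # Spatial: attend over joints within each frame.
        xs = x.reshape(B * T, J, D)
        s, _ = self.spatial(xs, xs, xs)
        s = self.norm_s(s).reshape(B, T, J, D)
        # Temporal: attend over frames for each joint.
        xt = x.permute(0, 2, 1, 3).reshape(B * J, T, D)
        t, _ = self.temporal(xt, xt, xt)
        t = self.norm_t(t).reshape(B, J, T, D).permute(0, 2, 1, 3)
        return s + t   # summed attention features; concatenated with conv features upstream

attn = DecoupledAttention(dim=64, heads=4)
print(attn(torch.randn(2, 10, 21, 64)).shape)  # torch.Size([2, 10, 21, 64])
```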
### Decoder
The predictions are generated by passing the features produced by the encoder through three GCN-TCN blocks. To ensure the predicted quaternions represent valid rotations, we explicitly normalize them to be of unit length.
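The unit-length normalization is a one-liner; the small epsilon guarding against division by zero is our own addition.

```python
import torch

def normalize_quaternions(q: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    # q: (..., 4) predicted quaternions -> unit-length quaternions.
    return q / (q.norm(dim=-1, keepdim=True) + eps)

print(normalize_quaternions(torch.randn(2, 25, 21, 4)).norm(dim=-1).mean())  # ~1.0
```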
### Training
We train the model using a loss function composed of a weighted mean distance in Euler angle space [26] and the mean Euclidean distance between joints in 3D coordinates. Consider the predicted quaternions \(\hat{X}\) associated with a single training sample \(X\). They are converted to joint Euler angles \(\hat{X}^{e}\) and 3D joint positions \(\hat{X}^{p}\) using forward kinematics. The loss is computed as
\[\mathcal{L}\left(\mathbf{X},\mathbf{\hat{X}}\right)=\alpha*E+\beta*P \tag{5}\]
where,
\[E=\frac{1}{N\times J}\sum_{n=0}^{N}\sum_{j=0}^{J}\left|\left\{\left(x_{n,j}^{e}-\hat{x}_{n,j}^{e}+\pi\right)\ \mathrm{mod}\ 2\pi\right\}-\pi\right|,\]
\[P=\frac{1}{N\times J}\sum_{n=0}^{N}\sum_{j=0}^{J}\lVert x_{n,j}^{p}-\hat{x}_{n,j}^{p}\rVert_{2}\]
and \(\alpha,\beta\) are scalars used to weigh the contribution of the individual losses.
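In code, the loss of eq. (5) can be sketched as below; the forward-kinematics step producing \(\hat{X}^{e}\) and \(\hat{X}^{p}\) from the predicted quaternions is skeleton-specific and treated as given here.

```python
import math
import torch

def motion_loss(euler_pred, euler_gt, pos_pred, pos_gt, alpha=10.0, beta=0.1):
    """euler_*: (B, N, J, 3) Euler angles in radians; pos_*: (B, N, J, 3) positions."""
    two_pi = 2.0 * math.pi
    # Wrapped absolute angle error: ((x - x_hat + pi) mod 2pi) - pi, kept in (-pi, pi].
    diff = torch.remainder(euler_gt - euler_pred + math.pi, two_pi) - math.pi
    E = diff.abs().mean()
    # Mean Euclidean distance between joints in 3D.
    P = (pos_gt - pos_pred).norm(dim=-1).mean()
    return alpha * E + beta * P

loss = motion_loss(torch.randn(2, 10, 21, 3), torch.randn(2, 10, 21, 3),
                   torch.randn(2, 10, 21, 3), torch.randn(2, 10, 21, 3))
print(loss.item())
```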
## 5 Experiments
We compare our model with the state-of-the-art methods quantitatively. We evaluate our model on the CMU Mocap [15] and Human 3.6M [12] datasets. In this section we introduce the datasets and evaluation metrics, and report results in terms of both joint-angle error and 3D position error.
### Datasets
Human3.6MThe Human3.6M dataset is the most popular human motion dataset used for benchmarking the pose prediction task. It contains 3.6 million poses, and each pose is represented using a skeleton composed of 32 joints. We remove joints with a constant rotation, resulting in 21 joints in the skeleton. The dataset consists of 15 actions such as walking, phoning, and eating, performed by 7 subjects. The global translation and rotation in the pose sequences, which are provided in axis-angle format, are first removed [21, 17]. The sequences are then downsampled to 25 frames per second and converted into sequences of quaternions. The data from subject 11 is used to tune hyperparameters and the model is tested on data from subject 5. Following [26], we augment the dataset by mirroring all the pose sequences.
CMU Motion CaptureThe CMU-Mocap dataset contains motion data of humans performing various actions such as running, walking, and jumping. The poses are represented using a skeleton composed of 38 joints. For a fair comparison, we train on data associated with the same actions as in [22, 17]. The pose sequences are pre-processed similarly to the sequences in the Human3.6M dataset.
### Implementation Details
Our network is implemented in PyTorch. We use the spatial partitioning strategy from [31] to partition the adjacency matrix representing the joint connections. Dilations of 1, 2, and 4 are used in the temporal convolution layers in the GCN-TCN unit in order to increase their receptive field. The network is trained using the RAdam optimizer [20], as it eliminates the need for learning rate warmup during training. We use a learning rate of \(3\times 10^{-4}\) and decay it by a factor of \(0.99\) every epoch. The model is regularized using a weight decay of \(1e-5\) and a dropout rate of \(0.1\) in the convolutional layers operating along the temporal dimension. By performing some test runs we picked the values \(\alpha=10\) and \(\beta=0.1\). In the SSA unit shown in Figure 2 we use a temporal convolution layer after the spatial attention. Analogously, we use a GCN unit to extract the spatial features before extracting the temporal attention features. We use relative position encoding [3] in both attention blocks.
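This optimization setup maps onto standard PyTorch utilities (assuming `torch.optim.RAdam`, available in recent PyTorch releases, and an exponential schedule for the per-epoch decay):

```python
import torch
from torch import nn, optim

model = nn.Linear(10, 10)          # stand-in for the SPOTR network
optimizer = optim.RAdam(model.parameters(), lr=3e-4, weight_decay=1e-5)
scheduler = optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.99)

for epoch in range(3):             # sketch of the per-epoch decay
    # ... run training batches, compute the motion loss, call optimizer.step() ...
    scheduler.step()
    print(epoch, scheduler.get_last_lr())
```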
### Metrics
We use the evaluation procedure of previous works [21, 19] and report short-term prediction results (\(80-400\) ms) in both the Euler angle space and 3D positions. We use the average Euclidean distance in the Euler angle space and the Mean Per Joint Position Error (MPJPE) in millimeters.
\begin{table}
\begin{tabular}{l c c c|c c c c|c c c c|c c c c} \multirow{2}{*}{**interval (ms)**} & \multicolumn{3}{c}{**Basketball**} & \multicolumn{6}{c}{**Basketball Signal**} & \multicolumn{6}{c}{**Directing Traffic**} & \multicolumn{3}{c}{**Jumping**} \\ & \(80\) & 160 & 320 & 400 & 80 & 160 & 320 & 400 & 80 & 160 & 320 & 400 & 80 & 160 & 320 & 400 \\ \hline LTD [22] & **0.33** & 0.52 & 0.89 & 1.06 & **0.11** & **0.2** & 0.41 & 0.53 & **0.15** & **0.32** & **0.52** & **0.6** & **0.31** & **0.49** & **1.23** & 1.39 \\ mNAT [17] & 0.34 & 0.49 & **0.86** & **1.01** & 0.15 & 0.24 & 0.48 & 0.61 & 0.20 & 0.41 & 0.65 & 0.77 & 0.38 & 0.56 & 1.29 & 1.45 \\ \hline Ours & 0.37 & 0.53 & 0.88 & 1.05 & 0.17 & 0.22 & **0.37** & **0.47** & 0.36 & 0.53 & 0.7 & 0.84 & 0.76 & 0.9 & 1.28 & **1.3** \\ \hline \hline \multirow{2}{*}{**interval (ms)**} & \multicolumn{6}{c}{**Running**} & \multicolumn{6}{c}{**Soccer**} & \multicolumn{6}{c}{**Walking**} & \multicolumn{6}{c}{**Wash Window**} \\ & \(80\) & 160 & 320 & 400 & 80 & 160 & 320 & 400 & 80 & 160 & 320 & 400 & 80 & 160 & 320 & 400 \\ \hline LTD [22] & 0.33 & 0.55 & 0.73 & 0.74 & **0.18** & **0.29** & 0.61 & **0.71** & 0.33 & 0.45 & 0.49 & 0.53 & **0.22** & 0.33 & 0.57 & 0.75 \\ mNAT [17] & **0.24** & 0.43 & 0.53 & **0.56** & 0.20 & 0.33 & **0.59** & 0.72 & 0.31 & 0.37 & 0.40 & 0.46 & 0.23 & 0.36 & 0.66 & 0.86 \\ \hline Ours & 0.31 & **0.38** & **0.5** & 0.57 & 0.35 & 0.45 & 0.68 & 0.80 & **0.24** & **0.28** & **0.34** & **0.36** & **0.22** & **0.31** & **0.53** & **0.63** \\ \end{tabular}
\end{table}
Table 1: Comparing the performance of our model on Euler angle error (lower is better) with the existing state-of-the-art models on the CMU Mocap dataset. Our model outperforms both of them on longer time horizons indicating that it is able to learn the complex spatio-temporal dynamics of joints while playing sports as well.
Figure 4: The mean spatial attention weights (top row) and the temporal attention weights (bottom row) associated with the last input frame given 10 frames of _walkingtogether_ from Human3.6M. From the 4 attention heads visualized we can see that the model is able to attend to joints that are not connected in the kinematic chain and is able to extract the dependencies between the spatio-temporal features of each joint from the past frames.
The errors for a single frame \(n\) are computed as
\[E=\|x_{n}^{e}-\hat{x}_{n}^{e}\|_{2},\qquad MPJPE=\frac{1}{J}\sum_{j=1}^{J}\lVert x_{n,j}^{p}-\hat{x}_{n,j}^{p}\rVert_{2}\]
### Results
The performance of our model is provided in Tables 1, 2, and 3. For the Human3.6M dataset, to ensure a fair comparison we compare our model only to the existing non-autoregressive works [17, 24] which operate on the input motion sequence in the time domain. From Table 3 we observe that the errors of our model in joint angle space are almost on par with the state-of-the-art non-autoregressive models, despite our model being activity-independent and having far fewer parameters.
Our model uses 6 GCN-TCN blocks and 2 sets of temporal and spatial self-attention modules. This is extremely light in comparison to (a) [17], which uses 6 GCN-TCN blocks in its encoder and decoder along with an additional activity recognition network, and (b) POTR [24], which uses 4 layers in the encoder and decoder, each made of GCNs and multi-head attention modules. In addition, the POTR model [24] uses the last pose as the query in the encoder-decoder attention and hence performs well on average in the immediate time horizon (80-160ms), while our model performs better than POTR for longer time horizons (320-400ms). All these models use 2000ms of input motion history to generate the predictions, five times more input frames than we use. These factors highlight the efficacy of using both convolutions and the self-attention mechanism to learn the spatio-temporal dynamics of human motion. Our results demonstrate that it is possible to build a lightweight, performant model that can be used in realtime interactive human-centered applications.
\begin{table}
\begin{tabular}{l c c c c|c c c c|c c c c|c c c c} \hline \multirow{2}{*}{**interval (ms)**} & \multicolumn{4}{c}{**Basketball**} & \multicolumn{4}{c}{**Basketball Signal**} & \multicolumn{4}{c}{**Directing Traffic**} & \multicolumn{4}{c}{**Jumping**} \\ & 80 & 160 & 320 & 400 & 80 & 160 & 320 & 400 & 80 & 160 & 320 & 400 & 80 & 160 & 320 & 400 \\ \hline LTD [22] & **14.0** & **25.4** & 49.6 & 61.4 & **3.5** & **6.1** & **11.7** & **15.2** & **7.4** & **15.1** & 31.7 & 42.2 & **16.9** & **34.4** & 76.3 & 96.8 \\ \hline Ours & 17.5 & 26.0 & **45.1** & **57.0** & 8.3 & 9.5 & 15.10 & 19.5 & 11.5 & 15.7 & **26.8** & **34.8** & 35.0 & 45.7 & **70.1** & **86.0** \\ \hline \multirow{2}{*}{**interval (ms)**} & \multicolumn{4}{c}{**Running**} & \multicolumn{4}{c}{**Soccer**} & \multicolumn{4}{c}{**Walking**} & \multicolumn{4}{c}{**Wash Window**} \\ & 80 & 160 & 320 & 400 & 80 & 160 & 320 & 400 & 80 & 160 & 320 & 400 & 80 & 160 & 320 & 400 \\ \hline LTD [22] & 25.5 & 36.7 & 39.3 & 39.9 & **11.3** & **21.5** & **44.2** & **55.8** & **7.7** & 11.8 & 19.4 & 23.1 & **5.9** & **11.9** & **30.3** & **40.0** \\ \hline Ours & **15.7** & **19.3** & **25.9** & **31.3** & 21.5 & 30.4 & 48.0 & 56.1 & 8.3 & **10.9** & **15.5** & **16.6** & 13.6 & 19.6 & 33.6 & 40.8 \\ \hline \end{tabular}
\end{table}
Table 2: Comparison of our model with LTD [22] in terms of MPJPE in millimeters (lower is better) on the CMU Mocap dataset.
The work of [17] is the only non-autoregressive method evaluated on the CMU Mocap dataset, but it does not provide metrics in terms of MPJPE. Hence, we include [22], which uses DCTs to model the temporal variation, in our comparison. From Tables 1 and 2 we see that our model outperforms the state-of-the-art models on CMU Mocap for the longer time horizons (320 and 400ms). This indicates that our lightweight model is capable of learning the spatio-temporal dynamics associated with complex sports actions.
## 6 Conclusion
We introduce a novel lightweight spatio-temporal transformer (SPOTR) network for the 3D human motion prediction task. We show that augmenting spatio-temporal convolution with self-attention is very effective in making human motion predictions. Our non-autoregressive approach mitigates the issues of error accumulation, non-parallelizability, and static poses seen in autoregressive models while still providing comparable performance with a small number of parameters, significantly shorter seed sequences and faster computation time. We also demonstrate that the attention mechanism can be used to gain insights about the model's behavior. Finally, our lightweight model makes it possible to build realtime interactive human-centered applications to solve real-world problems.
|
2301.08423 | Adjoint-based variational optimal mixed models for large-eddy simulation
of turbulence | An adjoint-based variational optimal mixed model (VOMM) is proposed for
subgrid-scale (SGS) closure in large-eddy simulation (LES) of turbulence. The
stabilized adjoint LES equations are formulated by introducing a minimal
regularization to address the numerical instabilities of the long-term gradient
evaluations in chaotic turbulent flows. The VOMM model parameters are optimized
by minimizing the discrepancy of energy dissipation spectra between LES
calculations and a priori knowledge of direct numerical simulation (DNS) using
the gradient-based optimization. The a posteriori performance of the VOMM model
is comprehensively examined in LES of three turbulent flows, including the
forced homogeneous isotropic turbulence, decaying homogenous isotropic
turbulence, and temporally evolving turbulent mixing layer. The VOMM model
outperforms the dynamic Smagorinsky model (DSM), dynamic mixed model (DMM) and
approximate deconvolution model (ADM) in predictions of various turbulence
statistics, including the velocity spectrum, structure functions, statistics of
velocity increments and vorticity, temporal evolutions of the turbulent kinetic
energy, dissipation rate, momentum thickness and Reynolds stress, as well as
the instantaneous vortex structures at different grid resolutions and times. In
addition, the VOMM model only takes up 30% time of the DMM model for all flow
scenarios. These results demonstrate that the proposed VOMM model improves the
numerical stability of LES and has high a posteriori accuracy and computational
efficiency by incorporating the a priori information of turbulence statistics,
highlighting that the VOMM model has a great potential to develop advanced SGS
models in the LES of turbulence. | Zelong Yuan, Yunpeng Wang, Xiaoning Wang, Jianchun Wang | 2023-01-20T04:55:23Z | http://arxiv.org/abs/2301.08423v2 | # Adjoint-based variational optimal mixed models for large-eddy simulation of turbulence
###### Abstract
An adjoint-based variational optimal mixed model (VOMM) is proposed for subgrid-scale (SGS) closure in large-eddy simulation (LES) of turbulence. The stabilized adjoint LES equations are formulated by introducing a minimal regularization to address the numerical instabilities of the long-term gradient evaluations in chaotic turbulent flows. The VOMM model parameters are optimized by minimizing the discrepancy of energy dissipation spectra between LES calculations and _a priori_ knowledge of direct numerical simulation (DNS) using the gradient-based optimization. The _a posteriori_ performance of the VOMM model is comprehensively examined in LES of three turbulent flows, including the forced homogeneous isotropic turbulence, decaying homogeneous isotropic turbulence, and temporally evolving turbulent mixing layer. The VOMM model outperforms the dynamic Smagorinsky model (DSM), dynamic mixed model (DMM) and approximate deconvolution model (ADM) in predictions of various turbulence statistics, including the velocity spectrum, structure functions, statistics of velocity increments and vorticity, temporal evolutions of the turbulent kinetic energy, dissipation rate, momentum thickness and Reynolds stress, as well as the instantaneous vortex structures at different grid resolutions and times. In addition, the VOMM model only takes 30% of the computational time of the DMM model for all flow scenarios. These results demonstrate that the proposed VOMM model improves the numerical stability of LES and has high _a posteriori_ accuracy and computational efficiency by incorporating the _a priori_ information of turbulence statistics, highlighting that the VOMM model has a great potential to develop advanced SGS models in the LES of turbulence.
subgrid-scale model, variational optimal models, adjoint-based optimization, large-eddy simulation, incompressible turbulence
## 1 Introduction
Large-eddy simulation (LES) has become an effective tool for the investigation of turbulent flows, and has been widely applied to many industrial problems including aeroacoustics, combustion, meteorological physics, and interfacial mixing (Sagaut, 2006; Garnier _et al._, 2009). The dominant large-scale motions of turbulence are directly resolved by the LES, leaving the effects of the residual subgrid scales (SGS) on the resolved large scales to be modeled by SGS models (Lesieur & Metais, 1996; Meneveau & Katz, 2000). In contrast, direct numerical simulation (DNS) of turbulence requires a sufficiently high mesh resolution to fully resolve all flow scales down
to the size of the Kolmogorov eddies, which makes its computational cost prohibitively expensive at high Reynolds numbers (Pope 2000). Therefore, LES is much more computationally efficient than DNS, since it significantly reduces the degrees of freedom of the turbulence while still accurately reconstructing the large-scale flow structures (Pope 2000; Sagaut 2006; Durbin 2018).
The modeling of the unclosed SGS stress is crucial for the accuracy of predictions in LES. SGS models can be generally categorized into functional models, structural models and mixed models (Sagaut 2006; Garnier _et al._ 2009). The functional SGS models utilize the explicit dissipative terms to correctly reconstruct the forward kinetic energy cascade from large scales to small scales (Rozema _et al._ 2015; Abkar _et al._ 2016). The Smagorinsky model is one of the most popular functional SGS models and is favored for its substantial numerical stability and excellent robustness of LES calculations (Smagorinsky 1963; Lilly 1967). However, the functional SGS models generally exhibit excessive dissipation and fail to predict the sophisticated small-scale flow structures. In contrast, the structural SGS models recover the unclosed SGS stress with high _a priori_ accuracy by exactly truncating the Taylor series expansions or the assumption of scale similarity. These structural models include the approximate deconvolution method (Stolz & Adams 1999; Stolz _et al._ 2001), scale-similarity model (Bardina _et al._ 1980; Liu _et al._ 1994), velocity gradient model (Clark _et al._ 1979), _etc_. The structural SGS models can accurately capture the spatial distribution of SGS energy flux and backscatter of the kinetic energy, but suffer from the numerical instability without sufficient SGS dissipation in the _a posteriori_ studies of LES.
The mixed models consist of the structural models and functional eddy-viscosity models to balance the numerical stability and accuracy of LES and compensate their inherent model deficiencies. The Clark model combines the velocity gradient model with the Smagorinsky eddy viscosity (Clark _et al._ 1979). Erlebacher _et al._ (1992) proposed a mixed model which consists of the scale-similarity model and the dissipative Smagorinsky term. In the early stage, the SGS model parameters were either theoretically derived from the isotropic turbulent flows (Lilly 1967) or estimated by the _a priori_ analysis of DNS and experimental observations (Deardorff 1970; Clark _et al._ 1979), yielding poor predictions in the _a posteriori_ LES (Lesieur & Metais 1996; Meneveau & Katz 2000). A pioneering dynamical procedure with the Germano identity was developed to determine the Smagorinsky coefficient adaptively by the least-squares algorithm (Germano _et al._ 1991; Lilly 1992). Subsequently, the dynamic versions of mixed models were successively proposed, including the one-parameter (Zang _et al._ 1992) and two-parameter dynamical mixed models (Liu _et al._ 1994; Shi _et al._ 2008), the dynamic Clark model (Vreman _et al._ 1994) and dynamic ADM model (Habisreutinger _et al._ 2007), _etc_. The coefficients of a general multi-parameter dynamic mixed model (DMM) can be conveniently determined by the Germano-identity-based dynamic approach (Sagaut _et al._ 2000). However, extensive previous studies have shown that these DMM models are excessively dissipative in the transitional regions, but underestimate the SGS dissipation in situations of coarse mesh resolutions and grid anisotropy (Meneveau & Katz 2000; Moser _et al._ 2021). In addition, the dissipative Smagorinsky part in the DMM models is usually dominant over the structural part, leading to little advantage in the high _a priori_ accuracy of structural models. The basis tensors of the DMM model, comprising the functional eddy-viscosity and the accurate structural part, give a complete representation of the SGS stress and SGS energy flux (SGS dissipation), which is essential for the SGS modeling of LES. Yuan _et al._ (2022) preliminarily explored a scale-similarity dynamic procedure (SSD) with a dynamic nonlinear algebraic model, yielding more accurate predictions of various turbulence statistics and instantaneous vortex structures for both _a priori_ and _a posteriori_ analyses of LES than the Germano-identity-based dynamic (GID) approach in the homogeneous isotropic turbulence. However, the SSD procedure still suffers from the numerical instability at coarse-grid-resolution cases, where the spatial discretization error dominates the SGS modeling error. It might be challenging to develop a general dynamic framework for the model coefficient determination at various grid resolutions applicable to different types of turbulence problems. These results
demonstrate that the adjustment of SGS model parameters can effectively improve the accuracy of SGS modeling and enhance the predictions of LES.
Besides, additional artificial viscous or penalized regularization terms have been also introduced to enhance the _a posteriori_ stability of structural models. A secondary filtering regularization technique was proposed by Stolz _et al._ (2001) and Adams _et al._ (2004) to maintain the numerical stability of ADM models. Vollant _et al._ (2016) efficiently regularized the velocity gradient model by dynamically clipping the SGS backscatter. A spectral-vanishing-viscosity method (Tadmor, 1989) was proposed to effectively suppress the Gibbs oscillations at high wavenumbers (Cerutti _et al._, 2000) and has been successfully applied to the prediction of turbulent channel flows (Karamanos & Karniadakis, 2000). Xie _et al._ (2020_a_) used a hyperviscosity term to address the stability issue of the spatial-artificial-neural-network models. The effective hyperviscosity term was further applied to other data-driven SGS models (Yuan _et al._, 2020; Wang _et al._, 2021). Yuan _et al._ (2021_b_) developed a small-scale eddy-viscosity model to enhance the _a posteriori_ stability of dynamic iterative approximate deconvolution models, without affecting the accurate predictions of large-scale flow structures. A kinetic-energy-flux constrained SGS model proposed by Yu _et al._ (2022) regularizes the DSM model by the correct kinetic energy flux approximated by the tensor-diffusivity model and accurately predicts the transition to turbulence of a compressible flat-plate boundary layer. It is noteworthy that additional numerical parameters would be introduced for most regularization techniques, which are sensitive to the grid resolution of LES, requiring multiple tedious testings for different turbulence scenarios. To our knowledge, there might not be a unified adaptive regularization framework proposed for the stability of structural SGS models that can be universally applied to various types of turbulence with different grid resolutions of LES calculations. The dependence of SGS model parameters on grid resolutions of LES might be effectively addressed by incorporating the _a priori_ knowledge of DNS or experimental observations.
In recent years, many data-driven closure approaches (Tracey _et al._, 2015; Ling _et al._, 2016\(a\); Xiao _et al._, 2016; Maulik & San, 2017; Wang _et al._, 2018; Zhou _et al._, 2019; Yang _et al._, 2019; Park & Choi, 2021; Guan _et al._, 2022) have been extensively developed to improve the modeling of unclosed terms in turbulence, as more high-fidelity DNS or experimental data become available (Kutz, 2017; Duraisamy _et al._, 2019). Ling _et al._ (2016_b_) proposed a representative tensor-basis-neural-network (TBNN) model with the multiplicative layer that predicts coefficients of the basis tensors for the modeled Reynolds stress by taking velocity invariants as input to preserve Galilean invariance. The TBNN architecture can accurately reconstruct the anisotropy of Reynolds stress and predict the flow separation better than the baseline linear or nonlinear eddy-viscosity model. Xie _et al._ (2020_c_) further developed the artificial-neural-network-based nonlinear algebraic models yielding better predictions of LES statistics than classical dynamic SGS models. The gene-expression-programming technique was proposed to acquire the explicit mathematical expression of the unclosed SGS stress modeled by basis functions for LES using an evolutionary algorithm (Schoepplein _et al._, 2018; Li _et al._, 2021; Wu _et al._, 2022). The multi-agent reinforcement-learning framework was developed to discover Smagorinsky model coefficients using the control policy rewarded by the statistical discrepancy of energy spectrum (Novati _et al._, 2021; Kurz _et al._, 2023), and further applied to modeling the near-wall dynamics (Bae & Koumoutsakos, 2022).
Although the machine-learning-based closure models can improve the _a priori_ accuracy of turbulence models fairly well, they have been reported to suffer from the ill-conditioned issues in the _a posteriori_ studies. The small _a priori_ errors of the modeled Reynolds stress can be significantly amplified and then propagated into the mean velocity field in the _a posteriori_ testings (Wu _et al._, 2019). Gamahara & Hattori (2017) established an artificial-neural-network framework for the SGS closures of turbulent channel flows, which accurately predicts the unclosed SGS stress in _a priori_ studies, but shows no obvious advantages over the Smagorinsky model in the reconstruction of the mean velocity profiles. The recurrent neural network was employed to
learn the coarse-grained discretization errors of LES and expected to construct the perfect LES formulation (Beck _et al._, 2019). However, these perfect SGS closure terms also encounter serious _a posteriori_ instability issues, even though the _a priori_ predictions show high correlations with the exact unclosed terms (Beck _et al._, 2019). These results indicate that most current data-driven closure approaches can acquire sufficiently high _a priori_ accuracy after being trained by the high-fidelity DNS or experimental data, but still lack indispensable extrapolation capabilities and are difficult to be applied to the _a posteriori_ testings of out-of-sampling flow scenarios.
The data-assimilation techniques can effectively remedy the deficiencies of insufficient _a posteriori_ accuracy of closure models by iteratively evaluating and minimizing the discrepancies between coarse-grained _a posteriori_ calculations and benchmark high-fidelity DNS or experimental observations. The data-assimilation approaches can be generally classified into three categories: ensemble-based statistical methods (Colburn _et al._, 2011; Zhang _et al._, 2022), adjoint-based variational approaches (Bewley _et al._, 2001; Delport _et al._, 2009; Badreddine _et al._, 2014) and their mixed variants (Mons _et al._, 2021). The ensemble-based statistical techniques use ensemble statistics to approximately measure the model uncertainty and continuously correct the measurement errors of observations by the classical Kalman-filtering strategies or nudging methods (Clark Di Leoni _et al._, 2020). These statistical assimilation methods allow the convenient inference of flow states and statistics, without any detailed information of dynamical systems, facilitating their wide application in complex practical scenarios. However, the state estimations of these ensemble-based approaches frequently evaluate the matrix multiplication and inverse operations, resulting in the massive computation expense and large memory usage for the high degree-of-freedom turbulence problems at a high Reynolds number. In contrast, the adjoint-based variational techniques employ the optimal control strategy to efficiently optimize the model parameters or state variables by minimizing the discrepancies between the benchmark observations and _a posteriori_ predictions. Singh & Duraisamy (2016) proposed a field-inversion procedure to infer model discrepancies in the source terms of Reynolds-averaged Navier-Stokes (RANS) transport equations using Bayesian posterior estimation. He _et al._ (2018) simplified the field-inversion strategy and employed the continuous adjoint formulation to optimize a spatially varying turbulence production term in the Spalart-Allmaras model of RANS equations.
In comparison with the extensive studies of data-assimilation-based RANS models (Kato & Obayashi, 2013; Kato _et al._, 2015; Xiao _et al._, 2016; Li _et al._, 2017; Xiao & Cinnella, 2019), investigations on SGS models of LES assimilated with high-fidelity simulation data are still preliminary. A spatially-varying parameter in a local uncertainty model and initial conditions were optimized based on experimental observations of the cylindrical wake flow using the discrete adjoint algorithm (Chandramouli _et al._, 2020). Mons _et al._ (2021) developed a non-intrusive ensemble-variational approach (EnVar) to enhance the predictions of the mean flow and Reynolds stresses by adjusting the wall-normal distribution of the Smagorinsky coefficient or injecting an artificial steady force in the LES momentum equations. The SGS force modeled by the artificial neural network was optimized by the point-to-point errors of the filtered velocity field using the discrete adjoint method for the decaying isotropic turbulence and plane jet flows (Sirignano _et al._, 2020; MacArt _et al._, 2021). However, these discrete adjoint or ensemble-based variational methods require massive matrix operations with significant memory usage.
In this paper, a variational optimal mixed model (VOMM) is proposed to reconstruct the unclosed SGS stress by assimilating the turbulence statistics of high-fidelity filtered DNS data using the continuous adjoint approach. The main difference from the previous work is that we derive adjoint LES equations with the general SGS model and conduct the energy budget analysis of adjoint equations. The continuous adjoint algorithm can enhance the physical understanding of the adjoint-based sensitivities and provide flexibility in selecting the discretization scheme for the adjoint equations. The quadratic terms of shear strain rate in adjoint LES equations turn out to be responsible for the exponential temporal growth of the adjoint-based gradients, giving rise to the
numerical divergence in a long time horizon for the chaotic turbulent flows. Hence, the stabilized adjoint LES equations are correspondingly formulated to enhance the numerical stability of the adjoint LES calculations. To the extent of the authors' knowledge, few previous studies have given detailed derivations of the adjoint LES equations with general SGS mixed models and formulated the stabilized version for long-term gradient evaluations. In addition, the selected cost functional is essential for the convergence and performance of adjoint-based gradient optimizations. Compared to the previous studies, turbulence statistical discrepancies rather than the chaotic point-to-point prediction errors are adopted to quantify the multiscale statistical behaviours of turbulence. The _a priori_ information about statistics of turbulence acquired from experimental data or DNS results, including energy spectra, structure functions, and probability density functions of physical quantities, can be used to determine or correct SGS model parameters to improve the _a posteriori_ accuracy of LES greatly. Turbulent statistical assimilation can effectively alleviate the impact of chaotic field observations on the performance of data assimilation. Furthermore, the _a posteriori_ performance of VOMM model is comprehensively investigated and compared to classical SGS models at multiple grid resolutions in different turbulence scenarios, including the forced and decaying homogeneous isotropic turbulence, as well as the temporally evolving turbulent mixing layer.
The remainder of this paper is structured as follows. Sec. 2 describes the governing equations of the large-eddy simulation. The conventional subgrid-scale models, including DSM, DMM and ADM models, are briefly introduced in Sec. 3. In Sec. 4, we first derive the adjoint LES equations with a general form of mixed SGS models, then conduct the energy budget analysis of adjoint equations, and correspondingly propose the stabilized adjoint LES equations. Afterwards, the adjoint-based variational optimal mixed model is developed. Sec. 5 further investigates the _a posteriori_ performance of the VOMM model in comparison to the classical SGS models for three turbulent flow scenarios, including the forced homogeneous isotropic turbulence, decaying homogeneous isotropic turbulence, and temporally evolving turbulent mixing layer. Conclusions are finally drawn in Sec. 6.
## 2 Governing equations of the large-eddy simulation
Three-dimensional incompressible turbulence is governed by the Navier-Stokes equations (Pope 2000), namely
\[\frac{\partial u_{i}}{\partial x_{i}}=0, \tag{1}\]
\[\frac{\partial u_{i}}{\partial t}+\frac{\partial\left(u_{i}u_{j}\right)}{ \partial x_{j}}=-\frac{\partial p}{\partial x_{i}}+\nu\frac{\partial^{2}u_{i}} {\partial x_{j}\partial x_{j}}+\mathcal{F}_{i}, \tag{2}\]
where \(u_{i}\) is the \(i\)-th component of velocity, \(p\) denotes the pressure divided by the constant density, \(\nu\) is the kinematic viscosity, and \(\mathcal{F}_{i}\) represents the large-scale forcing on the fluid momentum in the \(i\)-th coordinate direction. The summation convention for repeated indices is adopted by default for simplicity in this paper. Besides, the dimensionless governing parameter for the incompressible turbulence, namely, the Taylor microscale Reynolds number \(Re_{\lambda}\) is defined as (Pope 2000)
\[Re_{\lambda}=\frac{u^{\rm rms}\lambda}{\sqrt{3}\nu}, \tag{3}\]
where \(u^{\rm rms}=\sqrt{\langle u_{i}u_{i}\rangle}\) represents the root-mean-square (rms) value of the velocity magnitude, and \(\langle\cdot\rangle\) represents a spatial average along the homogeneous direction (_i.e._, average over the entire domain for the isotropic turbulence and the horizontal average for the temporally evolving mixing
layer). Here, \(\lambda=u^{\rm rms}\sqrt{5\nu/\varepsilon}\) is the Taylor microscale, where \(\varepsilon=2\nu\left\langle S_{ij}S_{ij}\right\rangle\) represents the average dissipation rate and \(S_{ij}=\frac{1}{2}\left(\partial u_{i}/\partial x_{j}+\partial u_{j}/\partial x _{i}\right)\) denotes the strain-rate tensor.
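For illustration, a minimal numerical sketch of these statistics (not taken from the paper) is given below; it estimates \(Re_{\lambda}\) from a velocity field stored as numpy arrays on a uniform periodic grid. The grid spacing `dx`, the axis ordering, and the use of `np.gradient` for the velocity derivatives are assumptions made only for this example.

```python
import numpy as np

def taylor_reynolds(u, v, w, dx, nu):
    """Estimate Re_lambda = u_rms * lambda / (sqrt(3) * nu) from a periodic velocity field."""
    # Root-mean-square velocity magnitude, u_rms = sqrt(<u_i u_i>)
    u_rms = np.sqrt(np.mean(u**2 + v**2 + w**2))

    # Velocity-gradient tensor dudx[i, j] = d u_i / d x_j via finite differences
    comps = (u, v, w)
    dudx = np.array([[np.gradient(comps[i], dx, axis=j) for j in range(3)] for i in range(3)])

    # Strain rate S_ij = 0.5 (du_i/dx_j + du_j/dx_i) and dissipation eps = 2 nu <S_ij S_ij>
    S = 0.5 * (dudx + dudx.transpose(1, 0, 2, 3, 4))
    eps = 2.0 * nu * np.mean(np.sum(S**2, axis=(0, 1)))

    # Taylor microscale lambda = u_rms * sqrt(5 nu / eps)
    lam = u_rms * np.sqrt(5.0 * nu / eps)
    return u_rms * lam / (np.sqrt(3.0) * nu)
```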
To obtain the governing equations of the large-eddy simulation, a spatial filtering operation, \(\bar{f}\left(\mathbf{x}\right)=\int\limits_{\Omega}f\left(\mathbf{x}^{\prime }\right)G\left(\mathbf{x}-\mathbf{x}^{\prime};\bar{\Delta}\right)d\mathbf{x}^ {\prime}\) is applied to the Navier-Stokes equations. Here, an overbar denotes the spatial filtering, \(\Omega\) is the entire domain. \(G\) and \(\bar{\Delta}\) are the filter kernel and filter width, respectively. The governing equations for the LES can be correspondingly derived as (Sagaut, 2006)
\[\frac{\partial\bar{u}_{i}}{\partial x_{i}}=0, \tag{4}\]
\[\frac{\partial\bar{u}_{i}}{\partial t}+\frac{\partial\left(\bar{u}_{i}\bar{u}_{j}\right)}{\partial x_{j}}=-\frac{\partial\bar{p}}{\partial x_{i}}-\frac{\partial\tau_{ij}}{\partial x_{j}}+\nu\frac{\partial^{2}\bar{u}_{i}}{\partial x_{j}\partial x_{j}}+\bar{\mathcal{F}}_{i}. \tag{5}\]
Here, the unclosed SGS stress tensor \(\tau_{ij}=\overline{u_{i}u_{j}}-\bar{u}_{i}\bar{u}_{j}\) cannot be directly calculated using the resolved variables \(\bar{u}_{i}\), and additional SGS stress modeling is required to make the LES equations solvable.
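For reference, a minimal _a priori_ sketch (an illustration, not part of the original text) of the filtering operation and of the exact SGS stress \(\tau_{ij}=\overline{u_{i}u_{j}}-\bar{u}_{i}\bar{u}_{j}\) computed from a DNS field is shown below; the sharp spectral cutoff stands in for the generic kernel \(G\), and the cubic periodic domain and cutoff wavenumber `k_cut` are assumptions.

```python
import numpy as np

def spectral_filter(f, k_cut, dx):
    """Low-pass filter a periodic 3D field with a sharp spectral cutoff at wavenumber k_cut."""
    n = f.shape[0]                      # cubic domain assumed: nx = ny = nz = n
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    mask = np.sqrt(kx**2 + ky**2 + kz**2) <= k_cut
    return np.real(np.fft.ifftn(np.fft.fftn(f) * mask))

def exact_sgs_stress(ui, uj, k_cut, dx):
    """A priori SGS stress component tau_ij = bar(u_i u_j) - bar(u_i) bar(u_j) from DNS data."""
    return (spectral_filter(ui * uj, k_cut, dx)
            - spectral_filter(ui, k_cut, dx) * spectral_filter(uj, k_cut, dx))
```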
## 3 Conventional subgrid-scale models for LES
The SGS models aim to establish the approximate constitutive equation for SGS unclosed terms using the known resolved variables, and reconstruct the nonlinear interactions between the resolved large scales and unsolved small scales as accurately as possible (Moser _et al._, 2021; Johnson, 2022). The explicit SGS models consist of the functional and structural models. The functional modeling adopts the eddy-viscosity forms to mimic the forward kinetic energy transfer from the resolved large scales to the residual small scales, while the structural models can accurately recover the unclosed SGS stress by the hypothesis of scale similarity or using the truncated series expansions with high _a priori_ accuracy (Sagaut, 2006; Fowler _et al._, 2022). One of the most widely-used functional models is the Smagorinsky model (Smagorinsky, 1963; Lilly, 1967), expressed as
\[\tau_{ij}^{A}=\tau_{ij}-\frac{\delta_{ij}}{3}\tau_{kk}=-2C_{S}^{2}\bar{\Delta }^{2}|\bar{S}|\bar{S}_{ij}, \tag{6}\]
where \(\delta_{ij}\) denotes the Kronecker delta operator, \(\bar{S}_{ij}=\frac{1}{2}\left(\partial\bar{u}_{i}/\partial x_{j}+\partial\bar {u}_{j}/\partial x_{i}\right)\) is the filtered strain-rate tensor and \(|\bar{S}|=(2\bar{S}_{ij}\bar{S}_{ij})^{1/2}\) represents the characteristic filtered strain rate. The superscript "A" represents the trace-free anisotropic part of the arbitrary variables, namely, \(\left(\bullet\right)_{ij}^{A}=\left(\bullet\right)_{ij}-\left(\bullet\right)_ {kk}\delta_{ij}/3\). The isotropic SGS stress \(\tau_{kk}\) is absorbed into the pressure term. \(C_{S}^{2}\) is the Smagorinsky coefficient and can be determined empirically or by a theoretical analysis. The most common approach is based on the least-squares dynamic procedure using the Germano identity, giving rise to the dynamic Smagorinsky model (DSM), whose coefficient is given by (Germano _et al._, 1991; Lilly, 1992)
\[C_{S}^{2}=\frac{\left\langle L_{ij}^{A}\mathcal{M}_{ij}\right\rangle}{\left\langle \mathcal{M}_{kl}\mathcal{M}_{kl}\right\rangle}, \tag{7}\]
where the Leonard stress \(L_{ij}=\tilde{\bar{u}_{i}\bar{u}_{j}}-\tilde{\bar{u}_{i}}\tilde{\bar{u}_{j}}\), \(L_{ij}^{A}=L_{ij}-\frac{1}{3}\delta_{ij}L_{kk}\) and \(\mathcal{M}_{ij}=\tilde{\alpha}_{ij}-\beta_{ij}\). Here, a tilde stands for the test filtering operation at the double-filtering scale \(\tilde{\Delta}=2\bar{\Delta}\), the variables \(\alpha_{ij}=2\bar{\Delta}^{2}|\bar{S}|\bar{S}_{ij}\) and \(\beta_{ij}=2\tilde{\Delta}^{2}|\bar{\tilde{S}}|\bar{\tilde{S}}_{ij}\). The scale-similarity model \(\tau_{ij}=\tilde{\bar{u}_{i}\bar{u}_{j}}-\tilde{\bar{u}_{i}}\tilde{\bar{u}_{j}}\) is a typical structural model and can correctly reconstruct the SGS stress with high _a priori_ accuracy. However, these structural models often exhibit insufficient dissipation and numerical instability in the _a posteriori_ testings of LES due to the underestimation of the forward kinetic energy cascade. The dynamic mixed model (DMM) combines the scale-similarity model with the dissipative
Smagorinsky term, and is given by (Liu _et al._ 1994; Shi _et al._ 2008)
\[\tau_{ij}=C_{1}\bar{\Delta}^{2}\left|\bar{S}\right|\bar{S}_{ij}+C_{2}\left(\bar{\bar{u}_{i}\bar{u}_{j}}-\bar{\bar{u}_{i}}\bar{\bar{u}_{j}}\right). \tag{10}\]
Similar to the DSM model, model coefficients of the DMM model \(C_{1}\) and \(C_{2}\) are dynamically determined by the least-squares algorithm using the Germano identity, expressed respectively as (Xie _et al._ 2020; Yuan _et al._ 2020)
\[C_{1}=\frac{\left\langle N_{ij}^{2}\right\rangle\left\langle L_{ij}M_{ij} \right\rangle-\left\langle M_{ij}N_{ij}\right\rangle\left\langle L_{ij}N_{ij} \right\rangle}{\left\langle N_{ij}^{2}\right\rangle\left\langle M_{ij}^{2} \right\rangle-\left\langle M_{ij}N_{ij}\right\rangle^{2}}, \tag{11}\]
\[C_{2}=\frac{\left\langle M_{ij}^{2}\right\rangle\left\langle L_{ij}N_{ij} \right\rangle-\left\langle M_{ij}N_{ij}\right\rangle\left\langle L_{ij}M_{ij} \right\rangle}{\left\langle N_{ij}^{2}\right\rangle\left\langle M_{ij}^{2} \right\rangle-\left\langle M_{ij}N_{ij}\right\rangle^{2}}, \tag{12}\]
where \(M_{ij}=H_{1,ij}-\bar{h}_{1,ij}\), and \(N_{ij}=H_{2,ij}-\bar{h}_{2,ij}\). Here, \(h_{1,ij}=-2\bar{\Delta}^{2}\left|\bar{S}\right|\bar{S}_{ij}\), \(h_{2,ij}=\bar{\bar{u}_{i}\bar{u}_{j}}-\bar{\bar{u}_{i}}\bar{\bar{u}_{j}}\), \(H_{1,ij}=-2\hat{\Delta}^{2}\left|\bar{S}\right|\bar{\bar{S}}_{ij}\), and \(H_{2,ij}=\bar{\bar{\bar{u}}_{i}\bar{\bar{u}}_{j}}-\bar{\bar{u}_{i}}\bar{\bar{u}_{j}}\). The hat stands for the test filtering at scale \(\hat{\Delta}=4\bar{\Delta}\).
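The least-squares solve behind these coefficient expressions amounts to inverting a \(2\times 2\) system built from volume-averaged tensor contractions. A minimal sketch is given below; it assumes the contracted tensors \(L_{ij}\), \(M_{ij}\) and \(N_{ij}\) have already been assembled as numpy arrays, which is an illustrative assumption rather than part of the paper.

```python
import numpy as np

def dmm_coefficients(L, M, N):
    """Solve <L M> = C1 <M M> + C2 <M N> and <L N> = C1 <M N> + C2 <N N> for (C1, C2).
    L, M, N hold all tensor components (e.g. shape (3, 3, nx, ny, nz)), so np.mean
    performs the volume average and the implied summation over i, j simultaneously."""
    A = np.array([[np.mean(M * M), np.mean(M * N)],
                  [np.mean(M * N), np.mean(N * N)]])
    b = np.array([np.mean(L * M), np.mean(L * N)])
    c1, c2 = np.linalg.solve(A, b)
    return c1, c2
```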
The unfiltered variables can be accurately recovered by the resolved filtered field using the iterative approximate deconvolution procedure, namely (Stolz & Adams 1999; Stolz _et al._ 2001)
\[u_{i}^{*}=AD^{N}\ (\bar{u}_{i})=\sum_{n=1}^{N}\left(I-G\right)^{n-1}\otimes\bar{u}_{i}, \tag{13}\]
where the asterisk represents the approximately unfiltered variables, \(AD^{N}\) is the abbreviation of the \(N\)-th order approximate deconvolution, \(I\) is the identity, and the symbol "\(\otimes\)" stands for the spatial convolution operator. For any two functions \(f\) and \(g\), \(f\otimes g=\int_{-\infty}^{+\infty}f\ ({\bf x}^{\prime})\ g({\bf x}-{\bf x}^{ \prime})\ d{\bf x}^{\prime}\). The unclosed SGS stress then can be recovered with the scale-similarity form by the approximate deconvolution method (ADM), given by (Bardina _et al._ 1980)
\[\tau_{ij}=\overline{u_{i}^{*}u_{j}^{*}}-\bar{u_{i}^{*}}\bar{u_{j}^{*}}. \tag{14}\]
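In practice, the truncated deconvolution series can be evaluated with the equivalent van Cittert iteration rather than by forming the series explicitly. The following minimal sketch is illustrative only; the callable `apply_filter` implementing \(G\) and the numpy-array fields are assumptions introduced for this example.

```python
import numpy as np

def approximate_deconvolution(u_bar, apply_filter, n_iter=5):
    """Van Cittert iteration equivalent to u* = sum_{n=1..N} (I - G)^(n-1) applied to u_bar."""
    u_star = u_bar.copy()
    for _ in range(n_iter - 1):
        # add back the residual between the known filtered field and the re-filtered estimate
        u_star = u_star + (u_bar - apply_filter(u_star))
    return u_star

def adm_stress(ui_bar, uj_bar, apply_filter, n_iter=5):
    """ADM scale-similarity stress: tau_ij = bar(u*_i u*_j) - bar(u*_i) bar(u*_j)."""
    ui_star = approximate_deconvolution(ui_bar, apply_filter, n_iter)
    uj_star = approximate_deconvolution(uj_bar, apply_filter, n_iter)
    return apply_filter(ui_star * uj_star) - apply_filter(ui_star) * apply_filter(uj_star)
```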
The number of iterations for the ADM model is recommended to be \(N\!=\!3\sim 5\) (Stolz _et al._ 2001). The accuracy of the ADM model becomes higher, while the numerical stability drops, as the number of iterations increases. Hence, \(N\!=\!5\) is selected in this paper. In order to maintain the numerical stability of the _a posteriori_ testings of LES [\(\partial\bar{u}_{i}/\partial t=\bar{R}_{i}\ (\bar{u}_{i},t)\)], Stolz _et al._ (2001) and Adams _et al._ (2004) introduced a secondary filtering relaxation term [\(\partial\bar{u}_{i}/\partial t=\bar{R}_{i}\ (\bar{u}_{i},t)\!+\!\bar{S}_{i}\ (\bar{u}_{i})\)], yielding
\[\bar{S}_{i}\ (\bar{u}_{i})=-\chi\left[I-G\otimes\sum_{n=1}^{N}\left(I-G \right)^{n-1}\right]\otimes\bar{u}_{i}, \tag{15}\]
where \(\chi\) is an empirical regularization coefficient to which the LES results were found to be approximately insensitive in previous studies; we choose \(\chi=0\) and \(1\) for comparison in this paper.
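Since \(G\otimes AD^{N}(\bar{u}_{i})\) is simply the re-filtered deconvolved field, the relaxation term reduces to \(-\chi\left(\bar{u}_{i}-\overline{u_{i}^{*}}\right)\); a minimal sketch (illustrative, reusing the deconvolved field \(u^{*}\) from the iteration sketched above and an assumed `apply_filter` callable for \(G\)) is:

```python
def adm_relaxation(u_bar, u_star, apply_filter, chi=1.0):
    """Secondary-filtering relaxation term of Eq. (15): -chi * (u_bar - filter(u_star)),
    where u_star is the AD^N deconvolved field and apply_filter implements the kernel G."""
    return -chi * (u_bar - apply_filter(u_star))
```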
## 4 Adjoint-based variational optimal mixed models (VOMM)
The mixed model is composed of the structural parts and the dissipative functional terms, and its general form can be written as (Sagaut _et al._ 2000)
\[\tau_{ij}\left(u_{i};\bar{\Delta}\right)=\sum_{n=1}^{N}C_{n}T_{ij}^{(n)}\ (\bar{u}_{i};\bar{\Delta}), \tag{16}\]
where \(T_{ij}^{(n)}\) (\(\bar{u}_{i};\bar{\Delta}\)) represents the \(n\)-th basis stress tensor. \(C_{n}\) (\(n=1,2,...,N\)) denotes the corresponding model coefficient and \(N\) is the number of basis stress tensors. The model coefficients are generally determined by the multivariate least-squares algorithm proposed by Germano _et al._ (1991) and Lilly (1992). Many previous studies have shown that the dynamic mixed models give rise to excessive energy dissipation in the transitional regions and underestimate the dissipation if the filter scales are sufficiently large, especially in situations of grid anisotropy (Meneveau & Katz 2000; Moser _et al._ 2021).
In recent years, data-driven high-accuracy SGS models have been successively proposed (Kutz 2017; Duraisamy _et al._ 2019). Xie _et al._ (2019_a_) proposed an artificial-neural-network-based mixed model which accurately recovers the unclosed SGS terms by estimating mixed model coefficients with local flow characteristics as inputs of the machine-learning strategy, yielding better predictions of LES statistics than the classical dynamic mixed model. The input features of the data-driven closure models are crucial for the accuracy of SGS models (Gamahara & Hattori 2017; Beck _et al._ 2019; Xie _et al._ 2019\(b\); Park & Choi 2021). Incorporating accurate structural parts, _i.e._, filtered velocity gradients at the neighboring stencils, turns out to improve the performance of data-driven SGS models effectively (Xie _et al._ 2019\(b\), 2020_a_). Moreover, the spatial flow structures at scales between \(\bar{\Delta}/2\) and \(2\bar{\Delta}\) are found to be essential for the SGS modeling of LES at the filter scale \(\bar{\Delta}\) (Xie _et al._ 2020_b_). The strategy of blind deconvolution with the artificial neural network was proposed to recover the unknown original unfiltered variables from the known filtered quantities with high accuracy (Maulik & San 2017; Maulik _et al._ 2019). A deconvolutional-artificial-neural-network (DANN) framework was further proposed to accurately reconstruct the SGS unclosed terms both in _a priori_ and _a posteriori_ analyses of isotropic turbulence (Yuan _et al._ 2020, 2021_a_), and successfully applied to the chemically reacting compressible turbulence (Teng _et al._ 2022). It was demonstrated that the DANN models embed the properties of symmetry and realizability conditions, which preserve the physical reliability of the DANN framework (Yuan _et al._ 2020). In order to enhance the interpretability of black-box machine-learning SGS models, a semi-explicit ANN-based spatial gradient model and constant-coefficient spatial gradient models were successively proposed via elaborate Taylor expansions of velocity gradients in the neighboring stencil locations (Wang _et al._ 2021, 2022_b_). The machine-learning-based SGS models trained by high-fidelity simulation data can be regarded as structural models with high _a priori_ accuracy, requiring additional indispensable dissipation to account for the spatial discretization effect and ensure the numerical stability in the _a posteriori_ studies of LES.
In addition to the machine-learning-assisted SGS models, some _a priori_ information about statistics of turbulence acquired from experimental data or DNS results, such as energy spectra, structure functions, and probability density functions of physical quantities, can be used to determine or correct the model coefficients of SGS models to improve the model accuracy greatly. This _a priori_ knowledge of turbulent statistical quantities can be dynamically assimilated into the closure models via the data-assimilation based approaches. Among these data-assimilation techniques, adjoint-based variational methods adopt the optimal control strategy to efficiently calculate all the gradients of cost functionals for the model coefficients by solving the forward governing equations and the backward adjoint equations (Bewley _et al._ 2001). Then, the model coefficients of SGS models are iteratively updated using the gradient-based optimization algorithm until the optimal values are obtained. The cost functionals measure the discrepancies of statistical quantities in turbulence between the LES results and measurements from the experimental or DNS data, which can greatly alleviate the impact of chaotic field observations on the performance of data assimilation. In this work, we resort to the state-of-the-art adjoint-based data-assimilation approaches to establish a general optimal SGS framework to determine model parameters adaptively for various grid resolutions of LES in different turbulence scenarios.
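The overall workflow just described can be summarized by the skeleton below. It is purely illustrative: the four callables are placeholders for the solver-specific steps and are not part of the paper, and the simple gradient-descent update stands in for any gradient-based optimizer.

```python
import numpy as np

def optimize_sgs_coefficients(C0, forward_les, backward_adjoint, adjoint_gradient, cost,
                              lr=0.1, n_iter=50, tol=1e-6):
    """Gradient-based optimization of SGS model coefficients with adjoint sensitivities.

    forward_les(C)              -> trajectory of resolved LES fields over [0, T]
    cost(traj)                  -> statistical discrepancy between LES and reference (fDNS) statistics
    backward_adjoint(C, traj)   -> adjoint fields integrated backward from zero terminal conditions
    adjoint_gradient(traj, adj) -> array of gradients dJ/dC_n (space-time inner products)
    """
    C = np.asarray(C0, dtype=float)
    J_prev = np.inf
    for _ in range(n_iter):
        traj = forward_les(C)                # forward sweep with the current coefficients
        J = cost(traj)                       # evaluate the statistical cost functional
        adj = backward_adjoint(C, traj)      # backward sweep of the (stabilized) adjoint equations
        grad = adjoint_gradient(traj, adj)   # one sensitivity per model coefficient
        C = C - lr * np.asarray(grad)        # simple gradient-descent update
        if abs(J_prev - J) < tol:            # stop once the cost functional has converged
            break
        J_prev = J
    return C
```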
### Adjoint LES equations and gradient evaluations with the mixed model
We optimize the model coefficients of the SGS closure model to minimize the statistical discrepancies between the LES calculations and the reference values acquired from the experimental or DNS data, which can be defined as the minimal optimization problem constrained by the governing equations (see Eqs. 2.4 and 2.5). The constrained optimization problem for the turbulent closure modeling is expressed as
\[\begin{array}{ll}\underset{C_{n}}{\text{min}}&\mathcal{J}\left[\phi\left( \bar{u}_{i};C_{n}\right),\phi\left(\bar{u}_{i}^{\text{ref}}\right)\right],\\ \text{s.t.}&R_{0}\left(\bar{u}_{i}\right)=\frac{\partial\bar{u}_{i}}{\partial x _{i}}=0,\\ \text{s.t.}&R_{i}\left(\bar{u}_{i},\bar{p}\right)=\frac{\partial\bar{u}_{i}}{ \partial t}+\frac{\partial\left(\bar{u}_{i}\bar{u}_{j}\right)}{\partial x_{j}} +\frac{\partial\bar{p}}{\partial x_{i}}-\nu\frac{\partial^{2}\bar{u}_{i}}{ \partial x_{j}\partial x_{j}}-\overline{\mathcal{F}}_{i}+\frac{\partial\tau_ {ij}}{\partial x_{j}}=0,\end{array} \tag{4.2}\]
where \(\mathcal{J}\left[\phi\left(\bar{u}_{i};C_{n}\right),\phi\left(\bar{u}_{i}^{ \text{ref}}\right)\right]=\int\limits_{0}^{T}\int\limits_{\Omega}J\left[\phi \left(\bar{u}_{i};C_{n},\mathbf{x},t\right),\phi\left(\bar{u}_{i}^{\text{ref} };\mathbf{x},t\right)\right]d\mathbf{x}dt\) denotes the total cost functions, \(J\left[\phi\left(\bar{u}_{i};C_{n},\mathbf{x},t\right),\phi\left(\bar{u}_{i}^ {\text{ref}};\mathbf{x},t\right)\right]\) is the discrepancy of statistical quantities \(\phi\) (_e.g._ kinetic energy spectra, structure functions, _etc._) between the LES results \(\bar{u}_{i}\) and reference values \(\bar{u}_{i}^{\text{ref}}\) (experimental or DNS data) at a certain state \(\left(C_{n},\mathbf{x},t\right)\). \(C_{n}\left(n=1,2,...,N\right)\) denotes model coefficients of the SGS mixed model \(\tau_{ij}=\sum\limits_{n=1}^{N}C_{n}T_{ij}^{\left(n\right)}\), and \(t\in\left[0,T\right]\) is the time horizon. Here, "s.t." stands for the abbreviation of "subject to". \(R_{0}\) and \(R_{i}\) (\(i=1,2,3\)) represent the LES continuity equation and momentum equations, respectively.
The Lagrangian functional \(\mathcal{L}\) is introduced to take the dynamics of LES variables \(\mathbf{\bar{v}}=[\bar{p},\bar{u}_{1},\bar{u}_{2},\bar{u}_{3}]^{T}\) into account and convert the constrained optimization (Eq. 4.2) into the unconstrained optimization problem, namely (Lewis _et al._, 2006)
\[\underset{C_{n}}{\text{min}}\ \mathcal{L}\left(\mathbf{\bar{v}};C_{n}\right), \text{ where }\mathcal{L}=\mathcal{J}\left[\phi\left(\mathbf{\bar{v}};C_{n}\right),\phi \left(\mathbf{\bar{v}}^{\text{ref}}\right)\right]-\sum\limits_{k=0}^{3}\int \limits_{0}^{T}\int\limits_{\Omega}R_{k}\left(\mathbf{\bar{v}};C_{n}\right) \cdot\bar{v}_{k}^{\dagger}d\mathbf{x}dt. \tag{4.3}\]
Here, \(\mathbf{\bar{v}}^{\dagger}=\left[\bar{p}^{\dagger},\bar{u}_{1}^{\dagger},\bar {u}_{2}^{\dagger},\bar{u}_{3}^{\dagger}\right]^{T}\) are the adjoint LES variables of \(\mathbf{\bar{v}}\), where \(\bar{p}^{\dagger}\) and \(\bar{u}_{i}^{\dagger}\) are the adjoint pressure and adjoint velocity, respectively. For the sake of brevity, the inner product of time and space is defined by \(\left\langle\mathbf{f},\mathbf{g}\right\rangle_{\mathbf{x},t}=\int\limits_{0 }^{T}\int\limits_{\Omega}\mathbf{f}\left(\mathbf{x},t\right)\cdot\mathbf{g} \left(\mathbf{x},t\right)d\mathbf{x}dt\), where \(\mathbf{f}\left(\mathbf{x},t\right)\) and \(\mathbf{g}\left(\mathbf{x},t\right)\) denote the arbitrary physical variables. The Lagrangian functional \(\mathcal{L}\) can be simplified as \(\mathcal{L}\left(\mathbf{\bar{v}};C_{n}\right)=\mathcal{J}\left(\mathbf{\bar{ v}};C_{n}\right)-\sum\limits_{k=0}^{3}\left\langle R_{k}\left(\mathbf{\bar{v}};C_{n} \right),\mathbf{\bar{v}}^{\dagger}\right\rangle_{\mathbf{x},t}\). The sensitivity of the Lagrangian functional \(\mathcal{L}\) can be derived by
\[\delta\mathcal{L}\left(\mathbf{\bar{v}};C_{n}\right) =\delta\mathcal{J}\left(\mathbf{\bar{v}};C_{n}\right)-\sum\limits_{k=0}^{3}\left\langle R_{k}\left(\delta\mathbf{\bar{v}};C_{n}\right),\mathbf{\bar{v}}^{\dagger}\right\rangle_{\mathbf{x},t}-\sum\limits_{k=0}^{3}\left\langle R_{k}\left(\mathbf{\bar{v}};\delta C_{n}\right),\mathbf{\bar{v}}^{\dagger}\right\rangle_{\mathbf{x},t}, \tag{4.4}\] \[=\delta\mathcal{J}\left(\mathbf{\bar{v}};C_{n}\right)-\sum\limits_{k=0}^{3}\left\langle\frac{\partial R_{k}\left(\mathbf{\bar{v}};C_{n}\right)}{\partial\mathbf{\bar{v}}}\cdot\delta\mathbf{\bar{v}},\mathbf{\bar{v}}^{\dagger}\right\rangle_{\mathbf{x},t}-\sum\limits_{k=0}^{3}\left\langle\frac{\partial R_{k}\left(\mathbf{\bar{v}};C_{n}\right)}{\partial C_{n}}\cdot\delta C_{n},\mathbf{\bar{v}}^{\dagger}\right\rangle_{\mathbf{x},t},\]
where \(\partial R_{k}/\partial\mathbf{\bar{v}}\) and \(\partial R_{k}/\partial C_{n}\) are the tangent operators of the governing equations \(R_{k}\quad\left(k=0,1,2,3\right)\) for the variables \(\mathbf{\bar{v}}\) and parameters \(C_{n}\) with the perturbation field \(\delta\mathbf{\bar{v}}=\mathbf{\bar{v}}\left(C_{n}+\delta C_{n}\right)-\mathbf{ \bar{v}}\left(C_{n}\right),\quad n\in\left\{1,2,...,N\right\}\). The first term in Eq. 4.4 is the sensitivity of the cost functional \(\mathcal{J}\) and calculated as the Gateaux-Frechet derivative (Bewley _et al._, 2001)
of \(\mathcal{J}\) at \(C_{n}\) in the direction \(\delta C_{n}\), namely
\[\delta\mathcal{J}\left(\mathbf{\bar{v}};\delta C_{n}\right)=\lim_{\varepsilon\to 0}\frac{d}{d\varepsilon}\mathcal{J}\left(\mathbf{\bar{v}}\left(C_{n}+\varepsilon\delta C_{n}\right)\right)=\left\langle\frac{\partial J}{\partial\mathbf{\bar{v}}},\delta\mathbf{\bar{v}}\right\rangle_{\mathbf{x},t}. \tag{4.5}\]
The adjoint identity (Bewley _et al._ 2001) can be obtained via integration by parts, given by
\[\left\langle\mathbf{R}\left(\mathbf{\bar{v}}\right),\mathbf{\bar{v}}^{\dagger}\right\rangle_{\mathbf{x},t}=\left\langle\mathbf{\bar{v}},\mathbf{R}^{\dagger}\left(\mathbf{\bar{v}}^{\dagger}\right)\right\rangle_{\mathbf{x},t}+BT, \tag{4.6}\]
where \(\mathbf{R}\left(\mathbf{\bar{v}}\right)=\partial\mathbf{\bar{v}}/\partial t+\partial\mathbf{\bar{F}}/\partial\mathbf{x}=0\) denotes the governing partial differential equations with the associated adjoint operator \(\mathbf{R}^{\dagger}\left(\mathbf{\bar{v}}^{\dagger}\right)\), \(\mathbf{\bar{F}}\) denotes the fluxes, and \(\Gamma\) is the boundary of the domain \(\Omega\). Here, \(BT\) represents the boundary and temporal integral terms, which determine the boundary and terminal conditions of the adjoint equations to give \(BT=0\). \(\left\langle\mathbf{f},\mathbf{g}\right\rangle_{t}=\int\limits_{0}^{T}\mathbf{f}\left(\mathbf{x},t\right)\cdot\mathbf{g}\left(\mathbf{x},t\right)dt\) and \(\left\langle\mathbf{f},\mathbf{g}\right\rangle_{\mathbf{x}}=\int\limits_{\Omega}\mathbf{f}\left(\mathbf{x},t\right)\cdot\mathbf{g}\left(\mathbf{x},t\right)d\mathbf{x}\) denote the temporal and spatial inner products, respectively. The second term in Eq. 4.4 can be expressed with the adjoint identity, namely (Bewley _et al._ 2001; Delport _et al._ 2009, 2011)
\[\left\langle\frac{\partial R_{k}}{\partial\mathbf{\bar{v}}}\cdot\delta \mathbf{\bar{v}},\mathbf{\bar{v}}^{\dagger}\right\rangle_{\mathbf{x},t}= \left\langle\delta\mathbf{\bar{v}},\left(\frac{\partial R_{k}}{\partial\mathbf{ \bar{v}}}\right)^{\dagger}\cdot\mathbf{\bar{v}}^{\dagger}\right\rangle_{ \mathbf{x},t}+BT, \tag{4.7}\]
where \(\left(\partial R_{k}/\partial\mathbf{\bar{v}}\right)^{\dagger}\) is the adjoint operator of the LES tangent Jacobian tensor \(\partial R_{k}/\partial\mathbf{\bar{v}}\), \(\left(k=0,1,2,3\right)\). Substitute the Frechet derivative \(\delta\mathcal{J}\) (Eq. 4.5) and the adjoint identity (Eq. 4.7) into the sensitivity of the Lagrangian functional \(\mathcal{L}\) (Eq. 4.4), and we get
\[\delta\mathcal{L}\left(\mathbf{\bar{v}};C_{n}\right)=\left\langle\frac{ \partial J}{\partial\mathbf{\bar{v}}}-\sum\limits_{k=0}^{3}\left(\frac{ \partial R_{k}}{\partial\mathbf{\bar{v}}}\right)^{\dagger}\cdot\mathbf{\bar{v} }^{\dagger},\delta\mathbf{\bar{v}}\right\rangle_{\mathbf{x},t}-\sum\limits_{k= 0}^{3}\left\langle\frac{\partial R_{k}\left(\mathbf{\bar{v}};C_{n}\right)}{ \partial C_{n}}\cdot\delta C_{n},\mathbf{\bar{v}}^{\dagger}\right\rangle_{ \mathbf{x},t}-BT, \tag{4.8}\]
To avoid calculating the perturbation field \(\delta\mathbf{\bar{v}}\) in the first term of Eq. 4.8, the inner product should be equal to \(0\) and the corresponding adjoint LES equations can be derived by
\[\sum\limits_{k=0}^{3}\left(\frac{\partial R_{k}}{\partial\mathbf{\bar{v}}} \right)^{\dagger}\cdot\mathbf{\bar{v}}^{\dagger}-\frac{\partial J}{\partial \mathbf{\bar{v}}}=0. \tag{4.9}\]
Substitute the specific forms of the LES equations \(R_{k}\) (\(k=0,1,2,3\)) (see Eq. 4.2), and the adjoint LES equations can be written as
\[\frac{\partial\bar{u}_{i}^{\dagger}}{\partial x_{i}}=0, \tag{4.10}\]
\[\frac{\partial\bar{u}_{i}^{\dagger}}{\partial t}+\left(\frac{\partial\bar{u}_{i}^{\dagger}}{\partial x_{j}}+\frac{\partial\bar{u}_{j}^{\dagger}}{\partial x_{i}}\right)\bar{u}_{j}+\frac{\partial\bar{p}^{\dagger}}{\partial x_{i}}+\nu\frac{\partial^{2}\bar{u}_{i}^{\dagger}}{\partial x_{j}\partial x_{j}}+\frac{\partial\tau_{ij}^{\dagger}}{\partial x_{j}}+\frac{\partial J}{\partial\bar{u}_{i}}=0, \tag{4.11}\]
where \(\tau_{ij}^{\dagger}=\sum\limits_{n=1}^{N}C_{n}T_{ij}^{(n),\dagger}\) denotes the adjoint SGS mixed model and \(T_{ij}^{(n),\dagger}\) is the \(n\)-th adjoint basis stress tensor. The detailed derivation of the adjoint LES equations can refer to the Appendix A. The terminal conditions of the adjoint LES equations is determined by the last term of adjoint identity (Eq. 4.6), namely
\[\left\langle\mathbf{\bar{v}}^{\dagger},\delta\mathbf{\bar{v}}\right\rangle_{ \mathbf{x}}\big{|}_{0}^{T}=\left\langle\mathbf{\bar{v}}^{\dagger}\left(T \right),\delta\mathbf{\bar{v}}\left(T\right)\right\rangle_{\mathbf{x}}-\left \langle\mathbf{\bar{v}}^{\dagger}\left(0\right),\delta\mathbf{\bar{v}}\left(0 \right)\right\rangle_{\mathbf{x}}=\left\langle\mathbf{\bar{v}}^{\dagger}\left(T \right),\delta\mathbf{\bar{v}}\left(T\right)\right\rangle_{\mathbf{x}}, \tag{4.12}\]
where \(\delta\mathbf{\bar{v}}\left(0\right)=0\), since the unperturbed initial LES field is exactly given by the filtered DNS (fDNS) data. The terminal conditions \(\mathbf{\bar{v}}^{\dagger}\left(T\right)=\left[\bar{u}_{i}^{\dagger}\left(T \right),\bar{p}^{\dagger}\left(T\right)\right]^{T}=\mathbf{0}\) make the temporal integral
terms \(\left[\left\langle\delta\bar{\mathbf{v}},\bar{\mathbf{v}}^{\dagger}\right\rangle_{ \mathbf{x}}\right]_{0}^{T}\) equal to zero and the calculation of the terminal perturbation \(\delta\bar{\mathbf{v}}\left(T\right)\) is obviated. The terminal conditions (\(\bar{u}_{i}^{\dagger}\left(T\right)=0\), \(\bar{p}^{\dagger}\left(T\right)=0\)) and boundary conditions of the adjoint LES equations are identified by setting \(BT=0\) in Eq. 4.8. The sensitivity of the Lagrangian functional \(\mathcal{L}\) can be further expressed as
\[\delta\mathcal{L}\left(\bar{\mathbf{v}};C_{n}\right)=-\sum_{k=0}^{3}\left\langle \frac{\partial R_{k}\left(\bar{\mathbf{v}};C_{n}\right)}{\partial C_{n}}\cdot \delta C_{n},\bar{\mathbf{v}}^{\dagger}\right\rangle_{\mathbf{x},t}, \tag{4.13}\]
where \(\partial R_{0}/\partial C_{n}=0\), and \(\partial R_{i}/\partial C_{n}=\frac{\partial}{\partial C_{n}}\left(\frac{ \partial\tau_{ij}}{\partial x_{j}}\right)=\partial T_{ij}^{\left(n\right)}/ \partial x_{j}\) (\(n=1,2,...,N\)) denotes the \(n\)-th SGS basis force. Once the LES equations (Eqs. 2.4 and 2.5) temporally advances forward in the time horizon \(t\in[0,T]\) and the adjoint LES equations (Eqs. 4.10 and 4.11) are integrated backward with zero terminal conditions, the gradients of Lagrangian functional for the SGS model coefficients can be calculated efficiently by
\[\frac{\partial\mathcal{L}}{\partial C_{n}}=\frac{\delta\mathcal{L}\left(\bar {\mathbf{v}};C_{n}\right)}{\delta C_{n}}=-\left\langle\frac{\partial T_{ij}^{ \left(n\right)}}{\partial x_{j}},\bar{u}_{i}^{\dagger}\right\rangle_{\mathbf{ x},t},\ \left(n=1,2,...,N\right). \tag{4.14}\]
The adjoint-based gradient evaluations are independent of the parameter perturbations \(\delta C_{n}\left(n=1,2,...,N\right)\), which makes them very efficient compared to the finite-difference algorithm and forward sensitivity analysis, which require at least \(N\) parameter perturbations and \(N+1\) LES calculations for each optimization iteration (Chandramouli _et al._, 2020; Sirignano _et al._, 2020; MacArt _et al._, 2021).
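Discretely, the space-time inner product of Eq. 4.14 can be accumulated while the adjoint field is available. A minimal sketch is given below; the time series of the basis forces and adjoint velocities, the grid spacing `dx` and the time step `dt` are assumed inputs from the forward and backward sweeps, introduced only for illustration.

```python
import numpy as np

def accumulate_gradient(basis_force_series, adjoint_velocity_series, dx, dt):
    """Discrete approximation of dL/dC_n = -<dT^(n)_ij/dx_j, u_dagger_i>_{x,t} (Eq. 4.14).

    basis_force_series[t][i]      : i-th component of the n-th SGS basis force at time step t
    adjoint_velocity_series[t][i] : i-th adjoint velocity component at the same time step
    """
    grad = 0.0
    cell_volume = dx**3
    for force, u_adj in zip(basis_force_series, adjoint_velocity_series):
        # sum over the three components and all grid points at this time instant
        grad -= sum(np.sum(force[i] * u_adj[i]) for i in range(3)) * cell_volume * dt
    return grad
```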
### Energy budget analysis of the adjoint LES equations
Before proceeding to the introduction of the variational optimal mixed models, it is essential to analyze the energy budget of the adjoint LES equations. The adjoint LES kinetic energy (\(\bar{\mathcal{E}}^{\dagger}=\bar{u}_{i}^{\dagger}\bar{u}_{i}^{\dagger}/2\)) equation is derived through multiplying the adjoint velocity \(\bar{u}_{i}^{\dagger}\) on both sides of the adjoint LES momentum equations (Eq. 4.11), namely
\[\frac{\partial\bar{\mathcal{E}}^{\dagger}}{\partial t}+\frac{\partial\bar{ \mathcal{P}}_{j}}{\partial x_{j}}=\bar{u}_{i}^{\dagger}\bar{S}_{ij}\bar{u}_{j }^{\dagger}+\bar{D}^{\dagger}-\bar{\Pi}^{\dagger}-\bar{J}^{\dagger}, \tag{4.15}\]
where \(\bar{u}_{i}^{\dagger}\bar{S}_{ij}\bar{u}_{j}^{\dagger}\) denotes the adjoint energy production term due to the shear strain rate \(\bar{S}_{ij}\). Here, \(\bar{\mathcal{P}}_{j}\) is the adjoint spatial transport flux, \(\bar{D}\) is the adjoint viscous dissipation term, \(\bar{\Pi}^{\dagger}\) is the adjoint variable of the SGS energy flux \(\bar{\Pi}=-\tau_{ij}\bar{S}_{ij}\) and \(\bar{J}^{\dagger}\) is the energy injected from the discrepancy between LES results and reference data. These terms are respectively defined by
\[\bar{\mathcal{P}}_{j}=\bar{\mathcal{E}}^{\dagger}\bar{u}_{j}+\left(\bar{p}^{ \dagger}+\bar{u}_{i}\bar{u}_{i}^{\dagger}\right)\bar{u}_{j}^{\dagger}+\left( \nu\frac{\partial\bar{u}_{i}^{\dagger}}{\partial x_{j}}+\tau_{ij}^{\dagger} \right)\bar{u}_{i}^{\dagger}, \tag{4.16}\]
\[\bar{D}=\nu\frac{\partial\bar{u}_{i}^{\dagger}}{\partial x_{j}}\frac{\partial \bar{u}_{i}^{\dagger}}{\partial x_{j}}, \tag{4.17}\]
\[\bar{\Pi}^{\dagger}=-\tau_{ij}^{\dagger}\bar{S}_{ij}^{\dagger}, \tag{4.18}\]
\[\bar{J}^{\dagger}=\bar{u}_{i}^{\dagger}\frac{\partial J}{\partial\bar{u}_{i}}, \tag{4.19}\]
where \(\bar{S}_{ij}^{\dagger}=\left(\partial\bar{u}_{i}^{\dagger}/\partial x_{j}+\partial \bar{u}_{j}^{\dagger}/\partial x_{i}\right)/2\) represents the adjoint strain-rate tensor. The backward evolution of the adjoint volume-averaged kinetic energy can be written as
\[-\frac{\partial\left\langle\bar{\mathcal{E}}^{\dagger}\right\rangle}{\partial t }=-\left\langle\bar{u}_{i}^{\dagger}\bar{S}_{ij}\bar{u}_{j}^{\dagger}\right\rangle -\left\langle\bar{D}^{\dagger}\right\rangle+\left\langle\bar{\Pi}^{\dagger} \right\rangle+\left\langle\bar{J}^{\dagger}\right\rangle, \tag{4.20}\]
where \(\left\langle\bar{D}^{\dagger}\right\rangle\) is a pure dissipation term that drains the adjoint energy. \(\left\langle\bar{\Pi}^{\dagger}\right\rangle\) denotes the adjoint SGS energy transport term, which represents the forward adjoint energy transfer from the large scales to the unresolved residual scales if \(\left\langle\bar{\Pi}^{\dagger}\right\rangle>0\), and otherwise stands for the adjoint SGS energy backscatter. The accurate reconstruction of \(\left\langle\bar{\Pi}^{\dagger}\right\rangle\) is crucial for the SGS modeling of LES and for the gradient evaluations with respect to the SGS model coefficients. \(\left\langle\bar{J}^{\dagger}\right\rangle\) is the loss-induced adjoint energy injection term. The viscous dissipation \(\left\langle\bar{D}^{\dagger}\right\rangle\) enhances the numerical stability of the adjoint LES field, while \(\left\langle\bar{J}^{\dagger}\right\rangle\), the adjoint energy production due to the discrepancy between the LES evaluation and the reference data, dominates the accuracy of the sensitivity calculations. The large-scale strain-rate tensor \(\bar{S}_{ij}\) can be decomposed into its principal components using the eigendecomposition approach, such that (Wang & Gao, 2013)
\[\bar{S}_{ij} =\lambda_{1}q_{i}^{(1)}q_{j}^{(1)}+\lambda_{2}q_{i}^{(2)}q_{j}^{( 2)}+\lambda_{3}q_{i}^{(3)}q_{j}^{(3)}=\sum_{k=1}^{3}\lambda_{k}q_{i}^{(k)}q_{j }^{(k)}, \tag{4.21}\]
where \(\lambda_{1}\), \(\lambda_{2}\) and \(\lambda_{3}\) are the eigenvalues of the shear strain rate, with \(q_{i}^{(1)}\), \(q_{i}^{(2)}\) and \(q_{i}^{(3)}\) being the associated eigenvectors. Here, \(\lambda_{1}+\lambda_{2}+\lambda_{3}=0\) for the trace-free strain rate \(\bar{S}_{ij}\) in the incompressible turbulent flows. Hence, the quadratic term \(-\left\langle\bar{u}_{i}^{\dagger}\bar{S}_{ij}\bar{u}_{j}^{\dagger}\right\rangle\) in Eq. 4.20 is further expressed as
\[-\left\langle\bar{u}_{i}^{\dagger}\bar{S}_{ij}\bar{u}_{j}^{\dagger}\right\rangle =-\sum_{k=1}^{3}\left\langle\lambda_{k}\left(q_{i}^{(k)}\bar{u}_{i}^{\dagger} \right)\left(q_{j}^{(k)}\bar{u}_{j}^{\dagger}\right)\right\rangle=-\sum_{k=1} ^{3}\left\langle\lambda_{k}\left(q_{i}^{(k)}\bar{u}_{i}^{\dagger}\right)^{2} \right\rangle. \tag{4.22}\]
The sign of the eigenvalues \(\lambda_{k}\ (k=1,2,3)\) determines whether the contribution of the quadratic term \(-\left\langle\bar{u}_{i}^{\dagger}\bar{S}_{ij}\bar{u}_{j}^{\dagger}\right\rangle\) to the adjoint energy is productive or dissipative. The quadratic terms with negative eigenvalues of the shear strain rate produce a positive adjoint energy production, while those with positive eigenvalues drain the adjoint energy. In previous studies of chaotic adjoint methods, the adjoint-based gradients are found to grow exponentially with time and finally diverge numerically over a long time horizon for chaotic flows (Wang & Gao, 2013; Ashley _et al._, 2019; Garai & Murman, 2021). The terms \(\left\langle\bar{D}^{\dagger}\right\rangle\), \(\left\langle\bar{\Pi}^{\dagger}\right\rangle\) and \(\left\langle\bar{J}^{\dagger}\right\rangle\) in the volume-averaged adjoint energy equation (Eq. 4.20) are less likely to cause the exponential growth of the adjoint energy, since the adjoint energy term \(\left\langle\bar{\mathcal{E}}^{\dagger}\right\rangle\) does not appear explicitly in these terms. It can be further shown that the quadratic term \(-\left\langle\bar{u}_{i}^{\dagger}\bar{S}_{ij}\bar{u}_{j}^{\dagger}\right\rangle\) plays the dominant role in the exponential growth of the adjoint variables. We apply the Cauchy-Schwarz inequality to the inner product terms in Eq. 4.22, such that (Talnikar _et al._, 2017)
\[\left(q_{i}^{(k)}\bar{u}_{i}^{\dagger}\right)^{2}\leqslant\left[q_{i}^{(k)}q_{ i}^{(k)}\right]\left(\bar{u}_{i}^{\dagger}\bar{u}_{i}^{\dagger}\right)=2 \left\|\mathbf{q}^{(k)}\right\|\overline{\mathcal{E}}^{\dagger}~{}~{}~{}(k=1,2,3)\,, \tag{4.23}\]
where "\(\|\cdot\|\)" denotes the L2 norm of the vectors. For the quadratic terms with negative eigenvalues (adjoint energy production), the evolution of the adjoint energy can be approximated using the leading principal vectors as
\[-\frac{\partial\left\langle\overline{\mathcal{E}}^{\dagger}\right\rangle}{ \partial t}\approx 2|\lambda|_{\infty}\|\mathbf{q}\|_{\infty}\left\langle \overline{\mathcal{E}}^{\dagger}\right\rangle, \tag{4.24}\]
where \(|\lambda|_{\infty}=\max\limits_{\Omega}\left\{-\lambda_{1},-\lambda_{2},-\lambda_{ 3}\right\}\) denotes the magnitude of the leading negative eigenvalue in the entire domain \(\Omega\) and \(\|\mathbf{q}\|_{\infty}\) represents the corresponding eigenvector magnitude. The adjoint energy is then calculated by the backward time interval, namely
\[\left\langle\overline{\mathcal{E}}^{\dagger}\right\rangle\left(t\right)\approx \left\langle\overline{\mathcal{E}}^{\dagger}\right\rangle\left(T\right)\exp \left[2|\lambda|_{\infty}\|\mathbf{q}\|_{\infty}\left(T-t\right)\right],\ \ t\in\left[0,T\right]\,. \tag{4.25}\]
The quadratic term \(-\left\langle\bar{u}_{i}^{\dagger}\bar{S}_{ij}\bar{u}_{j}^{\dagger}\right\rangle\) with negative eigenvalues makes the adjoint energy grow exponentially in time and become numerically unstable if it cannot be suppressed by the adjoint dissipation over a long time horizon \(t\in\left[0,T\right]\). In order to stabilize the adjoint equations during every iteration, an additional symmetric tensor \(\bar{S}_{ij}^{a}\) (Ashley _et al._, 2019; Garai & Murman, 2021) is introduced to maintain the numerical stability of the adjoint momentum equations (Eq. 4.11), and the stabilized adjoint momentum equations are then expressed as
\[\frac{\partial\bar{u}_{i}^{\dagger}}{\partial t}+\left(\frac{\partial\bar{u}_ {i}^{\dagger}}{\partial x_{j}}+\frac{\partial\bar{u}_{j}^{\dagger}}{\partial x _{i}}\right)\bar{u}_{j}+\bar{S}_{ij}^{a}\bar{u}_{j}^{\dagger}+\frac{\partial \bar{p}^{\dagger}}{\partial x_{i}}+\nu\frac{\partial^{2}\bar{u}_{i}^{\dagger} }{\partial x_{j}\partial x_{j}}+\frac{\partial\tau_{ij}^{\dagger}}{\partial x _{j}}+\frac{\partial J}{\partial\bar{u}_{i}}=0. \tag{4.26}\]
Consequently, the stabilized adjoint kinetic energy equation is written by
\[\frac{\partial\overline{\mathcal{E}}^{\dagger}}{\partial t}+\frac{\partial\overline{\mathcal{P}}_{j}}{\partial x_{j}}=\bar{u}_{i}^{\dagger}\left(\bar{S}_{ij}-\bar{S}_{ij}^{a}\right)\bar{u}_{j}^{\dagger}+\bar{D}^{\dagger}-\bar{\Pi}^{\dagger}-\bar{J}^{\dagger}. \tag{4.27}\]
Here, the quadratic term with \(\bar{u}_{i}^{\dagger}\bar{S}_{ij}\bar{u}_{j}^{\dagger}<0\ \left(-\bar{u}_{i}^{\dagger}\bar{S}_{ij}\bar{u}_{j}^{\dagger}>0\right)\) is responsible for the exponential growth of the adjoint energy, and the minimal artificial symmetric tensor is added to keep the adjoint variables numerically stable when the adjoint LES equations are advanced backward in time. The artificial symmetric tensor \(\bar{S}_{ij}^{a}\) can be obtained from the suboptimal minimization problem (Ashley _et al._, 2019; Garai & Murman, 2021), such that
\[\begin{array}{ll}\min\limits_{\bar{S}_{ij}^{a}}&\frac{1}{2}\bar{S}_{ij}^{a} \bar{S}_{ij}^{a},\\ s.t.&\bar{u}_{i}^{\dagger}\left(\bar{S}_{ij}-\bar{S}_{ij}^{a}\right)\bar{u}_{j} ^{\dagger}\geqslant 0.\end{array} \tag{4.28}\]
We use the sequential quadratic programming (SQP) approach (Boggs & Tolle, 1995; Chung & Freund, 2022) to efficiently solve the suboptimal problem, and the augmented Lagrangian functional \(\mathbb{L}\) is applied to the constrained minimization problem, namely
\[\mathbb{L}=\frac{1}{2}\bar{S}_{ij}^{a}\bar{S}_{ij}^{a}+\lambda\left[\bar{u}_{i }^{\dagger}\left(\bar{S}_{ij}-\bar{S}_{ij}^{a}\right)\bar{u}_{j}^{\dagger} \right], \tag{4.29}\]
where \(\lambda\) is the Lagrangian multiplier. The Karush-Kuhn-Tucker (KKT) optimality conditions (Kuhn & Tucker, 1951; Blonigan & Wang, 2018) are obtained by taking the derivatives of the augmented Lagrangian functional with respect to the optimization variables (\(\bar{S}_{ij}^{a}\) and \(\lambda\)), yielding
\[\frac{\partial\mathbb{L}}{\partial\bar{S}_{ij}^{a}}=\bar{S}_{ij}^{a}-\lambda \left(\bar{u}_{i}^{\dagger}\bar{u}_{j}^{\dagger}\right)=0\ \Rightarrow\ \bar{S}_{ij}^{a}=\lambda\left(\bar{u}_{i}^{\dagger}\bar{u}_{j}^{\dagger} \right), \tag{4.30}\]
\[\frac{\partial\mathbb{L}}{\partial\lambda}=\bar{u}_{i}^{\dagger}\left(\bar{S} _{ij}-\bar{S}_{ij}^{a}\right)\bar{u}_{j}^{\dagger}=0\ \Rightarrow\ \bar{u}_{i}^{\dagger}\bar{S}_{ij}\bar{u}_{j}^{\dagger}=\bar{u}_{i}^{\dagger} \bar{S}_{ij}^{a}\bar{u}_{j}^{\dagger}. \tag{4.31}\]
Multiplying Eq. 4.30 by \(\bar{u}_{i}^{\dagger}\) on the left and by \(\bar{u}_{j}^{\dagger}\) on the right, and substituting the result into Eq. 4.31, the Lagrangian multiplier \(\lambda\) is obtained as
\[\lambda=\frac{\bar{u}_{i}^{\dagger}\bar{S}_{ij}\bar{u}_{j}^{\dagger}}{\left(\bar{u}_{k}^{\dagger}\bar{u}_{k}^{\dagger}\right)^{2}}=\frac{\bar{u}_{i}^{\dagger}\bar{S}_{ij}\bar{u}_{j}^{\dagger}}{4\left(\overline{\mathcal{E}}^{\dagger}\right)^{2}}. \tag{4.32}\]
The minimal artificial symmetric tensor \(\bar{S}^{a}_{ij}\) can be obtained by substituting Eq. 4.32 into Eq. 4.30, yielding

\[\bar{S}^{a}_{ij}=\left\{\begin{array}{ll}\dfrac{\bar{u}^{\dagger}_{m}\bar{S}_{mn}\bar{u}^{\dagger}_{n}}{4\left(\overline{\mathcal{E}}^{\dagger}\right)^{2}}\bar{u}^{\dagger}_{i}\bar{u}^{\dagger}_{j},&\text{if }\bar{u}^{\dagger}_{i}\bar{S}_{ij}\bar{u}^{\dagger}_{j}<0,\\ 0,&\text{if }\bar{u}^{\dagger}_{i}\bar{S}_{ij}\bar{u}^{\dagger}_{j}\geqslant 0.\end{array}\right. \tag{4.33}\]
The artificial momentum term \(\bar{S}^{a}_{ij}\bar{u}^{\dagger}_{j}\) that enters the stabilized adjoint momentum equations (Eq. 4.26) therefore reads

\[\bar{S}^{a}_{ij}\bar{u}^{\dagger}_{j}=\left\{\begin{array}{ll}\dfrac{\bar{u}^{\dagger}_{m}\bar{S}_{mn}\bar{u}^{\dagger}_{n}}{2\overline{\mathcal{E}}^{\dagger}}\bar{u}^{\dagger}_{i},&\text{if }\bar{u}^{\dagger}_{i}\bar{S}_{ij}\bar{u}^{\dagger}_{j}<0,\\ 0,&\text{if }\bar{u}^{\dagger}_{i}\bar{S}_{ij}\bar{u}^{\dagger}_{j}\geqslant 0.\end{array}\right. \tag{4.34}\]
The minimal stabilization term \(\bar{S}^{a}_{ij}\bar{u}^{\dagger}_{j}\) efficiently maintains the numerical stability of the LES adjoint variables in long-term chaotic turbulent calculations, without degrading the accuracy of the adjoint-based gradient.
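Because the stabilization acts pointwise, it can be evaluated directly on the LES grid. The following is a minimal NumPy sketch of Eqs. 4.33-4.34, assuming the adjoint velocity array `u_adj` and the filtered strain-rate tensor array `S` (hypothetical names, not part of the original formulation) are already available; it is meant only to illustrate the conditional structure of the term, not the actual solver implementation.

```python
import numpy as np

def stabilization_term(u_adj, S, eps=1e-30):
    """Artificial adjoint momentum term S^a_ij u^dagger_j (Eqs. 4.33-4.34).

    u_adj : adjoint velocity, shape (3, nx, ny, nz)
    S     : filtered strain-rate tensor, shape (3, 3, nx, ny, nz)
    Returns an array of shape (3, nx, ny, nz); the term is active only
    where the quadratic production u^dagger_i S_ij u^dagger_j is negative.
    """
    # pointwise adjoint kinetic energy E^dagger = u^dagger_i u^dagger_i / 2
    energy = 0.5 * np.einsum('i...,i...->...', u_adj, u_adj)
    # quadratic term u^dagger_i S_ij u^dagger_j
    usu = np.einsum('i...,ij...,j...->...', u_adj, S, u_adj)
    # S^a_ij u^dagger_j = usu / (2 E^dagger) * u^dagger_i where usu < 0, else 0
    factor = np.where(usu < 0.0, usu / (2.0 * energy + eps), 0.0)
    return factor * u_adj
```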
### Adjoint-based variational optimal mixed models (VOMM)
In this research, we select the mixed model composed of the Smagorinsky dissipative term (Eq. 3.1) and the approximate deconvolution model (ADM, Eq. 3.6) in the scale-similarity form, expressed as (Sagaut, 2006)
\[\tau_{ij}=C_{1}\left(\tilde{\Delta}^{2}|\tilde{S}|\tilde{S}_{ij}\right)+C_{2} \left(\overline{u^{*}_{i}u^{*}_{j}}-\overline{u^{*}_{i}}\ \overline{u^{*}_{j}}\right), \tag{4.35}\]
where \(u^{*}_{i}\) denotes the approximate unfiltered velocity recovered by the iterative van Cittert procedure (Eq. 3.6). In previous studies (Yuan _et al._, 2020), we conducted error analyses showing that deconvolutional-type SGS models in the scale-similarity form (Eq. 3.7) perform better than those in the conventional direct-modeling form (\(\tau_{ij}=\overline{u^{*}_{i}u^{*}_{j}}-\bar{u}_{i}\bar{u}_{j}\)), while satisfying the symmetry and realizability conditions. The model coefficients \(C_{1}\) and \(C_{2}\) are optimally identified by minimizing the discrepancy between statistical quantities calculated from the LES results and those measured from the filtered DNS (fDNS) data. The selected statistics should sufficiently quantify the multiscale transport behaviour of turbulence while remaining practical to measure. The SGS stress \(\tau_{ij}\) and SGS force \(\partial\tau_{ij}/\partial x_{j}\) are intermediate variables,
Figure 1: Schematic diagram of the adjoint-based variational optimal mixed models.
and their statistics are relatively difficult to obtain through actual observations. In contrast, velocity statistics are more convenient to measure, and the velocity spectrum clearly quantifies the distribution of turbulent kinetic energy across scales. SGS modeling is especially concerned with the accurate reconstruction of the small scales near the filter width; therefore, we select the dissipation spectrum as the optimization statistic \(\phi\left(\bar{u}_{i}\right)\) to increase the weight of the small scales, namely (Pope 2000)
\[\phi\left(\bar{u}_{i},k,t\right)=D\left(k,t\right)=\int\limits_{\mathbf{k}}\nu k ^{2}\bar{v}_{i}^{*}\left(\mathbf{k},t\right)\bar{v}_{i}\left(\mathbf{k},t \right)\delta\left(\left|\mathbf{k}\right|-k\right)d\mathbf{k}, \tag{4.36}\]
where \(\delta\left(\cdot\right)\) denotes the Dirac delta function and the star symbol represents the complex conjugate. \(k\) and \(\mathbf{k}\) stand for the wavenumber magnitude and the wavenumber vector, respectively. Here, \(\bar{v}_{j}\left(\mathbf{k},t\right)=\mathbb{F}\left\{\bar{u}_{j}\left(\mathbf{x},t\right)\right\}=\sum\limits_{\mathbf{x}}\bar{u}_{j}\left(\mathbf{x},t\right)e^{-i\mathbf{k}\cdot\mathbf{x}}\) is the \(j\)-th velocity component in Fourier space, where \(\mathbb{F}\left\{\cdot\right\}\) represents the 3D Fourier transform, and \(i\) is the imaginary unit with \(i^{2}=-1\).
The optimization problem constrained by the governing equations for the SGS parameters \(C_{1}\) and \(C_{2}\) is defined in Eq. 4.2, where the cost functional for the dissipation spectrum \(D\left(k,t\right)\) is given by
\[\mathcal{J}\left(\phi,\phi^{\text{fDNS}}\right)=\int\limits_{0}^{T}\sum_{k=1} ^{k_{\text{max}}}J\left[D\left(k,t\right),D^{\text{fDNS}}\left(k,t\right) \right]dt, \tag{4.37}\]
where \(k_{\text{max}}=N_{\text{LES}}/3\) is the effective maximum wavenumber, \(N_{\text{LES}}\) is the number of LES grid points in each direction, and the discrepancy function \(J\left[D\left(k,t\right),D^{\text{fDNS}}\left(k,t\right)\right]=\left[D\left(k,t\right)-D^{\text{fDNS}}\left(k,t\right)\right]^{2}\) is the squared prediction error.
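As an illustration of how this objective can be assembled in practice, the sketch below computes a shell-averaged dissipation spectrum and the corresponding instantaneous discrepancy on a \((2\pi)^3\)-periodic box with NumPy FFTs. The routine names and the normalization convention are assumptions made for the example and are not prescribed by the method itself.

```python
import numpy as np

def dissipation_spectrum(u, nu):
    """Shell-averaged dissipation spectrum D(k) of Eq. 4.36 on a (2*pi)^3 box.

    u : velocity field, shape (3, N, N, N). Fourier coefficients are normalized
        so that summing |v|^2 over all modes and components gives <u_i u_i>.
    Returns (k, D) on integer wavenumber shells k = 1, ..., N//3.
    """
    N = u.shape[-1]
    v = np.fft.fftn(u, axes=(1, 2, 3)) / N**3
    k1d = np.fft.fftfreq(N, d=1.0 / N)                   # integer wavenumbers
    kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing='ij')
    k_mag = np.sqrt(kx**2 + ky**2 + kz**2)
    dens = nu * k_mag**2 * np.sum(np.abs(v)**2, axis=0)  # nu k^2 v_i* v_i per mode
    k_max = N // 3                                       # effective maximum wavenumber
    shells = np.rint(k_mag).astype(int)
    D = np.array([dens[shells == k].sum() for k in range(1, k_max + 1)])
    return np.arange(1, k_max + 1), D

def spectrum_discrepancy(D_les, D_fdns):
    """Instantaneous discrepancy of Eq. 4.37, summed over wavenumber shells."""
    return np.sum((D_les - D_fdns)**2)
```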
The gradients of the loss function with respect to the model coefficients \(C_{1}\) and \(C_{2}\) are evaluated by Eq. 4.14, where the adjoint variables \(\bar{u}_{i}^{\dagger}\) are calculated by backward advancing the stabilized adjoint LES equations (Eqs. 4.10 and 4.26) with zero terminal conditions. The sensitivity term \(\partial J/\partial\bar{u}_{i}\) is calculated by the chain rule, namely
\[\frac{\partial J}{\partial\bar{u}_{i}}=\frac{\partial J}{\partial D}\frac{ \partial D}{\partial\bar{u}_{i}}=2\left(D-D^{\text{fDNS}}\right)\mathbb{F}^{- 1}\left\{2\nu k^{2}\bar{v}_{i}\left(\mathbf{k},t\right)\delta\left(\left| \mathbf{k}\right|-k\right)\right\}, \tag{4.38}\]
where \(\mathbb{F}^{-1}\left\{\cdot\right\}\) denotes the 3D inverse Fourier transform. In the stabilized adjoint momentum equations (Eq. 4.26), the adjoint SGS stress is given by \(\tau_{ij}^{\dagger}=C_{1}T_{ij}^{\left(1\right),\dagger}+C_{2}T_{ij}^{\left(2 \right),\dagger}\), where the associated adjoint basis stress tensors \(T_{ij}^{\left(1\right),\dagger}\) and \(T_{ij}^{\left(2\right),\dagger}\) are expressed in detail as
\[T_{ij}^{\left(1\right),\dagger}=-\bar{\Delta}^{2}\left(\left|\bar{S}\right| \bar{S}_{ij}^{\dagger}+2\frac{\bar{S}_{kl}\bar{S}_{kl}^{\dagger}}{|\bar{S}|} \bar{S}_{ij}\right), \tag{4.39}\]
\[T_{ij}^{\left(2\right),\dagger}=\sum_{n=1}^{N}\left(I-G\right)^{n-1}\otimes \left(\overline{\bar{u}_{i}^{*}\overline{u_{j}^{*}}}-\overline{\bar{u}_{i}^{ *}}u_{j}^{*}\right), \tag{4.40}\]
| \(Re_{\lambda}\) | \(E_{k}\) | \(k_{\text{max}}\eta\) | \(\eta/h_{\text{DNS}}\) | \(L_{I}/\eta\) | \(\lambda/\eta\) | \(u^{\text{rms}}\) | \(\omega^{\text{rms}}\) | \(\varepsilon\) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 252 | 2.63 | 2.11 | 1.01 | 235.2 | 31.2 | 2.30 | 26.90 | 0.73 |

Table 1: One-point statistics for the DNS of forced homogeneous isotropic turbulence with grid resolution of \(1024^{3}\).
where \(N=5\) denotes the number of iterations of the AD procedure. The detailed derivation of the adjoint SGS stress tensors for the VOMM model is given in Appendix B. To our knowledge, few previous works have studied mixed SGS models and provided detailed derivations of the corresponding adjoint SGS models.
Once the gradients of the cost functional for the model coefficients are obtained by successively solving the forward LES equations and backward stabilized adjoint LES equations, a gradient-based iterative optimization procedure can be established, namely (Liu & Nocedal 1989; Badreddine _et al._ 2014)
\[C_{n}^{(k+1)}=C_{n}^{(k)}+\gamma^{(k)}d_{n}^{(k)},\ \ \ (n=1,2,\cdots,N)\, \tag{4.41}\]
where \(C_{n}^{(k)}\) is the \(n\)-th model coefficient at the \(k\)-th optimization iteration, \(d_{n}^{(k)}\) denotes the update direction of the \(n\)-th model coefficient and \(\gamma^{(k)}\) represents the step size. We use a popular quasi-Newton method, the limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) algorithm, to update the directions \(d_{n}^{(k)}\) (Liu & Nocedal 1989). The step size \(\gamma^{(k)}\) is calculated by the backtracking-Armijo line search method within the L-BFGS algorithm (Armijo 1966).
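A possible implementation of this line search, written for a generic loss function and a descent direction, is sketched below; the function and variable names are illustrative, and the parameter values (sufficient-decrease constant, backtracking factor) are standard defaults rather than values taken from the paper.

```python
import numpy as np

def backtracking_armijo(loss, C, d, grad, gamma0=1.0, c1=1e-4, rho=0.5, max_backtracks=20):
    """Backtracking-Armijo line search for the step size gamma^(k) in Eq. 4.41.

    loss : callable mapping model coefficients to the cost functional value
    C    : current model coefficients, shape (n,)
    d    : descent direction from the L-BFGS update, shape (n,)
    grad : gradient of the cost functional at C, shape (n,)
    """
    J0 = loss(C)
    slope = np.dot(grad, d)          # directional derivative (negative for a descent direction)
    gamma = gamma0
    for _ in range(max_backtracks):
        if loss(C + gamma * d) <= J0 + c1 * gamma * slope:
            break                    # sufficient-decrease (Armijo) condition satisfied
        gamma *= rho                 # shrink the step and try again
    return gamma
```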
In summary, the diagram of the VOMM model is illustrated in Fig. 1, and the calculation steps are listed as follows.
(1) We first select the pure structural ADM model without the dissipative Smagorinsky term as the initial SGS model (Eq. 4.35) with model coefficients \(C_{1}^{(0)}=0\) and \(C_{2}^{(0)}=1\).
(2) The transient LES statistics (_e.g._ the dissipation spectrum shown in Eq. 4.36) are then evaluated by forward solving the LES equations (Eqs. 2.4 and 2.5), initialized by the filtered DNS velocity field. The statistical discrepancy (Eq. 4.37) between the LES statistics and the _a priori_ measurable benchmark data (fDNS data) is measured to evaluate the performance of the SGS model with the current parameters.
(3) Afterwards, the stabilized adjoint LES equations (Eqs. 4.10 and 4.26) are integrated backward with zero terminal conditions, driven by the loss sensitivity (Eq. 4.38) and the corresponding adjoint SGS model (Eqs. 4.39 and 4.40). The adjoint-based gradients of the augmented functional with respect to the model coefficients (Eq. 4.14) are then evaluated using the adjoint variables and the SGS basis forces.
(4) The L-BFGS gradient-based optimization algorithm (Eq. 4.41) is adopted to iteratively update the SGS model parameters by repeating the above calculations until the stopping criteria are satisfied.
Figure 2: Velocity and dissipation spectra of DNS and filtered DNS in forced homogeneous isotropic turbulence with grid resolution of \(1024^{3}\): (\(a\)) velocity spectra, and (\(b\)) dissipation spectra. Diamonds represent the cutoff wavenumber \(k_{c}\)=16 (\(\tilde{\Delta}=32h_{\text{DNS}}\)).
The stopping criteria for the optimization iterations of the VOMM model are summarized as follows:
(a) the number of iterations reaches the maximum number of iterations;
(b) the ratio of the current loss to the initial loss is smaller than a given error threshold \(\epsilon_{0}\) (_e.g._, \(\epsilon_{0}=1\%\)), namely, \(\mathcal{J}^{(k)}/\mathcal{J}^{(0)}\leqslant\epsilon_{0}\);
(c) the difference of model coefficients between two successive iterations is negligible, namely, \(\left\|C_{n}^{(k+1)}-C_{n}^{(k)}\right\|/\left\|C_{n}^{(0)}\right\|\leqslant \epsilon_{0}\).
Eventually, the optimal parameters of the VOMM model are automatically obtained after reaching the given stopping optimization criteria.
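Putting the steps and stopping criteria above together, a high-level driver for the calibration could look like the following sketch. The routines `run_les` and `run_adjoint`, which stand for the forward LES evaluation of the dissipation spectra and the backward adjoint sweep of Eq. 4.14, are hypothetical placeholders; SciPy's L-BFGS-B routine is used here simply as one available quasi-Newton implementation, and the specific stopping criteria (a)-(c) would be enforced through a callback or an outer loop.

```python
import numpy as np
from scipy.optimize import minimize

def cost_and_grad(C, run_les, run_adjoint, D_fdns):
    """Cost functional (Eq. 4.37) and its adjoint-based gradient for C = (C1, C2)."""
    D_les = run_les(C)               # forward LES: dissipation spectra at the sampling times
    J = np.sum((D_les - D_fdns) ** 2)
    grad = run_adjoint(C)            # backward stabilized adjoint sweep: dJ/dC1, dJ/dC2
    return J, grad

def calibrate_vomm(run_les, run_adjoint, D_fdns, C0=(0.0, 1.0), max_iter=100):
    """Sketch of the VOMM calibration loop, starting from the pure ADM model (C1=0, C2=1)."""
    result = minimize(cost_and_grad, np.asarray(C0, dtype=float),
                      args=(run_les, run_adjoint, D_fdns),
                      jac=True, method='L-BFGS-B',
                      options={'maxiter': max_iter})
    return result.x                  # optimal (C1_opt, C2_opt)
```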
## 5 _A posteriori_ studies of the VOMM models
In order to examine the performance of the proposed VOMM model, _a posteriori_ evaluations are carried out for forced and decaying homogeneous isotropic turbulence and a temporally evolving turbulent mixing layer. The results of the filtered direct numerical simulation (DNS) serve as the benchmark for the performance evaluations of the large-eddy simulation (LES). We first introduce the detailed DNS settings for these three turbulent problems. The DNS data are then explicitly filtered by the commonly-used Gaussian filter, which is expressed
| FGR | LES Resolution | \(C_{1}^{(0)}\) | \(C_{2}^{(0)}\) | \(C_{1}^{\text{opt}}\) | \(C_{2}^{\text{opt}}\) |
| --- | --- | --- | --- | --- | --- |
| 1 | \(32^{3}\) | 0 | 1 | -0.0529 | 1.229 |
| 2 | \(64^{3}\) | 0 | 1 | -0.0101 | 1.027 |
| 4 | \(128^{3}\) | 0 | 1 | -0.0030 | 1.000 |

Table 2: The initial and optimal parameters of the VOMM model for LES computations with the filter width \(\tilde{\Delta}=32h_{\text{DNS}}\) in forced homogeneous isotropic turbulence.
Figure 3: The evolution of the normalized cost function in forced homogeneous isotropic turbulence.
as (Pope 2000; Sagaut 2006)
\[G\left(\mathbf{r};\tilde{\Delta}\right)=\left(\frac{6}{\pi\tilde{\Delta}^{2}} \right)^{1/2}\exp\left(-\frac{6\mathbf{r}^{2}}{\tilde{\Delta}^{2}}\right). \tag{5.1}\]
The filter scale \(\tilde{\Delta}=32h_{\text{DNS}}\) is selected for both the forced and decaying homogeneous isotropic turbulence, while \(\tilde{\Delta}=8h_{\text{DNS}}\) is used for the temporally evolving turbulent mixing layer, where \(h_{\text{DNS}}\) denotes the grid spacing of the DNS. Three conventional SGS models, _i.e._, the dynamic Smagorinsky model (DSM, Eq. 3.1), the dynamic mixed model (DMM, Eq. 3.3) and the approximate deconvolution model with standard secondary filtering regularization (ADM, Eqs. 3.6 \(\sim\) 3.8), are adopted for comparison against the VOMM model. The same instantaneous snapshots of the filtered DNS data are used to initialize the LES calculations for the different SGS models. Both turbulent statistics and transient contours are evaluated and compared across the SGS models in the _a posteriori_ tests of the three canonical turbulent flows.
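In practice this filter is most conveniently applied in spectral space, where the kernel of Eq. 5.1 corresponds to the transfer function \(\hat{G}(k)=\exp(-k^{2}\tilde{\Delta}^{2}/24)\) in each direction. A minimal sketch for a \(2\pi\)-periodic scalar field is given below; the function name and array layout are assumptions made for the example.

```python
import numpy as np

def gaussian_filter_spectral(f, delta, L=2*np.pi):
    """Apply the Gaussian filter of Eq. 5.1 to a periodic field via its spectral
    transfer function G_hat(k) = exp(-k^2 delta^2 / 24), direction by direction.

    f     : real field on a cubic grid, shape (N, N, N)
    delta : filter width (e.g. 32*h_DNS)
    """
    N = f.shape[0]
    k1d = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)       # wavenumbers in each direction
    g1d = np.exp(-(k1d**2) * delta**2 / 24.0)
    f_hat = np.fft.fftn(f)
    f_hat *= g1d[:, None, None] * g1d[None, :, None] * g1d[None, None, :]
    return np.real(np.fft.ifftn(f_hat))
```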
### Forced homogeneous isotropic turbulence
We perform the direct numerical simulation of forced incompressible isotropic turbulence using the uniform grid resolution \(N=1024^{3}\) in a cubic box of \((2\pi)^{3}\) with periodic boundary conditions (\(h_{\text{DNS}}=2\pi/1024\)) (Xie _et al._ 2020, 2020, 2020). The pseudo-spectral method is used for the spatial discretization of the governing equations (Canuto _et al._ 1988; Peyret 2002). The nonlinear advection terms are fully dealiased by the two-thirds dealiasing rule (Canuto _et al._ 1988). A second-order two-step Adams-Bashforth explicit scheme is used for time integration (Chen _et al._ 1993).
The kinematic viscosity is chosen as \(\nu=0.001\), and large-scale forcing is applied to the two lowest wavenumber shells to maintain the turbulence in statistical equilibrium, giving rise to the Taylor Reynolds number \(\text{Re}_{\lambda}\approx 250\) (Wang _et al._ 2010; Yuan _et al._ 2020). The detailed one-point statistics of the DNS data for the forced isotropic turbulence are summarized in Table 1 (Yuan _et al._ 2022). Here, \(k_{\text{max}}=\frac{2\pi}{3h_{\text{DNS}}}\) denotes the largest effective wavenumber after full dealiasing, and \(\omega^{\text{rms}}=\sqrt{\left\langle\omega_{i}\omega_{i}\right\rangle}\) represents the root-mean-square value of the vorticity magnitude, where \(\omega=\nabla\times\mathbf{u}\) stands for the vorticity, _i.e._, the curl of the velocity field. The Kolmogorov length scale \(\eta\) and the integral length scale \(L_{I}\) stand for the smallest resolved scale and the largest
| FGR (LES resolution) | Quantity | DSM | DMM | ADM(\(\chi\)=0) | ADM(\(\chi\)=1) | VOMM |
| --- | --- | --- | --- | --- | --- | --- |
| 1 (\(32^{3}\)) | t (CPU-s) | 0.142 | 0.243 | 0.056 | 0.056 | 0.066 |
| 1 (\(32^{3}\)) | t/t\({}_{\text{DMM}}\) | 0.584 | 1 | 0.231 | 0.230 | 0.273 |
| 2 (\(64^{3}\)) | t (CPU-s) | 0.870 | 1.465 | 0.368 | 0.361 | 0.418 |
| 2 (\(64^{3}\)) | t/t\({}_{\text{DMM}}\) | 0.594 | 1 | 0.251 | 0.246 | 0.285 |
| 4 (\(128^{3}\)) | t (CPU-s) | 6.512 | 10.103 | 2.517 | 2.588 | 3.240 |
| 4 (\(128^{3}\)) | t/t\({}_{\text{DMM}}\) | 0.645 | 1 | 0.249 | 0.256 | 0.321 |

Table 3: The average computational cost of SGS stress modeling \(\tau_{ij}\) for LES computations with the filter width \(\tilde{\Delta}=32h_{\text{DNS}}\) in forced homogeneous isotropic turbulence.
characteristic scale of turbulence, and are defined respectively by
\[\eta=\left(\frac{\nu^{3}}{\varepsilon}\right)^{1/4}, \tag{5.2}\]
\[L_{I}=\frac{3\pi}{2(u^{\rm rms})^{2}}\int_{0}^{+\infty}\frac{E\left(k\right)}{k}dk, \tag{5.3}\]
where \(\varepsilon\) is the spatial average dissipation rate of kinetic energy. The total turbulent kinetic energy \(E_{k}=\left\langle u_{i}u_{i}\right\rangle/2=\int_{0}^{+\infty}E\left(k\right)dk\), and \(E\left(k\right)\) represents the velocity spectrum. The resolution parameters \(k_{\rm max}\eta\geqslant 2.1\) and \(\eta/h_{\rm DNS}\geqslant 1\) indicate that the grid resolution is sufficient to capture
Figure 4: Velocity spectra for different SGS models in the _a posteriori_ analysis of forced homogeneous isotropic turbulence with the same filter scale \(\bar{\Delta}=32h_{\rm DNS}\): (a) log-log for FGR=1, \(N=32^{3}\); (b) semi-log for FGR=1, \(N=32^{3}\); (c) log-log for FGR=2, \(N=64^{3}\); (d) semi-log for FGR=2, \(N=64^{3}\); (e) log-log for FGR=4; \(N=128^{3}\); and (f) semi-log for FGR=4, \(N=128^{3}\).
the smallest turbulent eddy scales and ensure the convergence of the turbulent kinetic energy at all scales (Ishihara _et al._, 2007, 2009). In order to alleviate the impact of the initial conditions, the forced homogeneous isotropic turbulence is run for a long period (more than 50 large-eddy turnover times \(\tau=L_{I}/u^{\rm rms}\)) after the flow gradually reaches a statistically steady state. We select the data of the last ten large-eddy turnover times as a benchmark for LES comparisons (forty flow-field snapshots of DNS data in total).
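For reference, several of the one-point quantities of Table 1 (\(\eta\), \(L_I\) and the turnover time \(\tau\)) can be recovered from a radial velocity spectrum with a few lines of NumPy; the sketch below assumes a spectrum \(E(k)\) sampled at integer wavenumber shells and adopts the convention \(u^{\rm rms}=\sqrt{\langle u_iu_i\rangle}\), which is consistent with the values in Table 1.

```python
import numpy as np

def one_point_scales(E, k, nu):
    """Kolmogorov scale, integral scale and large-eddy turnover time (Eqs. 5.2-5.3)
    from a radial velocity spectrum E(k) sampled at integer wavenumbers k."""
    eps = 2.0 * nu * np.sum(k**2 * E)          # dissipation rate from the spectrum
    eta = (nu**3 / eps) ** 0.25                # Eq. 5.2
    u_rms = np.sqrt(2.0 * np.sum(E))           # u_rms = sqrt(<u_i u_i>) = sqrt(2 E_k)
    L_I = 3.0 * np.pi / (2.0 * u_rms**2) * np.sum(E / k)   # Eq. 5.3
    return eta, L_I, L_I / u_rms               # L_I / u_rms is the turnover time tau
```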
In this paper, the Gaussian filter (Eq. 5.1) is used as the explicit filter to calculate the filtered physical variables. The selected filter width is \(\tilde{\Delta}=32h_{\rm DNS}\), and the corresponding cutoff wavenumber is \(k_{c}=\pi/\tilde{\Delta}=16\). The velocity and dissipation spectra of the DNS and filtered DNS at \(\tilde{\Delta}=32h_{\rm DNS}\) are illustrated in Fig. 2. The filtered velocity spectrum nearly overlaps with the DNS data, following the Kolmogorov scaling law \(k^{-5/3}\) in the low-wavenumber region, while it drops significantly beyond the truncated wavenumber \(k_{c}\). Overall, 12% of the turbulent kinetic energy is filtered out into the residual velocity field at the filter scale \(\tilde{\Delta}=32h_{\rm DNS}\). In contrast, the filtered dissipation spectrum gradually grows with the power-law scaling \(k^{1/3}\) in the low-wavenumber inertial region, and drops sharply beyond the cutoff wavenumber. The small scales near the truncated wavenumber are essential for the reconstruction of the filtered dissipation spectrum and are also very important for the residual SGS modeling. However, these small scales account for a very small proportion of the turbulent kinetic energy, almost several orders of magnitude smaller than that of the large scales. Thus, the dissipation spectrum, rather than the kinetic energy spectrum, is chosen as the optimization objective function of the proposed VOMM model in this paper.
The _a posteriori_ tests of LES are essential to validate the practical performance of the SGS models. The LES calculations use the same kinematic viscosity (\(\nu=0.001\)) as the DNS. The filter width is fixed to \(\tilde{\Delta}=32h_{\rm DNS}\), and the impact of the spatial discretization errors on the SGS models is investigated by changing the grid resolution of the LES. Three different filter-to-grid ratios FGR=\(\tilde{\Delta}/h_{\rm LES}\)=1, 2 and 4 are chosen to study the influence of spatial discretization on the SGS modeling, and the corresponding grid resolutions of LES are \(N=32^{3}\), \(64^{3}\) and \(128^{3}\), respectively. The proposed VOMM model (Eq. 4.35) is compared against the classical SGS models, including the dynamic Smagorinsky model (DSM, Eq. 3.1), the dynamic mixed model (DMM, Eq. 3.3) and the standard approximate deconvolution model with secondary filtering regularization (ADM, Eqs. 3.7 and 3.8). The relaxation factors of the ADM model, \(\chi\)=0 and 1, are chosen for comparison. The ratios of the time steps for LES and DNS are \(\Delta t_{\rm LES}/\Delta t_{\rm DNS}=\{10,10,5\}\) for the different grids (FGR=1, 2 and 4 with \(N=32^{3}\), \(64^{3}\) and \(128^{3}\)). Among the filtered DNS data of the ten large-eddy turnover periods, the data of the first two large-eddy turnover times are used for the adjoint optimization of the VOMM model (only the dissipation spectrum is used, stored once every \(0.1\tau\), twenty sets in total), and the remaining data of the last eight large-eddy turnover times are used for the _a posteriori_ accuracy validation of the LES models.
Figure 5: Second-order structure functions of the filtered velocity for LES in the _a posteriori_ analysis of forced homogeneous isotropic turbulence with the same filter scale \(\tilde{\Delta}=32h_{\rm DNS}\): (a) FGR=1, \(N=32^{3}\); (b) FGR=2, \(N=64^{3}\); and (c) FGR=4, \(N=128^{3}\).
At the adjoint-based optimization stage of the VOMM model, the numerical treatment of the adjoint equations is consistent with that of the primary LES equations. We adopt the same pseudo-spectral scheme to spatially discretize the stabilized adjoint momentum equations (Eq. 4.26). A second-order two-step Adams-Bashforth explicit scheme is applied for the backward time integration with zero terminal conditions. Since the large-scale forcing is assumed to be nearly independent of the filtered velocity, the forcing term does not appear in the adjoint momentum equations. During the adjoint optimization stage (see Fig. 1) of the VOMM model, the pure structural ADM model without the dissipative Smagorinsky term is selected as the initial SGS model with model coefficients \(C_{1}^{(0)}=0\) and \(C_{2}^{(0)}=1\). The forward LES evolution is initialized by the filtered DNS velocity field, and the dissipation spectrum is calculated whenever the filtered DNS data are available (every \(0.1\tau\)). The statistical discrepancy of the dissipation spectrum between the LES and fDNS data is evaluated and recorded as the cost functional. The adjoint-based gradients of the cost functional with respect to the model coefficients are calculated by backward integrating the stabilized adjoint LES equations (Eqs. 4.10 and 4.26) with zero terminal conditions. The SGS model coefficients are then iteratively updated by the gradient-based L-BFGS optimization algorithm (Eq. 4.41) until the stopping criteria are reached.
Figure 3 shows the evolution of the cost function normalized by the initial discrepancy during the adjoint-based optimization in forced homogeneous isotropic turbulence. The loss functions (prediction errors of the dissipation spectra between LES and fDNS data) for all three filter-to-grid ratios (FGR=1, 2 and 4) gradually converge and become stationary within fewer than twenty iterations. The error is reduced by nearly an order of magnitude for the cases of FGR=1 and 2 within about ten iterations, and is reduced to 20% of the initial value at FGR=4. These results indicate that the adjoint-based L-BFGS gradient optimization is very efficient and obtains the optimal model coefficients within a few iterations. The optimal parameters of the VOMM model are summarized in Table 2. The magnitude of the eddy-viscosity coefficient (\(\left|C_{1}^{\text{opt}}\right|\)) decreases markedly from 0.0529 to 0.003 with increasing FGR and LES resolution, while the coefficient of the ADM part (\(C_{2}^{\text{opt}}\)) gradually approaches unity, which is identical to the theoretical value derived from the Taylor series expansions. Once the optimal model coefficients are obtained, we further examine the _a posteriori_ performance of the VOMM model using the filtered DNS data of the last eight large-eddy turnover periods.
Table 3 gives the average computational cost for the SGS stress modeling at the same filter width \(\tilde{\Delta}=32h_{\text{DNS}}\). For all three different grid resolutions, the computation time of the VOMM model is only about 30% of that of the DMM model, without significantly increasing the computational cost in comparison to the ADM models (\(\chi=0\) and 1).
The velocity spectra of different SGS models with the filter scale \(\tilde{\Delta}=32h_{\text{DNS}}\) in comparison to
Figure 6: Fourth-order structure functions of the filtered velocity for LES in the _a posteriori_ analysis of forced homogeneous isotropic turbulence with the same filter scale \(\tilde{\Delta}=32h_{\text{DNS}}\): (a) FGR=1, \(N=32^{3}\); (b) FGR=2, \(N=64^{3}\); and (c) FGR=4, \(N=128^{3}\).
those of the DNS and filtered DNS (fDNS) data are shown in Fig. 4. The velocity spectrum of the DNS data exhibits a sufficiently long inertial range with the typical \(k^{-5/3}\) scaling. The spectrum of the fDNS almost overlaps with that of the DNS in the low-wavenumber region, but lies clearly below the DNS near the truncated wavenumber since the small-scale kinetic energy at high wavenumbers is filtered out. LES only solves for the large-scale variables with the filtered Navier-Stokes equations (Eqs. 2.4 and 2.5), leaving the effect of the residual small scales to be approximately reconstructed by the SGS model. The statistics of an ideal LES should therefore overlap with those of the fDNS data as closely as possible. When the grid resolution of LES is sufficiently coarse and the grid spacing of LES equals the filter scale (FGR=1, c.f. Figs 4a and 4b), the spatial discretization error is significant and deteriorates the accuracy of the SGS stress modeling. It is very difficult for LES with traditional SGS models to obtain accurate predictions of the turbulent kinetic energy cascade at FGR=1. The velocity spectra predicted by the ADM models with \(\chi=0\) and 1 are numerically unstable, and the kinetic energy at high wavenumbers is clearly overestimated due to insufficient dissipation. The DSM and DMM models also overestimate the high-wavenumber region dramatically, with predictions even larger than the DNS data. In contrast, the VOMM model predicts the velocity spectra most accurately among these SGS models, with results that nearly coincide with those of the fDNS.
For the cases of fine grid resolution (FGR=2 and 4), the pure ADM model (\(\chi=0\)) is still numerically unstable, since the pure structural model cannot by itself produce sufficient SGS dissipation. The ADM model with the standard secondary-filtering regularization (\(\chi=1\)) is excessively dissipative: the small-scale kinetic energy at high wavenumbers is strongly depleted and falls well below that of the fDNS. The predictions of the DSM and DMM models exhibit a clearly tilted distribution, in which kinetic energy accumulates at low wavenumbers while that near the truncated wavenumber is diminished. The dynamic least-squares procedure of both the DSM and DMM models overestimates the eddy-viscosity coefficient for the fine grid resolutions (FGR=2 and 4), and small-scale flow structures near the truncated wavenumber are destroyed by the excessive dissipation. Turbulent kinetic energy is transferred from large scales to small scales through the forward energy cascade driven by the nonlinear advection, and the lack of sufficient flow structures near the cutoff wavenumber leads to energy accumulation in the intermediate-wavenumber region. In contrast, the VOMM model is superior to the other SGS models and accurately predicts the velocity spectra at all grid resolutions of LES, with predictions very close to the fDNS data.
In order to further examine the reconstruction of multiscale properties of turbulence by the SGS models, we calculate the longitudinal structure functions of the filtered velocity, namely
Figure 7: Sixth-order structure functions of the filtered velocity for LES in the _a posteriori_ analysis of forced homogeneous isotropic turbulence with the same filter scale \(\tilde{\Delta}=32h_{\text{DNS}}\): (a) FGR=1, \(N=32^{3}\); (b) FGR=2, \(N=64^{3}\); and (c) FGR=4, \(N=128^{3}\).
(Xie _et al._ 2018, 2019_a_)
\[\bar{S}_{n}(r)=\left\langle\left|\frac{\delta_{r}\bar{u}}{\bar{u}^{\rm rms}}\right|^{n}\right\rangle, \tag{5.4}\]
where \(n\) represents the order of the structure function and \(\delta_{r}\bar{u}=\left[\bar{\bf u}\left({\bf x}+{\bf r}\right)-\bar{\bf u}\left({\bf x}\right)\right]\cdot\hat{\bf r}\) denotes the longitudinal velocity increment at the separation \({\bf r}\) with the unit distance vector \(\hat{\bf r}={\bf r}/|{\bf r}|\). Figures 5, 6 and 7 respectively compare the second-order, fourth-order and sixth-order structure functions of the filtered velocity for different SGS models with the filtered DNS data. For all three grid resolutions of LES (FGR=1, 2 and 4), all SGS models predict the lower-order structure functions (Fig. 5) much better than the higher-order ones (Figs. 6 and 7). Moreover, the predictions of the structure functions improve greatly as the grid resolution increases, and those of all SGS models almost coincide with each other at large separations. The ADM models (both \(\chi=0\) and 1) give the worst predictions and clearly overestimate the structure functions at small distances \({\bf r}\). The DSM and DMM models also predict structure functions larger than the fDNS data at small separations but underestimate them at large distances. In contrast, the VOMM model accurately reconstructs the structure functions of different orders at both small and large separations, almost overlapping with those of the filtered DNS.
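A direct way to evaluate these statistics on a periodic box is to form velocity increments at grid-aligned separations; the sketch below (array layout and function name are assumptions made for the example) computes the longitudinal structure functions along the \(x_1\) direction.

```python
import numpy as np

def longitudinal_structure_functions(u, orders=(2, 4, 6), max_sep=None):
    """Longitudinal structure functions S_n(r) of Eq. 5.4 along the x1 direction
    of a periodic box, using grid-aligned separations only.

    u : filtered velocity field, shape (3, N, N, N)
    Returns (separations in grid units, dict mapping order n to S_n(r)).
    """
    N = u.shape[-1]
    max_sep = max_sep or N // 2
    u_rms = np.sqrt(np.mean(np.sum(u**2, axis=0)))   # sqrt(<u_i u_i>), as in Eq. 5.4
    seps = np.arange(1, max_sep + 1)
    S = {n: np.empty(max_sep) for n in orders}
    for i, r in enumerate(seps):
        # longitudinal increment: difference of the x1 component over a separation along x1
        du = (np.roll(u[0], -r, axis=0) - u[0]) / u_rms
        for n in orders:
            S[n][i] = np.mean(np.abs(du) ** n)
    return seps, S
```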
We then evaluate the probability density functions (PDFs) of the filtered velocity increments to measure the spatial correlations of turbulence, as shown in Fig. 8, where the velocity increments \(\delta_{r}\bar{u}/\bar{u}^{\rm rms}\) are normalized by the root-mean-square value of the velocity. The cases of fine grid resolution (FGR=2 and 4) are very similar to that of FGR=1 and are not shown in the paper. The PDFs of the velocity increments exhibit an approximately symmetric distribution, relatively
Figure 8: PDFs of the normalized velocity increments \(\delta_{\rm r}\bar{u}/\bar{u}^{\rm rms}\) for LES at grid resolution of \(32^{3}\) in the _a posteriori_ analysis of forced homogeneous isotropic turbulence with the same filter scale \(\bar{\Delta}=32h_{\rm DNS}\): (a) \({\rm r}=\bar{\Delta}\); (b) \({\rm r}=2\bar{\Delta}\); (c) \({\rm r}=3\bar{\Delta}\); (d) \({\rm r}=4\bar{\Delta}\).
Figure 9: Contours of the normalized vorticity \(\bar{\omega}/\bar{\omega}_{\text{fDNS}}^{\text{rms}}\) at an arbitrary \(x_{1}\)-\(x_{2}\) plane at \(t/\tau\approx 4\) for LES at a grid resolution of \(64^{3}\) (FGR=2) in forced homogeneous isotropic turbulence with the filter width \(\bar{\Delta}=32h_{\text{DNS}}\): (a) fDNS, (b) DSM, (c) ADM(\(\chi\)=0), (d) ADM(\(\chi\)=1), (e) DMM, and (f) VOMM.
concentrated at small distances and gradually becoming wider as the distance increases. The PDFs predicted by the ADM, DSM and DMM models are significantly wider than those of the fDNS. In comparison with these traditional SGS models, the VOMM model gives the most accurate prediction of the velocity increments at the different distances, in reasonable agreement with the fDNS data.
We finally examine the reconstruction of instantaneous spatial flow structures by plotting contours of the normalized vorticity magnitude, as shown in Fig. 9. The vorticity contours are extracted on the same arbitrary \(x_{1}\)-\(x_{2}\) plane for the isotropic turbulence at the same time instant of approximately four large-eddy turnover periods (\(t/\tau\approx 4\)) at a grid resolution of \(64^{3}\). It is noteworthy that exact point-to-point correlations are difficult to achieve in long-term LES forecasting due to the chaotic nature of turbulence and its extreme sensitivity to perturbations (Pope 2000; Wang _et al._ 2022\(c\), 2023). The pure ADM model produces spurious small-scale structures, which are clearly different from the band-like or strip-like spatial structures of the fDNS data. The DSM, DMM and ADM (\(\chi=1\)) models only predict the large-scale vorticity structures, and some small scales are excessively dissipated. Compared to these traditional SGS models, the VOMM model predicts vortex structures very similar to the fDNS data.
| FGR | LES Resolution | \(C_{1}^{(0)}\) | \(C_{2}^{(0)}\) | \(C_{1}^{\text{opt}}\) | \(C_{2}^{\text{opt}}\) |
| --- | --- | --- | --- | --- | --- |
| 1 | \(32^{3}\) | 0 | 1 | -0.0398 | 3.150 |
| 2 | \(64^{3}\) | 0 | 1 | -0.0094 | 1.326 |
| 4 | \(128^{3}\) | 0 | 1 | -0.0020 | 1.101 |

Table 4: The initial and optimal parameters of the VOMM model for LES computations with the filter width \(\bar{\Delta}=32h_{\text{DNS}}\) in decaying homogeneous isotropic turbulence.
Figure 10: The evolution of the normalized cost function in decaying homogeneous isotropic turbulence.
### Decaying homogeneous isotropic turbulence
In order to investigate the impact of unsteady turbulence evolution on SGS stress modeling, decaying homogeneous isotropic turbulence in a cubic box of \((2\pi)^{3}\) with periodic boundary conditions is considered in this subsection. The numerical simulation method is consistent with that of the forced homogeneous isotropic turbulence. We spatially discretize the governing equations using the pseudo-spectral method with the two-thirds dealiasing rule at a uniform grid resolution of \(N=1024^{3}\). The temporal discretization adopts the second-order two-step Adams-Bashforth explicit scheme. The statistically steady data of the forced isotropic turbulence (see Table 1 for detailed statistics) are used as the initial field for the DNS of decaying turbulence without large-scale forcing. The kinematic viscosity is set to \(\nu=0.001\) and the initial Taylor Reynolds number is \(\mathrm{Re}_{\lambda}\approx 250\). We calculate the DNS data of decaying turbulence for about six large-eddy turnover times (\(\tau=L_{I}/u^{\mathrm{rms}}\)), the first two of which are used for the adjoint-based optimization to determine the model coefficients of the VOMM model (only the dissipation spectrum is used, stored once every \(0.1\tau\), twenty sets in total).
The _a posteriori_ studies of LES adopt the same kinematic viscosity (\(\nu=0.001\)) as the DNS. The Gaussian filter (Eq. 5.1) is selected as the explicit filter with the given filter width \(\tilde{\Delta}=32h_{\mathrm{DNS}}\). Similar to the forced isotropic turbulence, three different filter-to-grid ratios FGR=\(\tilde{\Delta}/h_{\mathrm{LES}}\)=1, 2 and 4 are chosen to investigate the impact of the spatial discretization on the SGS stress modeling, with the corresponding grid resolutions of LES \(N=32^{3}\), \(64^{3}\) and \(128^{3}\). The
Figure 11: Temporal evolutions of the turbulent kinetic energy \(E_{k}\) for LES in the _a posteriori_ analysis of decaying homogeneous isotropic turbulence with the same filter scale \(\tilde{\Delta}=32h_{\mathrm{DNS}}\): (a) FGR=1, \(N=32^{3}\); (b) FGR=2, \(N=64^{3}\); and (c) FGR=4, \(N=128^{3}\).
| FGR (LES resolution) | Quantity | DSM | DMM | ADM(\(\chi\)=0) | ADM(\(\chi\)=1) | VOMM |
| --- | --- | --- | --- | --- | --- | --- |
| 1 (\(32^{3}\)) | t (CPU-s) | 0.153 | 0.259 | 0.065 | 0.062 | 0.070 |
| 1 (\(32^{3}\)) | t/t\({}_{\mathrm{DMM}}\) | 0.590 | 1 | 0.249 | 0.239 | 0.269 |
| 2 (\(64^{3}\)) | t (CPU-s) | 1.026 | 1.857 | 0.567 | 0.563 | 0.589 |
| 2 (\(64^{3}\)) | t/t\({}_{\mathrm{DMM}}\) | 0.553 | 1 | 0.306 | 0.303 | 0.317 |
| 4 (\(128^{3}\)) | t (CPU-s) | 6.026 | 10.287 | 2.521 | 2.531 | 3.393 |
| 4 (\(128^{3}\)) | t/t\({}_{\mathrm{DMM}}\) | 0.586 | 1 | 0.245 | 0.246 | 0.330 |

Table 5: The average computational cost of SGS stress modeling \(\tau_{ij}\) for LES computations with the filter width \(\tilde{\Delta}=32h_{\mathrm{DNS}}\) in decaying homogeneous isotropic turbulence.
adjoint-based optimization of the VOMM model (c.f. Fig. 1) is first performed to determine the optimal model coefficients using the dissipation spectra as the cost functional. The pure ADM model without the Smagorinsky part is used as the initial SGS model with parameters \(C_{1}^{(0)}=0\) and \(C_{2}^{(0)}=1\). The adjoint-based gradients of the cost functional with respect to the model coefficients are evaluated by successively forward solving the LES equations (Eqs. 2.4 and 2.5) and backward integrating the stabilized adjoint LES equations (Eqs. 4.10 and 4.26). The gradient-based L-BFGS optimization algorithm (Eq. 4.41) is used to iteratively update the SGS model parameters until the stopping criteria are reached.
The evolution of the cost function normalized by the initial loss during the adjoint-based optimization for the decaying isotropic turbulence is displayed in Fig. 10. The loss functions for all three grid resolutions (FGR=1, 2 and 4) drop rapidly at the beginning and gradually reach a plateau within approximately twenty iterations. The prediction errors of the optimization objective are reduced to about 10% of the initial state for both FGR=1 and 2, and to about 20% of the original value at FGR=4. The adjoint-based gradient optimization quickly obtains the optimal model parameters within a limited number of iterations (fewer than 100 optimization iterations, namely 200 LES evaluations). Table 4 gives the optimal parameters of the VOMM model. The magnitude of the dissipative Smagorinsky coefficient (\(\left|C_{1}^{\rm opt}\right|\)) drops significantly from 0.0398 to 0.002 as the LES resolution increases, slightly lower than in forced homogeneous isotropic turbulence. In contrast, the coefficient of the structural part (\(C_{2}^{\rm opt}\)) asymptotically approaches unity as the grid spacing of LES becomes smaller, similar to the results of the forced isotropic turbulence.
The _a posteriori_ performance of the VOMM model is further validated after the optimal SGS model coefficients are determined by the adjoint-based gradient optimization. We compare the proposed VOMM model (Eq. 4.35) with the classical SGS models, including the DSM model (Eq. 3.1), the DMM model (Eq. 3.3) and the ADM model regularized by the standard secondary-filtering technique (Eqs. 3.7 and 3.8). The time steps of LES are set as \(\Delta t_{\rm LES}/\Delta t_{\rm DNS}=\{10,10,5\}\) for the different grid resolutions (FGR=1, 2 and 4 with \(N=32^{3}\), \(64^{3}\) and \(128^{3}\)). The average computational costs of the SGS stress modeling with different grid resolutions and SGS models at the same filter scale \(\tilde{\Delta}=32h_{\rm DNS}\) are summarized in Table 5. The computation time of the VOMM model accounts for only approximately 30% of that of the DMM model, with only a slight increase in computational cost compared to the ADM models with \(\chi=0\) and 1.
Figures 11 and 12 respectively compare the temporal evolutions of the turbulent kinetic energy and the resolved dissipation rate (\(\bar{\varepsilon}=2\nu\left\langle\bar{S}_{ij}\bar{S}_{ij}\right\rangle\)) of different SGS models with the filtered DNS (fDNS) data. The turbulent kinetic energy gradually decays from the initial statistically steady state over time, since there is no external forcing driving the dissipative turbulent system. All the classical SGS models (DSM, DMM and ADM models) clearly overestimate the kinetic energy
Figure 12: Temporal evolutions of the average dissipation rate \(\bar{\varepsilon}\) for LES in the _a posteriori_ analysis of decaying homogeneous isotropic turbulence with the same filter scale \(\tilde{\Delta}=32h_{\rm DNS}\): (a) FGR=1, \(N=32^{3}\); (b) FGR=2, \(N=64^{3}\); and (c) FGR=4, \(N=128^{3}\).
throughout the simulated time, differing significantly from the benchmark fDNS data. In contrast, the VOMM model gives reasonable predictions of the turbulent kinetic energy, which are the closest to the fDNS data. The average dissipation rate displays a declining trend with time, similar to that of the turbulent kinetic energy. However, all conventional SGS models erroneously predict a non-monotonic evolution of the average dissipation rate over time. For the sufficiently coarse grid resolution of LES (FGR=1 with \(N=32^{3}\)), the DSM, DMM and ADM models overpredict the dissipation rate with an erroneous temporal evolution that first increases and then decreases. When the grid resolution of LES becomes fine (FGR=2 and 4 with \(N=64^{3}\) and \(128^{3}\)), the DSM and DMM models clearly underestimate the dissipation rate at the early stage of decaying turbulence (\(t/\tau\leqslant 3\)); thereafter the DMM model gradually approaches the fDNS data while the DSM model
overestimates the dissipation rate as the turbulence decays. The pure ADM model (\(\chi=0\)) always overestimates the dissipation rate for all three grid resolutions of LES, even though it can accurately predict the turbulent kinetic energy at a sufficiently high grid resolution (FGR=4). These results demonstrate that the pure structural ADM model without any dissipative term might not accurately predict all physical quantities of LES (_e.g._, the average dissipation rate), even if the grid resolution is high enough compared to the filter scale (FGR=4). The ADM model with the standard secondary-filtering regularization (\(\chi=1\)) provides excessive dissipation similar to the DSM model, first underestimating and then overestimating the average dissipation rate over time at FGR=2 and 4. In comparison to these classical SGS models, the VOMM model accurately predicts the temporal evolution of the average dissipation rate for all three grid resolutions, in fairly good agreement with the benchmark filtered DNS data.
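The resolved dissipation rate used in Fig. 12 follows directly from the resolved strain-rate tensor; a short pseudo-spectral sketch for a \(2\pi\)-periodic box is given below (function name and array layout are assumptions made for the example).

```python
import numpy as np

def resolved_dissipation_rate(u, nu, L=2*np.pi):
    """Resolved dissipation rate eps_bar = 2 nu <S_ij S_ij> of a periodic velocity
    field, with velocity gradients evaluated pseudo-spectrally.

    u : filtered velocity field, shape (3, N, N, N)
    """
    N = u.shape[-1]
    k1d = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)
    k = np.meshgrid(k1d, k1d, k1d, indexing='ij')
    u_hat = np.fft.fftn(u, axes=(1, 2, 3))
    ss_mean = 0.0
    for i in range(3):
        for j in range(3):
            dui_dxj = np.real(np.fft.ifftn(1j * k[j] * u_hat[i]))
            duj_dxi = np.real(np.fft.ifftn(1j * k[i] * u_hat[j]))
            s_ij = 0.5 * (dui_dxj + duj_dxi)
            ss_mean += np.mean(s_ij**2)       # accumulate <S_ij S_ij>
    return 2.0 * nu * ss_mean
```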
The transient velocity spectra of the different SGS models at the filter width \(\bar{\Delta}=32h_{\text{DNS}}\) at two time instants \(t/\tau\approx 2\) and 4 are further illustrated in Fig. 13. The velocity spectra exhibit an overall decrease, and the kinetic energy at all wavenumbers declines as the turbulence decays. All the classical SGS models (DSM, DMM and ADM models) overpredict the kinetic energy at high wavenumbers for the coarse grid-resolution case (FGR=1 with \(N=32^{3}\)), and the excess kinetic energy stacked at small scales leads to numerical instability of the LES, which gradually intensifies with time. The conventional SGS models provide insufficient model dissipation to balance the discretization errors, and the small-scale kinetic energy cannot be dissipated effectively in time at FGR=1. For the fine grid-resolution cases (FGR=2 and 4 with \(N=64^{3}\) and \(128^{3}\)), the dissipation of the traditional SGS models (DSM, DMM, and ADM with \(\chi=1\)) is so strong that most small-scale flow structures near the truncated wavenumber are destroyed, which hinders the normal transfer of turbulent kinetic energy from large scales to small scales through the cascade. Therefore, the kinetic energy of the classical SGS models accumulates in the intermediate-wavenumber region, leading to the overestimation of the turbulent kinetic energy with time (Fig. 11) at FGR=2 and 4 with \(N=64^{3}\) and \(128^{3}\). LES using the pure ADM model with \(\chi=0\) is always numerically unstable and lacks the SGS dissipation necessary to drain the small-scale kinetic energy at all grid resolutions. Compared to these classical SGS models, the VOMM model accurately reconstructs the kinetic energy cascade, with predictions that nearly coincide with those of the fDNS at all three grid resolutions.
Furthermore, we compare the PDFs of the normalized vorticity magnitude at the dimensionless time \(t/\tau\approx 4\), as shown in Fig. 14. The vorticity is normalized by the root-mean-square value of the vorticity calculated from the fDNS data, allowing comparisons across different grid resolutions. The pure ADM model with \(\chi=0\) gives the worst prediction of the vorticity, with erroneous peaks
Figure 15: Contours of the normalized vorticity \(\bar{\omega}/\bar{\omega}_{\rm fDNS}^{\rm rms}\) at an arbitrary \(x_{1}\)-\(x_{2}\) plane at \(t/\tau\approx 4\) for LES at a grid resolution of \(64^{3}\) (FGR=2) in decaying homogeneous isotropic turbulence with the filter width \(\bar{\Delta}=32h_{\rm DNS}\): (a) fDNS, (b) DSM, (c) ADM(\(\chi\)=0), (d) ADM(\(\chi\)=1), (e) DMM, and (f) VOMM.
of the PDFs significantly different from the fDNS data for all three grid resolutions. The secondary filtering technique (\(\chi=1\)) of the ADM model does not substantially improve the prediction of the vorticity, whose estimates remain clearly different from the benchmark fDNS data. The DSM and DMM models underestimate the PDF of the vorticity and mispredict the PDF peak for the coarse grid-resolution case (FGR=1 with \(N=32^{3}\)), while their predictions of the PDFs improve greatly as the grid resolution increases (FGR=2 and 4 with \(N=64^{3}\) and \(128^{3}\)). In contrast, the VOMM model outperforms these classical SGS models at all three grid resolutions, giving reasonably good predictions of both the locations and the peaks of the vorticity PDFs.
The reconstruction of the transient spatial vorticity structures is finally demonstrated by the contours of the normalized vorticity magnitude shown in Fig. 15. The instantaneous snapshots are selected on an arbitrary \(x_{1}\)-\(x_{2}\) slice at the same time instant \(t/\tau\approx 4\). The pure ADM model predicts excessive stochastic small-scale structures, which differ significantly from the fDNS data. The other SGS models predict the large-scale vorticity structures quite well, but the VOMM model reconstructs spatial vortex structures most similar to the benchmark fDNS data. The VOMM model accurately recovers more flow structures and the temporal evolution of the vortices, owing to its suitable SGS dissipation and accurate structural modeling.
### Temporally evolving turbulent mixing layer
The turbulent mixing layer is one of the canonical flows in the fluid-mechanics community, widely used in the investigation of turbulent combustion, chemically reacting mixing processes, and fundamental studies of flow instabilities. The turbulent mixing layer involves the unsteady shear processes of vortex shedding and the transition from laminar to turbulent flow, which make it remarkably suitable for investigating the impact of non-uniform turbulent shear and mixing on the SGS models. The temporally evolving turbulent mixing layer characterized by the Kelvin-Helmholtz instability induced by the initial velocity difference is considered in this paper (Vreman _et al._ 1997; Sharan _et al._ 2019; Wang _et al._ 2022_a_). The free-shear mixing layer is governed by the same Navier-Stokes equations (Eqs. 2.1 and 2.2) without the forcing term. Figure 16 illustrates the flow configuration of the temporally evolving turbulent mixing layer with the initial hyperbolic-tangent streamwise velocity profile. The numerical simulation of the mixing layer is performed in a cuboid domain with lengths \(L_{1}\times L_{2}\times L_{3}=8\pi\times 8\pi\times 4\pi\) at a uniform grid resolution of \(N_{1}\times N_{2}\times N_{3}=512\times 512\times 256\), where \(x_{1}\in[-L_{1}/2,L_{1}/2]\), \(x_{2}\in[-L_{2}/2,L_{2}/2]\) and \(x_{3}\in[-L_{3}/2,L_{3}/2]\) denote the streamwise, transverse and spanwise directions, respectively. To enable a periodic configuration in the normal direction, the initial
Figure 16: Diagram of the temporally evolving mixing layer with the mean velocity profile: (a) schematic of the mixing layer, (b) mean streamwise velocity profile \(\langle u_{1}\rangle\) along the normal (\(x_{2}\)) direction.
mean streamwise velocity (c.f. Fig. 16b) is given by (Sharan _et al._, 2019; Wang _et al._, 2022_a_)
\[\langle u_{1}\rangle=\frac{\Delta U}{2}\left[\tanh\left(\frac{x_{2}}{2\delta_{ \theta}^{0}}\right)-\tanh\left(\frac{x_{2}+L_{2}/2}{2\delta_{\theta}^{0}} \right)-\tanh\left(\frac{x_{2}-L_{2}/2}{2\delta_{\theta}^{0}}\right)\right], \text{ for }-\frac{L_{2}}{2}\leqslant x_{2}\leqslant\frac{L_{2}}{2}, \tag{5.5}\]
where \(\Delta U=2\) is the velocity difference between the two equal and opposite free streams across the shear layer, \(\delta_{\theta}^{0}=0.08\) denotes the initial momentum thickness, and \(\langle\cdot\rangle\) stands for a spatial average over all the homogeneous directions (_i.e._, the \(x_{1}\) and \(x_{3}\) directions for the mixing layer). The initial mean transverse and spanwise velocities are both set to zero, namely, \(\langle u_{2}\rangle=\langle u_{3}\rangle=0\). Since the initial mean velocity field is periodic in all three directions, triply periodic boundary conditions are adopted and the pseudo-spectral method with the two-thirds dealiasing rule is used for the spatial discretization. An explicit two-step Adams-Bashforth scheme is selected for the time advancement. In order to effectively suppress the influence of the top and bottom boundaries on the central mixing layer, two numerical diffusion buffer zones are applied near the vertical edges of the domain (Wang _et al._, 2022_a_). The thickness of the buffer layer is set to \(15\delta_{\theta}^{0}\) in this paper, which is sufficiently large and has a negligible effect on the calculations of the mixing layer (Wang _et al._, 2022_a_).
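A small self-contained sketch of this profile (Eq. 5.5), using the parameter values quoted above, is given below; the function name is illustrative.

```python
import numpy as np

def initial_mean_streamwise_velocity(x2, dU=2.0, delta_theta0=0.08, L2=8*np.pi):
    """Initial mean streamwise velocity <u1>(x2) of Eq. 5.5: a hyperbolic-tangent
    shear layer made periodic in the transverse direction by the two mirrored
    layers centred at x2 = +/- L2/2."""
    return 0.5 * dU * (np.tanh(x2 / (2.0 * delta_theta0))
                       - np.tanh((x2 + L2 / 2.0) / (2.0 * delta_theta0))
                       - np.tanh((x2 - L2 / 2.0) / (2.0 * delta_theta0)))

# Example: transverse grid matching the DNS resolution of Table 6
x2 = np.linspace(-4.0 * np.pi, 4.0 * np.pi, 512, endpoint=False)
u1_mean = initial_mean_streamwise_velocity(x2)
```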
The digital filter method is used to generate the spatially correlated initial perturbation imposed on the mean velocities, with the digital filter width \(\Delta_{d}=\tilde{\Delta}=8h_{\text{DNS}}\) consistent with the filter scale of LES (Klein _et al._, 2003; Wang _et al._, 2022_b_). The initial Reynolds stress distribution (\(R_{ij}=\left\langle u_{i}^{\prime}u_{j}^{\prime}\right\rangle\), where \(u_{i}^{\prime}=u_{i}-\langle u_{i}\rangle\) represents the fluctuating velocity) of the digital filter method is prescribed as the vertical distribution \(R_{ij}=A\left(1-\langle u_{1}\rangle^{2}\right)I_{ij}\) with the identity \(I_{ij}\) and peak amplitude \(A=0.025\Delta U\). The kinematic viscosity of the shear layer is set to \(\nu_{\infty}=5\times 10^{-4}\). The momentum thickness quantifies the extent of the turbulent region in the mixing layer, and is defined by (Rogers & Moser, 1994; Sharan _et al._, 2019)
\[\delta_{\theta}=\int\limits_{-L_{2}/4}^{L_{2}/4}\left[\frac{1}{4}-\left(\frac {\langle\bar{u}_{1}\rangle}{\Delta U}\right)^{2}\right]dx_{2}. \tag{5.6}\]
Correspondingly, the Reynolds number based on the momentum thickness \(Re_{\theta}\) is expressed as
\[Re_{\theta}=\frac{\Delta U\delta_{\theta}}{\nu_{\infty}}. \tag{5.7}\]
Here, the initial momentum-thickness Reynolds number is \(Re_{\theta}^{0}=320\). The detailed numerical parameters of the DNS for the temporally evolving mixing layer are summarized in Table 6.
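As a consistency check of these definitions, the momentum thickness and \(Re_{\theta}\) can be evaluated numerically from a mean streamwise velocity profile; applied to the initial profile of Eq. 5.5 sketched above, the routine below returns \(\delta_{\theta}\approx 0.08\) and \(Re_{\theta}\approx 320\), in line with the quoted initial values (the function names are assumptions made for the example).

```python
import numpy as np

def momentum_thickness(u1_mean, x2, dU=2.0, L2=8*np.pi):
    """Momentum thickness delta_theta of Eq. 5.6 from the mean streamwise velocity
    <u1>(x2); the integral is restricted to -L2/4 <= x2 <= L2/4 as in the definition."""
    mask = np.abs(x2) <= L2 / 4.0
    integrand = 0.25 - (u1_mean[mask] / dU) ** 2
    dx2 = x2[1] - x2[0]                       # uniform transverse grid spacing assumed
    return np.sum(integrand) * dx2            # rectangle-rule integral

def momentum_thickness_reynolds(delta_theta, dU=2.0, nu=5e-4):
    """Momentum-thickness Reynolds number Re_theta of Eq. 5.7."""
    return dU * delta_theta / nu
```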
We calculate the DNS of the mixing layer for a total of eight hundred time units (\(t/\tau_{\theta}=800\)) normalized by \(\tau_{\theta}=\delta_{\theta}^{0}/\Delta U\). In order to reduce the impact of the initial random disturbances on the temporal development of the shear layer, six numerical experiments with different random initializations are performed, one of which is adopted for the parameter optimization of the VOMM model, while the remaining five are used to evaluate the ensemble-averaged physical quantities. The _a posteriori_ studies of LES are conducted using the explicit Gaussian filter (Eq. 5.1) with
| \(N_{1}\times N_{2}\times N_{3}\) | \(L_{1}\times L_{2}\times L_{3}\) | \(\nu_{\infty}\) | \(Re_{\theta}\) | \(\delta_{\theta}^{0}\) | \(\Delta U\) | \(\Delta_{d}/h_{\text{DNS}}\) | \(h_{\text{DNS}}\) | \(\Delta t_{\text{DNS}}\) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| \(512\times 512\times 256\) | \(8\pi\times 8\pi\times 4\pi\) | \(5\times 10^{-4}\) | 4000 | 0.08 | 2 | 8 | \(\pi/64\) | 0.002 |

Table 6: Numerical parameters for the DNS of the temporally evolving mixing layer.
the given filter scale \(\tilde{\Delta}=8h_{\text{DNS}}\), and are initialized by the same instantaneous velocity field of the filtered DNS at \(t/\tau_{\theta}=50\). Two different filter-to-grid ratios FGR=\(\tilde{\Delta}/h_{\text{LES}}\)=1 and 2 are selected to study the influence of the spatial resolution, or discretization error, on the SGS stress modeling, with the corresponding grid resolutions of LES: \(N=64^{2}\times 32\) and \(128^{2}\times 64\). The results for the previous two turbulence problems (forced and decaying homogeneous isotropic turbulence) indicate that the statistics of turbulence are very similar when the grid resolution is sufficiently fine (FGR=2 and 4) and the discretization error is negligible. However, the statistics of LES with a relatively coarse grid resolution (FGR=1) are distinctly different from those of LES with satisfactory grid resolutions (FGR=2 and 4), since the spatial discretization error at FGR=1 is significant and dominates the SGS modelling error. Therefore, the _a posteriori_ tests of LES at both FGR=1 and 2 are essential for the performance evaluation of the SGS model.
The dissipation spectrum of the filtered DNS is again used as the objective function to optimize the model parameters of the VOMM model during the adjoint-based optimization period (assessed every \(t/\tau_{\theta}=10\), thirty-six groups in total over \(50\leqslant t/\tau_{\theta}\leqslant 400\)) (c.f. Fig. 1). The pure ADM model without the dissipative term is adopted as the initial SGS model with coefficients \(C_{1}^{(0)}=0\) and \(C_{2}^{(0)}=1\). We calculate the adjoint-based gradients of the cost functional with respect to the model parameters by backward integrating the stabilized adjoint LES equations (Eqs. 4.10 and 4.26). The SGS model coefficients are iteratively updated by the L-BFGS optimization method (Eq. 4.41) until the stopping criterion is ultimately satisfied. Figure 17 gives the optimization
| FGR | LES Resolution | \(C_{1}^{(0)}\) | \(C_{2}^{(0)}\) | \(C_{1}^{\text{opt}}\) | \(C_{2}^{\text{opt}}\) |
| --- | --- | --- | --- | --- | --- |
| 1 | \(64^{2}\times 32\) | 0 | 1 | -0.0637 | 1.188 |
| 2 | \(128^{2}\times 64\) | 0 | 1 | -0.0126 | 1.000 |

Table 7: The initial and optimal parameters of the VOMM model for LES computations with the filter width \(\tilde{\Delta}=8h_{\text{DNS}}\) in the temporally evolving mixing layer.
Figure 17: The evolution of the normalized cost function in temporally evolving turbulent mixing layer.
process of the cost function during the adjoint-based optimization for the temporally evolving mixing layer. The loss functions for both FGR=1 and 2 drop dramatically and reach a steady plateau within fewer than ten iterations. The cost function at FGR=1 shows a more pronounced reduction, to approximately 8% of the initial level, while that at FGR=2 decreases to about 10% of its original value. The optimal parameters of the VOMM model are quickly obtained by the gradient-based optimization within a limited number of iterations (around 10 optimization evaluations, namely, 20 LES calculations). Table 7 summarizes the optimal parameters of the VOMM model. The magnitude of the dissipative Smagorinsky coefficient (\(\left|C_{1}^{\text{opt}}\right|\)) decreases clearly from 0.0637 to 0.0126 when the FGR increases from 1 to 2, while the ADM coefficient (\(C_{2}^{\text{opt}}\)) again tends towards unity, similar to the cases of isotropic turbulence.
We then examine the _a posteriori_ performance of the proposed VOMM model once the SGS model coefficients are determined by the adjoint-based gradient optimization strategy. To demonstrate that the optimal model parameters are insensitive to the initial perturbations, ensemble-averaged quantities are evaluated over five numerical experiments whose initial random disturbances differ from those used in the optimization process. The time steps of LES are set as \(\Delta t_{\text{LES}}/\Delta t_{\text{DNS}}=\{10,5\}\) to keep the CFL number consistent across the different grid resolutions (FGR=1 and 2 with \(N=64^{2}\times 32\) and \(128^{2}\times 64\)). The VOMM model is compared with the conventional SGS models (DSM, DMM and ADM models), and the average modeling costs for the different SGS models are listed in Table 8. The VOMM model is efficient to evaluate, requiring about 30% of the computational cost of the DMM model and a cost similar to that of the ADM models.
Figure 18 illustrates the temporal evolutions of the momentum thickness \(\delta_{\theta}\) in LES calculations with different SGS models compared to the benchmark fDNS data. For the coarse-grid case (FGR=1 with \(N=64^{2}\times 32\)), all conventional SGS models underpredict the momentum thickness at the early stage of shear layer development (\(t/\tau_{\theta}\leqslant 300\)) but clearly overestimate it in the linear growth region. For the fine-grid case (FGR=2 with \(N=128^{2}\times 64\)), the DMM and ADM (\(\chi\)=1) models capture the growth rate of the momentum thickness well at the beginning of the temporal development, but still overpredict the thickness as the shear layer develops. The prediction of the pure ADM model with \(\chi=0\) is irregular and nonlinear throughout, without an apparent linear self-similar region. The DSM model at both grid resolutions gives clearly tilted temporal evolutions: the momentum thickness is underestimated at the beginning of the transition region and overpredicted in the linear growth region. In contrast, the predictions of the VOMM model always coincide well with those of fDNS, and they accurately capture the temporal growth rate in the linear region at both grid resolutions.
Furthermore, the evolutions of the turbulent kinetic energy in the streamwise and spanwise
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline Model(FGR=1,\(N=64^{2}\times 32\)) & DSM & DMM & ADM(\(\chi\)=0) & ADM(\(\chi\)=1) & VOMM \\ \hline t(CPU-s) & 0.646 & 1.096 & 0.254 & 0.247 & 0.362 \\ t/t\({}_{\text{DMM}}\) & 0.590 & 1 & 0.232 & 0.225 & 0.330 \\ \hline Model(FGR=2,\(N=128^{2}\times 64\)) & DSM & DMM & ADM(\(\chi\)=0) & ADM(\(\chi\)=1) & VOMM \\ \hline t(CPU-s) & 3.756 & 6.370 & 1.465 & 1.460 & 1.908 \\ t/t\({}_{\text{DMM}}\) & 0.590 & 1 & 0.230 & 0.229 & 0.300 \\ \hline \hline \end{tabular}
\end{table}
Table 8: The average computational cost of SGS stress modeling \(\tau_{ij}\) for LES computations with the filter width \(\tilde{\Delta}=8h_{\text{DNS}}\) in temporally evolving turbulent mixing layer.
directions are displayed in Figs. 19 and 20, respectively. The comparisons of the transverse turbulent kinetic energy for the different SGS models are very similar to those in the spanwise direction and are not shown here. The turbulent kinetic energy of the DNS in the different directions gradually increases as the shear layer develops, since the initially perturbed velocity field is approximately laminar and steadily transitions to turbulence. The temporal development of the streamwise kinetic energy is approximately linear in time, which is distinctly different from that of the spanwise kinetic energy. All classical SGS models predict both the streamwise and spanwise kinetic energy to be much larger than the benchmark fDNS results at both LES grid resolutions, except that the pure ADM model underestimates the kinetic energy in the fine-grid case (FGR=2). Compared to these traditional models, the VOMM model accurately predicts the kinetic energy at both grid resolutions in the streamwise and spanwise directions, and is the closest to the fDNS data.
The profiles of the resolved Reynolds shear stress component \(\bar{R}_{12}=\left\langle\bar{u}_{1}^{\prime}\bar{u}_{2}^{\prime}\right\rangle\) at time instants \(t/\tau_{\theta}\approx 500\) and \(800\) are illustrated in Fig. 21; this is the dominant Reynolds stress term owing to the intense mixing along the streamwise and normal directions (Vreman _et al._, 1997; Sharan _et al._, 2019). The distribution of the Reynolds stress along the normal direction is a second-order turbulence statistic that places high demands on the accuracy of the SGS modeling in LES. The ADM models underpredict the Reynolds stress, while the DSM and DMM models give obvious overestimations at
Figure 19: Temporal evolutions of the streamwise turbulent kinetic energy \(E_{k1}\) for LES in the _a posteriori_ analysis of temporally evolving turbulent mixing layer with the same filter scale \(\bar{\Delta}=8h_{\text{DNS}}\): (a) FGR=1, \(N=64^{2}\times 32\); (b) FGR=2, \(N=128^{2}\times 64\).
Figure 18: Temporal evolutions of the momentum thickness \(\delta_{\theta}\) for LES in the _a posteriori_ analysis of temporally evolving turbulent mixing layer with the same filter scale \(\bar{\Delta}=8h_{\text{DNS}}\): (a) FGR=1, \(N=64^{2}\times 32\); (b) FGR=2, \(N=128^{2}\times 64\).
different times. Compared to these classical SGS models, the VOMM model gives the prediction closest to the fDNS results, and accurately recovers the transient profiles of Reynolds stress.
We further compare the velocity spectra of the different SGS models with the DNS and filtered DNS data at time instants \(t/\tau_{\theta}\approx 500\) and \(800\), as shown in Fig. 22. The spectra of the DNS at \(t/\tau_{\theta}\approx 500\) and \(800\) are very similar, since the instantaneous velocity fields at both moments lie in the self-similar stage of the mixing layer. For the coarse-grid case (FGR=1 with \(N=64^{2}\times 32\)), the conventional SGS models (DSM, DMM and ADM models) always overestimate the small-scale kinetic energy at high wavenumbers; the excess kinetic energy accumulates at small scales and exacerbates the numerical instability of the LES over time. The SGS dissipation provided by these conventional models is insufficient to stabilize the numerical perturbations induced by the spatial discretization errors and cannot drain the small-scale kinetic energy quickly enough at FGR=1. For the fine-grid case (FGR=2 with \(N=128^{2}\times 64\)), the pure ADM model remains numerically unstable and its prediction deviates distinctly from the fDNS data. The velocity spectra predicted by the other conventional SGS models (DSM, DMM and ADM with \(\chi\)=1) diminish in the high-wavenumber region and accumulate at intermediate wavenumbers, since these traditional models are too dissipative at the fine grid resolution to recover the effect of the small-scale flow structures near the cutoff wavenumber, giving rise to a blockage of the kinetic energy cascade from large scales to small scales. In contrast, the kinetic energy cascade can be correctly constructed with high accuracy by
the VOMM model, and the predictions are always in reasonable agreement with those of fDNS at different grid resolutions and time instants.
The reconstruction of vortex structures is finally compared with different SGS models by displaying the iso-surface of the Q-criterion. The Q-criterion is a useful visualization tool for observing vortex structures in turbulent flows, and is the second invariant of velocity gradient tensor, namely (Hunt _et al._ 1988; Dubief & Delcayre 2000; Zhan _et al._ 2019)
\[Q=\frac{1}{2}\left(\bar{\Omega}_{ij}\bar{\Omega}_{ij}-\bar{S}_{ij}\bar{S}_{ij} \right)\,, \tag{5.8}\]
where \(\bar{\Omega}_{ij}=\frac{1}{2}\left(\partial\bar{u}_{i}/\partial x_{j}-\partial\bar{u}_{j}/\partial x_{i}\right)\) represents the rotation-rate tensor. The instantaneous iso-surface of \(Q=0.2\), colored by the streamwise velocity, is illustrated in Fig. 23 at \(t/\tau_{\theta}\approx 500\) during the self-similar stage of the mixing layer. The Q iso-surface of fDNS contains a large number of elaborate vortex structures near the middle \(x_{1}\)-\(x_{3}\) plane of the shear layer, including rib-like vortices, hairpin vortices and complex helical vortices, _etc._ The DSM, DMM and ADM (\(\chi\)=1) models are so dissipative that only large-scale rib-like vortex structures remain, while the pure ADM model with \(\chi\)=0 suffers from the numerical instability of LES and overpredicts many nonphysical small-scale structures caused by numerical noise. In contrast, the VOMM model accurately reconstructs many more vortex structures, highlighting its advantage in improving the accuracy of LES.
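For reference, a minimal NumPy sketch of evaluating Eq. (5.8) on a periodic grid with central differences is given below; the analytic Taylor-Green-like velocity field and the grid size are placeholder assumptions standing in for the filtered LES velocity.

```python
# Minimal sketch: Q-criterion (Eq. 5.8) from a velocity field on a periodic grid.
import numpy as np

n = 64
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
u = np.cos(X) * np.sin(Y) * np.sin(Z)       # stand-in velocity components
v = -np.sin(X) * np.cos(Y) * np.sin(Z)
w = np.zeros_like(u)
dx = x[1] - x[0]

grads = [np.gradient(c, dx, dx, dx) for c in (u, v, w)]   # grads[i][j] = du_i/dx_j
Q = np.zeros_like(u)
for i in range(3):
    for j in range(3):
        S = 0.5 * (grads[i][j] + grads[j][i])     # strain-rate tensor S_ij
        W = 0.5 * (grads[i][j] - grads[j][i])     # rotation-rate tensor Omega_ij
        Q += 0.5 * (W * W - S * S)                # accumulate (Omega_ij Omega_ij - S_ij S_ij)/2
print("fraction of rotation-dominated cells (Q > 0):", float(np.mean(Q > 0.0)))
```

An iso-surface such as \(Q=0.2\) can then be extracted from this field with any standard contouring tool.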
Figure 23: The iso-surface of the Q-criterion at \(Q\)=0.2 colored by the streamwise velocity at \(t/\tau_{\theta}\approx 500\) in the _a posteriori_ analysis of temporally evolving turbulent mixing layer with filter scale \(\tilde{\Delta}=8h_{\rm DNS}\) at grid resolution of \(N=128^{2}\times 64\): (a) fDNS, (b) DSM, (c) ADM(\(\chi\)=0), (d) ADM(\(\chi\)=1), (e) DMM, and (f) VOMM.
## 6 Conclusion
In this work, an adjoint-based variational optimal mixed model (VOMM) is developed for the large-eddy simulation of turbulence. We first derive the original adjoint LES equations with a general SGS model, and then carry out an energy budget analysis of the adjoint equations. These derivations demonstrate that the quadratic term associated with the negative eigenvalues of the shear strain rate is responsible for the exponential temporal growth of the adjoint-based gradients, giving rise to numerical divergence over long time horizons for chaotic turbulent flows. This issue might greatly limit the application of adjoint-based variational methods and optimal control strategies in turbulence problems. An additional stabilization term is introduced to maintain the numerical stability of the adjoint LES equations; it is efficiently calculated by the sequential quadratic programming (SQP) approach without degrading the accuracy of the gradient evaluations for the SGS model parameters. The stabilized adjoint LES equations are then formulated accordingly.
The approximate deconvolution model (ADM) in the scale-similarity form and the dissipative Smagorinsky term are selected as the basis tensors of the proposed VOMM model. The parameters of the VOMM model are optimally identified by minimizing the statistical discrepancies between dissipation spectra of the LES and those of the benchmark filtered DNS data. The adjoint-based gradients of cost functional for model coefficients are efficiently evaluated by successively forward solving the LES equations and backward integrating the stabilized adjoint LES equations. The gradient-based L-BFGS optimization algorithm is adopted for iteratively updating the VOMM model parameters until the optimal values are obtained.
Three turbulent flow scenarios, including forced homogeneous isotropic turbulence, decaying homogeneous isotropic turbulence and the temporally evolving turbulent mixing layer, are investigated to examine the _a posteriori_ performance of the VOMM model. The pure structural ADM model without the dissipative Smagorinsky term is selected as the initial SGS model for the parameter optimization. During the adjoint-based gradient optimization, the loss functions of the dissipation spectra converge rapidly, reaching an optimal state of only about 10% of the initial value within fewer than twenty iterations (about forty LES evaluations) at the different grid resolutions for these three types of turbulence. These results indicate that the adjoint-based gradient optimization is an effective tool to obtain the optimal parameters of the VOMM model within only a few iterations. Meanwhile, the computational cost of the proposed method is essentially independent of the number of parameters.
Once the optimal SGS model coefficients are determined by the adjoint-based gradient optimization, the _a posteriori_ accuracy of the VOMM model is further tested in comparison with the classical SGS models, including the dynamic Smagorinsky model (DSM), dynamic mixed model (DMM), the pure ADM model and ADM model with the standard secondary-filtering regularization, respectively. The various statistics of turbulence and the instantaneous flow structures are comprehensively compared for LES calculations of different SGS models with the benchmark filtered DNS data at different grid resolutions of three turbulent flow scenarios.
In the cases of forced and decaying homogeneous isotropic turbulence, the filter scale is fixed to \(\bar{\Delta}=32h_{\text{DNS}}\) and the impact of the spatial discretization errors on the SGS modeling is studied by changing the grid resolution of LES with three different filter-to-grid ratios FGR=1, 2 and 4. The _a posteriori_ performance of the proposed VOMM model is systematically evaluated by comparison to the conventional SGS models (DSM, DMM and ADM models) in terms of the velocity spectra, structure functions of different orders, PDFs of the velocity increments and vorticity, temporal evolutions of the turbulent kinetic energy and average dissipation rate, as well as the instantaneous vorticity contours at different grid resolutions. The pure ADM model always exhibits numerical instability due to insufficient SGS dissipation in all grid-resolution cases. The dynamic models and the standard regularized ADM model underpredict the
model dissipation in the case of coarse grid resolution (FGR=1), with the excess kinetic energy accumulating at small scales and leading to the numerical instability of LES. The SGS dissipation imposed by these classical models is insufficient to suppress the numerical perturbations dominated by the spatial discretization, and it cannot drain the small-scale kinetic energy quickly enough at FGR=1. At satisfactory grid resolutions, however, the traditional SGS models are so dissipative that most small-scale flow structures near the truncated wavenumber are damped out, giving rise to a blockage of the kinetic energy cascade from large scales to small scales. In contrast, the VOMM model can correctly reconstruct the kinetic energy cascade and the evolution of the dissipation rate with high accuracy, which is essential for isotropic turbulence. In addition, the VOMM model accurately predicts various flow statistics and transient spatial flow structures, which are always in reasonable agreement with the benchmark filtered DNS results at different grid resolutions and times.
In the context of the temporally evolving turbulent mixing layer, the unsteady evolution of the shear layer, which gradually transitions from the initially perturbed velocity field to fully developed turbulence, is challenging for the SGS modeling of LES. The VOMM model can accurately reconstruct the temporal evolutions of the characteristic physical quantities of the mixing layer, including the momentum thickness, the turbulent kinetic energy in different directions and the transient velocity spectra at different times. The corresponding predictions of VOMM are closest to the filtered DNS results and superior to those of the conventional SGS models (DSM, DMM and ADM models). The profiles of the Reynolds shear stress at the self-similar stage of the shear layer are critical for the development of the mixing layer, and none of the conventional SGS models is able to predict the vertical distributions accurately, showing significant deviations from the benchmark fDNS result. In contrast, the VOMM model predicts the Reynolds stress fairly well at different time instants. Besides, it can be clearly observed from the iso-surfaces of the Q-criterion that the VOMM model accurately recovers the diverse spatial vortex structures, very similar to the benchmark fDNS data, in comparison to the classical SGS models.
Furthermore, for the three turbulent flow scenarios with different grid resolutions, the computational cost of the proposed VOMM model is only about 30% of that of the DMM model, which is very efficient and competitive compared to the classical SGS models. These results suggest that the proposed VOMM model achieves high _a posteriori_ accuracy and computational efficiency by assimilating the _a priori_ knowledge of turbulence statistics, and can be a promising tool for developing advanced SGS models in the LES of turbulence.
Finally, it is noteworthy that fine-tuning a small number of model parameters of some traditional SGS models can significantly improve the _a posteriori_ accuracy of LES using the proposed adjoint-based optimization framework. In addition, the predictions of LES in complex turbulent flows using the VOMM model may become considerably more accurate as the number of model coefficients increases, while the computational cost of the adjoint-based approach hardly varies with the number of parameters. Although the high-fidelity turbulence statistics are provided by DNS data in the current study, experimental measurements can also be assimilated using the same optimization procedure to increase the accuracy of LES modeling for a particular type of complex turbulent flow. In future work, we will apply the VOMM model with the existing optimal parameters to more complex turbulent flows and generalize it to turbulence with different filter scales.
**Funding.** This work was supported by the National Natural Science Foundation of China (NSFC Grants No. 91952104, No. 92052301, No. 12172161, and No. 12161141017), by the National Numerical Windtunnel Project (No. NNW2019ZT1-A04), by the NSFC Basic Science Center Program (Grant No. 11988102), by the Shenzhen Science and Technology Program (Grants No. KQTD20180411143441009), by Key Special Project for Introduced Talents Team of Southern Marine Science and Engineering Guangdong Laboratory (Guangzhou) (Grant No.
GML2019ZD0103), and by Department of Science and Technology of Guangdong Province (Grants No. 2019B21203001). This work was also supported by Center for Computational Science and Engineering of Southern University of Science and Technology.
**Declaration of interests.** The authors report no conflict of interest.
## Appendix A Derivation of the adjoint large-eddy simulation equations
The large-eddy simulation (LES) equations are expressed as (Pope 2000; Sagaut 2006)
\[R_{0}\left(\bar{u}_{i}\right)=\frac{\partial\bar{u}_{i}}{\partial x_{i}}=0, \tag{A1}\]
\[R_{i}\left(\bar{u}_{i},\bar{p}\right)=\frac{\partial\bar{u}_{i}}{\partial t}+\frac{\partial\left(\bar{u}_{i}\bar{u}_{j}\right)}{\partial x_{j}}+\frac{\partial\bar{p}}{\partial x_{i}}-\nu\frac{\partial^{2}\bar{u}_{i}}{\partial x_{j}\partial x_{j}}-\overline{\mathcal{F}}_{i}+\frac{\partial\tau_{ij}}{\partial x_{j}}=0, \tag{A2}\]
where an overbar denotes the filtered variables with filter scale \(\tilde{\Delta}\), \(\bar{u}_{i}\) and \(\bar{p}\) denote the filtered velocity and pressure, respectively. Here, \(\nu\) is the kinematic viscosity, and \(\bar{\mathcal{F}}_{i}\) represents the large-scale forcing. The unclosed SGS stress \(\tau_{ij}=\overline{u_{i}u_{j}}-\bar{u}_{i}\bar{u}_{j}\) is modeled by the \(N\)-parameter mixed model \(\tau_{ij}=\sum\limits_{n=1}^{N}C_{n}T_{ij}^{(n)}\left(\bar{u}_{i};\tilde{ \Delta}\right)\) with the basis stress tensors \(T_{ij}^{(n)}\) and model coefficients \(C_{n}\ \ (n=1,2,...,N)\). The sensitivities of the governing equations for the LES variables \(\mathbf{\bar{v}}=\left[\bar{p},\bar{u}_{1},\bar{u}_{2},\bar{u}_{3}\right]^{T}\) are given by
\[\delta R_{k}=\frac{\partial R_{k}}{\partial\mathbf{\bar{v}}}\cdot\delta\mathbf{\bar{v}}=\left[\begin{array}{c}\frac{\partial\delta\bar{u}_{i}}{\partial x_{i}}\\ \frac{\partial\delta\bar{u}_{i}}{\partial t}+\frac{\partial\left(\bar{u}_{j}\delta\bar{u}_{i}\right)}{\partial x_{j}}+\frac{\partial\left(\bar{u}_{i}\delta\bar{u}_{j}\right)}{\partial x_{j}}+\frac{\partial\delta\bar{p}}{\partial x_{i}}-\nu\frac{\partial^{2}\delta\bar{u}_{i}}{\partial x_{j}\partial x_{j}}+\frac{\partial\delta\tau_{ij}}{\partial x_{j}}\end{array}\right]=0. \tag{A3}\]
The adjoint LES equations are derived by the adjoint identity acting on the adjoint variables \(\mathbf{\bar{v}}^{\dagger}=\left[\bar{p}^{\dagger},\bar{u}_{1}^{\dagger}, \bar{u}_{2}^{\dagger},\bar{u}_{3}^{\dagger}\right]^{T}\), namely
\[\left\langle\frac{\partial R_{k}}{\partial\mathbf{\bar{v}}}\cdot\delta\mathbf{\bar{v}},\mathbf{\bar{v}}^{\dagger}\right\rangle_{\mathbf{x},t}=\left\langle\delta\mathbf{\bar{v}},\left(\frac{\partial R_{k}}{\partial\mathbf{\bar{v}}}\right)^{\dagger}\cdot\mathbf{\bar{v}}^{\dagger}\right\rangle_{\mathbf{x},t}+BT, \tag{A4}\]
where \(BT\) denotes the boundary and temporal integral terms, and \(BT=0\) can identify the boundary and terminal conditions of the adjoint equations. The corresponding adjoint LES equations can be expressed as
\[\sum\limits_{k=0}^{3}\left(\frac{\partial R_{k}}{\partial\mathbf{\bar{v}}}\right)^{\dagger}\cdot\mathbf{\bar{v}}^{\dagger}-\frac{\partial J}{\partial\mathbf{\bar{v}}}=0, \tag{A5}\]
where \(\partial J/\partial\mathbf{\bar{v}}=\left[0,\frac{\partial J}{\partial\bar{u}_{1}},\frac{\partial J}{\partial\bar{u}_{2}},\frac{\partial J}{\partial\bar{u}_{3}}\right]^{T}\) denotes the sensitivity of the cost functional \(J\left(\bar{u}_{i},\bar{u}_{i}^{\mathrm{ref}};C_{n},\mathbf{x},t\right)\) which quantifies the discrepancy between \(\bar{u}_{i}\) and the reference data \(\bar{u}_{i}^{\mathrm{ref}}\) in the LES calculations under the given parameters \(C_{n}\left(n=1,2,...,N\right)\) at a certain space-time state \(\left(\mathbf{x},t\right)\). Here, the terms \(\left(\partial R_{k}/\partial\mathbf{\bar{v}}\right)^{\dagger}\cdot\mathbf{\bar{v}}^{\dagger}\left(k=0,1,2,3\right)\) are derived by multiplying the perturbation LES equations (Eq. A3) with the adjoint LES variables \(\mathbf{\bar{v}}^{\dagger}\), and then integrating by parts to rearrange all of the
differential operators without \(\delta\mathbf{\tilde{v}}\) onto the adjoint variables \(\mathbf{\tilde{v}}^{\dagger}\), yielding
\[\begin{array}{l}\frac{\partial\delta\tilde{u}_{i}}{\partial x_{i}}\tilde{p}^{\dagger}+\left[\frac{\partial\delta\tilde{u}_{i}}{\partial t}+\frac{\partial\left(\tilde{u}_{j}\delta\tilde{u}_{i}\right)}{\partial x_{j}}+\frac{\partial\left(\tilde{u}_{i}\delta\tilde{u}_{j}\right)}{\partial x_{j}}+\frac{\partial\delta\tilde{p}}{\partial x_{i}}-\nu\frac{\partial^{2}\delta\tilde{u}_{i}}{\partial x_{j}\partial x_{j}}+\frac{\partial\delta\tau_{ij}}{\partial x_{j}}\right]\tilde{u}_{i}^{\dagger}=\\ -\left(\frac{\partial\tilde{u}_{i}^{\dagger}}{\partial x_{i}}\right)\delta\tilde{p}-\left[\frac{\partial\tilde{u}_{i}^{\dagger}}{\partial t}+\left(\frac{\partial\tilde{u}_{i}^{\dagger}}{\partial x_{j}}+\frac{\partial\tilde{u}_{j}^{\dagger}}{\partial x_{i}}\right)\tilde{u}_{j}+\nu\frac{\partial^{2}\tilde{u}_{i}^{\dagger}}{\partial x_{j}\partial x_{j}}+\frac{\partial}{\partial x_{j}}\left(\tilde{u}_{k}^{\dagger}\frac{\partial\tau_{jk}}{\partial\tilde{u}_{i}}\right)-\tilde{u}_{k}^{\dagger}\frac{\partial^{2}\tau_{jk}}{\partial\tilde{u}_{i}\partial x_{j}}\right]\delta\tilde{u}_{i}\\ +\underbrace{\frac{\partial\left(\tilde{u}_{i}^{\dagger}\delta\tilde{u}_{i}\right)}{\partial t}}_{\text{terminal condition}}+\underbrace{\frac{\partial}{\partial x_{j}}\left[\left(\tilde{u}_{i}^{\dagger}\tilde{u}_{j}+\nu\frac{\partial\tilde{u}_{i}^{\dagger}}{\partial x_{j}}+\tilde{u}_{k}^{\dagger}\frac{\partial\tau_{jk}}{\partial\tilde{u}_{i}}\right)\delta\tilde{u}_{i}-\nu\tilde{u}_{i}^{\dagger}\frac{\partial\delta\tilde{u}_{i}}{\partial x_{j}}\right]+\frac{\partial}{\partial x_{i}}\left[\tilde{u}_{i}^{\dagger}\delta\tilde{p}+\left(\tilde{p}^{\dagger}+\tilde{u}_{j}\tilde{u}_{j}^{\dagger}\right)\delta\tilde{u}_{i}\right]}_{\text{boundary terms}}.\end{array}\] (A6)
The adjoint LES equations are written in detail as
\[\frac{\partial\tilde{u}_{i}^{\dagger}}{\partial x_{i}}=0,\] (A7)
\[\frac{\partial\tilde{u}_{i}^{\dagger}}{\partial t}+\left(\frac{\partial\tilde {u}_{i}^{\dagger}}{\partial x_{j}}+\frac{\partial\tilde{u}_{j}^{\dagger}}{ \partial x_{i}}\right)\tilde{u}_{j}+\nu\frac{\partial^{2}\tilde{u}_{i}^{\dagger }}{\partial x_{j}\partial x_{j}}+\frac{\partial}{\partial x_{j}}\left(\tilde{u} _{k}^{\dagger}\frac{\partial\tau_{jk}}{\partial\tilde{u}_{i}}\right)-\tilde{u}_ {k}^{\dagger}\frac{\partial^{2}\tau_{jk}}{\partial\tilde{u}_{i}\partial x_{j}} +\frac{\partial J}{\partial\tilde{u}_{i}}=0.\] (A8)
It is worth noting that the adjoint SGS term \(\tilde{u}_{k}^{\dagger}\frac{\partial^{2}\tau_{jk}}{\partial\tilde{u}_{i}\partial x_{j}}\) can lead to the non-conservation of the adjoint momentum and deteriorate the evaluation of the adjoint-based gradients. To our knowledge, few previous studies have addressed this critical issue, which makes the LES adjoint field prone to numerical instability and eventual divergence. In order to maintain momentum conservation in the adjoint equations, we remove \(\tilde{u}_{k}^{\dagger}\frac{\partial^{2}\tau_{jk}}{\partial\tilde{u}_{i}\partial x_{j}}\) from Eq. A8, and the conservative adjoint LES equations are obtained as
\[\frac{\partial\tilde{u}_{i}^{\dagger}}{\partial x_{i}}=0,\] (A9)
\[\frac{\partial\tilde{u}_{i}^{\dagger}}{\partial t}+\left(\frac{\partial\tilde {u}_{i}^{\dagger}}{\partial x_{j}}+\frac{\partial\tilde{u}_{j}^{\dagger}}{ \partial x_{i}}\right)\tilde{u}_{j}+\nu\frac{\partial^{2}\tilde{u}_{i}^{\dagger }}{\partial x_{j}\partial x_{j}}+\frac{\partial\tau_{ij}^{\dagger}}{\partial x _{j}}+\frac{\partial J}{\partial\tilde{u}_{i}}=0,\] (A10)
where \(\tau_{ij}^{\dagger}=\tilde{u}_{k}^{\dagger}\frac{\partial\tau_{jk}}{\partial\tilde{u}_{i}}\) is the adjoint SGS stress. If the unclosed SGS term is modeled by the \(N\)-parameter mixed model \(\tau_{ij}=\sum\limits_{n=1}^{N}C_{n}T_{ij}^{(n)}\left(\tilde{u}_{i};\tilde{\Delta}\right)\) with the basis stress tensors \(T_{ij}^{(n)}\) and model coefficients \(C_{n}\), the adjoint SGS stresses are correspondingly represented as \(\tau_{ij}^{\dagger}=\sum\limits_{n=1}^{N}C_{n}T_{ij}^{(n),\dagger}\) with the associated adjoint basis stress tensors \(T_{ij}^{(n),\dagger}\) (\(n=1,2,...,N\)).
## Appendix B Derivation of the adjoint SGS stress for the VOMM model
The present variational optimal mixed model (VOMM) combines the approximate deconvolution model (ADM) in the scale-similarity form with the dissipative Smagorinsky part, expressed as
\[\tau_{ij}=C_{1}T_{ij}^{(1)}+C_{2}T_{ij}^{(2)},\ \ \text{with}\ \ T_{ij}^{(1)}=\tilde{\Delta}^{2}|\tilde{S}|\tilde{S}_{ij},\ \ T_{ij}^{(2)}=\overline{u_{i}^{*}u_{j}^{*}}-\overline{u_{i}^{*}}\ \overline{u_{j}^{*}},\] (B1)
where \(u_{i}^{*}=\sum\limits_{n=1}^{N}(I-G)^{n-1}\otimes\tilde{u}_{i}\) stands for the \(i\)-th approximate unfiltered velocity component recovered by the iterative van Cittert procedure, \(N\) is the number of iterations for the AD procedure, \(I\) is the identity, and the symbol "\(\otimes\)" is the spatial convolution operator. Here, \(C_{1}\) and \(C_{2}\) are SGS
model coefficients. The variation of the first basis SGS tensor \(T_{ij}^{(1)}\) with respect to the velocity is derived by
\[\delta T_{ij}^{(1)}=\bar{\Delta}^{2}\left[|\bar{S}|\delta\bar{S}_{ij}+\left(\delta|\bar{S}|\right)\bar{S}_{ij}\right]=\bar{\Delta}^{2}\left(|\bar{S}|\frac{\partial\bar{S}_{ij}}{\partial\bar{u}_{k}}+\frac{\partial|\bar{S}|}{\partial\bar{u}_{k}}\bar{S}_{ij}\right)\delta\bar{u}_{k}, \tag{B2}\]
where the derivatives of the shear strain-rate tensor and characteristic strain rate for the velocity are further written as
\[\frac{\partial\bar{S}_{ij}}{\partial\bar{u}_{k}}=\frac{1}{2}\frac{\partial}{\partial\bar{u}_{k}}\left(\frac{\partial\bar{u}_{i}}{\partial x_{j}}+\frac{\partial\bar{u}_{j}}{\partial x_{i}}\right)=\frac{1}{2}\left(\frac{\partial\delta_{ik}}{\partial x_{j}}+\frac{\partial\delta_{jk}}{\partial x_{i}}\right), \tag{B3}\]
and
\[\frac{\partial|\bar{S}|}{\partial\bar{u}_{k}}=\frac{\partial|\bar{S}|}{\partial\bar{S}_{ij}}\frac{\partial\bar{S}_{ij}}{\partial\bar{u}_{k}}=\frac{\bar{S}_{ij}}{|\bar{S}|}\left(\frac{\partial\delta_{ik}}{\partial x_{j}}+\frac{\partial\delta_{jk}}{\partial x_{i}}\right). \tag{B4}\]
The inner product between the variation of the first basis SGS force and the adjoint velocity is derived by
\[\begin{array}{l}\frac{\partial\delta T_{ij}^{(1)}}{\partial x_{j}}\bar{u}_{i}^{\dagger}=-\frac{\partial\bar{u}_{i}^{\dagger}}{\partial x_{j}}\delta T_{ij}^{(1)}+\frac{\partial}{\partial x_{j}}\left[\bar{u}_{i}^{\dagger}\delta T_{ij}^{(1)}\right]\\ =-\frac{\bar{\Delta}^{2}}{2}\left[\left(|\bar{S}|\frac{\partial\bar{u}_{i}^{\dagger}}{\partial x_{j}}\right)\left(\frac{\partial\delta_{ik}}{\partial x_{j}}+\frac{\partial\delta_{jk}}{\partial x_{i}}\right)+\left(\frac{\partial\delta_{mk}}{\partial x_{n}}+\frac{\partial\delta_{nk}}{\partial x_{m}}\right)\left(\frac{2\bar{S}_{mn}}{|\bar{S}|}\bar{S}_{ij}\frac{\partial\bar{u}_{i}^{\dagger}}{\partial x_{j}}\right)\right]\delta\bar{u}_{k}+\frac{\partial}{\partial x_{j}}\left[\bar{u}_{i}^{\dagger}\delta T_{ij}^{(1)}\right]\\ =-\frac{\bar{\Delta}^{2}}{2}\left\{\frac{\partial}{\partial x_{j}}\left[|\bar{S}|\left(\frac{\partial\bar{u}_{k}^{\dagger}}{\partial x_{j}}+\frac{\partial\bar{u}_{j}^{\dagger}}{\partial x_{k}}\right)\right]+\frac{\partial}{\partial x_{j}}\left[\frac{2\bar{S}_{jk}}{|\bar{S}|}\bar{S}_{mn}\left(\frac{\partial\bar{u}_{m}^{\dagger}}{\partial x_{n}}+\frac{\partial\bar{u}_{n}^{\dagger}}{\partial x_{m}}\right)\right]\right\}\delta\bar{u}_{k}+\frac{\partial}{\partial x_{j}}\left[\bar{u}_{i}^{\dagger}\delta T_{ij}^{(1)}\right],\end{array} \tag{B5}\]
where the adjoint strain-rate tensor is defined as \(\bar{S}_{ij}^{\dagger}=\left(\partial\bar{u}_{i}^{\dagger}/\partial x_{j}+\partial\bar{u}_{j}^{\dagger}/\partial x_{i}\right)/2\). The inner product term can then be further expressed as
\[\bar{u}_{i}^{\dagger}\frac{\partial\delta T_{ij}^{(1)}}{\partial x_{j}}=\left\{\frac{\partial}{\partial x_{j}}\left[-\bar{\Delta}^{2}\left(|\bar{S}|\bar{S}_{ij}^{\dagger}+\frac{2\bar{S}_{kl}\bar{S}_{kl}^{\dagger}}{|\bar{S}|}\bar{S}_{ij}\right)\right]\right\}\delta\bar{u}_{i}+\frac{\partial}{\partial x_{j}}\left[\bar{u}_{i}^{\dagger}\delta T_{ij}^{(1)}\right]. \tag{B6}\]
Thus, the adjoint basis stress tensor \(T_{ij}^{(1),\dagger}\) is given by
\[T_{ij}^{(1),\dagger}=-\bar{\Delta}^{2}\left(|\bar{S}|\bar{S}_{ij}^{\dagger}+\frac{2\bar{S}_{kl}\bar{S}_{kl}^{\dagger}}{|\bar{S}|}\bar{S}_{ij}\right). \tag{B7}\]
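As an illustration only, the NumPy sketch below assembles the dissipative basis tensor of Eq. (B1) and its adjoint counterpart of Eq. (B7) from given strain-rate fields. The random fields and the filter width are placeholder assumptions standing in for the actual LES and adjoint solutions, and the characteristic strain rate is taken as \(|\bar{S}|=\sqrt{2\bar{S}_{ij}\bar{S}_{ij}}\), which is an assumption of this sketch.

```python
# Sketch: Smagorinsky-type basis tensor T1_ij = Delta^2 |S| S_ij and its adjoint
# T1_adj_ij = -Delta^2 (|S| S_adj_ij + 2 (S_kl S_adj_kl)/|S| S_ij) on a small grid.
import numpy as np

rng = np.random.default_rng(0)
shape = (3, 3, 16, 16, 16)                       # indices (i, j, x, y, z)
S = rng.normal(size=shape)
S = 0.5 * (S + S.transpose(1, 0, 2, 3, 4))       # symmetrize the stand-in strain rate
S_adj = rng.normal(size=shape)
S_adj = 0.5 * (S_adj + S_adj.transpose(1, 0, 2, 3, 4))
delta = 0.1                                      # filter width (assumed value)

S_mag = np.sqrt(2.0 * np.einsum("ij...,ij...->...", S, S))             # |S|
T1 = delta**2 * S_mag * S                                               # first basis tensor
contraction = np.einsum("kl...,kl...->...", S, S_adj)                   # S_kl S_adj_kl
T1_adj = -delta**2 * (S_mag * S_adj + 2.0 * contraction / S_mag * S)    # adjoint counterpart
print(T1.shape, T1_adj.shape)
```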
The common filter functions \(G\) (_e.g._ top-hat, Gaussian and spectral filters) are symmetric spatial filters and are therefore self-adjoint, namely (Vreman, 2004)
\[\left\langle G\otimes f,g\right\rangle_{\mathbf{x}}=\left\langle f,G\otimes g\right\rangle_{\mathbf{x}}, \tag{B8}\]
where \(f\left(\mathbf{x}\right)\) and \(g\left(\mathbf{x}\right)\) are arbitrary variables. The \(G^{n}\) filter, which applies the spatial filter \(n\) times (\(G^{n}=G\otimes G\otimes\cdots\otimes G\)), also satisfies the self-adjoint property, as can be proved by mathematical induction:
\[\left\langle G^{n}\otimes f,g\right\rangle_{\mathbf{x}}=\left\langle G\otimes G^{n-1}\otimes f,g\right\rangle_{\mathbf{x}}=\left\langle G^{n-1}\otimes f,G\otimes g\right\rangle_{\mathbf{x}}=\cdots=\left\langle f,G^{n}\otimes g\right\rangle_{\mathbf{x}}. \tag{B9}\]
The \(\left(I-G\right)\) filter is also symmetric, and the approximate deconvolution procedure \(H=\sum\limits_{n=1}^{N}\left(I-G\right)^{n-1}\) is thus also self-adjoint. The second basis SGS tensor \(T_{ij}^{(2)}\) can be described using the AD abbreviated notation, namely
\[T_{ij}^{(2)}=\overline{u_{i}^{*}u_{j}^{*}}-\overline{u_{i}^{*}}\ \overline{u_{j}^{*}}=G\otimes\left[\left(H\otimes\bar{u}_{i}\right)\left(H\otimes\bar{u}_{j}\right)\right]-\left[G\otimes\left(H\otimes\bar{u}_{i}\right)\right]\ \left[G\otimes\left(H\otimes\bar{u}_{j}\right)\right]. \tag{B10}\]
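A one-dimensional NumPy sketch of Eq. (B10) is given below for illustration: the van Cittert iteration recovers \(u^{*}\) from a filtered signal, and the scale-similarity component is then assembled. The periodic top-hat filter, the number of iterations and the test signals are assumptions made for this sketch, not the settings used in the simulations above.

```python
# Sketch: van Cittert approximate deconvolution and the scale-similarity basis (Eq. B10).
import numpy as np

def tophat_filter(f, width=5):
    """Periodic top-hat filter G, standing in for the LES filter kernel."""
    pad = width // 2
    fp = np.concatenate([f[-pad:], f, f[:pad]])
    return np.convolve(fp, np.ones(width) / width, mode="valid")

def van_cittert(f_bar, n_iter=5):
    """u* = sum_{n=1..N} (I - G)^{n-1} applied to the filtered field."""
    u_star, residual = np.zeros_like(f_bar), f_bar.copy()
    for _ in range(n_iter):
        u_star += residual
        residual = residual - tophat_filter(residual)
    return u_star

x = np.linspace(0.0, 2.0 * np.pi, 128, endpoint=False)
u_bar = tophat_filter(np.sin(3 * x) + 0.3 * np.sin(11 * x))   # stand-in filtered velocities
v_bar = tophat_filter(np.cos(5 * x))
u_star, v_star = van_cittert(u_bar), van_cittert(v_bar)
T2_uv = tophat_filter(u_star * v_star) - tophat_filter(u_star) * tophat_filter(v_star)
print("rms of the scale-similarity component:", float(np.sqrt(np.mean(T2_uv**2))))
```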
The variation of the second basis SGS tensor \(T_{ij}^{(2)}\) with respect to the velocity is expressed as
\[\delta T_{ij}^{(2)}=G\otimes\left[\left(H\otimes\delta\bar{u}_{i}\right)u_{j}^{*}\right]+G\otimes\left[u_{i}^{*}\left(H\otimes\delta\bar{u}_{j}\right)\right]-\left[G\otimes\left(H\otimes\delta\bar{u}_{i}\right)\right]\ \overline{u_{j}^{*}}-\overline{u_{i}^{*}}\ \left[G\otimes\left(H\otimes\delta\bar{u}_{j}\right)\right].\] (B.11)
The inner product between the variation of the second basis SGS force and the adjoint velocity is given by
\[\begin{array}{l}\frac{\partial\delta T_{ij}^{(2)}}{\partial x_{j}}\bar{u}_{i }^{\dagger}=-\frac{\partial\bar{u}_{i}^{\dagger}}{\partial x_{j}}\delta T_{ij} ^{(2)}+\frac{\partial}{\partial x_{j}}\left[\bar{u}_{i}^{\dagger}\delta T_{ij} ^{(2)}\right]\\ =-2\bar{S}_{ij}^{\dagger}\left\{G\otimes\left[\left(H\otimes\delta\bar{u}_{i} \right)u_{j}^{*}\right]\right\}+2\bar{S}_{ij}^{\dagger}\left[G\otimes\left(H \otimes\delta\bar{u}_{i}\right)\right]\ \overline{u_{j}^{*}}+\frac{\partial}{ \partial x_{j}}\left[\bar{u}_{i}^{\dagger}\delta T_{ij}^{(2)}\right]\.\end{array}\] (B.12)
The inner product term can be further simplified by the self-adjoint property, such that
\[\begin{array}{l}\frac{\partial\delta T_{ij}^{(2)}}{\partial x_{j}}\bar{u}_{ i}^{\dagger}=-2\left(G\otimes\bar{S}_{ij}^{\dagger}\right)\left[\left(H \otimes\delta\bar{u}_{i}\right)u_{j}^{*}\right]+2\left[G\otimes\left(\bar{S}_ {ij}^{\dagger}\overline{u_{j}^{*}}\right)\right]\left(H\otimes\delta\bar{u}_{ i}\right)\ +\frac{\partial}{\partial x_{j}}\left[\bar{u}_{i}^{\dagger}\delta T_{ij}^{(2)}\right]\\ =H\otimes\left(-2\bar{S}_{ij}^{\dagger}u_{j}^{*}+2\bar{S}_{ij}^{\dagger} \overline{u_{j}^{*}}\right)\delta\bar{u}_{i}+\frac{\partial}{\partial x_{j}} \left[\bar{u}_{i}^{\dagger}\delta T_{ij}^{(2)}\right]\\ =\left\{\frac{\partial}{\partial x_{j}}\left[H\otimes\left(\overline{\bar{u}_ {i}^{\dagger}\overline{u_{j}^{*}}}-\overline{\bar{u}_{i}^{\dagger}}u_{j}^{*} \right)\right]+H\otimes\left(\overline{\frac{\partial\bar{u}_{i}^{\dagger} \overline{u_{j}^{*}}}{\partial x_{i}}u_{j}^{*}}-\frac{\partial\bar{u}_{i}^{ \dagger}}{\partial x_{i}}u_{j}^{*}\right]\right\}\delta\bar{u}_{i}+\frac{ \partial}{\partial x_{j}}\left[\bar{u}_{i}^{\dagger}\delta T_{ij}^{(2)}\right].\end{array}\] (B.13)
Notably, the second adjoint SGS term breaks the conservation of the adjoint momentum; we therefore discard it. Thus, the second adjoint basis stress tensor \(T_{ij}^{(2),\dagger}\) can be written as
\[T_{ij}^{(2),\dagger}=H\otimes\left(\overline{\bar{u}_{i}^{\dagger}\overline{u _{j}^{*}}}-\overline{\bar{u}_{i}^{\dagger}}u_{j}^{*}\right)=\sum\limits_{n=1}^{ N}\left(I-G\right)^{n-1}\otimes\left(\overline{\bar{u}_{i}^{\dagger}\overline{u_{j}^{*}}}- \overline{\bar{u}_{i}^{\dagger}}u_{j}^{*}\right).\] (B.14)
In summary, the adjoint SGS stress of the proposed VOMM model is represented by
\[\tau_{ij}^{\dagger}=C_{1}T_{ij}^{(1),\dagger}+C_{2}T_{ij}^{(2),\dagger},\] (B.15)
where the adjoint basis stress tensors are \(T_{ij}^{(1),\dagger}=-\bar{\Delta}^{2}\left(|\bar{S}|\bar{S}_{ij}^{\dagger}+ \frac{2\bar{S}_{kl}\bar{S}_{kl}^{\dagger}}{|\bar{S}|}\bar{S}_{ij}\right)\) and \(T_{ij}^{(2),\dagger}=\sum\limits_{n=1}^{N}\left(I-G\right)^{n-1}\otimes\left( \overline{\bar{u}_{i}^{\dagger}\overline{u_{j}^{*}}}-\overline{\bar{u}_{i}^{ \dagger}}u_{j}^{*}\right)\).
|
2308.04712 | Slot Induction via Pre-trained Language Model Probing and Multi-level
Contrastive Learning | Recent advanced methods in Natural Language Understanding for Task-oriented
Dialogue (TOD) Systems (e.g., intent detection and slot filling) require a
large amount of annotated data to achieve competitive performance. In reality,
token-level annotations (slot labels) are time-consuming and difficult to
acquire. In this work, we study the Slot Induction (SI) task whose objective is
to induce slot boundaries without explicit knowledge of token-level slot
annotations. We propose leveraging Unsupervised Pre-trained Language Model
(PLM) Probing and Contrastive Learning mechanism to exploit (1) unsupervised
semantic knowledge extracted from PLM, and (2) additional sentence-level intent
label signals available from TOD. Our approach is shown to be effective in SI
task and capable of bridging the gaps with token-level supervised models on two
NLU benchmark datasets. When generalized to emerging intents, our SI objectives
also provide enhanced slot label representations, leading to improved
performance on the Slot Filling tasks. | Hoang H. Nguyen, Chenwei Zhang, Ye Liu, Philip S. Yu | 2023-08-09T05:08:57Z | http://arxiv.org/abs/2308.04712v1 | # Slot Induction via Pre-trained Language Model Probing and Multi-level Contrastive Learning
###### Abstract
Recent advanced methods in Natural Language Understanding for Task-oriented Dialogue (TOD) Systems (e.g., intent detection and slot filling) require a large amount of annotated data to achieve competitive performance. In reality, token-level annotations (slot labels) are time-consuming and difficult to acquire. In this work, we study the Slot Induction (SI) task whose objective is to induce slot boundaries without explicit knowledge of token-level slot annotations. We propose leveraging Unsupervised Pre-trained Language Model (PLM) Probing and Contrastive Learning mechanism to exploit (1) unsupervised semantic knowledge extracted from PLM, and (2) additional sentence-level intent label signals available from TOD. Our approach is shown to be effective in SI task and capable of bridging the gaps with token-level supervised models on two NLU benchmark datasets. When generalized to emerging intents, our SI objectives also provide enhanced slot label representations, leading to improved performance on the Slot Filling tasks. 1
Footnote 1: Our code and datasets are publicly available at [https://github.com/nhhoang96/MultiCL_Slot_Induction](https://github.com/nhhoang96/MultiCL_Slot_Induction)
## 1 Introduction
Natural Language Understanding (NLU) has become a crucial component of the Task-oriented Dialogue (TOD) Systems. The goal of NLU is to extract and capture semantics from users' utterances 2. There are two major tasks in NLU framework, including intent detection (ID) and slot filling (SF) (Tur and De Mori, 2011). While the former focuses on identifying overall users' intents, the latter extracts semantic concepts from natural language sentences. In NLU tasks, intents denote sentence-level annotations while slot types represent token-level labels.
Footnote 2: In our work, we use the term **utterance** and **sentence** interchangeably.
Despite recent advances, state-of-the-art NLU methods (Haihong et al., 2019; Goo et al., 2018) require a large amount of annotated data to achieve competitive performance. However, the fact that annotations, especially token-level labels, are expensive and time-consuming to acquire severely inhibits the generalization capability of traditional NLU models in an open-world setting Louvan and Magnini (2020); Xia et al. (2020). Recent works attempt to tackle these problems in low-resource settings at both the intent level Xia et al. (2018); Nguyen et al. (2020); Siddique et al. (2021) and the slot level Yu et al. (2021); Glass et al. (2021). However, most approaches remain restricted to closed-world settings with pre-defined sets of seen and emerging classes. Some approaches even require additional knowledge from related token-level tasks that might not be readily available.
Additionally, with increasing exposure to the ever-growing number of intents and slots, TOD systems are expected to acquire task-oriented adaptation capability by leveraging both inherent semantic language understanding and task-specific knowledge to identify the crucial emerging concepts in the users' utterances. This ability can be referred to as **Slot Induction** in TOD Systems.
Recently, Pre-trained Contextualized Language Models (PLM) such as BERT Devlin et al. (2019) have shown promising capability of capturing semantic and syntactic structure without explicit linguistic pre-training objectives Jawahar et al. (2019); Rogers et al. (2020); Wu et al. (2020). Despite imperfections, the captured semantics from PLM via unsupervised probing mechanisms could be leveraged to induce important semantic phrases covering token-level slot labels.
Additionally, as an effective unsupervised representation learning mechanism Wei and Zou (2019); Gao et al. (2021), Contrastive Learning (CL) is capable of refining the imperfect PLM semantic phrases in a self-supervised manner to mitigate biases existent in the PLM. Specifically, given a sample phrase _in the same area_ corresponding to
the _spatial_relation_ slot type, the PLM, following its presumed structural knowledge, tends to split the preposition and determiner from the noun phrase during segmentation, resulting in _in the_ and _same area_. Despite being structurally correct, the identified segments fail to align with the ground truth slots due to the lack of knowledge of the overall utterance semantics.
On the other hand, CL can also be leveraged at the sentence level when intent labels are available. In fact, there exist strong connections between slot and intent labels (Zhang et al., 2019; Wu et al., 2020). For instance, utterances with the _book_restaurant_ intent are more likely to contain _location_ slots than those with the _rate_book_ intent. Therefore, as intent labels are less expensive to acquire, they could provide additional signals for CL to induce slot labels more effectively when available.
In this work, we propose leveraging PLM probing together with CL objectives for Slot Induction (SI) task. Despite imperfections, PLM-derived segmentations could produce substantial guidance for SI when slot labels are not readily available. We introduce CL to further refine PLM segmentations via (1) segment-level supervision from unsupervised PLM itself, and (2) sentence-level supervision from intent labels to exploit the semantic connections between slots and intents. Our refined BERT from SI objectives can produce effective slot representations, leading to improved performance in slot-related tasks when generalized towards emerging intents.
Our contributions can be summarized as follows: \(\bullet\) We propose leveraging semantic segments derived from Unsupervised PLM Probing (UPL) to induce phrases covering token-level slot labels. We name the task as Slot Induction.
\(\bullet\) We propose enhancing the quality of PLM segments with Contrastive Learning refinement to better exploit (1) unsupervised segment-level signals from PLM, (2) sentence-level signals from intent labels to improve SI performance.
\(\bullet\) We showcase the effectiveness of our proposed SI framework and its ability to produce refined PLM representations for token-level slots when generalized to emerging intents.
## 2 Related Work
Pre-trained Language Model ProbingPre-trained Language Models (PLMs) have been shown to possess inherent syntactic and semantic information. Different probing techniques are developed to investigate the knowledge acquired by PLMs, either from output representations (Wu et al., 2020), intermediate representations (Sun et al., 2019), or attention mapping (Clark et al., 2019; Yu et al., 2022). Unlike previous probing techniques that focus on deriving syntactic tree structure, we leverage semantically coherent segments recognized by PLMs to induce phrases containing token-level slot labels in NLU tasks for TOD Systems.
Contrastive LearningContrastive Learning (CL) has been widely leveraged as an effective representation learning mechanism (Oord et al., 2018). The goal of CL is to learn the discriminative features of instances via different augmentation methods. In Natural Language Processing (NLP), CL has been adopted in various contexts ranging from text classification (Wei and Zou, 2019), embedding representation learning (Gao et al., 2021) to question answering (Xiong et al., 2020; Liu et al., 2021). CL has also been integrated with PLM as a more effective fine-tuning strategy for downstream tasks (Su et al., 2021). In our work, we propose an integration of CL with PLM probing techniques to further refine imperfect PLM-derived segments via (1) unsupervised signals from PLM itself, and (2) less expensive sentence-level intent label supervision for improved SI performance.
## 3 Problem Formulation
**Slot Induction** We introduce the task of Slot Induction (SI) whose objective is to identify phrases containing token-level slot labels. Unlike traditional SF and the previously proposed AISI framework (Zeng et al., 2021), in our SI task, both slot boundaries and slot types are unknown during training. The task is also related to Phrasal Segmentation/Tagging (PS) methods (Shang et al., 2018; Gu et al., 2021). However, there are three key distinc
Figure 1: Illustration of connections between Phrasal Segmentation (PS), Beginning-Inside-Outside (BIO) Tagging Slot Label and Break-Tie (B-T) Labeling Schema based on Golden Slot Labels (Red: denotes Golden Slot Labels for the utterance, **P1,P2** denote identified phrases, **NA, B,T** denote Not-Relevant, Break, Tie Labels in B-T Labeling Scheme)
tions: (1) utterances and intent labels (if available) are the only sources of information for the task, (2) slot phrases (i.e. close by (_spatial_relation_), most expensive (_cost_relative_)), are not restricted to noun phrases, (3) slot phrases (i.e. strauss is playing today (_movie_name_)) might be more sophisticated and harder to identify than typical noun phrases (i.e. chicago (_city_)). These differences explain why PS methods do not consistently perform well in our proposed SI task (Section 6).
Specifically, given an utterance with the length of \(T\) tokens \(x=[x_{1},x_{2}...,x_{T}]\), SI task aims to make decisions at \(T-1\) positions whether to (1) tie the current token with the previous one to extend the current phrase 3, or (2) break away from the previous token/ phrase to form a new phrase.
Footnote 3: In our work, we use the term **segment** and **phrase** interchangeably.
**Evaluation Metric** We adopt the Break-Tie (B-T) schema Shang et al. (2018) to evaluate the SI task. The metric allows for direct comparison between supervised Sequential Labeling and unsupervised PS methods. In the SI setting, _Tie_ represents the connection between tokens of the same slot type, while _Break_ denotes the separation between (1) tokens from different slot types, and (2) tokens of a slot type and non-slot tokens. As the objective of SI focuses on slot tokens, consecutive non-slot tokens should not contribute to the overall performance. Therefore, additional _NA_ labels are introduced to guarantee that evaluations are only conducted on slot tokens and their adjacent tokens.
Figure 1 depicts the connections of SF and PS labels with the B-T schema. For PS, Break denotes the separation of two consecutive phrases. If no phrase is identified by a PS method, every token is considered as _Tied_ to one another. In the Figure 1 example, as "south carolina" is the only identified phrase, the given sentence is simply split into two phrases, where _Break_ denotes their junction. Precision, Recall and F-1 metrics are reported for the individual labels, namely B-P, B-R, B-F1 for _Break_ and T-P, T-R, T-F1 for _Tie_.
Given an utterance, an optimal SI model makes correct decisions to either break or tie at every token index. Therefore, **H-Mean**, denoting the harmonic mean between the F-1 scores of the _Tie_ and _Break_ label predictions, is considered the golden criterion for SI model comparison.
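To make the schema concrete, the short Python sketch below derives the reference Break/Tie/NA labels from BIO slot annotations in the way described above; the example tags are invented for illustration and simply mirror the style of Figure 1.

```python
# Sketch: reference B-T labels from BIO slot tags (one label per adjacent token pair).
def bio_to_break_tie(bio_tags):
    labels = []
    for prev, curr in zip(bio_tags, bio_tags[1:]):
        if prev == "O" and curr == "O":
            labels.append("NA")        # two non-slot tokens: excluded from evaluation
        elif curr.startswith("I-"):
            labels.append("Tie")       # current token continues the previous slot
        else:
            labels.append("Break")     # a slot starts or ends at this junction
    return labels

tags = ["O", "O", "O", "B-party_size", "O", "B-state", "I-state"]   # hypothetical example
print(bio_to_break_tie(tags))
# -> ['NA', 'NA', 'Break', 'Break', 'Break', 'Tie']
```

Precision, recall and F-1 are then computed separately on the Break and Tie labels, and H-Mean is the harmonic mean of the two F-1 scores.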
## 4 Proposed Framework
In this section, we introduce our proposed Multi-level Contrastive Learning framework for SI task with 2 major components: **Segment-level Contrastive Learning (SegCL)** and **Sentence-level Contrastive Learning (SentCL)** as depicted in Figure 2. We first introduce the backbone Unsupervised PLM Probing (UPL) for both components.
Figure 3: Illustration of UPL Segmentation Tree for sentence _“make me a reservation in south carolina”_ with sample Impact Matrix at depth \(d=3\) (**Lighter** color denotes **lower** impact score). \(d=0\) corresponds to the sentence-level representation (no segmentation).
Figure 2: Illustration of the Proposed Model Overview. The model is made up of two-level Contrastive Learning depicted by two modules: (1) **Segment-level Supervision (SegCL)** via Unsupervised PLM Probing (UPL), (2) **Sentence-level Supervision (SentCL)** via intent labels. Green, Orange, Red denote Anchor, Positive, Negative samples respectively. **Black circle** denotes the representation of the **cropped segment** from Augmentation.
### Unsupervised PLM Probing (UPL)
We adopt Token-level Perturbed Masking mechanism Wu et al. (2020) to construct semantic segments by leveraging PLM in an unsupervised manner. Due to its operations on the output layers of PLM, UPL is flexible with the choices of PLM and avoids local sub-optimal structure from pre-selected PLM layers Clark et al. (2019). In our study, we use BERT Devlin et al. (2019) as an exemplar PLM. Specifically, given a sentence \(x=[x_{1},\cdots,x_{T}]\), the Impact Matrix \(\mathcal{F}\in\mathbb{R}^{T\times T}\) is constructed by calculating the Impact Score between every possible pair of tokens (including with itself) in the given sentence based on BERT's embedding and a specified distance metric Wu et al. (2020). Leveraging \(\mathcal{F}\), UPL derives the structural tree by recursively finding the optimal cut position \(k\) with the following objective:
\[\underset{k}{\operatorname{argmax}}\left(\mathcal{F}_{i.k}^{i.k}+\mathcal{F}_{k+1.j}^{k+1.j}-\mathcal{F}_{i.k}^{k+1.j}-\mathcal{F}_{k+1.j}^{i.k}\right) \tag{1}\]
where \(i,j\in[0,T-1]\) denote the start and end indexes of the segment considered for splitting.
At every tree depth, sets of combined tokens are considered semantic segments since they preserve certain meanings within utterances. Segments at a deeper level include (1) all segments obtained from previous levels and (2) new segments obtained at the current level. For instance, at depth \(d=3\) of the given example in Figure 3, the obtained segments are _"make"_, _"me"_, _"a reservation in"_, _"south carolina"_. As PLM parameters are updated during training, the UPL trees derived from the same utterance can change substantially. For simplicity, we set the tree depth \(d\) as a tunable hyperparameter.
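The splitting criterion of Equation 1 can be sketched as follows; the impact matrix here is a random symmetric stand-in, and the simple fixed-depth recursion is only one possible tree-building policy, so the resulting segments may differ from those produced by the exact procedure of Wu et al. (2020).

```python
# Sketch: recursive top-down segmentation driven by the impact matrix (Equation 1).
import numpy as np

def best_cut(F, i, j):
    def block(a, b, c, d):                       # sum of F over rows a..b, cols c..d
        return F[a:b + 1, c:d + 1].sum()
    scores = [block(i, k, i, k) + block(k + 1, j, k + 1, j)
              - block(i, k, k + 1, j) - block(k + 1, j, i, k) for k in range(i, j)]
    return i + int(np.argmax(scores))

def segment(F, i, j, depth, max_depth, out):
    if depth == max_depth or i == j:
        out.append((i, j))                       # emit the span [i, j] as one segment
        return
    k = best_cut(F, i, j)
    segment(F, i, k, depth + 1, max_depth, out)
    segment(F, k + 1, j, depth + 1, max_depth, out)

rng = np.random.default_rng(0)
F = rng.random((7, 7)); F = 0.5 * (F + F.T)      # stand-in impact matrix for 7 tokens
spans = []
segment(F, 0, 6, 0, 3, spans)                    # tree depth d = 3, as in Figure 3
print(spans)                                     # list of (start, end) token indexes
```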
Formally, at a specified depth \(d\) with \(m\) semantic segments acquired from UPL, the final representation of the input sentence \(x\) is defined as follows:
\[\mathbf{h_{U}}=[\overrightarrow{s_{0}},...\overrightarrow{s_{m-1}}],\ \overrightarrow{s_{i}}=\frac{\sum_{j=c}^{d} \overrightarrow{h_{j}}}{d-c+1} \tag{2}\]
where \(\mathbf{h_{U}}\in\mathbb{R}^{\mathbf{m}\times\mathbf{d_{h}}}\), \(d_{h}\) is the hidden dimension of the BERT representations, \(c\),\(d\) are the start and end indexes of the corresponding segment \(s_{i}\), and \(\overrightarrow{h_{j}}\) represents the BERT embedding of the \(j\)-th token.
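A minimal sketch of Equation 2 follows: each segment vector is the mean of the BERT token embeddings it spans. The random embeddings and the example spans are placeholders.

```python
# Sketch: segment representations h_U by mean-pooling token embeddings (Equation 2).
import numpy as np

d_h = 768
token_emb = np.random.default_rng(0).normal(size=(7, d_h))   # stand-in h_j for 7 tokens
spans = [(0, 0), (1, 1), (2, 4), (5, 6)]                      # UPL segments at depth d

h_U = np.stack([token_emb[c:e + 1].mean(axis=0) for (c, e) in spans])
print(h_U.shape)   # (m, d_h) with m = 4 segments
```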
### Multi-level Contrastive Learning
As UPL only considers token interactions for segment formation, its semantic segments are far from perfect. Additional refinements are needed to enhance the quality of the extracted segments via (1) semantic signals captured in segment-level PLM representations, (2) sentence-level intent labels.
Our overall learning objective is summarized as \(\mathcal{L}=\delta\mathcal{L}_{s}+\gamma\mathcal{L}_{d}\), where \(\mathcal{L}_{s},\mathcal{L}_{d}\) denote the SegCL and SentCL losses, and \(\delta,\gamma\) are their corresponding loss coefficient hyperparameters for aggregation. For each CL level, positive and negative samples are drawn separately based on (1) the same batch of sampled anchor samples, and (2) different selection criteria detailed below.
**Segment-level Contrastive Learning (SegCL)** UPL produces semantic segments by purely considering the exhaustive word-pair interactions within given sentences. However, it does not take into consideration the overall semantic representation produced by the PLM BERT via special [CLS] tokens. Therefore, we propose leveraging [CLS] representations to guide UPL towards more discriminative segment representations via SegCL objectives. Specifically, SegCL aims to minimize the distance between [CLS] representation and UPL segment representations while maximizing the distance between representations of [CLS] and random segments of the corresponding utterance.
Given a sample utterance, the segment representation obtained from UPL is considered a positive sample, while negative samples are segment representations produced by randomly chosen indexes within the given utterance. The number of segments for the positive and negative samples is kept the same (\(m\)) so that SegCL focuses on learning the optimal locations of the segmentation indexes. We adopt the InfoNCE contrastive loss Oord et al. (2018):
\[\mathcal{L}_{s}=-\log\frac{\exp\left(\cos(\overrightarrow{h_{C}},\mathbf{h_{U}})/\tau_{s}\right)}{\exp\left(\cos(\overrightarrow{h_{C}},\mathbf{h_{U}})/\tau_{s}\right)+\exp\left(\cos(\overrightarrow{h_{C}},\mathbf{h_{r}})/\tau_{s}\right)} \tag{3}\]
where \(\overrightarrow{h_{C}}\in\mathbb{R}^{1\times d_{h}}\) denotes the [CLS] representation from BERT, and \(\mathbf{h_{U}},\mathbf{h_{r}}\in\mathbb{R}^{\mathbf{m}\times\mathbf{d_{h}}}\) denote the representations from UPL and the random segmentation, respectively. \(m\) is the number of extracted segments from UPL as defined in Equation 2, and \(\tau_{s}\) is the soft segment-level temperature hyperparameter.
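A hedged PyTorch sketch of Eq. (3) is shown below. Since the equation writes a cosine similarity between the single [CLS] vector and a set of \(m\) segment vectors, the sketch averages the per-segment similarities, which is one plausible reading; all tensors are random placeholders.

```python
# Sketch: segment-level InfoNCE loss of Eq. (3) between [CLS] and segment sets.
import torch
import torch.nn.functional as F

def seg_cl_loss(h_cls, h_upl, h_rand, tau_s=0.1):
    sim_pos = F.cosine_similarity(h_cls, h_upl, dim=-1).mean() / tau_s   # cos(h_C, h_U)
    sim_neg = F.cosine_similarity(h_cls, h_rand, dim=-1).mean() / tau_s  # cos(h_C, h_r)
    return -torch.log(torch.exp(sim_pos) / (torch.exp(sim_pos) + torch.exp(sim_neg)))

h_cls = torch.randn(1, 768)    # [CLS] representation
h_upl = torch.randn(4, 768)    # m UPL segment representations (positive)
h_rand = torch.randn(4, 768)   # m randomly segmented representations (negative)
print(seg_cl_loss(h_cls, h_upl, h_rand))
```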
**Sentence-level Contrastive Learning (SentCL)** Besides relying on UPL, we propose leveraging sentence-level intent labels to further improve the quality of segment representations derived from UPL. Specifically, we randomly draw positive and negative samples based on the intent labels of the given anchor samples. As utterances with similar intents tend to share common slot phrases, our SentCL aims to learn discriminative segments for better alignment between utterances from the same
intents. We adopt InfoNCE loss for SentCL:
\[\mathcal{L}_{d}=-\log\frac{\exp\left(\cos(\mathbf{h_{a}},\mathbf{h}_{+})/\tau_{d}\right)}{\exp\left(\cos(\mathbf{h_{a}},\mathbf{h}_{+})/\tau_{d}\right)+\exp\left(\cos(\mathbf{h_{a}},\mathbf{h}_{-})/\tau_{d}\right)} \tag{4}\]
where \(\mathbf{h_{a}}\in\mathbb{R}^{\mathbf{m}\times\mathbf{d_{h}}},\mathbf{h}_{+} \in\mathbb{R}^{\mathbf{a}\times\mathbf{d_{h}}},\mathbf{h}_{-}\in\mathbb{R}^{ \mathbf{b}\times\mathbf{d_{h}}}\) denote the representations of anchor, positive and negative samples respectively and \(m,a,b\) denote the number of extracted segments from UPL for the respective samples. \(\tau_{d}\) is the soft sentence-level temperature hyperparameter.
To further encourage the model to identify discriminative segments for the same sentence-level intent label, we adopt random segment cropping as an augmentation strategy. As UPL can generate a vastly different number of segments based on the cut_score (Equation 1) computed from the updated BERT parameters at each step, we crop segments randomly by a percentage ratio (\(\beta\)) so that the augmentation adapts to individual input utterances and segmentation trees. The remaining segments after cropping are utilized to compute \(\mathcal{L}_{d}\). A sketch of this cropping step is given below.
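The sketch uses an invented segment list and ratio; the sentence-level loss of Eq. (4) is then computed on the surviving segments in the same way as in the segment-level sketch above.

```python
# Sketch: random segment cropping by a ratio beta before computing the SentCL loss.
import random

def crop_segments(segments, beta=0.2, seed=None):
    rng = random.Random(seed)
    n_drop = int(round(beta * len(segments)))
    dropped = set(rng.sample(range(len(segments)), n_drop))
    return [seg for idx, seg in enumerate(segments) if idx not in dropped]

spans = [(0, 0), (1, 1), (2, 4), (5, 6)]        # UPL segments of one utterance
print(crop_segments(spans, beta=0.25, seed=0))  # three of the four segments survive
```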
## 5 Experiments
### Datasets & Evaluation Tasks
We evaluate our proposed work on the two publicly available NLU benchmark datasets ATIS Tur et al. (2010) and SNIPS Coucke et al. (2018) with the previously proposed data splits Zhang et al. (2019).
To evaluate the generalization of the refined representations from our proposed work, we conduct additional splits of each dataset into 2 parts (P1 and P2). For each benchmark dataset, we construct P1 for SI evaluation by reserving samples from randomly chosen 60% of available intents. The remaining samples (P2) are used as test sets for evaluating SF task when generalized towards emerging intents. The objective of this splitting strategy is two-fold: (1) Since there is no overlapping intent between P1 and P2, there exists no information leakage of intents leveraged in SI training (P1) while evaluating SF (P2). (2) We can validate the generalization capability of representations learned from our SI framework in other slot-related tasks. Statistics for both parts of each dataset are reported in Table 1.
**Evaluation Task 1: Slot Induction (P1)** We conduct evaluation of Unsupervised SI task on P1 of both SNIPS and ATIS datasets. B-T evaluation metrics are adopted as introduced in Section 3. Implementation details of our SI model, including hyperparameters, are discussed in Appendix B.
### Evaluation Task 2: Generalization towards Emerging Intents (P2)
To evaluate the generalization of SI refinement, we conduct SF training on P1 datasets with different BERT initializations (Original vs Refined BERT) and evaluation on emerging intents and slots in P2. Slot Precision (S-P), Recall (S-R), F1 (S-F1) are reported on P2. Implementation is detailed in Appendix C.
### Slot Induction Baseline
We conduct a comprehensive study that evaluates our SI approach with both _Upper Bound_ and _Comparable_ Methods. For fair comparisons, we leverage the same "bert-base-uncased" PLM Devlin et al. (2019) across all applicable baselines. The _Upper Bound_ includes methods that directly leverage **token-level labels** such as Golden Slot Labels, Named Entity Recognition (NER) Labels, Part-of-Speech (POS) Tagging or Noun Phrase (NP) Labels during the training and/or pre-training process, including **Joint BERT FT**, **SpaCy** Honnibal et al. (2020), **FlairNLP** Akbik et al. (2018).
In addition, we compare with other **unsupervised** PS methods that do not require any token-level labels as _Comparable_ Baselines, including: **Dependency Parsing (DP-RB/DP-LB)**, **AutoPhrase**Shang et al. (2018), **UCPhrase**Gu et al. (2021), **USSI**Yu et al. (2022). For fair comparisons with _Comparable_ baselines, we also report results from our model's variants with similar prior knowledge assumption, namely **Ours (w/o CL)**, **Ours (w/o SentCL)**. Due to space constraints, details of _Upper Bound_ and _Comparable_ baselines are provided in Appendix A.1, A.2 respectively.
## 6 Result & Discussion
### Slot Induction
From our experimental results in Table 2 and 3, for the SI task, our proposed framework outperforms the _Comparable_ Methods on the H-Mean evaluation metric of the B-T schema on both datasets. We achieve significant gains on the SNIPS dataset (+6.28 points in H-Mean as compared to the next best _Comparable_ Method). Despite lacking access to any type of token-level labels, our method is also closely on
\begin{table}
\begin{tabular}{|l|c|c|c|c|} \hline & **SNIPS\_P1** & **SNIPS\_P2** & **ATIS\_P1** & **ATIS\_P2** \\ \hline \# Intents & 5 & 2 & 14 & 7 \\ \# Slots & 31 & 16 & 68 & 63 \\ \# Train Samples & 9356 & – & 3811 & – \\ \# Validation Samples & 500 & – & 414 & – \\ \# Test Samples & 501 & 4127 & 750 & 895 \\ Avg Train Sent Length & 8.65 & – & 11.67 & – \\ Avg Valid Sent Length & 8.72 & – & 11.82 & – \\ Avg Test Sent Length & 8.71 & 9.87 & 10.68 & 8.92 \\ \hline \end{tabular}
\end{table}
Table 1: Details of SNIPS and ATIS datasets.
par with some of the _Upper Bound_ methods that have been pre-trained with token-level labels (0.16 point difference from SpaCy in H-Mean). Despite promising achievements, most unsupervised PS methods only achieve competitive Break performance as compared to supervised methods but fall behind more significantly in terms of Tie performance. This implies unsupervised methods are able to differentiate non-slot tokens from slot tokens but tend to fragment slot tokens of the same type into multiple slot phrases due to the missing knowledge of token-level slot label spans.
UCPhrase is an exceptional baseline as it achieves significantly better Tie but worse Break performance as compared to other _Comparable_ baselines. This stems from the lack of keyphrases predicted by the model, leading to a higher tendency to "tie" tokens. We speculate that its core phrase miner's dependency on frequency is not effective for extracting slots in NLU tasks. Phrases with high frequency in utterances are typically non-slot tokens (i.e. add, reserve), leading to limited meaningful core phrases for phrase-tagging training.
On the ATIS dataset, the gap between _Comparable_ Methods and _Upper Bound_ is more significant as utterances tend to be longer and contain a wider variety of slot types than in the SNIPS dataset. This leads to a significant reduction in T-P across all of the _Comparable_ Methods, resulting in a larger gap in H-Mean for the ATIS dataset (approximately 18.37 points in comparison with 0.16 points on the SNIPS dataset). Additionally, in comparison with the SNIPS dataset, the ATIS dataset contains more domain-independent slot types such as _city_name_ (New York), _country_name_ (United States). Therefore, methods leveraging either relevant token-level labels (i.e. POS, NER tags) or additional large-scale external Knowledge Bases (i.e. Wikipedia) achieve considerable performance gains. For instance, _FlairNLP_ is only 10.81 points below the Fully Supervised _Joint BERT FT_ on the ATIS dataset (as compared to 21.92 points below on SNIPS) in terms of H-Mean.
Compared with USSI, _Ours (w/o CL)_ consistently achieves better H-Mean performance on both ATIS and SNIPS datasets (1.04% and 2.14% respectively). We hypothesize USSI might suffer from the local sub-optimality of pre-selected layers within deep PLM architecture. As the attention distribution across different layers varies Clark et al. (2019), the pre-selected layers can significantly impact the unsupervised semantic probing of PLM.
Table 4 demonstrates that both SegCL and SentCL (w aug) objectives provide valuable in
\begin{table}
\begin{tabular}{|l|c|c|} \hline & **SNIPS** & **ATIS** \\ \hline Ours (w/o CL) & 52.59 & 36.03 \\ \hline + SegCL & 53.61 \(\pm\) 0.71 & 38.20 \(\pm\) 0.08 \\ + SentCL (w/o aug) & 53.44 \(\pm\) 0.22 & 37.59 \(\pm\) 0.81 \\ + SentCL (w aug) & 54.23 \(\pm\) 0.10 & 38.12 \(\pm\) 0.36 \\ \hline **Ours (full)** & **54.68 \(\pm\) 0.08** & \\ \hline \end{tabular}
\end{table}
Table 4: Ablation study of effectiveness of SegCL and SentCL on SNIPS and ATIS in terms of H-Mean
\begin{table}
\begin{tabular}{|c||c|c|c||c|c||c|c|c||c|} \hline \multicolumn{2}{|c||}{} & \multicolumn{1}{c|}{**Model**} & \multicolumn{1}{c|}{**Prior Knowledge**} & \multicolumn{1}{c||}{Break} & \multicolumn{1}{c||}{Tie} & \multicolumn{1}{c||}{**H-Mean**} \\ \hline \multirow{6}{*}{**Upper Bound**} & \multirow{6}{*}{Joint BERT FT} & Slot + Intent & 96.91 \(\pm\) 0.17 & 96.62 \(\pm\) 0.69 & 96.76 \(\pm\) 0.26 & 73.55 \(\pm\) 0.38 & 73.39 \(\pm\) 1.03 & 73.47 \(\pm\) 0.38 & 83.52 \(\pm\) 0.16 \\ & & FaiNLP \(\lx@notemark{\text{\textdagger}}\) & POS \& NER & 80.04 & 62.81 & 70.38 & 48.25 & 63.31 & 54.77 & 61.60 \\ & & SpaCy \(\lx@notemark{\textdagger}\) & POS \& NER & 75.73 & 50.29 & 60.45 & 41.71 & 62.97 & 50.18 & 54.84 \\ \hline \multirow{6}{*}{**Comparable**} & \multirow{6}{*}{Dpr-LB **} & \multirow{6}{*}{–} & 59.68 & 34.27 & 43.54 & 21.69 & 38.53 & 27.76 & 33.90 \\ & & & & 66.53 & 52.56 & 58.73 & 33.97 & 52.24 & 41.17 & 48.40 \\ \cline{1-1} & & AutoPhrase & External KB & 65.51 \(\pm\) 0.23 & 57.16 \(\pm\) 2.59 & 61.08 \(\pm\) 1.15 & 33.09 \(\pm\) 0.74 & 36.62 \(\pm\) 1.67 & 34.99 \(\pm\) 1.50 & 44.43 \(\pm\) 1.64 \\ \cline{1-1} & & UCPhrase & PLM & 42.25 \(\pm\) 4.90 & 20.26 \(\pm\) 2.71 & 27.39 \(\pm\) 1.95 & 30.66 \(\pm\) 2.42 & **73.83** \(\pm\) 3.83 & **48.99** \(\pm\) 2.14 & 34.98 \(\pm\) 2.35 \\ \cline{1-1} & & **83.21** & 62.12 & 71.14 & 31.34 & 39.49 & 40.42 & 51.55 \\ \hline \multirow{6}{*}{**Comparable**} & \multirow{6}{*}{Ours (w/o CL)} & PLM & 75.36 & 66.70 & 70.76 & 38.51 & 45.81 & 41.84 & 52.59 \\ \cline{1-1} & & Ours (w/o CL) & PLM & 60.79 \(\pm\) 0.73 & 66.43 \(\pm\) 0.29 & 70.94 \(\pm\) 0.49 & 99.15 \(\pm\) 0.60 & 47.99 \(\pm\) 0.93 & 33.61 \(\pm\) 0.73 & 53.61 \(\pm\) 0.71 \\ \cline{1-1} & & **Ours (full)** & **PLM + Intent** & 76.87 \(\pm\) 0.25 & **67.27 \(\pm\) 0.24** & **72.00 \(\pm\) 0.24** & **40.39 \(\pm\) 0.16** & 48.49 \(\pm\) 0.19 & 44.07 \(\pm\) 0.04 & **54.48 \(\pm\) 0.05** \\ \hline \end{tabular}
\end{table}
Table 2: Experimental performance result on SNIPS dataset over 3 runs (**H-Mean** is considered the golden criteria for SI (Section 3)). \(\lx@notemark{\textdagger}\) denotes models that do not require random initializations.
\begin{table}
\end{table}
Table 3: Experimental performance result on ATIS dataset over 3 runs (**H-Mean** is considered the golden criteria for SI (Section 3)).
formation for SI task, leading to improved performance on both datasets beyond _Ours (w/o CL)_.
**Segment-level Supervision (SegCL)** As observed in Figures 4(a) and 4(b), the semantic representation of the given utterance via the [CLS] token is closer to the UPL-derived segments than to the random segment counterparts, as reflected by the higher sum of similarity scores (0.1281 > -0.6304). UPL segments also correctly identify nearly all of the slot ground truth labels (i.e. artist (_music_item_), paulinho da Costa (_artist_), my (_playlist_owner_), very nearly nashville (_playlist_)) in the given utterance, while random segmentations truncate the slot phrases incorrectly.
**Sentence-level Supervision (SentCL)** On the sentence level, besides the commonly aligned phrases (i.e. _add tune to_ vs _add rupee to_), the model recognizes corresponding playlists in anchor and positive samples (i.e. _black metal playlist_ vs _ultra metal playlist_) and assigns a competitive similarity score between them. On the other hand, potentially relevant noun phrases (i.e. ultra metal playlist (_playlist_) and any silvester sound track (_sound track_)) between anchor and negative samples are assigned low similarity scores. This showcases the model's capability in (1) correctly recognizing and bringing the important slot phrases in the positive-anchor pair closer together, (2) reducing the importance of potentially relevant slot phrases across samples with different intents. The Similarity Matrix presented in Figure 4(c) also indicates the strong segment alignment between positive and anchor samples as the diagonal cells receive higher similarity scores than most of the other cells within the same column or row.
**Qualitative Case Study** Additional case studies presented in Figure 5 demonstrate the effec
Figure 4: Similarity Matrices between positive/negative and anchor samples from SegCL and SentCL. For SegCL ((a), (b)), positive-anchor pair is more aligned as the sum of similarity scores between positive segments and [CLS] representation (i.e. sum of row-wise cell values) is higher than the negative counterpart. Boundaries of all slot types (presented by red, pink, orange boxes) are correctly recognized in the positive sample in contrast to the negative counterpart. For SentCL ((c), (d)), positive-anchor pair assigns a higher similarity score to the aligned slot phrase (red box) while negative-anchor pair reduces similarity scores between potential relevant slot phrase (orange box).
Figure 5: Sample Segmentation Results from _Comparable_ Methods in comparison with **Golden Slot Labels** on SNIPS dataset where “\(\mathbb{I}\)” denotes the _Break_ as introduced in Figure 1. Red, Blue denote distinct slot label segments. The colors are repeated in _Comparable_ Methods to showcase the consistency of models’ predictions with ground truth labels under the condition no more than 2 tokens in the segments are mispredicted.
tiveness of our proposed framework in capturing slot phrases. Despite the imperfect segmentations, _Ours_ captures phrases closer to the ground truth slot labels than other _Comparable_ baselines. In fact, our identified phrases "spirit touches ground" and "leche con chocolate list" are exact matches for the golden slot labels. Our proposed multi-level CL refining mechanism is also shown to correct mistakes of the original model (from "by phil" in _Ours (w/o CL)_ to "phil och" in _Ours (with CL)_).
### Generalization towards Emerging Intents
**Visual Representation**
We first visualize the representations of two randomly sampled slot types produced by the raw original BERT and our Refined BERT (via SI objectives). As observed in Figure 6, our Refined BERT clusters the representations of samples with the same slot types for both training and testing sets more effectively than the original BERT in the embedding space, leading to far clearer separation boundaries between the sampled slot types. For Train Slots, embeddings of slot values from each slot type are nearly disentangled, implying our Refined BERT is capable of recognizing slot types without explicit slot training objectives and token-level label access. In addition, when applied to new intents and slots in the P2 dataset, our SI framework produces a Refined BERT with better semantic representations for tokens from the same slot types, as observed in Figures 6(c) and 6(d).
**Quantitative Evaluation** As observed in Table 5, when generalized to emerging intents and slots, our Refined BERT outperforms the traditional BERT when fine-tuned on both datasets, in all slot evaluation metrics. This showcases the generalization capability of our model across different sentence-level intent labels. In addition, the consistent improvement in SF evaluation implies that SI training objectives via UPL and CL refinement provide more guidance to the PLM for the downstream token-level task without explicit training objectives and label requirements.
## 7 Conclusion
In our work, we propose the study of token-level Slot Induction (SI) via an Unsupervised Pre-trained Language Modeling (PLM) Probing in conjunction with Contrastive Learning (CL) objectives. By leveraging both unsupervised signals from PLM and sentence-level signals from intent labels via CL objectives, our proposed framework not only
\begin{table}
\begin{tabular}{|l||c|c|c||} \hline & \multicolumn{3}{c||}{**SNIPS\_P2**} \\ \hline & S-P & S-R & S-F1 \\ \hline Original BERT & 14.11 \(\pm\) 0.47 & 17.78 \(\pm\) 0.82 & 15.73 \(\pm\) 0.62 \\ \hline Refined BERT & **15.08 \(\pm\) 0.48** & **19.61 \(\pm\) 0.23** & **17.05 \(\pm\) 0.38** \\ \hline \multicolumn{4}{|c||}{**ATIS\_P2**} \\ \hline Original BERT & 66.67 \(\pm\) 0.82 & 63.35 \(\pm\) 1.35 & 64.96 \(\pm\) 0.74 \\ \hline Refined BERT & **70.12 \(\pm\) 0.85** & **63.64 \(\pm\) 0.48** & **66.72 \(\pm\) 0.66** \\ \hline \end{tabular}
\end{table}
Table 5: Evaluation of SF task over 3 runs on Emerging Intents in SNIPS_P2 and ATIS_P2 datasets.
Figure 6: Slot Value Representation Visualization of the raw original pre-trained BERT and the Refined BERT via SI on sample slot types from training set SNIPS\_P1 ((a), (b)) and testing set SNIPS\_P2 ((c), (d)). Blue and Red denote slot values from randomly sampled ground truth slot types.
achieves competitive performance in comparison with other unsupervised phrasal segmentation baselines but also bridges the gap in performance with _Upper Bound_ methods that require additional token-level labels on two NLU benchmark datasets. We also demonstrate that our proposed SI training is capable of refining the original PLM, resulting in more effective slot representations and benefiting downstream SF tasks when generalized towards emerging intents. Further studies of better exploitation of full-depth segmentation trees, enhanced segment augmentation mechanisms and better semantic alignment extraction between slots and intents are promising directions for our future work. We also seek to extend the current SI studies beyond English and towards multilingual NLU systems Nguyen and Rohrbaugh (2019); Qin et al. (2022); Nguyen et al. (2023).
## Limitations
Our proposed framework assumes a fixed hyperparameter depth \(d\) for UPL segmentation tree. In other words, only segments extracted at the depth \(d\) are considered for CL objectives. \(d\) is tuned with each dataset's validation set. However, as our main objective is to investigate the effects of UPL and CL objectives, we leave the full tree exploitation as future extensions for our work.
Secondly, the goal of our SI is to identify the slot phrase boundaries. The label type predictions for recognized slot phrases are beyond the scope of our investigation. Therefore, the end-to-end benefit of SI for mitigating slot label scarcity issues cannot be directly evaluated. Our rationale for dividing the task into 2 separate steps (i.e. slot boundary induction and slot label prediction) is as follows: As the complete SI is a complex task, breaking it down not only allows for direct and focused evaluation of the proposed framework's contribution at individual steps but also minimizes error propagation from intermediate steps to a single end-task metric. This rationale is further supported by our empirical study in Section 6. The proposed _USSI_, whose objective unifies both aforementioned steps, underperforms _Ours(w/o CL)_ and _Ours(full)_ when evaluated at the slot boundary induction step.
## Acknowledgement
This work is supported in part by NSF under grants III-1763325, III-1909323, III-2106758, and SaTC-1930941.
We would like to acknowledge the use of the facilities of the High Performance Computing Division and High Performance Research and Development Group at the National Center for Atmospheric Research and the use of computational resources (doi:10.5065/D6RX99HX) at the NCAR-Wyoming Supercomputing Center provided by the National Science Foundation and the State of Wyoming, and supported by NCAR's Computational and Information Systems Laboratory.
|
2303.09144 | On Koopman-based surrogate models for non-holonomic robots | Data-driven surrogate models of dynamical systems based on the extended
dynamic mode decomposition are nowadays well-established and widespread in
applications. Further, for non-holonomic systems exhibiting a multiplicative
coupling between states and controls, the usage of bi-linear surrogate models
has proven beneficial. However, an in-depth analysis of the approximation
quality and its dependence on different hyperparameters based on both
simulation and experimental data is still missing. We investigate a
differential-drive mobile robot to close this gap and provide first guidelines
on the systematic design of data-efficient surrogate models. | Lea Bold, Hannes Eschmann, Mario Rosenfelder, Henrik Ebel, Karl Worthmann | 2023-03-16T08:21:07Z | http://arxiv.org/abs/2303.09144v1 | # On Koopman-based surrogate models for non-holonomic robots
###### Abstract
Data-driven surrogate models of dynamical systems based on the extended dynamic mode decomposition are nowadays well-established and widespread in applications. Further, for non-holonomic systems exhibiting a multiplicative coupling between states and controls, the usage of bi-linear surrogate models has proven beneficial. However, an in-depth analysis of the approximation quality and its dependence on different hyper-parameters based on both simulation and experimental data is still missing. We investigate a differential-drive mobile robot to close this gap and provide first guidelines on the systematic design of data-efficient surrogate models.
1
Footnote 1: Optimization-based Control Group, Institute of Mathematics, Technische Universität Ilmenau, Germany, [lea.bold, karl.worthmann]@tu-ilmenau.de.
K. Worthmann gratefully acknowledges funding by the German Research Foundation (DFG, project-ID 507037103).
2
Footnote 2: Institute of Engineering and Computational Mechanics (ITM), University of Stuttgart, Germany, [hannes.eschmann, mario.rosenfelder, [email protected].
The ITM acknowledges the support by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany’s Excellence Strategy – EXC 2075 – 390740016, project PN-4- "Learning from Data - Predictive Control in Adaptive Multi-Agent Scenarios" and project EB195/32-1, 433183605 “Research on Multibody Dynamics and Control for Collaborative Elastic Object Transportation by a Heterogeneous Swarm with Aerial and Land-Based Mobile Robots”.
## 1 Introduction
Non-holonomic vehicles are of indispensible practical value in transportation and robotics. To automate their behavior, accurate models are key for tasks such as motion planning and model-based control. Often, in robotics, simple kinematic models based on first principles are employed because it can be arduous to take into account hardware imperfections and effects beyond kinematics, and because it fits typical cascade-type control approaches. An alternative are data-driven techniques, which need to strike a balance between data efficiency, model expressiveness, efficient and reliable numerical realizations, and, at best, should have a theoretical underpinning that may bring about beneficial theoretical properties such as quantifiable error bounds with finite data. With regard to these requirements, a very popular method is the extended Dynamic Mode Decomposition (eDMD), whose theoretical foundation is the Koopman framework. The Koopman operator lifts the nonlinear dynamics to linear but infinite-dimensional dynamics, which are then approximated using eDMD to generate a data-based surrogate model [1]. This approach has been recently generalized to the setting with inputs [2] to apply linear techniques for the controller design [3]. In this paper, we show, based on real-world data and hardware experiments with a non-holonomic (differential-drive) mobile robot, that and how eDMD in a Koopman framework can be used to learn a model more accurate than the nominal kinematic model. Moreover, we show how it is possible to improve data efficiency and model accuracy by incorporating physical a-priori knowledge.
Even with an accurate model, controller design for non-holonomic systems remains challenging [4] since, e.g., Brockett's condition is violated, meaning that there does not exist a continuous time-invariant stabilizing state-feedback law. For instance, as rigorously shown in [5, 6], techniques like model predictive control based on quadratic costs do not successfully solve the set-point stabilization problem. A remedy is offered by more sophisticated schemes using structural insight, e.g., based on the homogeneous approximation and privileged coordinates, see [6, 7, 8, 9]. This insight is key to understanding whether a linear surrogate model as proposed in eDMDc suffices or a bilinear one is required [10, 11, 12].
Extended DMD with control (eDMDc) has already been explored for robotic systems, e.g., for an inverted pendulum or a tail-actuated robotic fish [13], or within simulations for non-holonomic mobile robots [14]. Even a first experimental validation of Koopman-based LQR control utilizing structural knowledge has been explored for a tail-actuated robotic fish [15]. However, determining an optimal dimensionality of the Koopman-based surrogate model remains challenging [16]. A rare experimental work, in which eDMDc is applied to non-holonomic robots, can be found in [17]. Therein, eDMDc is used to identify a model based on simulated data
using a dictionary consisting of Hermite polynomials, and the prediction of that model is also compared with the behavior of a hardware robot. However, the authors do not identify a model based on data from real-world hardware and, hence, only the nominal dynamics is replicated. Moreover, a bi-linear surrogate model seems to be advantageous as shown in [10, 11] on a simulated robot arm and a planar quadrotor, respectively - a claim, which is further supported in [12, 18] for control-affine systems exhibiting a state-control coupling since lifted linear models of finite dimension cannot capture nonlinear actuation effects inherent in many robotic systems [11].
The contribution of this manuscript is the experimental investigation of the Koopman-based, bi-linear surrogate model in simulation _and_ experiment, which, to the knowledge of the authors, is novel in itself and in the depth of the conducted analysis. In that regard, we consider the so-called one-step error to analyze and compare the prediction accuracy for various reference trajectories in dependence on the key hyperparameters such as the composition of the dictionary, the number of data points, and the control basis employed for the bilinear approach. In particular, we outperform nominal models using surrogate models generated from random real-world data.
Section 2 recaps eDMD in the Koopman framework before the problem setup is given in Section 3. Then, simulation and experimental results are presented in Sections 4 and 5, respectively, before the results are discussed and conclusions are drawn.
**Notation**: For integers \(n,m\in\mathbb{Z}\) with \(n\leq m\), we define \([n:m]\coloneqq\mathbb{Z}\cap[n,m]\).
## 2 Recap: eDMD in the Koopman framework
We consider the nonlinear dynamical system governed by \(\dot{x}(t)=f(x(t))\) with a locally-Lipschitz continuous vector field \(f:\mathbb{R}^{n_{x}}\to\mathbb{R}^{n_{x}}\). Then, for observables \(\varphi\in L^{2}(\mathbb{R}^{n_{x}},\mathbb{R})\), the Koopman operator is defined by the identity
\[(\mathcal{K}^{t}\varphi)(x^{0})=\varphi(x(t;x^{0}))\qquad\forall\,(t,x^{0}) \in\mathbb{R}_{\geq 0}\times\mathbb{R}^{n_{x}}, \tag{1}\]
i.e., instead of evaluating the observable \(\varphi\) at the flow \(x(t;x^{0})\) emanating from the initial condition \(x(0;x^{0})=x^{0}\) at time \(t\), the Koopman operator propagates the observable forward in time \(\mathcal{K}^{t}\varphi\) and, then, evaluates the propagated observable at the initial value \(x^{0}\in\mathbb{R}^{n_{x}}\). Alternatively, one may also work with the generator \(\mathcal{L}\) of the Koopman semigroup \((\mathcal{K}^{t})_{t\in\mathbb{R}_{\geq 0}}\), which satisfies the abstract Cauchy problem \(\dot{z}(t)=\mathcal{L}z(t)\), \(z(0)=\varphi\), see, e.g., [19]. For details on DMD [20] and its variants, we refer to [21] and the references therein. The connection to the Koopman framework is treated in [1]. Here, we restrict ourselves to a compact set \(\mathbb{X}\subset\mathbb{R}^{n_{x}}\), see [19] for a detailed discussion.
For the dictionary \(\mathbb{V}\coloneqq\text{span}\{(\psi_{j})_{j=1}^{N}\}\) with \(\psi_{j}:\mathbb{X}\to\mathbb{R}\), the data-based surrogate model of the Koopman generator using the i.i.d. data points \(x^{[1]},..,x^{[d]}\in\mathbb{X}\) is given by
\[\tilde{\mathcal{L}}_{d}=\tilde{C}^{-1}\tilde{A}\quad\text{ with }\quad\tilde{C}=\tfrac{1}{d}\Psi_{X}\Psi_{X}^{\top}\text{ and }\tilde{A}=\tfrac{1}{d}\Psi_{X}\Psi_{Y}^{\top},\]
where the matrices \(\Psi_{X},\Psi_{Y}\in\mathbb{R}^{N\times d}\) are defined by
\[\Psi_{X}\coloneqq\begin{bmatrix}\psi_{1}(x^{[1]})&\cdots&\psi_{1}(x^{[d]})\\ \vdots&&\vdots\\ \psi_{N}(x^{[1]})&\cdots&\psi_{N}(x^{[d]})\end{bmatrix},\qquad\Psi_{Y}\coloneqq\begin{bmatrix}(\mathscr{L}\psi_{1})(x^{[1]})&\cdots&(\mathscr{L}\psi_{1})(x^{[d]})\\ \vdots&&\vdots\\ (\mathscr{L}\psi_{N})(x^{[1]})&\cdots&(\mathscr{L}\psi_{N})(x^{[d]})\end{bmatrix}.\]
Note that \((\mathscr{L}\psi_{j})(x^{[i]})=f(x^{[i]})\cdot\nabla\psi_{j}(x^{[i]})\) holds for all \((i,j)\in[1:d]\times[1:N]\). Since one cannot expect invariance of \(\mathbb{V}\) w.r.t. the approximated Koopman operator, one projects the outcome to the coordinate functions, which are tacitly assumed to be contained in the dictionary, e.g., \(\psi_{i}(x)=x_{i}\) for all \(i\in[1:n_{x}]\). In the operator setting, a time shift \(\delta>0\) is fixed and the data matrix \(\Psi_{Y}\) contains the entries \(\psi_{j}(x(\delta;x_{i}))\) instead of \((\mathscr{L}\psi_{j})(x_{i})\).
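As an illustration of the estimator above, the following minimal NumPy sketch assembles \(\Psi_{X}\), \(\Psi_{Y}\) and the generator approximation \(\tilde{\mathcal{L}}_{d}=\tilde{C}^{-1}\tilde{A}\) for a toy one-dimensional system; the vector field and dictionary are illustrative choices and not those used later for the robot:

```python
import numpy as np

def edmd_generator(X, f, dictionary, grad_dictionary):
    # X: (n_x, d) array of i.i.d. data points x^[1], ..., x^[d].
    # f: vector field, f(x) -> (n_x,); dictionary: psi(x) -> (N,);
    # grad_dictionary: x -> (N, n_x) Jacobian of the observables.
    d = X.shape[1]
    Psi_X = np.column_stack([dictionary(X[:, i]) for i in range(d)])
    # (L psi_j)(x) = f(x) . grad psi_j(x)
    Psi_Y = np.column_stack([grad_dictionary(X[:, i]) @ f(X[:, i])
                             for i in range(d)])
    C = Psi_X @ Psi_X.T / d
    A = Psi_X @ Psi_Y.T / d
    return np.linalg.solve(C, A)           # \tilde{L}_d = C^{-1} A

# Illustrative 1D example: dx/dt = -x with dictionary {x, x^2}.
f = lambda x: np.array([-x[0]])
psi = lambda x: np.array([x[0], x[0] ** 2])
dpsi = lambda x: np.array([[1.0], [2.0 * x[0]]])
X = np.random.default_rng(1).uniform(-1, 1, size=(1, 500))
print(edmd_generator(X, f, psi, dpsi))      # approx diag(-1, -2)
```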
For control-affine systems \(\dot{x}(t)=f(x(t))+\sum_{i=1}^{n_{u}}g_{i}(x(t))u_{i}(t)\), there are two different options to deduce eDMD-based surrogate models. In [22], a linear surrogate model \(\dot{\psi}=\mathcal{L}\psi+\mathcal{B}u(t)\) (eDMDc) is proposed. To this end, the state is augmented by the control, i.e., \(\tilde{x}=[x^{\top}\ u^{\top}]^{\top}\). An alternative is given by bi-linear surrogate models that explicitly leverage the control-affine structure, i.e., the identity \(\mathcal{L}^{u(t)}=\mathcal{L}^{0}+\sum_{i=1}^{n_{u}}u_{i}(t)(\mathcal{L}^{e_{i}}-\mathcal{L}^{0})\), where \(\mathcal{L}^{e_{i}}\) is the generator for the autonomous dynamics with \(u\equiv e_{i}\). This yields \(\dot{\psi}=\mathcal{L}^{u(t)}\psi\), see, e.g., [23] and the references therein. This approach seems to be preferable. On the one hand, it alleviates the curse of dimensionality resulting from the state augmentation in eDMDc. On the other hand, bilinear models seem to be superior if state-control couplings are present, i.e., one of the vector fields \(g_{i}\) depends on the state \(x\), see [10, 11, 12, 18]. For further details on the Koopman theory for control systems, see, e.g., [3] and the references therein.
The approximation error can be split up into its two sources of error, i.e., the estimation [18] and the projection error [19]. While the latter results from only finitely many observables in the dictionary \(\mathbb{V}\) and, thus, approximating the Koopman generator/operator on the respective finite-dimensional subspace, the former is a consequence of using only finitely many data points \(x^{[i]}\), \(i\in[1:d]\). While the convergence in the infinite-data limit also holds for eDMDc [24], finite-data error bounds are presently only available for the bilinear approach, see [18, 19].
## 3 Problem Setup
The nominal kinematics of the differential-drive robot is given in terms of the driftless control-affine system
\[\dot{x}(t)=\begin{bmatrix}\cos\theta(t)\\ \sin\theta(t)\\ 0\end{bmatrix}v(t)+\begin{bmatrix}0\\ 0\\ 1\end{bmatrix}\omega(t), \tag{2}\]
\(x(0)=x^{0}\). The state \(x=[x_{1}\ x_{2}\ \theta]^{\top}\in\mathbb{X}\subset\mathbb{R}^{3}\) consists of its position \([x_{1}\ x_{2}]^{\top}\) in the plane and its orientation \(\theta\) measured relative to the \(x_{1}\)-axis. Nominally, it is assumed that the robot can instantaneously attain any admissible translational velocity \(v\) in forward direction and angular yaw velocity \(\omega\), so that these act as the system's control input \(u=[v\,\omega]^{\top}\in\mathbb{U}\subset\mathbb{R}^{2}\), where \(\mathbb{U}\) is compact, convex, and \(0\in\mathrm{int}(\mathbb{U})\). In general, the nominal kinematics does not perfectly describe the dynamics of the physical robot since inertia effects, motor dynamics, and manufacturing imperfections are not accounted for. From a mechanical point of view, the dynamics (2) describe the kinematics of a differential-drive mobile robot in the plane under the common assumption that the wheels roll without slipping with the wheel-floor contact point sticking perfectly to the ground, preventing instantaneous lateral motions of the robot and thereby giving rise to a non-holonomic kinematic constraint. A physical robot with such a kinematic setup is employed throughout this contribution. On the nominal kinematic level, the robot's configuration is completely described by means of its pose, hence it is sufficient to formulate the observables based on \(x\). Thus, in general, the learning procedure from Section 2 receives as data recorded pairs of states and corresponding successor states, but not any prior information on the dynamics of the robot. However, in Sec. 4, we show how some mechanical prior knowledge can be incorporated, e.g., when choosing the observables of the dictionary \(\mathbb{V}\).
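For reference, a minimal sketch of the nominal kinematics (2) together with a classical fourth-order Runge-Kutta integration step, as used below to generate simulation data, could look as follows; the step size and initial pose correspond to the scenario of Fig. 1, while the implementation itself is illustrative:

```python
import numpy as np

def unicycle(x, u):
    # Nominal kinematics (2): x = [x1, x2, theta], u = [v, omega].
    v, omega = u
    return np.array([v * np.cos(x[2]), v * np.sin(x[2]), omega])

def rk4_step(x, u, dt):
    # Classical fourth-order Runge-Kutta step with piecewise-constant input.
    k1 = unicycle(x, u)
    k2 = unicycle(x + 0.5 * dt * k1, u)
    k3 = unicycle(x + 0.5 * dt * k2, u)
    k4 = unicycle(x + dt * k3, u)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

x = np.array([0.2, 0.0, -np.pi / 2])   # initial pose as in Fig. 1
u = np.array([0.2, 0.2])               # constant input -> circular motion
for _ in range(500):                   # 10 s with dt = 0.02 s
    x = rk4_step(x, u, 0.02)
print(x)
```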
## 4 Simulation results
In this section, eDMD is applied to the simulated, nominal non-holonomic robot. First, we generate i.i.d. random data matrices \(X_{i}\in\mathbb{R}^{n_{x}\times d}\), \(i\in\{0,\ldots,n_{u}\}\), with \(n_{x}=3\) and \(d=10000\) data points each, where each column is in the set \(\mathbb{X}\) and serves as an initial condition for the dynamical system (2). Each data point contained in \(X_{i}\) is simulated forward \(\delta=0.02\,\mathrm{s}\) with the Runge-Kutta method of fourth order using a specific constant control input \(u_{i}\). For \(i=0\), the latter is chosen as \(u_{0}=0\). For \(i>0\), it is selected to be the \(i\)th vector of a basis \(B\) of \(\mathbb{R}^{n_{u}}\). Here, with \(n_{u}=2\) and the basis \(B=\{u_{1},u_{2}\}\), this yields the matrices \(Y_{0}\), \(Y_{1}\), and \(Y_{2}\) containing in each column the successor states of the states in \(X_{0}\), \(X_{1}\), and \(X_{2}\) for the inputs \(u_{0}\), \(u_{1}\), and \(u_{2}\), respectively. Nominally, the system is free of drift, i.e., \(Y_{0}=X_{0}\). In simulations, different from experiments, it is possible to choose \(X_{0}=X_{1}=X_{2}\), which is done in this section. In the following, the dictionary \(\mathbb{V}\) is spanned by the monomials of \(x_{1}\), \(x_{2}\), and \(\theta\) of degree less than or equal to \(7\), which yields \(N=120\) observables in total, yielding the set of observables \(\mathbb{O}_{120}\). By lifting the matrices \(X_{i},Y_{i}\), \(i\in[0:n_{u}]\), with those observables, the matrices \(\Psi_{X_{i}},\Psi_{Y_{i}}\) are computed, see Section 2. Now, an approximation of the Koopman operator for step size \(\delta\) is computed by \(K_{i}^{\delta}=((\Psi_{X_{i}}\Psi_{X_{i}}^{\top})^{-1}\Psi_{X_{i}}\Psi_{Y_{i}}^{\top})^{\top}\), \(i\in[0:n_{u}]\). Using the bilinear approach, we approximate the Koopman operator for a control value \(u\in\mathbb{U}\subset\mathbb{R}^{n_{u}}\) by \(K_{u}^{\delta}=K_{0}^{\delta}+\sum_{i=1}^{n_{u}}g_{i}\cdot\left(K_{i}^{\delta}-K_{0}^{\delta}\right)\) for factors \(g_{i}\), \(i\in[1:n_{u}]\), which, here, solve the linear system \(g_{1}u_{1}+g_{2}u_{2}=u\). There are two different ways to use the approximated Koopman operator to obtain the approximate values of the coordinates at a time step \(k>0\). In the first surrogate model variant proposed in [23], subsequently referred to as \(\text{SUR}_{1}\), one projects after each time step, i.e., \(x_{j}[k]=(\mathcal{K}_{u[k-1]}^{\delta}\Psi(x[k-1]))_{j}\) for \(j\in[1:n_{x}]\), with the number inside square brackets denoting the time step, where one step is of duration \(\delta\). Between time steps, the new values of the observables are calculated based on the new coordinate values. In the second variant, called \(\text{SUR}_{2}\) in the following, one projects once at the end, i.e., \(x_{j}[k]=((\prod_{i=0}^{k-1}\mathcal{K}_{u[i]}^{\delta})\Psi(x^{0}))_{j}\). To analyze their influence, Fig. 1 shows prediction results for the two variants and, as a reference, the result of the time integration of the nominal kinematic model using the Runge-Kutta method of fourth order. In the depicted scenario, the control input is set to the constant value \(u\equiv\begin{bmatrix}0.2&0.2\end{bmatrix}^{\top}\), i.e., the robot will move in a circle and the basis is \(B=\{e_{1},e_{2}\}\) for the unit vectors \(e_{1},e_{2}\in\mathbb{R}^{n_{u}}\). As can be seen, \(\text{SUR}_{1}\) leads to a trajectory whose error remains comparatively small over the whole trajectory. For the
model \(\text{SUR}_{2}\), however, we receive a trajectory that visibly deviates from the reference already after a fraction of the simulated time, which can also be seen in the error plot. In the second half of the simulation, the prediction based on \(\text{SUR}_{2}\) becomes increasingly inaccurate and quickly unusable. Consequently, from now on, we will only use \(\text{SUR}_{1}\) for Koopman-based surrogate models.
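To make the construction concrete, a minimal NumPy sketch of the operator estimation, the bilinear combination, and the \(\text{SUR}_{1}\) prediction could read as follows; it assumes a user-supplied dictionary function `lift` whose first three observables are the coordinates \(x_{1}\), \(x_{2}\), \(\theta\), and is meant as an illustration rather than the authors' implementation:

```python
import numpy as np

def koopman_operator(X, Y, lift):
    # K_i^delta = ((Psi_X Psi_X^T)^{-1} Psi_X Psi_Y^T)^T as defined above.
    Psi_X = np.column_stack([lift(x) for x in X.T])
    Psi_Y = np.column_stack([lift(y) for y in Y.T])
    return np.linalg.solve(Psi_X @ Psi_X.T, Psi_X @ Psi_Y.T).T

def bilinear_operator(K0, K_basis, B, u):
    # K_u^delta = K_0 + sum_i g_i (K_i - K_0), with g solving g_1 u_1 + g_2 u_2 = u.
    g = np.linalg.solve(np.column_stack(B), u)
    return K0 + sum(gi * (Ki - K0) for gi, Ki in zip(g, K_basis))

def predict_sur1(x0, inputs, K0, K_basis, B, lift, n_x=3):
    # SUR_1: apply K_u^delta to the lifted state and project back to the
    # coordinate observables (assumed to be the first n_x entries) each step.
    x, traj = np.array(x0, float), [np.array(x0, float)]
    for u in inputs:
        Ku = bilinear_operator(K0, K_basis, B, u)
        x = (Ku @ lift(x))[:n_x]
        traj.append(x)
    return np.array(traj)
```

Driven with data generated by the integrator sketch in Section 3 and the \(\mathbb{O}_{120}\) dictionary, this reproduces the workflow described here; numerically, a least-squares solver may be preferable to forming the normal equations explicitly.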
The basis employed for \(\mathbb{R}^{n_{u}}\) need not consist of unit vectors. In the following, we use the bases \(B_{1}=\{[0.2\ 0]^{\top},[0\ 2]^{\top}\}\) and \(B_{2}=\{[0.2\ -0.4]^{\top},[0.2\ 0.6]^{\top}\}\) instead. Basis \(B_{1}\) contains scaled variants of the unit vectors that, in absolute value, fit better to the usual operating points of the employed hardware robot; for instance, it cannot attain translational velocities of \(1\,\text{m/s}\). Still, training with \(B_{1}\) only captures the robot driving a straight line or rotating on the spot. In contrast, to study the influence of the usage of different training motions for learning, the inputs contained in \(B_{2}\) let the robot drive arcs of different radii. In Fig. 2, the results for those two bases are illustrated.
In the plotted scenario, the same random control sequence \(u\) is applied to the models. Once again, the prediction results of the surrogate models are compared to time integrations of the nominal model, which is used as a reference. In addition to the error norm, the one-step prediction error is considered. To calculate the latter, in each time step, starting from the same reference value, the following time step is predicted using the model of choice and the result is compared with the corresponding, subsequent value of the reference. As the results in Fig. 2 show, using random control values leads to a higher error than using the constant control input from Fig. 1, motivating the subsequent analysis using test trajectories where a wider variety of inputs are applied. Moreover, here, the difference between the two bases is negligible. However, it is not a priori clear whether the latter also holds when using data from an imperfect hardware robot. Hence, real-world data is considered subsequently.
## 5 Experimental results
We use a custom-built mobile robot as depicted on the right of Fig. 3. Its pose is tracked by an external tracking system consisting of five Optitrack Prime 13W cameras. The robot receives its inputs, the desired forward translational velocity and the desired angular yaw velocity, wirelessly. On-board software kinematically calculates the angular velocities of the wheels corresponding to the inputs under the assumption of rolling without slipping. Two independent PID controllers operating at a frequency of \(100\,\text{Hz}\) control the motors so that the wheels quickly attain the desired angular velocities. Naturally, due to imperfections, the actual robot velocities may not match the sent ones. The time step is set to \(\delta=0.1\,\text{s}\) subsequently.
### Data Generation
Generating uniformly distributed training samples is possible by driving the robot to each corresponding point in the state space individually, applying one of the \(n_{u}\) inputs, and potentially driving back to that point to apply another input. However, this way of generating training data is notoriously time-inefficient. The more efficient procedure used in this paper works as follows. For the considered robot, holding any input for several time steps nominally results in a circular motion with the radius being determined by the quotient of the translational and angular velocities. The basis vectors of \(B_{1}\), which consist of driving in a straight line and turning on the spot, correspond to circles with infinite and vanishing radii, respectively. Therefore, slightly different sampling
Figure 1: Results from two Koopman-based surrogate models based on first-principles data, with the trajectory emanating from \(x^{0}=[0.2\ 0\ -\pi/2]^{\top}\) on the left, and the norm of the prediction error on the right.
strategies for the two input bases \(B_{1}\) and \(B_{2}\) are used. Starting from an initial position on the admissible motion plane \(\mathbb{P}=[0.0,1.5]\,\mathrm{m}\times[-0.75,0.75]\,\mathrm{m}\) with \(\mathbb{X}=\mathbb{P}\times\mathbb{R}\), a new point is drawn i.i.d. For \(B_{1}\), the robot turns using the corresponding input of the input basis until it faces this generated point. In order to collect as many data points as possible, the robot does at least one full rotation. Subsequently, the robot drives in a straight line toward this generated point. This way, the necessary input of \(B_{1}\) is held for several time steps, generating training samples along the way, making the procedure very time efficient. This procedure is repeated until a sufficient amount of training data is generated. Due to the reasons stated above, for the input basis \(B_{2}\), the sampling strategy is adjusted slightly. The robot, again, turns and drives towards the uniformly randomly generated point. Then, each time alternating between the two basis vectors of \(B_{2}\), the inputs are applied either until a full circle is driven or until the nominal state prediction of the robot leaves \(\mathbb{X}\). Generally, while time efficient and effective, this way of generating samples does not lead to a perfectly uniform distribution. In Fig. 3, some of the trajectories used during the data generation are depicted.1 Another practical consideration concerns the measurement of the robot's orientation. The optical tracking system steadily continues the angular
Figure 3: From left to right, training trajectories used to generate the samples for the input bases \(B_{1}\) and \(B_{2}\), and a photograph of the employed type of custom-built robot are shown.
Figure 2: Results using \(\mathrm{SUR}_{1}\) and the basis \(B_{1}\) or \(B_{2}\). From left to right, top to bottom, the resulting trajectories in the motion plane, the applied control values, the one-step prediction errors, and total error norms are shown.
measurements such that the orientation angle may lie outside of \((-\pi,\pi]\). While it would be possible to use the raw data for training, instead, we leverage the periodicity of the orientation.2
Footnote 2: Each orientation in \(X_{i}\), \(i\in\{0,1,2\}\), is shifted to its equivalent value within \((-\pi,\pi]\). The entries of \(Y_{i}\) are shifted by the same amount as the corresponding entries of \(X_{i}\). However, after that, some orientations in \(Y_{i}\) may still lie outside of \((-\pi,\pi]\), namely if the angle left the interval between the sampling instants. The matrices with shifted entries are then used to compute the surrogate model. Before each evaluation of the model, the orientation is shifted to \((-\pi,\pi]\). Subsequently, the output is then shifted back, resulting in the surrogate model being periodic (but not necessarily continuous) in the orientation.
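A minimal sketch of this orientation pre-processing, matching the description in Footnote 2 but otherwise illustrative, is:

```python
import numpy as np

def wrap_training_pair(theta_x, theta_y):
    # Shift the orientation in X into (-pi, pi] and shift the corresponding
    # successor orientation in Y by the same amount (cf. Footnote 2).
    wrapped = np.pi - np.mod(np.pi - theta_x, 2 * np.pi)
    shift = theta_x - wrapped
    return wrapped, theta_y - shift

print(wrap_training_pair(7.0, 7.2))  # both entries shifted down by 2*pi
```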
### Results
Two main scenarios are considered. In the first scenario, the robot follows an \(\infty\)-shaped trajectory. As the terminal and initial velocities are zero, at the start as well as the end of the trajectory, the speed is increased or decreased linearly to obtain a smoother motion. In the second scenario, the robot shall follow a square-shaped trajectory. To that end, the robot drives trapezoidal velocity profiles on each edge of the square with a top speed of about \(0.2\,\mathrm{m/s}\). At each corner, the robot makes a quarter counter-clockwise turn with a maximum absolute angular velocity of \(1.0\,\mathrm{rad/s}\), with the angular velocities being increased or decreased linearly.
First, we look at Koopman-based models in which we do not incorporate further a-priori knowledge. The training data was generated as described above for the constant controls contained in the bases \(B_{1}\) or \(B_{2}\), for which 4626 or 5182 training data points were recorded, respectively. Because of the results from Section 4, only the Koopman-based surrogate model with projection in each step (\(\mathrm{SUR}_{1}\)) is employed. Results for the \(\infty\)-shaped trajectory can be seen in Fig. 4, where for the two bases \(B_{1}\) and \(B_{2}\) as well as for different observable sets, the resulting predicted trajectories are plotted on the left-hand side and the absolute errors are compared on the right-hand side. The errors are measured relative to a representative lap of the hardware robot. Due to imperfections, when supplied with inputs that should lead to a perfect \(\infty\)-trajectory for the nominal kinematics, the real robot's trajectory is not of perfect shape. Three different surrogate models differing in their dictionaries are considered. Firstly, the set of observables \(\mathbb{O}_{120}\) from Section 4 is used. Secondly, in \(\mathbb{O}_{32}\), compared to \(\mathbb{O}_{120}\), we exclude monomials for which \(x_{1}\) and \(x_{2}\) have a degree larger than 1, yielding 32 observables in total. Finally, we further reduce the number of observables by omitting monomials where \(x_{1}\) or \(x_{2}\) appear multiplied with \(\theta\), leading to \(\mathbb{O}_{11}\) with 11 observables. This is motivated by the physical insight that the robot's dynamics is translation invariant, so it is interesting to see whether incorporating this knowledge improves model quality. In that regard, as can be seen in the upper part of Fig. 4, the predictions of the surrogate model using \(B_{1}\) with \(\mathbb{O}_{120}\) follow the reference rather well for some time but then completely deviate and even leave the experiment area. The paths for \(\mathbb{O}_{32}\) and \(\mathbb{O}_{11}\), however, are nearly indistinguishable and follow the reference well; only the error plot suggests that \(\mathbb{O}_{11}\) might perform a bit better.
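The three dictionaries can be encoded compactly as exponent tuples; the following sketch reproduces the stated cardinalities 120, 32, and 11, where the exact degree conventions are inferred from those counts and are therefore an assumption:

```python
from itertools import product

import numpy as np

def monomials_O120(max_deg=7):
    # All monomials x1^a * x2^b * theta^c with total degree <= 7 (120 terms).
    return [(a, b, c) for a in range(max_deg + 1) for b in range(max_deg + 1)
            for c in range(max_deg + 1) if a + b + c <= max_deg]

def monomials_O32(max_deg=7):
    # Restrict x1 and x2 to degree <= 1 each, theta up to degree 7 (32 terms).
    return [(a, b, c) for a, b in product(range(2), repeat=2)
            for c in range(max_deg + 1)]

def monomials_O11(max_deg=7):
    # Additionally drop monomials mixing x1 or x2 with theta (11 terms).
    return [m for m in monomials_O32(max_deg)
            if not ((m[0] > 0 or m[1] > 0) and m[2] > 0)]

def lift(x, exps):
    # Evaluate the dictionary at a state x = [x1, x2, theta].
    return np.array([x[0] ** a * x[1] ** b * x[2] ** c for a, b, c in exps])

print(len(monomials_O120()), len(monomials_O32()), len(monomials_O11()))  # 120 32 11
```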
To study the influence of the input basis, the same scenario is plotted in the bottom part of Fig. 4 for basis \(B_{2}\). Again, the trajectories for \(\mathbb{O}_{32}\) and \(\mathbb{O}_{11}\) are very close to each other, even in the error plot. However, with \(B_{2}\), using the observables \(\mathbb{O}_{120}\) results in a trajectory that is close to the reference for much longer before the error becomes visible. Therefore, these results seem to suggest that using the basis \(B_{2}\) is considerably better if the set of observables \(\mathbb{O}_{120}\), which does not use any physical insight, is used and slightly better if \(\mathbb{O}_{32}\) and \(\mathbb{O}_{11}\) are employed, which partly or fully presume translational invariance. Due to these findings, we subsequently use the set of observables \(\mathbb{O}_{11}\) since it seems to yield the best predictions but, having fewer elements, is also the most computationally efficient. In particular, as Fig. 5 shows, the surrogate models using \(\mathbb{O}_{11}\) beat the predictions of the nominal model as well as (naturally) of the surrogate model from Section 4.
From now on, due to the beneficial performance, if not stated otherwise, we employ the basis \(B_{2}\) with the observables \(\mathbb{O}_{11}\) and compare the prediction performance of the corresponding surrogate model with the nominal model and experiment runs. In particular, we include 15 experiment runs for each scenario since subsequent experiment realizations generally do not yield identical results due to disturbances, meaning that perfect prediction performance is impossible. Results for the \(\infty\)-trajectory and for the square-shaped trajectory are plotted in Fig. 6. From the trajectory plots in the upper part, it becomes evident that the Koopman-based prediction outperforms the nominal model, better representing the systematic skewedness of the physical robot's trajectories. Similarly, in the lower part, the minimum, maximum and average Euclidean norms of the error between Koopman-based prediction and the family of hardware robot trajectories show that prediction quality is consistently good.
A remaining concern is data efficiency. Hence, subsequently, training data points are removed systematically to obtain smaller training data sets. The original training data was generated by choosing, for \(B_{1}\), \(m_{1}=50\) and, for \(B_{2}\), \(m_{1}=39\) random initial conditions in \(\mathbb{X}\). The relevant trajectory pieces driven for each sampled point are all of different lengths, e.g., depending on the distance to the boundary of \(\mathbb{X}\). To systematically reduce the number of data points, first, these lengths are unified by taking the length of the shortest trajectory, which, here, consists of \(m_{2}=20\) steps, and discarding the data points beyond that for each trajectory segment. This leads to
Figure 4: Results using \(B_{1}\) (top) and \(B_{2}\) (bottom) based on real data, where trajectories for different sets of observables are compared with the result of an experiment run. Absolute errors are depicted on the right, independently for position (pos., norm) and orientation (\(\theta\)).
Figure 5: Comparison of the surrogate models using \(B_{1}\) and \(B_{2}\) with models based on first principles and on data generated from a first principles model.
a new training data set of \(m_{1}\cdot m_{2}=1000\) or \(780\) per basis. Then, every \(n\)th data point, \(n\in\{1,20,50,100\}\), is used to create training data sets of lower cardinality. Using \(\mathbb{O}_{11}\), the different resulting surrogate models' average one-step prediction errors w.r.t. the \(15\) recorded trajectories in the \(\infty\)-scenario are given in Fig. 7. These show that basis \(B_{2}\) seems to be more data efficient since the errors remain lower in data-sparse settings. Moreover, comparatively small training data sets can suffice in this scenario to achieve one-step prediction errors that are consistently smaller than that of the nominal model, especially when using basis \(B_{2}\).
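A minimal sketch of the one-step error metric and of the every-\(n\)th-point thinning used here might look as follows; the model interface is illustrative and can be combined, e.g., with the surrogate or integrator sketches above:

```python
import numpy as np

def one_step_errors(reference, inputs, step_model):
    # At every time step, restart the model from the measured reference state,
    # predict a single step, and compare with the next reference state.
    errs = []
    for x_ref, u, x_next in zip(reference[:-1], inputs, reference[1:]):
        errs.append(np.linalg.norm(step_model(x_ref, u) - x_next))
    return np.array(errs)

def thin_training_data(X, Y, n):
    # Keep only every n-th training pair (n in {1, 20, 50, 100} in Fig. 7).
    return X[:, ::n], Y[:, ::n]
```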
## 6 Summary and Outlook
This contribution showed with a detailed analysis that a bilinear eDMD approach in the Koopman framework can be a very powerful data-driven modeling tool in mobile robotics. Even with a modest amount of data and
Figure 6: Comparison of the surrogate model using \(B_{2}\) with \(15\) experiment runs, where \(e_{\max}\), \(e_{\text{avg}}\), and \(\sigma\) denote maximum, average, and standard deviation of the error norms, respectively.
Figure 7: Average one-step prediction errors for surrogate models using \(B_{1}\) (left) and \(B_{2}\) (right) when only using every \(n\)th data point.
a calculation time in the second range, the approach can be used to learn a dynamical model that is on average more accurate in predictions than the common nominal kinematic model of a differential-drive robot. Moreover, we have seen that and how physical a-priori knowledge can be successfully incorporated into the model, which is interesting beyond the considered application scenario. In particular, we have shown how the dictionary of observables can be modified to account for translation invariance. Still, there are many remaining topics that we will cover in subsequent research. This includes data-driven modeling that strives to include second-order effects such as actuator dynamics and inertia, complicating especially practical considerations such as measuring and sampling of training data. Similarly, we will look at non-holonomic vehicles of higher degree of non-holonomy. Moreover, we intend to use the learned models for data-based predictive control.
**Acknowledgement**: We sincerely thank Manuel Schaller (TU Ilmenau) for his support w.r.t. implementation details and fruitful discussions, which improved our manuscript.
|
2307.04197 | Vacuum Integration: UV- and IR-divergencies | In this note we present the important details regarding the massless vacuum
integrations which are not outlined in the literature. In particular, it has
been shown how the delta-function represents either UV-regime or IR-regime. In
the case of vacuum integration, we advocate the use of sequential approach to
the singular generated functions (distributions). The sequential approach is
extremely useful for many practical applications, in particular, in the
effective potential method. | I. V. Anikin | 2023-07-09T15:05:52Z | http://arxiv.org/abs/2307.04197v1 | # Vacuum Integration: UV- and IR-divergencies
###### Abstract
In this note we present the important details regarding the massless vacuum integrations which are not outlined in the literature. In particular, it has been shown how the delta-function represents either UV-regime or IR-regime. In the case of vacuum integration, we advocate the use of sequential approach to the singular generated functions (distributions). The sequential approach is extremely useful for many practical applications, in particular, in the effective potential method.
## 1 Introduction
In different QFT-models, at the classical level, the effects of spontaneous symmetry breaking are very important in the context of the geometrical analysis of the Goldstone theorem. In this connection, the study of a vacuum state as the potential minimum plays an significant role [1]. Meanwhile, the quantum corrections, that tend usually to distort the geometrical picture, computed within the effective potential (EP) methods allow to return again to the classical geometrical analysis of the models with spontaneous symmetry breaking. In the standard EP-approaches, the quantum corrections are given by the the vacuum integrations with the massive propagators. However, the special interest is related to the vacuum integrations with the massless propagators. It is mostly dictated by the use of conformal symmetry (see for example [2; 3; 4]).
On the other hand, working with the vacuum massless integrations, it demands some careful considerations. Indeed, the general dimensional analysis suggests that all vacuum integrations with the massless propagators lead to zero [5; 6]. It is true except a particular case of dimensionless integrand where the ultraviolet (UV), or infrared (IR), momentum region is only under consideration. In this case, the arguments of dimensional analysis cannot be applied.
In [7], it has been shown that the vacuum integration of dimensionless and massless integrand is proportional to \(\delta(n-D/2)\) where the space dimension is defined as \(D=d-2\epsilon\) (\(d=2,4,6\) etc.) and \(n\) implies the propagator index. The delta-function as a singular generated function (distribution) is a well-defined linear functional on the suitable finite function space. In the case of dimensional regularization, this space should be realized with the integration measure as \(d\epsilon\,\varphi(\epsilon)\) where \(\varphi\) has a localized support. However, it is not always convenient, even possible, to deal with the measure as \(d\epsilon\)[3; 4]. Moreover, owing to the symmetry properties, the delta-function is usually hiding the information on the UV(or IR)-divergency.
Following Gorishni-Isaev's method [7], we present all necessary details on the vacuum integration where the delta-function has been treated in the frame of the sequential approach [8; 9]. We also demonstrate how the delta-function represents the UV(IR)-regimes.
## 2 \(\Delta_{F}(0)\)-singularity
Let us consider the simplest case of the scalar massless propagator \(\Delta_{F}(0)\) giving the tad-pole diagram. Using the Fourier transform, the propagator \(\Delta_{F}(0)\) can be written as 1
Footnote 1: For the sake of shortness, here and in what follows the momentum loop normalization is hidden in \((d^{D}k)\). Moreover, the Euclidian measure of momentum integrations has been implies.
\[\Delta_{F}(0)=\int\frac{(d^{D}k)}{k^{2}}=\int(d^{D}k)\Big{\{}C^{-1 }(D,1)\int d^{D}z\,\frac{e^{-ikz}}{\left(z^{2}\right)^{D/2-1}}\Big{\}}\] \[=C^{-1}(D,1)\int d^{D}z\,\frac{\delta(z)}{\left(z^{2}\right)^{D/2 -1}}\equiv\Gamma(D/2-1)\int(d^{D}z)\,\frac{\delta(z)}{\left(z^{2}\right)^{D/2 -1}}, \tag{1}\]
where the integration measure \((d^{D}z)\) absorbs the normalization constant \(i(-\pi)^{D/2}\) arising from
\[C^{-1}(D,n)=i(-\pi)^{D/2}\frac{\Gamma(D/2-n)}{\Gamma(n)}. \tag{2}\]
If we assume that \(D/2-1=0\), then the propagator in Eqn. (1) takes a form of
\[\Delta_{F}(0)=\Gamma(0)\int(d^{D}z)\,\delta(z)\Rightarrow\Gamma(0), \tag{3}\]
where, as well-known, the singularity of \(\Gamma\)-function can be presented as
\[\Gamma(0)=\lim_{\epsilon\to 0}\Gamma(\epsilon)=\lim_{\epsilon\to 0}\Big{\{} \frac{1}{\epsilon}+....\Big{\}}. \tag{4}\]
It is worth to notice that the condition given by \(D/2-1=0\) should be applied before the integration over \((d^{D}k)\) in Eqn. (1) in order to avoid the uncertainty, see also Sec. 3.
On the other hand, according to [7], the vacuum integration method applied to the Feynman propagator results in the delta-function. Let us remind a key moment of Gorishni-Isaev's method. Using the spherical system (in the momentum Euclidian space), \(\Delta_{F}(0)\) can be represented as
\[\Delta_{F}(0)=\int\frac{(d^{D}k)}{k^{2}}=\frac{1}{2}\int d\Omega\int_{0}^{ \infty}d\beta\,\beta^{D/2-2}, \tag{5}\]
where \(d\Omega\) gives the finite angle measure of integration. The replacement \(\beta=e^{y}\) leads to the following expression
\[\Delta_{F}(0)=\frac{1}{2}\int d\Omega\int_{-\infty}^{\infty}(dy)\,e^{iy\,\left[ (-i)(D/2-1)\right]}=\frac{1}{2\,|i|}\delta\big{(}D/2-1\big{)}\,\int d\Omega \tag{6}\]
or, restoring all coefficients, it reads
\[\Delta_{F}(0)=-2i\,\pi^{1+D/2}\,\delta(1-D/2)\Big{|}_{D=2}=-2i\,\pi^{2}\, \delta(0). \tag{7}\]
So, for the case of \(D=2\), the matching of Eqns. (3) and (7) gives the following representation
\[(-i)\,\Delta_{F}(0)=\Gamma(0)=-\,2\,\pi^{2}\,\delta(0). \tag{8}\]
With this, we may conclude that the \(\delta(0)\)-singularity can be treated as the singularity of \(\Gamma(0)\), see Eqn. (4). The same inference has been reached by a different method, see [2]. Notice that the physical (UV or IR) nature of the mentioned singularity remains somewhat hidden.
In dimensional regularization, the UV- and IR-divergencies are associated with a small positive (\(\epsilon>0\)) or negative (\(\epsilon<0\)) regularization parameter \(\epsilon\), respectively. In this connection, using the \(\alpha\)-parametrization, we rewrite Eqn. (1) as
\[\Delta_{F}(0)=\int\frac{(d^{D}k)}{k^{2}}=\Gamma(D/2-1)\int(d^{D}z )\ \frac{\delta(z)}{\left(z^{2}\right)^{D/2-1}}\] \[=\int(d^{D}z)\ \delta(z)\left\{\int_{0}^{\infty}d\alpha\,\alpha^{D/2-2 }\,e^{-\alpha z^{2}}\right\}=\int_{0}^{\infty}(d\alpha)\,\alpha^{D/2-2}. \tag{9}\]
Hence, one gets (modulo the normalization factor which is now irrelevant)
\[\Delta_{F}(0)=\int\frac{(d^{D}k)}{k^{2}}=\int_{0}^{\infty}(d\alpha)\,\alpha^{ D/2-2}\Rightarrow\frac{1}{D/2-1}\Big{\{}\lim_{\alpha\to\infty}\alpha^{D/2-1}- \lim_{\alpha\to 0}\alpha^{D/2-1}\Big{\}}. \tag{10}\]
From Eqn. (10), one can see that the first term corresponds to the UV-divergency, while the second term corresponds to the IR-divergency. That is, we have
\[\lim_{\alpha\to\infty}\alpha^{D/2-1}=[\infty]_{\rm UV} \quad\text{if }D>2,\] \[\lim_{\alpha\to 0}\alpha^{D/2-1}=[\infty]_{\rm IR} \quad\text{if }D<2. \tag{11}\]
In other words, if the dimensional parameter \(\epsilon\) in \(D=d-2\epsilon\) is small, \(|\epsilon|<1\), and varies from negative to positive values, we have the following representation for \(\Delta_{F}(0)\)
\[\Delta_{F}(0)\Big{|}_{d=2} \Rightarrow\frac{1}{D/2-1}\Big{\{}\Theta(D>2\,|\,\epsilon<0)\lim_ {\alpha\to\infty}\alpha^{D/2-1}-\Theta(D<2\,|\,\epsilon>0)\lim_{\alpha\to 0} \alpha^{D/2-1}\Big{\}}=0\] \[\Rightarrow\delta\big{(}1-D/2\big{)}\Big{|}_{D\neq 2}=0, \tag{12}\]
where \(\epsilon\) should be considered as an external independent parameter. From Eqns. (1) and (12), one can see that, in dimensional regularization, a small positive \(\epsilon\) regularizes the UV-divergency but not the IR-divergency. Thus, each of the methods gives the same final conclusion.
To conclude this section, we recall another useful representation given by
\[\Delta_{F}(0)=\lim_{z^{2}\to 0}\Delta_{F}(z^{2})=\lim_{z^{2}\to 0}\frac{1}{4 \pi}\delta_{+}(z^{2})=\delta(0),\quad z\in\mathbb{E}^{4} \tag{13}\]
which is in agreement with Eqns. (7) and (8).
## 3 Vacuum integration as a limit of non-vacuum integration
We now address the relation between vacuum and non-vacuum integrations. In the dimensional regularization procedure, we begin with the consideration of the two-point 1PI massless Green function given by
\[\mathcal{I}(p^{2})=\int\frac{(d^{D}k)}{k^{2}(k^{2}+p^{2})}=(c.c.)\,(p^{2})^{D/ 2-2}\,G(1,1), \tag{14}\]
where \((c.c.)\) denotes the coefficient constant and
\[G(1,1)=\frac{\Gamma(-D/2+2)\Gamma^{2}(D/2-1)}{\Gamma(D-2)}. \tag{10}\]
Using \(D=4-2\epsilon\), we get
\[\mathcal{I}(p^{2})=\int\frac{(d^{D}k)}{k^{2}(k^{2}+p^{2})}=(c.c.) \left(p^{2}\right)^{-\epsilon}\frac{\Gamma(\epsilon)\Gamma^{2}(1-\epsilon)}{ \Gamma(2-2\epsilon)}. \tag{11}\]
In the expressions above, the scale dependence on \(\mu^{2}\) is hidden as irrelevant.
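The \(\epsilon\)-pole structure of this expression can be checked symbolically; the snippet below (again an illustrative sketch with sympy, not part of the original text) expands the \(\Gamma\)-function combination of Eqn. (11) around \(\epsilon=0\).

```python
# Sketch: expand G(1,1) = Gamma(eps) Gamma(1-eps)^2 / Gamma(2-2eps) near eps = 0.
# Expected leading behaviour: 1/eps + (2 - EulerGamma) + O(eps).
from sympy import symbols, gamma, series

eps = symbols('epsilon')
G11 = gamma(eps) * gamma(1 - eps)**2 / gamma(2 - 2*eps)
print(series(G11, eps, 0, 2))
```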
The vacuum integration can be obtained from Eqn. (11) with the help of the corresponding limit as
\[\mathcal{V}_{2}\equiv\int\frac{(d^{D}k)}{(k^{2})^{2}}=\lim_{p^{2} \to 0}\mathcal{I}(p^{2}). \tag{12}\]
There are, however, some subtleties in this limit which we now consider. Indeed, using the \(\alpha\)-representation, let us calculate the integral of Eqn. (11). We have the following
\[\mathcal{I}(p^{2})=(c.c.)\int_{0}^{\infty}d\alpha\,d\beta\,\frac{e^{-p^{2}\frac{\alpha\beta}{\alpha+\beta}}}{[\alpha+\beta]^{D/2}}=(c.c.)\int_{0}^{\infty}d\lambda\,\lambda^{1-D/2}\int_{0}^{1}dx\,e^{-p^{2}\lambda x\bar{x}}, \tag{13}\]
where
\[\alpha=\lambda x_{1},\quad\beta=\lambda x_{2},\quad\lambda\in[ 0,\infty]. \tag{14}\]
The next stage of calculations is to make a replacement as
\[\tilde{\lambda}=p^{2}\lambda x\bar{x},\quad d\tilde{\lambda}=p^{2 }x\bar{x}d\lambda \tag{15}\]
in the exponential function. This replacement simplifies the integrals and leads to the corresponding combination of \(\Gamma\)-functions denoted as \(G(1,1)\) [5, 6]. Ultimately, we reproduce the results presented above.
Now, the first mathematical subtlety is that if we suppose the limits \(p^{2}\to 0\) and \(\epsilon\to 0\) are taken consecutively rather than simultaneously, it is clear that these limits do not commute, _i.e._
\[\big{[}\lim_{p^{2}\to 0},\lim_{\epsilon\to 0}\big{]}\neq 0. \tag{16}\]
On the other hand, if the limits are taken simultaneously, we deal with an uncertainty of the type \([0]^{0}\) which should be somehow resolved.
The second subtlety is related to the limit \(p^{2}\to 0\) and the replacement of Eqn. (15). Namely, in order to avoid the mentioned uncertainty, we have to implement the limit \(p^{2}\to 0\) before the possible replacement. In this case, the limit \(p^{2}\to 0\) is a well-defined operation and we finally obtain
\[\lim_{p^{2}\to 0}\mathcal{I}(p^{2})=(c.c.)\int_{0}^{\infty}d \lambda\lambda^{1-D/2}=\frac{1}{2-D/2}\Big{\{}\lim_{\lambda\to\infty}\lambda^ {2-D/2}-\lim_{\lambda\to 0}\lambda^{2-D/2}\Big{\}}\] \[\equiv \int\frac{(d^{D}k)}{(k^{2})^{2}}=\mathcal{V}_{2}. \tag{17}\]
## 4 \(\delta(0)\)-singularity
We are now in a position to discuss the treatment of the \(\delta(0)\)-singularity (or \(\delta(0)\)-uncertainty). To this aim, we follow the sequential approach to singular generalized functions (distributions).
On the one hand, based on dimensional analysis, we may conclude that all massless vacuum integrations vanish, _i.e._
\[\mathcal{V}_{n}=\int\frac{(d^{D}k)}{[k^{2}]^{n}}=0\quad\text{for}\;n\neq D/2. \tag{10}\]
However, the case of \(n=D/2\) (or \(n=2\) if \(\varepsilon\to 0\)) requires special consideration because the dimensional analysis argument no longer works. Nevertheless, the nullification of \(\mathcal{V}_{D/2}\) still takes place, but for a different reason: it turns out that the ultraviolet and infrared divergencies cancel each other. Hence, if only the ultraviolet divergencies are under consideration, \(\mathcal{V}_{D/2}\) is not equal to zero.
To demonstrate this, we dwell on the vacuum integration which is externally IR-regularized. Recall that, in the space with \(D=d-2\varepsilon\), a positive value of \(\varepsilon\) allows one to avoid the UV-divergency. In the spherical coordinate system, we write the following representation
\[\mathcal{V}_{2}=\int_{UV}\frac{(d^{D}k)}{[k^{2}]^{2}}\equiv\frac{\pi^{D/2}}{ \Gamma(D/2)}\int_{\mu^{2}}^{\infty}d\beta\beta^{D/2-3}\quad\text{with}\;\beta =|k|^{2}, \tag{11}\]
where \(\mu^{2}\) plays the role of an IR regulator and the angular integration given by the measure \(d\Omega\) has been calculated explicitly. Next, performing the \(\beta\)-integration, we reach the representation
\[\mathcal{V}_{2}=\frac{\pi^{2-\varepsilon}\mu^{-2\varepsilon}}{\Gamma(2- \varepsilon)}\left.\frac{1}{\varepsilon}\right|_{\varepsilon\to 0}, \tag{12}\]
where it is shown that the \(\varepsilon\)-pole corresponds to the UV-divergency only, because the IR-divergency is absent by construction thanks to \(\mu^{2}\). This is a very well-known representation used, for example, in [5, 6].
On the other hand, we are able to calculate the vacuum integration by Gorishni-Isaev's method [7]. In this case, \(\mathcal{V}_{n}\) reads
\[\mathcal{V}_{n}=\int\frac{(d^{D}k)}{[k^{2}]^{n}}=\frac{2i\,\pi^{1+D/2}}{(-1)^ {D/2}\,\Gamma(D/2)}\delta(n-D/2). \tag{13}\]
Supposing \(D=4-2\varepsilon\), the only contribution is given by
\[\mathcal{V}_{2}=\int\frac{(d^{D}k)}{[k^{2}]^{2}}=\frac{2i\,\pi^{3-\varepsilon }}{\Gamma(2-\varepsilon)}\delta(\varepsilon)\,\neq\,0. \tag{14}\]
Hence, the delta-function of argument \(\varepsilon\) reflects the UV-divergency. We stress that the representations of \(\mathcal{V}_{2}\) given by Eqns. (12) and (14) are equivalent.
The delta-function, as a generalized function (distribution), is a singular linear functional (one that cannot be generated by any locally-integrable function) defined on a suitable space of test functions. Such a definition is perfectly valid but it is not unique. Namely, the delta-function can also be understood with the help of fundamental sequences of regular functionals, provided the corresponding weak limit exists, see for example [8; 9]. Besides, one of the delta-function representations is related to the following realization
\[\delta(t)=\lim_{\varepsilon\to 0}\delta_{\varepsilon}(t)\equiv\lim_{ \varepsilon\to 0}\frac{St.F.(-\varepsilon\leq t\leq 0)}{\varepsilon}, \tag{21}\]
where \(St.F.(-\varepsilon\leq t\leq 0)\) denotes the well-known step-function, which involves no uncertainties.
Going back to Eqn. (14), one can see that the treatment of \(\delta(\varepsilon)\) as a linear (singular) functional on a space of test functions with \(d\mu(\varepsilon)=d\varepsilon\,\phi(\varepsilon)\) meets some difficulties within the dimensional regularization approach. Indeed, for practical use, \(\varepsilon\) is not a convenient variable for the construction of the test function space because we finally need to focus on the limit \(\varepsilon\to 0\).
Meanwhile, within the sequential approach [8; 9], the delta-function may be considered as a usual singular (meromorphic) function, and the \(\delta(0)\)-singularity/uncertainty can be treated as a pole of first order [2],
\[\delta(0)=\lim_{\varepsilon\to 0}\delta_{\varepsilon}(0)\equiv\lim_{ \varepsilon\to 0}\frac{1}{\varepsilon}. \tag{22}\]
For the demanding mathematician, the representation of Eqn. (22) should be understood merely as a symbol. That is, \(\delta(0)\) denotes alternatively the limit of \(1/\varepsilon\). This representation is also backed by the obvious fact that Eqns. (21) and (20) are equivalent ones.
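A simple numerical illustration of this point (a sketch added here, with numpy assumed; it is not part of the original argument) shows that the regularized sequence built from the step-function acts on a test function as a delta-function, while its value at the origin grows as \(1/\varepsilon\).

```python
# Sketch: delta_eps(t) = indicator(-eps <= t <= 0)/eps acts as a delta-function
# on a smooth test function, while delta_eps(0) = 1/eps diverges as eps -> 0.
import numpy as np

def smeared_action(phi, eps, num=20001):
    t = np.linspace(-eps, 0.0, num)
    return phi(t).mean()                  # approximates (1/eps) * integral of phi over [-eps, 0]

phi = lambda t: np.cos(t) * np.exp(t)     # arbitrary smooth test function with phi(0) = 1
for eps in (1e-1, 1e-2, 1e-3):
    print(eps, smeared_action(phi, eps), 1.0 / eps)
# The functional value tends to phi(0) = 1; the pointwise value 1/eps has no limit.
```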
It is worth noticing that the representation of \(\delta(0)\) through the pole of an arbitrary meromorphic function should be used very carefully. For example, if we suppose that (here, \(z\in\mathbb{E}^{4}\) and the delta-function is assumed to be a functional on a space of test functions)
\[\left[\delta(z)\right]^{2}=\delta(0)\,\delta(z), \tag{23}\]
the representation given by
\[\delta(z)=\lim_{\varepsilon\to 0}\delta_{\varepsilon}(z),\quad \delta_{\varepsilon}(z)=\frac{1}{\pi^{2}\varepsilon^{4}}e^{-z^{2}/\varepsilon ^{2}}\Rightarrow\delta(0)\sim\delta_{\varepsilon}(0)=\frac{1}{\pi^{2} \varepsilon^{4}} \tag{24}\]
does not satisfy the condition of Eqn. (23). Another informative example can be found in [10].
## 5 Conclusion
To conclude, we have presented important clarifications regarding massless vacuum integrations. In this note, we have demonstrated the advantage of the sequential approach, in which singular generalized functions (distributions) are treated as fundamental sequences of regular functionals. With this treatment, the \(\delta(0)\)-uncertainty can be resolved via a meromorphic function with a first-order pole. It has also been shown in detail how the delta-function represents either the UV-regime or the IR-regime.
## Acknowledgements
Our special thanks go to S.V. Mikhailov and L. Szymanowski for very useful and stimulating discussions. |
2301.01361 | Modeling the Rhythm from Lyrics for Melody Generation of Pop Song | Creating a pop song melody according to pre-written lyrics is a typical
practice for composers. A computational model of how lyrics are set as melodies
is important for automatic composition systems, but an end-to-end
lyric-to-melody model would require enormous amounts of paired training data.
To mitigate the data constraints, we adopt a two-stage approach, dividing the
task into lyric-to-rhythm and rhythm-to-melody modules. However, the
lyric-to-rhythm task is still challenging due to its multimodality. In this
paper, we propose a novel lyric-to-rhythm framework that includes
part-of-speech tags to achieve better text setting, and a Transformer
architecture designed to model long-term syllable-to-note associations. For the
rhythm-to-melody task, we adapt a proven chord-conditioned melody Transformer,
which has achieved state-of-the-art results. Experiments for Chinese
lyric-to-melody generation show that the proposed framework is able to model
key characteristics of rhythm and pitch distributions in the dataset, and in a
subjective evaluation, the melodies generated by our system were rated as
similar to or better than those of a state-of-the-art alternative. | Daiyu Zhang, Ju-Chiang Wang, Katerina Kosta, Jordan B. L. Smith, Shicen Zhou | 2023-01-03T21:30:20Z | http://arxiv.org/abs/2301.01361v1 | # Modeling the Rhythm From Lyrics for Melody Generation of Pop Song
###### Abstract
Creating a pop song melody according to pre-written lyrics is a typical practice for composers. A computational model of how lyrics are set as melodies is important for automatic composition systems, but an end-to-end lyric-to-melody model would require enormous amounts of paired training data. To mitigate the data constraints, we adopt a two-stage approach, dividing the task into lyric-to-rhythm and rhythm-to-melody modules. However, the lyric-to-rhythm task is still challenging due to its multimodality. In this paper, we propose a novel lyric-to-rhythm framework that includes part-of-speech tags to achieve better text-setting, and a Transformer architecture designed to model long-term syllable-to-note associations. For the rhythm-to-melody task, we adapt a proven chord-conditioned melody Transformer, which has achieved state-of-the-art results. Experiments for Chinese lyric-to-melody generation show that the proposed framework is able to model key characteristics of rhythm and pitch distributions in the dataset, and in a subjective evaluation, the melodies generated by our system were rated as similar to or better than those of a state-of-the-art alternative.
Daiyu Zhang Ju-Chiang Wang Katerina Kosta Jordan B. L. Smith Shicen ZhouByteDance{daiyu.zhang, ju-chiang.wang, katerina.kosta, jordan.smith, zhoushicen}@bytedance.com
## 1 Introduction
Setting lyrics to a melody is a common but complex task for a composer. The form, articulation, meter, and symmetry of expression in lyrics can inspire, or set constraints on, the melodic arrangement. Given the importance of melody, it is unsurprising that the decades-long history of Music Metacreation systems includes countless melody-creation systems (see [1] for a review). However, less attention has been paid to the lyric-to-melody generation task (i.e., generating a melody for given input lyrics). The task is challenging for many reasons, including but not limited to: the need to handle the prosody of the text correctly (e.g., one should avoid setting an unstressed word like 'the' on a stressed note in the melody); the need to reflect the structure of the lyrics in the melody; and the need to create a good melody to begin with.
With the rapid growth of deep learning tools, this task has gained more attention, and there are many recent examples of lyric-to-melody creation systems, most using an end-to-end approach [2, 3, 4, 5]. Modeling the relationship between lyric syllables and musical notes is a complex, cross-modal task, but it is hoped that we can succeed with a large amount of paired examples (i.e., lyrics aligned to their corresponding melodies). However, acquiring such data is expensive, and using unsupervised learning has shown limited performance gains [3]. All the systems mentioned here are trained on fewer than 200,000 examples of song lyrics; by contrast, the text-to-image system DALL-E has 12 billion parameters and involved hundreds of millions of paired text-image training data [6].
One alternative, suggested in [7], is to pick an intermediate representation and adopt a two-stage approach: one model to convert lyrics to the chosen representation, and a second to convert that to a melody. The motivation is that there is sufficient data to train each model separately, without the paired lyrics-melody data required by the end-to-end approach.
We choose 'rhythm' as the intermediate step because, if we disregard melismas and expressive singing techniques, we can assume there is a one-to-one correspondence between syllables and onsets, and between onsets and melody pitches. Also, there is plenty of data to model each step: first, from karaoke-style scrolling lyrics data, we can obtain an alignment between syllables in lyrics and note onsets in music, and thus note durations and metrical positions, too. Second, there are multiple public datasets from which to learn to assign pitches for each note given their duration. Our goal is then to solve two sub-tasks, namely lyric-to-rhythm and rhythm-to-melody, with an
Figure 1: Diagram of the proposed system.
assumption that the rhythm generation process is independent of the pitch generation one [7].
There are many recent melody generation models [8, 9, 10, 11], but lyric-to-rhythm modeling is rarely attempted. In this paper, we introduce a novel framework for converting lyrics to rhythms using an encoder-decoder Transformer architecture [12]. The proposed system is outlined in Fig. 1: given an input set of lyrics, a lyric-to-rhythm module assigns onset times and durations for each syllable. This rhythm, along with a user-provided chord progression, is fed into a Chord-conditioned Melody Transformer (CMT) [13], a state-of-the-art melody generation system, to predict the pitch for each note. The details of the lyric-to-rhythm module and the CMT are provided in Sections 3.3 and 3.2, respectively.
## 2 Background
Lyrics and melody are not arbitrarily combined; common sense suggests and prior analysis [14] indicates that patterns in lyrics and melodies are related and can be modeled, in part, with features of the melody (e.g., note duration) and lyrics (e.g., syllable stress). One of the earliest lyric-to-melody systems was designed to handle Japanese prosody [15]: first, the input text was segmented into phrases; next, a set of pre-composed rhythms was searched for one that fit the syllable count and matched the accent pattern of the text; finally, pitches were assigned using dynamic programming to optimise the interval directions with the natural prosody of the words. An earlier lyric-to-rhythm system also leveraged a dataset of pre-composed rhythms that were scored based on their match to the input syllable-stress and word-rarity patterns [16]. Although our system has little in common with these works, we do share the use of rhythm as an intermediate representation.
Algorithms for automatic music generation are a subset of Music Metacreation systems [1], which have been present in Western music in many forms, including being used for the creation of standalone pieces and, either offline or in real-time, as part of the human composition process. With the help of machine learning and deep learning architectures, many such systems have been shown to be capable of generating plausible outcomes that match the musical characteristics of given datasets. Supervised generative models aim to learn a representation of the underlying characteristics of a training set distribution. Depending on the model, this representation can be either explicitly depicted or implicitly used to generate samples from the learned distribution [17].
Some systems aim to generate a part of a musical piece with the aid of another given part (including melody-to-lyrics creation [18], the inverse of the task we consider). Conditioning the choice of parameters in a generative model on data from other modalities, such as a bass line or a structure, can yield controllable generation systems [19][p.82-83]. For the case of using chords to condition melody generation, a recent system adapting a generative adversarial network architecture has been presented in [20] with the option of generating melody lines over a given accompaniment. The Chord-conditioned Melody Transformer (CMT) [13] is the most recent effort in this area; we adapt much of the design of this system, extending it to accept both lyrics and chords as input. Details of this system, and how we adapt it, follow in Section 3.
## 3 Methodology
### 3.1 System Overview
Our system design is motivated by the Chord-conditioned Melody Transformer (CMT) [13]. The authors of CMT proposed a two-stage system, assuming a hierarchical, two-phase process for generating melodies, as depicted in Fig. 2: _Stage 1_, generating the rhythm of notes from chord progressions; _Stage 2_, generating the pitch for each note depending on the chord progressions and generated rhythm. Our proposed system augments CMT by replacing chord-to-rhythm (i.e., Stage 1) with a novel _lyric-to-rhythm module_. As a result, users can input the lyrics and chord progression of a full song into our system (see Fig. 1). First, the lyric-to-rhythm module generates the MIDI (with empty pitches). Second, CMT processes the MIDI and chord progression to generate the melody. In this way, the rhythm is generated with a global view of the lyrics, while the melody is generated with a causal view of the rhythm and chords.
In the following subsections, we will first review CMT and explain the difficulties of modifying it to handle the lyric-to-melody task in Section 3.2. Then, we will detail our solution in Section 3.3.
### 3.2 Chord-Conditioned Melody Transformer (CMT)
CMT adopts a pianoroll-like representation [13, 21] that includes chord, rhythm, and pitch (CRP) information. It splits the timeline into semiquaver-length frames (1/4 of a beat), each described by three vectors: a 12-dimensional binary _chord_ vector (pitch classes in the chord get a 1); a 3-dimensional one-hot _rhythm_ vector (onset, hold state, rest state); and a 22-dimensional one-hot melodic _pitch_ vector (for this part we restrict MIDI pitches to between 48 and 67, plus a hold state and a rest state, giving a total dimension of 22). Please refer to [13][Fig. 1] for an illustration.
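To make the frame layout concrete, the following minimal sketch (an illustration in numpy, not the CMT authors' code; the chord and pitch are arbitrary examples) builds the three vectors for a single semiquaver frame.

```python
# Sketch of one semiquaver frame in the chord/rhythm/pitch (CRP) representation.
import numpy as np

PITCH_LOW, PITCH_HIGH = 48, 67           # MIDI pitch range kept for the melody
N_PITCH = PITCH_HIGH - PITCH_LOW + 1     # 20 pitches; +2 for hold and rest = 22

def chord_vector(pitch_classes):
    v = np.zeros(12)                     # 12-dim binary chord vector
    v[list(pitch_classes)] = 1.0
    return v

def rhythm_vector(state):
    v = np.zeros(3)                      # one-hot: onset / hold / rest
    v[{"onset": 0, "hold": 1, "rest": 2}[state]] = 1.0
    return v

def pitch_vector(midi_pitch=None, state="onset"):
    v = np.zeros(N_PITCH + 2)            # 22-dim one-hot melodic pitch vector
    if state == "onset":
        v[midi_pitch - PITCH_LOW] = 1.0
    else:
        v[N_PITCH + {"hold": 0, "rest": 1}[state]] = 1.0
    return v

# Example frame: C major triad, note onset on middle C (MIDI 60)
frame = (chord_vector({0, 4, 7}), rhythm_vector("onset"), pitch_vector(60))
print([len(v) for v in frame])           # [12, 3, 22]
```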
CMT contains three main modules: _Chord Encoder (CE)_, a bidirectional LSTM [22]; _Rhythm Decoder (RD)_, a stack of self-attention blocks; and _Pitch Decoder (PD)_, another stack of self-attention blocks. In Stage 1, given an input chord progression, the chord embedding encoded
Figure 2: A two-stage structure of CMT, where \(\oplus\) represents concatenation. Stage 1: chord-to-rhythm. Stage 2: rhythm+chord-to-pitch based on the result of Stage 1.
by CE is autoregressively sent into RD to output the rhythm embedding, followed by a fully-connected layer ("FC Layer" in Fig. 2) to predict the sequence of rhythm vectors for the entire song. In Stage 2, the concatenation of the chord and rhythm embeddings is autoregressively fed into PD, followed by a fully-connected layer to predict the sequence of pitch vectors. Finally, rhythm and pitch vectors are combined and converted to the melody.
However, to leverage CMT for the lyric-to-melody task, we face three problems. (1) _Multimodality_: CMT was designed to take the input of a chord progression to generate the melody. However, it is non-trivial to directly add a lyric encoder for lyrics input, as lyrics are more complicated sequential data than chords. (2) _Representation_: CMT uses a pianoroll-like (i.e. CRP) representation to encode melody, where the time axis is evenly scaled (e.g., 1/16 beat), so a note may require multiple tokens to carry the duration. This makes it difficult to create a one-to-one mapping that ties a syllable (or character) to a single note token. (3) _Constraint on length_: in CMT, the CE generates the melody on a segment-to-segment basis (e.g., 8 bars at a time) without exploiting the global context of a full-song chord progression. However, we believe the structural information carried in the input lyrics is crucial to determine the repetitive pattern for the output melody. The next subsection details how we address these problems: (1) is addressed with a POS tagger that compactly encodes useful lyrics information; and (2) and (3) are addressed by adapting a Compound Word representation.
### 3.3 Lyric-to-Rhythm Framework
Fig. 3 shows our lyric-to-rhythm framework, which is analogous to a language translation task: i.e., an input sequence of lyrics is translated into an output sequence of notes. In this work, we assume that each syllable (or character) is mapped onto one note as a simplification; handling melismas remains a future challenge. To this end, we adapt an encoder-decoder Transformer architecture [12]. To enhance the repetitive coherence modeling in note sequences, we incorporate relative self-attention [23, 24].
To extract the features of lyrics, we employ _part-of-speech (POS) tagging_ with a Transformer encoder. Following prior works [25, 26], we characterize the rhythmic features of a note with a tuple of (bar_shift, position, duration, onset_shift), and model the sequence of tuples using the _Compound Word (CP)_ Transformer decoder [27]. The lyric-to-rhythm module generates the \(t\)-th note based on the full context of lyrics and the previously generated notes (from the first to (\(t\)-1)-th) in an auto-regressive manner, with the future notes being masked. We describe the POS tag representation and CP Transformer in the next subsections.
#### 3.3.1 Part-of-Speech (POS) Tagging
In natural language processing, POS tagging refers to the process of labeling every word in a text with its part of speech. The taxonomy of POS tags varies by language, but commonly includes 'noun,' 'verb,' 'adjective,' 'adverb,' and others. POS tags can augment the text information by indicating the structure of sentences [28], and thus play an important role in tokenizing the input words in conventional text-to-speech (TTS) systems [29, 30].
POS are word-level descriptors, but we want syllable-level descriptors in order to align the lyrics with the rhythm. (When dealing with Chinese lyrics, we can also say 'character-level' since each Chinese character is one syllable.) Thus, we combine each POS tag with the syllable index to create a POS 'token': e.g., the input English sentence "Why not tell someone," would result in: ['adverb-0', 'adverb-0','verb-0', 'noun-0', 'noun-1'], where the two syllables in "someone" are represented by ['noun-0', 'noun-1'].
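The token construction itself is simple; the sketch below illustrates it on the English example above, with hand-assigned POS tags and syllable counts standing in for a real tagger (the actual system uses a Chinese POS tagger, as described in Section 4.1).

```python
# Illustrative sketch of building syllable-level POS tokens; the tags and syllable
# counts are hard-coded for the example sentence rather than produced by a tagger.
def pos_tokens(words_with_tags):
    tokens = []
    for word, tag, n_syllables in words_with_tags:
        tokens += [f"{tag}-{i}" for i in range(n_syllables)]
    return tokens

example = [("Why", "adverb", 1), ("not", "adverb", 1),
           ("tell", "verb", 1), ("someone", "noun", 2)]
print(pos_tokens(example))
# ['adverb-0', 'adverb-0', 'verb-0', 'noun-0', 'noun-1']
```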
#### 3.3.2 Compound Word Transformer
In contrast to the CP proposed in [27], we do not distinguish between _note_- and _metre_-related events. Instead, we include four tokens in every compound word (see Table 1), so that we can have one set of tokens per syllable. From karaoke scrolling lyrics data, we can obtain the onset and duration of each syllable, and by tracking the downbeats, we can obtain the metric position in the bar. Following [27], each of the four tokens is converted into an embedding, and then the embeddings are concatenated before being sent to the Transformer decoder. Each of the four output embeddings is linearly projected to predict the value for the associated token of the \(t\)-th note.
Once all the notes are ready, we convert them to MIDI (with unspecified pitch) with the following steps (a code sketch follows the list):
1. Place an empty note at bar 0 and position 0.
2. Determine the onset by shifting (bar_shift\(\times\)16+ onset_shift) units from the previous onset.
3. Set the duration by \(min\)(duration, next note's onset_shift).
4. Repeat 2 and 3 until all the notes are processed.
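The sketch below (a paraphrase of the steps above, not the authors' implementation; it assumes 4/4 metre with 16 semiquaver positions per bar, and the demo note list is invented) shows one way the placement could be implemented.

```python
# Minimal sketch of the four note-placement steps listed above.
def place_notes(notes):
    """notes: predicted tuples with bar_shift, onset_shift and duration (1/4-beat units)."""
    placed, onset = [], 0                              # step 1: start at bar 0, position 0
    for i, note in enumerate(notes):
        # step 2: shift by bar_shift * 16 + onset_shift from the previous onset
        onset += note["bar_shift"] * 16 + note["onset_shift"]
        # step 3: duration = min(duration, next note's onset_shift)
        dur = note["duration"]
        if i + 1 < len(notes):
            dur = min(dur, notes[i + 1]["onset_shift"])
        placed.append({"onset": onset, "duration": dur})
    return placed                                      # step 4: repeat for every note

demo = [{"bar_shift": 0, "onset_shift": 0, "duration": 4},
        {"bar_shift": 0, "onset_shift": 4, "duration": 2},
        {"bar_shift": 0, "onset_shift": 4, "duration": 8}]
print(place_notes(demo))
```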
\begin{table}
\begin{tabular}{l|l|l} Token Name & Vocab. & Description \\ \hline bar\_shift & 0, 1, 2 & Time shift in bar to current bar \\ onset & 0 - 15 & Onset in 1/4 beat in current bar \\ duration & 0 - 31 & Duration in 1/4 beat \\ onset\_shift & 0 - 15 & Time shift in 1/4 beat to previous note’s onset \\ \end{tabular}
\end{table}
Table 1: Rhythmic features for a note.
Figure 3: The proposed lyric-to-rhythm framework.
We note that onset is not used for generating MIDIs. Instead, we use the shifted position to determine the onset so that notes are placed in incremental order. Nevertheless, we suspect that onset can help regularization in training. Using the CP representation addresses the "constraint on length" issue mentioned in Section 3.2, as it permits a more compact sequence of tokens that can model a longer duration, such as a full song.
## 4 System Configuration
This section describes how we trained the lyric-to-rhythm and rhythm-to-melody models. For each model we explain what data were used and how they were collected. We focus on Chinese pop songs to validate our system, but the framework could be adaptable to other languages since parts of speech and syllables are broadly useful concepts.
### 4.1 Lyric-to-Rhythm Model
We collected data for 45K Chinese pop songs using a similar pipeline as [31] and [7][Appendix A]. That is, we crawled online to obtain paired lyrics and audio, with timestamps indicating the onset of each line of the lyrics. Then, for each song, we performed the following steps: (1) isolate the vocal audio using source separation; (2) convert lyrics to phoneme sequences; (3) estimate the phoneme onset timestamps using forced phoneme alignment; and (4) estimate the time signature and beat and downbeat times. From the phoneme and beat data, we can derive the syllable onsets and thus the bar-shift, onset, duration and onset-shift attributes required by the model. The steps were performed using in-house tools comparable to those used in [7]: Spleeter [32], Phonemizer [33], Montreal Forced Aligner [34], and madmom [35], respectively.
We kept songs with a detected time signature of 4/4 (around 90%), and quantized the timestamp of each syllable in quarter beats. Errors in automatic lyrics alignment, in beat tracking, and in the detected time signature can all degrade the model quality, so we selected 330 songs to manually adjust the timestamps. This subset was used to fine-tune the model.
For POS tagging, we adopted Jieba1, an open-source tool that supports 56 tags commonly used in Chinese. Without POS tags, the vocabulary size for our dataset was 5,368 unique characters. Reducing to POS and then adding the syllable index resulted in a vocabulary of 123 unique POS tokens. In Sec. 5.2, we will compare a model using this 123-dimensional POS vector to an ablated version of the system that encodes the raw character indices in a 5,368-dimensional vector.
Footnote 1: [https://github.com/fxsjy/jieba](https://github.com/fxsjy/jieba)
In Chinese pop songs, symmetric expression of text structure is commonly reflected in melody repetition. Fig. 4 shows one example: the chorus melody of "Goodbye Kiss"2 by Jacky Cheung. The two phrases outlined in solid boxes are identical in melody, and nearly identical in text; but even where the lyrics are different (in the dotted boxes), they have the same POS tags. With the POS tagging representation, we believe the lyric-to-rhythm model can learn to generate similar rhythms for two text phrases if they have a common structure.
Footnote 2: [https://www.youtube.com/watch?v=bJRkEmrkI04](https://www.youtube.com/watch?v=bJRkEmrkI04)
We use the following parameters for the encoder-decoder Transformer: the input length is 1000; the numbers of heads, encoder-layers, and decoder-layers are 8, 6, and 6, respectively; the embedding sizes of lyrics, bar_shift, onset, duration, and onset_shift are 512, 32, 128, 128, and 128, respectively; dropout is 0.1; batch size is 16; and learning rate is 1e-5 with the Adam optimizer. Using a single Tesla-V100-SXM2-32GB GPU to train a satisfactory model takes \(\sim\)10 hours on the automatically aligned dataset plus 1.5 hours on the manually annotated subset. In both cases we use 10 percent of the dataset for validation.
### 4.2 Rhythm-to-Melody Model
To train the rhythm-to-melody model, we used POP909 [36] and Lead-Sheet-Dataset [37]. POP909 contains data on 909 Chinese pop songs including chords, melody in MIDI format, and other information not used here. Lead-Sheet-Dataset (LSD), a collection of symbolic content sourced from HookTheory,3 contains lead-sheets (i.e., melodies and chords) of 16K song segments in MIDI format. We used the songs in 4/4 time (dropping roughly 10% of the data) and transposed all pieces to the key of C major or A minor. This resulted in about 30K bars of music from POP909 and about 40K from LSD.
Footnote 3: [https://www.hooktheory.com/](https://www.hooktheory.com/)
We followed [13] to train the model, setting the input length to be 8 bars but reducing the pitch range from 48 to 20; any pitches outside the range were octave-shifted to lie within the range. After obtaining the rhythm MIDI of a full song from the lyric-to-rhythm module, pitches were generated for 8 bars autoregressively, with a 4-bar sliding window, i.e., the model composes the next 4 bars given the previous 4 bars already composed.
## 5 Evaluation
We would like to answer two questions: first, does our system succeed in emulating basic musical qualities of the training data? And second, does it produce pleasing, viable
Figure 4: The melody-lyrics-POS example in the chorus section of “Goodbye Kiss”. POS abbreviation key: {’r’: pronoun, ‘c’: conjunction, ‘v’: verb, ’p’: preposition, ‘a’: adjective, ’n’: noun, ‘uj’: auxiliary}.
settings of lyrics? To answer the first, we compare the output melodies of our model (denoted 'pop-melody') to the held-out training data and discuss their similarity. For the second, we conducted a listening test in which participants rated the quality of the lyric settings of our model as well as those of a state-of-the-art alternative, TeleMelody [7].
### 5.1 Objective Results
We analyze the melodies created by our system in two objective evaluation strands. The first one is to demonstrate how similar the rhythms generated by our model are to the original data (see Fig. 5); the second is to look for and characterize the differences between the melodies produced by the two models (see Fig. 6). We compare statistics over several musical quantities computed on the dataset and compositions generated by both systems. For this comparison, we have generated 400 scores from each system and used the same number of scores from the dataset.
Most of the musical quantities we compute are adapted from [38] and [39]. These symbolic descriptors have been shown to enhance melodic expectation when embodied in a cognitively plausible system for music prediction. Expectation and memorability have been shown to be important characteristics for identifying a plausible melody, and surprise and repetition are measurable elements that relate to these characteristics. (For more background on such descriptors and on the concepts of predictability and uncertainty in the pleasure of music, see [40, 41].)
We showcase two sets of descriptors, one for each evaluation strand. The first contains: the _duration_ of the melody notes; their _inter-onset intervals_ (IOIs; the distance from the start of a note to the start of the preceding one); and their metrical _position in bar_. Fig. 5 shows the distributions of these descriptors for the dataset and for the outputs of our system (tagged as "pop-melody") before and after fine-tuning (see Section 4.1). Note that we exclude TeleMelody from this comparison since it was trained on a different dataset (of around 110K samples), so it is not meaningful to compare it to our training data.
Judging from the distributions, the outputs of both models are broadly similar to the melodies in the dataset. However, there is clearly a surfeit of short notes (0.25 crotchets, or sixteenth notes) in the generated melodies, which skews the distribution of IOIs as a result. Also, regarding note position in the bar, there is a subtle variation of the likely onset positions in the dataset that is not reflected in the generated data, which, prior to fine-tuning, has an almost uniform distribution.
The other set of descriptors contains: the _pitch contour_, which gives the likelihood that the next note in the melody will be lower (descending), higher (ascending) or the same; _note sparsity_, which gives the fraction of the timeline which has no note in the melody (a value of 0 indicates no rests in the melody); and the _pitch-in-chord-triads ratio_, a kind of 'consonance' metric, calculated as the fraction of notes in the melody that belong to the accompanying chord triad.
These descriptors are illustrated in Fig. 6, comparing the melodies from our fine-tuned system ("pop-melody") with TeleMelody. Here, the purpose is not to compare the systems to the data--they were trained on different data, and may each reflect their training set well--but to assess how the melodies of the systems differ. From the contour descriptors, it is clear that TeleMelody is more likely to generate many consecutive notes with the same pitch, whereas melodies from the proposed system have more variation. The melodies from our system also tend to have fewer rests, and tend to include more notes that appear in the underlying chord. The latter can be interpreted as a tendency to stay in consonance and limiting the space of "dissonant" or "unexpected" moments.
### 5.2 Subjective Results
We conducted a subjective listening test using a similar design to [3, 7, 42]: we selected lyrics from ten random songs from the test portion of the dataset of 45K songs and used these as input to three systems to generate melodies: "TeleMelody"; "Pop-melody", the proposed system; and "Baseline", an ablated version of our system that does not use POS tokens (see Sec 4.1). This resulted in 30 full songs: ten triples with the same lyrics, chords, and tempo. We rendered the lyrics and melodies to audio with an in-house singing voice synthesizer comparable to Xiaoice [42] and rendered a simple accompaniment with the chords.
We had 20 participants, all of whom had some musical background and could read musical scores and play an
Figure 5: Distributions of descriptor values derived from melodies from the dataset and melodies generated by the proposed pop-melody system.
instrument. Participants listened to one triple at a time, with the system identities masked and the order randomized. Then, they rated each song on four criteria on a Likert scale from _Bad_ (1) to _Excellent_ (5):
1. **Rhythm**: is the timing of notes suitable for the lyrics?
2. **Harmony**: do the pitches fit the chords and key?
3. **Melody**: does the melody line sound natural with the lyrics?
4. **Overall**: what is the overall quality of the melody?
After rating each triple, listeners also rated their familiarity with the original song of the input lyrics on a 5-point Likert scale. The average rating here was 1.5: somewhere between "1. Never heard the title or melody" and "2. Heard the song title, but not the melody".
The results of the study are shown in Table 2. Overall, listeners gave the three systems similar average ratings: all lie within \(3.6\pm 0.25\). However, Wilcoxon signed-rank tests reveal small but consistent differences between the systems; see Table 3 for the \(p\)-values of all comparisons. First, we see that the proposed system is consistently better than Baseline, suggesting that POS-based tokenization is effective. Second, we find that the proposed system also matches or outperforms TeleMelody; the difference is greatest for rhythmic quality. Despite the broadly positive ratings, mostly between _Fair_ (3) and _Good_ (4), comments from the participants mostly cited shortcomings of the output. TeleMelody and Pop-melody both earned comments that the "melody is a little weird" and sometimes "too repetitive", but only the TeleMelody outputs earned comments that the "rhythm is a little weird" and "fragmented".
Two output examples from our system are shown in Figs. 7(a) and 7(b). In both cases the melodies follow the input chord progressions and we find parallelism and variation in the melody when the lyric structure recurs. E.g., in Fig. 7(a), the similar lyrics begin with the same two melody notes (dashed boxes), and the remainders have similar rhythm and contour (solid boxes). Similarly, in Fig. 7(b), the similar lyrics are given identical openings (dashed boxes) with rhythmically identical continuations (solid boxes).
## 6 Conclusion and Future Work
In this paper we proposed a new approach to generate melody for a given lyric by combining lyric-to-rhythm and rhythm-to-melody modules. We found that listeners rated the long-term text-settings provided by our system as acceptable, and at least as good as a competing system.
In order to achieve a cross-modal mapping from syllables to onsets to melody notes, we made the simplifying assumption that each syllable is sung on one note. There is a clear way to improve this in the lyric-to-rhythm module by adding a syllable-state token to the Compound Word, indicating whether we are at the onset of a syllable, or the continuation of one. However, allowing a one-to-many syllable-to-note mapping would also complicate the automatic syllable alignment step, making the hand-corrected data even more precious.
We also found that POS tags were valuable text tokens; using them led to a boost in text-setting quality. Given this success, we ought to leverage more linguistic information, such as syllable stress and word frequency, as in [15, 16]. Music structure labels (e.g., verse and chorus) could also prove valuable. This is an under-explored area, but may become feasible with the introduction of more datasets, or with automatic labeling systems [43] to further augment the existing data.
\begin{table}
\begin{tabular}{l|c c c c} Comparison & Rhythm & Harmony & Melody & Overall \\ \hline \hline Baseline & 3.42(.78) & 3.67(.75) & 3.46(.81) & 3.42(.67) \\ TeleMelody & 3.58(.82) & 3.69(.83) & 3.38(.72) & 3.57(.58) \\ Pop-melody & 3.84(.72) & 3.87(.69) & 3.64(.70) & 3.68(.57) \\ \end{tabular}
\end{table}
Table 2: Subjective result and comparison.
\begin{table}
\begin{tabular}{l|c c c c} Comparison & Rhythm & Harmony & Melody & Overall \\ \hline Pop vs Tele & 0.0006 & 0.007 & 0.001 & 0.1 \\ Pop vs Baseline & 1.1e-08 & 0.002 & 0.006 & 1.8e-05 \\ Tele vs Baseline & 0.01 & 0.89 & 0.22 & 0.02 \\ \end{tabular}
\end{table}
Table 3: P-values of the subjective result comparison.
Figure 6: Distributions of descriptor values derived from melodies generated by TeleMelody [7] and by the proposed pop-melody system.
Figure 7: Output examples (a) above and (b) below. |
2307.03820 | Higher-Order Corrections to Optimisers based on Newton's Method | The Newton, Gauss--Newton and Levenberg--Marquardt methods all use the first
derivative of a vector function (the Jacobian) to minimise its sum of squares.
When the Jacobian matrix is ill-conditioned, the function varies much faster in
some directions than others and the space of possible improvement in sum of
squares becomes a long narrow ellipsoid in the linear model. This means that
even a small amount of nonlinearity in the problem parameters can cause a
proposed point far down the long axis of the ellipsoid to fall outside of the
actual curved valley of improved values, even though it is quite nearby. This
paper presents a differential equation that `follows' these valleys, based on
the technique of geodesic acceleration, which itself provides a 2$^\mathrm{nd}$
order improvement to the Levenberg--Marquardt iteration step. Higher
derivatives of this equation are computed that allow $n^\mathrm{th}$ order
improvements to the optimisation methods to be derived. These higher-order
accelerated methods up to 4$^\mathrm{th}$ order are tested numerically and
shown to provide substantial reduction of both number of steps and computation
time. | Stephen Brooks | 2023-07-07T20:24:23Z | http://arxiv.org/abs/2307.03820v2 | # Higher-Order Corrections to
###### Abstract
The Newton, Gauss-Newton and Levenberg-Marquardt methods all use the first derivative of a vector function (the Jacobian) to minimise its sum of squares. When the Jacobian matrix is ill-conditioned, the function varies much faster in some directions than others and the space of possible improvement in sum of squares becomes a long narrow ellipsoid in the linear model. This means that even a small amount of nonlinearity in the problem parameters can cause a proposed point far down the long axis of the ellipsoid to fall outside of the actual curved valley of improved values, even though it is quite nearby. This paper presents a differential equation that 'follows' these valleys, based on the technique of geodesic acceleration, which itself provides a \(2^{\text{nd}}\) order improvement to the Levenberg-Marquardt iteration step. Higher derivatives of this equation are computed that allow \(n^{\text{th}}\) order improvements to the optimisation methods to be derived. These higher-order accelerated methods up to \(4^{\text{th}}\) order are tested numerically and shown to provide substantial reduction of both number of steps and computation time.
## 1 Definitions and Introduction
Consider finding the value of a vector \(\mathbf{x}\) such that the vector-valued function \(\mathbf{f}(\mathbf{x})=\mathbf{0}\), noting the input and output of \(\mathbf{f}\) might have different dimensions.
Newton's method solves \(J(\mathbf{x})(\mathbf{x}_{\text{new}}-\mathbf{x})=-\mathbf{f}(\mathbf{x})\) where \(J\) is the Jacobian matrix. The Gauss-Newton algorithm generalises this to rectangular \(J\) using pseudo-inverses that may be calculated using Singular Value Decomposition (SVD). The Levenberg-Marquardt algorithm [1; 2] introduces a damping factor into this pseudo-inverse, which allows progress along 'easier' directions without having to go far in 'difficult' directions that may exhibit nonlinearity.
The remainder of this paper will be written for the simpler Newton method, where \(J^{-1}\) is the inverse of the square Jacobian matrix. However, the algorithms derived also work for the Gauss-Newton pseudo-inverse \([J^{-1}]_{GN}=(J^{T}J)^{-1}J^{T}\) and the damped Levenberg-Marquardt version \([J^{-1}]_{LM(\lambda)}=(J^{T}J+\lambda I)^{-1}J^{T}\). The latter is used in numerical tests.
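As a concrete illustration (not taken from the paper's own test code), the snippet below computes the Gauss-Newton and damped Levenberg-Marquardt steps for a small rectangular Jacobian; the matrix and residual are arbitrary examples, chosen nearly rank-deficient so that the damping visibly shortens the step.

```python
# Sketch of the Gauss-Newton and damped Levenberg-Marquardt steps in numpy.
import numpy as np

def gauss_newton_step(J, f):
    # [J^{-1}]_GN f = (J^T J)^{-1} J^T f, negated to step downhill
    return -np.linalg.solve(J.T @ J, J.T @ f)

def levenberg_marquardt_step(J, f, lam):
    # [J^{-1}]_{LM(lambda)} f = (J^T J + lambda I)^{-1} J^T f
    n = J.shape[1]
    return -np.linalg.solve(J.T @ J + lam * np.eye(n), J.T @ f)

J = np.array([[1.0, 1.0],
              [1.0, 1.0 + 1e-4],
              [0.5, 0.5]])                        # nearly rank-deficient Jacobian
f = np.array([0.5, 0.2, -0.1])
print(gauss_newton_step(J, f))                    # very large step along the poorly determined axis
print(levenberg_marquardt_step(J, f, lam=1e-2))   # damping keeps the step modest
```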
One common source of slow convergence is that \(J\) is ill-conditioned, so the optimisation valley is much narrower in some directions than others, while the problem contains some nonlinearity, which may seem small but is amplified once you change into coordinates where \(J\) is well-conditioned. This is because even a small amount of nonlinearity can make the long narrow valley stop overlapping with its approximation in the linear model. This reduced range of validity of the linear model means many small steps have to be taken.
## 2 Natural Optimisation Pathway
The goal of the Newton step is to reduce the error vector \(\mathbf{f}\), ideally to zero. For a nonlinear function, the optimisation follows a curved pathway [3; 4] and one natural such pathway is \(\mathbf{x}(t)\) defined implicitly by
\[\mathbf{f}(\mathbf{x}(t))=(1-t)\mathbf{f}(\mathbf{x}(0))\]
for \(t\in[0,1]\). This scales down all components of the error equally and at \(t=1\) it reaches the true solution.
Taking the first derivative of this equation gives
\[\sum_{i}\partial_{i}\mathbf{f}(\mathbf{x}(t))\dot{x}_{i}(t)=J(\mathbf{x}(t)) \dot{\mathbf{x}}(t)=-\mathbf{f}(\mathbf{x}(0))=-\frac{\mathbf{f}(\mathbf{x}(t ))}{1-t},\]
which at \(t=0\) makes \(\dot{\mathbf{x}}\) equal to the Newton step and to a scaling of it for all \(0<t<1\). So this pathway is always tangent to the Newton step direction and corresponds to the limit of a Newton algorithm run with steps scaled down to be infinitesimally small.
## 3 Higher-Order Derivatives
If the pathway curves, one may wonder whether longer steps can be taken when the curvature is taken into account. The second and higher derivatives of the equation defining the natural pathway have the form
\[\frac{\mathrm{d}^{n}}{\mathrm{d}t^{n}}\mathbf{f}(\mathbf{x}(t))=\mathbf{0}\]
for \(n\geq 2\). Multiple derivatives of a function composition (\(\mathbf{f}\circ\mathbf{x}\) here) are given by Faa di Bruno's formula [7; 8; 9]
\[\frac{\mathrm{d}^{n}}{\mathrm{d}t^{n}}\mathbf{f}(\mathbf{x}(t))=\sum_{\pi\in \Pi_{n}}\mathbf{f}^{(|\pi|)}(\mathbf{x}(t))\bigotimes_{p\in\pi}\mathbf{x}^{(| p|)}(t),\]
where \(\Pi_{n}\) is the set of all partitions of \(\{1,2,...,n\}\). The \(d^{\mathrm{th}}\) derivative of the vector function \(\mathbf{f}\) is a tensor that takes \(d\) vectors as input and outputs a vector, with elements defined by
\[f^{(d)}(\mathbf{x})^{i}_{j_{1}j_{2}...j_{d}}=\frac{\partial^{d}f_{i}(\mathbf{ x})}{\partial x_{j_{1}}\partial x_{j_{2}}...\partial x_{j_{d}}}.\]
Note that \(\mathbf{f}^{(1)}=J\). This paper will adopt compact notation where tensor products of vectors \(\mathbf{u}\otimes\mathbf{v}\otimes\mathbf{w}\) will be written \(\mathbf{uvw}\) so that \((\mathbf{uvw})_{ijk}=u_{i}v_{j}w_{k}\). These may be contracted with the derivative tensor to give a vector written in the form \(\mathbf{f}^{(3)}\mathbf{uvw}\), where \((\mathbf{f}^{(3)}\mathbf{uvw})_{n}=\sum_{i,j,k}f^{(3)n}_{ijk}u_{i}v_{j}w_{k}\).
### 3.1 Second Order
For \(n=2\), \(\Pi_{2}=\{\{\{1\},\{2\}\},\{\{1,2\}\}\}\) and
\[\frac{\mathrm{d}^{2}}{\mathrm{d}t^{2}}\mathbf{f}(\mathbf{x}(t))=\mathbf{f}^{( 2)}(\mathbf{x}(t))\dot{\mathbf{x}}(t)\dot{\mathbf{x}}(t)+\mathbf{f}^{(1)}( \mathbf{x}(t))\ddot{\mathbf{x}}(t)=\mathbf{0}\]
\[\Rightarrow\qquad\ddot{\mathbf{x}}(t)=-J^{-1}(\mathbf{x}(t))\mathbf{f}^{(2)} (\mathbf{x}(t))\dot{\mathbf{x}}(t)\dot{\mathbf{x}}(t).\]
This agrees with the well-known [3; 4; 5; 6] quadratic acceleration term for Levenberg-Marquardt if the \(J^{-1}\) is replaced by a damped pseudo-inverse \([J^{-1}]_{LM(\lambda)}\).
### 3.2 Third Order
For conciseness, \(\mathbf{x}\) and its derivatives will be evaluated at \(t=0\) unless otherwise stated and \(\mathbf{f}\) and its derivatives at \(\mathbf{x}\). For \(n=3\),
\[\Pi_{3}=\{\{\{1\},\{2\},\{3\}\},\{\{1\},\{2,3\}\},\{\{2\},\{1,3\}\},\{\{3\},\{1,2 \}\},\{\{1,2,3\}\}\}\]
and
\[\frac{\mathrm{d}^{3}}{\mathrm{d}t^{3}}\mathbf{f}=\mathbf{f}^{(3)}\dot{\mathbf{ x}}\dot{\mathbf{x}}\dot{\mathbf{x}}+3\mathbf{f}^{(2)}\dot{\mathbf{x}}\ddot{ \mathbf{x}}+\mathbf{f}^{(1)}\mathbf{x}^{(3)}=\mathbf{0}.\]
This gives the third derivative of \(\mathbf{x}\) as
\[\mathbf{x}^{(3)}=-J^{-1}(\mathbf{f}^{(3)}\dot{\mathbf{x}}\dot{\mathbf{x}} \dot{\mathbf{x}}+3\mathbf{f}^{(2)}\dot{\mathbf{x}}\ddot{\mathbf{x}}).\]
### 3.3 Fourth Order, Recurrence and General Case
Higher-order expressions can be obtained either from the set partitions \(\Pi_{n}\) or the equivalent differentiation chain and product rules that obtain \(\frac{\mathrm{d}^{n+1}}{\mathrm{d}t^{n+1}}\mathbf{f}\) from \(\frac{\mathrm{d}^{n}}{\mathrm{d}t^{n}}\mathbf{f}\). The formulae
\[\frac{\mathrm{d}}{\mathrm{d}t}\mathbf{f}^{(n)}=\mathbf{f}^{(n+1)}\dot{ \mathbf{x}}\qquad\text{and}\qquad\frac{\mathrm{d}}{\mathrm{d}t}\mathbf{x}^{( n)}=\mathbf{x}^{(n+1)}\]
together with the product rule are enough to generate the full sequence. Starting from \(n=2\),
\[\mathbf{f}^{(2)}\dot{\mathbf{x}}\dot{\mathbf{x}}+\mathbf{f}^{(1)} \ddot{\mathbf{x}} = \mathbf{0}\] \[\mathbf{f}^{(3)}\dot{\mathbf{x}}\dot{\mathbf{x}}\dot{\mathbf{x}}+ 3\mathbf{f}^{(2)}\dot{\mathbf{x}}\dot{\mathbf{x}}+\mathbf{f}^{(1)}\mathbf{x}^ {(3)} = \mathbf{0}\] \[\mathbf{f}^{(4)}\dot{\mathbf{x}}\dot{\mathbf{x}}\dot{\mathbf{x}} \dot{\mathbf{x}}+6\mathbf{f}^{(3)}\dot{\mathbf{x}}\dot{\mathbf{x}}\ddot{ \mathbf{x}}+4\mathbf{f}^{(2)}\dot{\mathbf{x}}\mathbf{x}^{(3)}+3\mathbf{f}^{( 2)}\ddot{\mathbf{x}}\ddot{\mathbf{x}}+\mathbf{f}^{(1)}\mathbf{x}^{(4)} = \mathbf{0}\]
and so on. A computer algebra system can generate these terms based on a rule like
\[\frac{\mathrm{d}}{\mathrm{d}t}\mathbf{f}^{(n)}\mathbf{x}^{(a)}\mathbf{x}^{(b) }\mathbf{x}^{(c)}=\]
\[\mathbf{f}^{(n+1)}\mathbf{x}^{(1)}\mathbf{x}^{(a)}\mathbf{x}^{(b)}\mathbf{x}^ {(c)}+\mathbf{f}^{(n)}\mathbf{x}^{(a+1)}\mathbf{x}^{(b)}\mathbf{x}^{(c)}+ \mathbf{f}^{(n)}\mathbf{x}^{(a)}\mathbf{x}^{(b+1)}\mathbf{x}^{(c)}+\mathbf{f} ^{(n)}\mathbf{x}^{(a)}\mathbf{x}^{(b)}\mathbf{x}^{(c+1)}\]
and collecting like terms, for example by sorting the \(\mathbf{x}\) derivatives in increasing order.
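A minimal version of such a term generator is sketched below (an illustration, not the author's code): each term is stored as an \(\mathbf{f}\)-derivative order together with a multiset of \(\mathbf{x}\)-derivative orders, and the differentiation rule above is applied repeatedly while like terms are collected.

```python
# Sketch: symbolically generate the terms of d^n/dt^n f(x(t)) by the rule above.
from collections import Counter

def differentiate(terms):
    """terms maps (n, orders) -> coefficient, where n is the order of the f-derivative
    and orders is a sorted tuple of the orders of the x-derivative factors."""
    new = Counter()
    for (n, orders), coeff in terms.items():
        # chain rule on f^(n): f^(n) -> f^(n+1) with an extra factor x^(1)
        new[(n + 1, tuple(sorted(orders + (1,))))] += coeff
        # product rule on each x^(a) factor: x^(a) -> x^(a+1)
        for i, a in enumerate(orders):
            bumped = orders[:i] + (a + 1,) + orders[i + 1:]
            new[(n, tuple(sorted(bumped)))] += coeff
    return new

terms = Counter({(1, (1,)): 1})          # d/dt f(x(t)) = f^(1) x^(1)
for k in range(2, 5):
    terms = differentiate(terms)
    pretty = " + ".join(f"{c} f^({n})" + "".join(f" x^({a})" for a in orders)
                        for (n, orders), c in sorted(terms.items()))
    print(f"d^{k}/dt^{k} f(x(t)) = {pretty}")
# The k = 4 output reproduces f^(4)x'x'x'x' + 6 f^(3)x'x'x'' + 4 f^(2)x'x^(3) + 3 f^(2)x''x'' + f^(1)x^(4).
```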
The highest derivative \(\mathbf{x}^{(n)}\) may be moved to the other side to get a formula like
\[\mathbf{x}^{(4)}=-J^{-1}(\mathbf{f}^{(4)}\dot{\mathbf{x}}\dot{\mathbf{x}} \dot{\mathbf{x}}\dot{\mathbf{x}}+6\mathbf{f}^{(3)}\dot{\mathbf{x}}\dot{ \mathbf{x}}\dot{\mathbf{x}}+4\mathbf{f}^{(2)}\dot{\mathbf{x}}\mathbf{x}^{(3) }+3\mathbf{f}^{(2)}\ddot{\mathbf{x}}\ddot{\mathbf{x}}),\]
shown for the \(n=4\) case, which expresses it in terms of lower derivatives of \(\mathbf{x}\).
## 4 Taking Finite Steps
The derivatives \(\mathbf{x}^{(n)}\) calculated above can produce a corrected higher-order step using the Taylor series of \(\mathbf{x}\) around \(t=0\)
\[\mathbf{x}(\epsilon)=\sum_{n=0}^{\infty}\frac{1}{n!}\epsilon^{n}\mathbf{x}^{( n)},\]
where the step is thought of as stopping at a time \(t=\epsilon\) in the parameterisation of the natural pathway. This unknown \(\epsilon\) may seem like a problem but it can be made to cancel. Define the correction at order \(n\) to be the \(n^{\text{th}}\) term of the Taylor series:
\[\mathbf{c}_{n}=\frac{1}{n!}\epsilon^{n}\mathbf{x}^{(n)}.\]
The step begins at \(\mathbf{c}_{0}=\mathbf{x}\) and the first order uncorrected step ends at \(\mathbf{c}_{0}+\mathbf{c}_{1}\), so has length \(\mathbf{c}_{1}\). Now recall that for \(n\geq 2\), the derivatives of \(\mathbf{f}\circ\mathbf{x}\) are zero and use Faa di Bruno's formula as before:
\[\frac{\mathrm{d}^{n}}{\mathrm{d}t^{n}}\mathbf{f}(\mathbf{x}(t))=\sum_{\pi\in \Pi_{n}}\mathbf{f}^{(|\pi|)}(\mathbf{x}(t))\bigotimes_{p\in\pi}\mathbf{x}^{(| p|)}(t)=\mathbf{0}.\]
Multiplying both sides by \(\epsilon^{n}\) gives
\[\sum_{\pi\in\Pi_{n}}\mathbf{f}^{(|\pi|)}(\mathbf{x}(t))\bigotimes_{p\in\pi} \epsilon^{|p|}\mathbf{x}^{(|p|)}(t)=\mathbf{0},\]
using the fact that \(\pi\) is a partition of \(\{1,2,...,n\}\), so the sum of sizes \(|p|\) of all its elements is \(n\). Noting that \(\epsilon^{n}\mathbf{x}^{(n)}=n!\mathbf{c}_{n}\) and evaluating at \(t=0\) gives
\[\sum_{\pi\in\Pi_{n}}\mathbf{f}^{(|\pi|)}\bigotimes_{p\in\pi}|p|\mathbf{c}_{|p |}=\mathbf{0}.\]
This formula is the basis for calculating corrections \(\mathbf{c}_{n}\) for finite steps in the following sections.
### 4.1 The Meaning of \(\epsilon\)
Observant readers might have noticed that \(\mathbf{c}_{1}=\epsilon\dot{\mathbf{x}}\) and in an earlier section, \(\dot{\mathbf{x}}=-J^{-1}\mathbf{f}\), so taking a full Newton step would imply \(\epsilon=1\). This paper treats \(\epsilon\) as a small value because when experiencing slow convergence from the 'narrow curving valleys' problem, the area of validity for the local linear model (the trust region) is much smaller than what is required to go all the way to the model minimum. This means the steps taken that succeed in reducing the function sum of squares would only be a fraction of the Newton step, for example a Levenberg-Marquardt step with \(\lambda\) chosen large enough to damp away the longest-range movement axes of the exact Newton scheme.
## 5 Finite Difference Schemes
The higher-order corrections \(\mathbf{c}_{n}\) are expressible in terms of multiple directional derivatives of \(\mathbf{f}\). For a numerical method, these derivatives must be calculated from function values, or at most, the Jacobian used by the algorithm. In this paper finite difference schemes are used, some of which have their 'stencils' of sampled points spread in multiple axes to give mixed derivatives.
### 5.1 Second Order
For \(n=2\), the general formula gives
\[\mathbf{f}^{(2)}\mathbf{c}_{1}\mathbf{c}_{1}+\mathbf{f}^{(1)}2\mathbf{c}_{2} =\mathbf{0}\]
\[\Rightarrow\qquad\mathbf{c}_{2}=-\tfrac{1}{2}J^{-1}\mathbf{f}^{(2)}\mathbf{c }_{1}\mathbf{c}_{1}.\]
Taylor expansion of \(\mathbf{f}\) in the direction \(\mathbf{c}_{1}\) of the original uncorrected step gives
\[\mathbf{f}(\mathbf{x}+\mathbf{c}_{1})=\mathbf{f}+J\mathbf{c}_{1}+\tfrac{1}{2} \mathbf{f}^{(2)}\mathbf{c}_{1}\mathbf{c}_{1}+O(\epsilon^{3})\]
\[\Rightarrow\qquad\tfrac{1}{2}\mathbf{f}^{(2)}\mathbf{c}_{1}\mathbf{c}_{1}= \mathbf{f}(\mathbf{x}+\mathbf{c}_{1})-(\mathbf{f}+J\mathbf{c}_{1})+O(\epsilon ^{3}).\]
In other words, the difference between \(\mathbf{f}(\mathbf{x}+\mathbf{c}_{1})\) and a linear estimate using the \(\mathbf{f}\) and \(J\) already calculated at \(\mathbf{x}(0)\), is to leading order a second derivative term similar to the one required for calculating \(\mathbf{c}_{2}\). Thus,
\[\mathbf{c}_{2}=-J^{-1}(\mathbf{f}(\mathbf{x}+\mathbf{c}_{1})-(\mathbf{f}+J \mathbf{c}_{1}))+O(\epsilon^{3}).\]
The evaluations required for this calculation are shown in Figure 1. In this case, only one other point besides the evaluations of \(\mathbf{f}\) and \(J\) at \(\mathbf{x}\) is needed.
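To make the stencil concrete, the following is a minimal sketch (ours, not taken from this paper) of a second-order corrected step in Python/NumPy. The callables `f` and `jac` and the use of `np.linalg.solve` for a square Jacobian are assumptions; for a rectangular Jacobian, the same pseudo-inverse used to produce \(\mathbf{c}_{1}\) would replace the direct solve.

```python
import numpy as np

def second_order_step(f, jac, x, c1):
    """Return the corrected step c1 + c2 using one extra evaluation of f.

    f   : callable returning the residual vector f(x)
    jac : callable returning the Jacobian J(x)
    c1  : first-order (uncorrected) step, e.g. a damped Gauss-Newton step
    """
    fx, J = f(x), jac(x)
    # Difference between f at the stepped point and its linear prediction,
    # which is (1/2) f'' c1 c1 to leading order.
    f_nl = f(x + c1) - (fx + J @ c1)
    # c2 = -J^{-1} (1/2) f'' c1 c1; use a pseudo-inverse instead if J is not square.
    c2 = np.linalg.solve(J, -f_nl)
    return c1 + c2
```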
### Third Order
For \(n=3\), the general formula gives
\[\mathbf{f}^{(3)}\mathbf{c}_{1}\mathbf{c}_{1}\mathbf{c}_{1}+3\mathbf{f}^{(2)} \mathbf{c}_{1}2\mathbf{c}_{2}+\mathbf{f}^{(1)}6\mathbf{c}_{3}=\mathbf{0}.\]
\[\Rightarrow\qquad\mathbf{c}_{3}=-\tfrac{1}{6}J^{-1}(\mathbf{f}^{(3)}\mathbf{c} _{1}\mathbf{c}_{1}\mathbf{c}_{1}+6\mathbf{f}^{(2)}\mathbf{c}_{1}\mathbf{c}_{2 }).\]
There are a few differences from the second order case:
* There is a third order derivative \(\mathbf{f}^{(3)}\mathbf{c}_{1}\mathbf{c}_{1}\mathbf{c}_{1}\), which will require an additional stencil point in the direction of \(\mathbf{c}_{1}\).
* Errors will now have to be \(O(\epsilon^{4})\) as the main terms have size \(O(\epsilon^{3})\).
* There is a mixed derivative \(\mathbf{f}^{(2)}\mathbf{c}_{1}\mathbf{c}_{2}\), requiring a two dimensional stencil pattern.
The mixed derivative requires knowledge of the direction \(\mathbf{c}_{2}\), which must be evaluated first. The second order stencil for \(\mathbf{c}_{2}\) had error \(O(\epsilon^{3})\) and now \(O(\epsilon^{4})\) is needed, so even the lower-order derivative \(\mathbf{f}^{(2)}\mathbf{c}_{1}\mathbf{c}_{1}\) will have to be evaluated using a third order stencil. Fortunately, this stencil is also needed for evaluating \(\mathbf{f}^{(3)}\mathbf{c}_{1}\mathbf{c}_{1}\mathbf{c}_{1}\), so all coefficients of a cubic approximation to \(\mathbf{f}\) in this direction can be known.
#### 5.2.1 Phase One: Calculating \(\mathbf{c}_{2}\)
The additional stencil point in the \(\mathbf{c}_{1}\) direction is chosen to be \(\mathbf{x}+\tfrac{1}{2}\mathbf{c}_{1}\) here, although other choices are possible. To third order,
\[\mathbf{f}(\mathbf{x}+\tfrac{1}{2}\mathbf{c}_{1})=\mathbf{f}+\tfrac{1}{2}J \mathbf{c}_{1}+\tfrac{1}{8}\mathbf{f}^{(2)}\mathbf{c}_{1}\mathbf{c}_{1}+ \tfrac{1}{48}\mathbf{f}^{(3)}\mathbf{c}_{1}\mathbf{c}_{1}\mathbf{c}_{1}+O( \epsilon^{4})\]
\[\mathbf{f}(\mathbf{x}+\mathbf{c}_{1})=\mathbf{f}+J\mathbf{c}_{1}+\tfrac{1}{2} \mathbf{f}^{(2)}\mathbf{c}_{1}\mathbf{c}_{1}+\tfrac{1}{6}\mathbf{f}^{(3)} \mathbf{c}_{1}\mathbf{c}_{1}\mathbf{c}_{1}+O(\epsilon^{4}).\]
Writing the nonlinear part of \(\mathbf{f}\) as \(\mathbf{f}_{nl}(\mathbf{x}+\mathbf{a})=\mathbf{f}(\mathbf{x}+\mathbf{a})-( \mathbf{f}+J\mathbf{a})\), the derivatives in the \(\mathbf{c}_{1}\) direction can be expressed
\[\mathbf{f}^{(2)}\mathbf{c}_{1}\mathbf{c}_{1}=16\mathbf{f}_{nl}(\mathbf{x}+ \tfrac{1}{2}\mathbf{c}_{1})-2\mathbf{f}_{nl}(\mathbf{x}+\mathbf{c}_{1})+O( \epsilon^{4})\]
\[\mathbf{f}^{(3)}\mathbf{c}_{1}\mathbf{c}_{1}\mathbf{c}_{1}=12\mathbf{f}_{nl}( \mathbf{x}+\mathbf{c}_{1})-48\mathbf{f}_{nl}(\mathbf{x}+\tfrac{1}{2}\mathbf{ c}_{1})+O(\epsilon^{4})\]
and \(\mathbf{c}_{2}\) calculated from the formula \(\mathbf{c}_{2}=-\tfrac{1}{2}J^{-1}\mathbf{f}^{(2)}\mathbf{c}_{1}\mathbf{c}_{1}\) with \(O(\epsilon^{4})\) error.
Figure 1: Finite difference stencil for calculating the second order correction \(\mathbf{c}_{2}\). Points represent evaluations of the function \(\mathbf{f}\) and rings represent evaluations of its Jacobian.
#### 5.2.2 Phase Two: Calculating \(\mathbf{c}_{3}\)
This step requires the mixed derivative \(\mathbf{f}^{(2)}\mathbf{c}_{1}\mathbf{c}_{2}\). Expressions of the form \(\mathbf{f}^{(3)}\mathbf{u}\mathbf{v}\mathbf{w}=(\mathbf{u}\cdot\nabla)(\mathbf{v }\cdot\nabla)(\mathbf{w}\cdot\nabla)\mathbf{f}\) are iterated directional derivatives. Each directional derivative can be approximated to leading order as
\[(\mathbf{u}\cdot\nabla)\mathbf{f}(\mathbf{x}) = \frac{\mathbf{f}(\mathbf{x}+\epsilon\mathbf{u})-\mathbf{f}( \mathbf{x})}{\epsilon}+O(\epsilon)\] \[\Rightarrow (\epsilon\mathbf{u}\cdot\nabla)\mathbf{f}(\mathbf{x}) = \mathbf{f}(\mathbf{x}+\epsilon\mathbf{u})-\mathbf{f}(\mathbf{x})+ O(\epsilon^{2})\] \[\Rightarrow (\epsilon^{n}\mathbf{u}\cdot\nabla)\mathbf{f}(\mathbf{x}) = \mathbf{f}(\mathbf{x}+\epsilon^{n}\mathbf{u})-\mathbf{f}(\mathbf{ x})+O(\epsilon^{2n}).\]
In the last formula above, \(\epsilon^{n}\mathbf{u}\) represents an \(O(\epsilon^{n})\) sized term such as \(\mathbf{c}_{n}\). Using this multiple times allows mixed derivatives to be expressed to leading order as combinations of function evaluations at different points (i.e. finite difference stencils). For example,
\[\mathbf{f}^{(2)}\mathbf{c}_{1}\mathbf{c}_{2} = (\mathbf{c}_{1}\cdot\nabla)(\mathbf{c}_{2}\cdot\nabla)\mathbf{f}\] \[\simeq (\mathbf{c}_{1}\cdot\nabla)(\mathbf{f}(\mathbf{x}+\mathbf{c}_{2} )-\mathbf{f}(\mathbf{x}))\] \[\simeq \mathbf{f}(\mathbf{x}+\mathbf{c}_{2}+\mathbf{c}_{1})-\mathbf{f}( \mathbf{x}+\mathbf{c}_{1})-(\mathbf{f}(\mathbf{x}+\mathbf{c}_{2})-\mathbf{f}( \mathbf{x})).\]
Here, all terms have size \(O(\epsilon^{3})\) and all approximations are leading order accurate meaning the error is no worse than \(O(\epsilon^{4})\), as required.
In general, a \(d^{\text{th}}\) derivative of different directions would require \(2^{d}\) evaluations. An \(n\) times repeated derivative in the same direction only requires \(n+1\) as some of the evaluation points are coincident. A mixture like \(\mathbf{f}^{(a+b+c)}\mathbf{u}^{\otimes a}\mathbf{v}^{\otimes b}\mathbf{w}^{ \otimes c}\) would require evaluation at \((a+1)(b+1)(c+1)\) points.
Some efficiencies may be gained from coincident evaluation points and the fact that the full first derivative \(J\) is usually evaluated at \(\mathbf{x}\) already. This was used in the previous 'Second Order' section, which only required one additional evaluation point for \(\mathbf{f}^{(2)}\mathbf{c}_{1}\mathbf{c}_{1}\) rather than three.
Now everything is in place to evaluate \(\mathbf{c}_{3}=-\frac{1}{6}J^{-1}(\mathbf{f}^{(3)}\mathbf{c}_{1}\mathbf{c}_{ 1}\mathbf{c}_{1}+6\mathbf{f}^{(2)}\mathbf{c}_{1}\mathbf{c}_{2})\). The full step will be corrected from \(\mathbf{c}_{1}\) to \(\mathbf{c}_{1}+\mathbf{c}_{2}+\mathbf{c}_{3}\). The evaluations required are shown in Figure 2.
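Under the same assumptions as before (user-supplied `f` and `jac`, square Jacobian), an illustrative sketch of the full third-order corrected step assembled from the stencil of Figure 2 is:

```python
import numpy as np

def third_order_step(f, jac, x, c1):
    """Sketch of the corrected step c1 + c2 + c3 built from the Figure 2 stencil."""
    fx, J = f(x), jac(x)
    f_half, f_full = f(x + 0.5 * c1), f(x + c1)
    nl_half = f_half - (fx + 0.5 * (J @ c1))   # nonlinear part at x + c1/2
    nl_full = f_full - (fx + J @ c1)           # nonlinear part at x + c1

    # Phase one: directional derivatives along c1, third-order accurate
    d2_c1c1 = 16 * nl_half - 2 * nl_full       # f'' c1 c1
    d3_c1c1c1 = 12 * nl_full - 48 * nl_half    # f''' c1 c1 c1
    c2 = np.linalg.solve(J, -0.5 * d2_c1c1)

    # Phase two: mixed derivative f'' c1 c2 from the 2x2 stencil
    d2_c1c2 = (f(x + c1 + c2) - f_full) - (f(x + c2) - fx)
    c3 = -np.linalg.solve(J, d3_c1c1c1 + 6.0 * d2_c1c2) / 6.0
    return c1 + c2 + c3
```

Counting the calls, only the evaluations of Figure 2 appear: \(\mathbf{f}\) and \(J\) at \(\mathbf{x}\), plus \(\mathbf{f}\) at \(\mathbf{x}+\tfrac{1}{2}\mathbf{c}_{1}\), \(\mathbf{x}+\mathbf{c}_{1}\), \(\mathbf{x}+\mathbf{c}_{2}\) and \(\mathbf{x}+\mathbf{c}_{1}+\mathbf{c}_{2}\).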
### Fourth Order
For \(n=4\), the general formula gives
\[\mathbf{f}^{(4)}\mathbf{c}_{1}\mathbf{c}_{1}\mathbf{c}_{1}\mathbf{c}_{1}+6 \mathbf{f}^{(3)}\mathbf{c}_{1}\mathbf{c}_{1}2\mathbf{c}_{2}+4\mathbf{f}^{(2)} \mathbf{c}_{1}6\mathbf{c}_{3}+3\mathbf{f}^{(2)}2\mathbf{c}_{2}2\mathbf{c}_{2}+ \mathbf{f}^{(1)}24\mathbf{c}_{4}=\mathbf{0}\]
Figure 2: Finite difference stencil for calculating the third order correction \(\mathbf{c}_{3}\) along with the second order correction \(\mathbf{c}_{2}\) that is also required.
\[\Rightarrow\qquad{\bf c}_{4}=-\tfrac{1}{24}J^{-1}({\bf f}^{(4)}{\bf c}_{1}{\bf c}_{ 1}{\bf c}_{1}{\bf c}_{1}+12{\bf f}^{(3)}{\bf c}_{1}{\bf c}_{1}{\bf c}_{2}+24{\bf f }^{(2)}{\bf c}_{1}{\bf c}_{3}+12{\bf f}^{(2)}{\bf c}_{2}{\bf c}_{2}).\]
As expected, there are more higher-order and mixed derivatives. The double derivative \({\bf f}^{(2)}{\bf c}_{2}{\bf c}_{2}\) can take advantage of the Jacobian to eliminate a point from the stencil, just as previous unidirectional derivatives did. The direction \({\bf c}_{3}\) is now involved in the derivatives, so three evaluation phases are required. All errors have to be \(O(\epsilon^{5})\) including those of \({\bf c}_{2}\) and \({\bf c}_{3}\).
#### 5.3.1 Phase One: Calculating \({\bf c}_{2}\)
An additional stencil point \({\bf x}+\tfrac{3}{2}{\bf c}_{1}\) will be added to increase the order of accuracy in the \({\bf c}_{1}\) direction. Defining \({\bf f}_{nl}({\bf x}+{\bf a})={\bf f}({\bf x}+{\bf a})-({\bf f}+J{\bf a})\) as before,
\[{\bf f}^{(2)}{\bf c}_{1}{\bf c}_{1} \simeq 24{\bf f}_{nl}({\bf x}+\tfrac{1}{2}{\bf c}_{1})-6{\bf f}_{nl}({\bf x}+{\bf c}_{1})+\tfrac{8}{9}{\bf f}_{nl}({\bf x}+\tfrac{3}{2}{\bf c}_{1})\] \[{\bf f}^{(3)}{\bf c}_{1}{\bf c}_{1}{\bf c}_{1} \simeq -120{\bf f}_{nl}({\bf x}+\tfrac{1}{2}{\bf c}_{1})+48{\bf f}_{nl}({\bf x}+{\bf c}_{1})-8{\bf f}_{nl}({\bf x}+\tfrac{3}{2}{\bf c}_{1})\] \[{\bf f}^{(4)}{\bf c}_{1}{\bf c}_{1}{\bf c}_{1}{\bf c}_{1} \simeq 192{\bf f}_{nl}({\bf x}+\tfrac{1}{2}{\bf c}_{1})-96{\bf f}_{nl}({\bf x}+{\bf c}_{1})+\tfrac{64}{3}{\bf f}_{nl}({\bf x}+\tfrac{3}{2}{\bf c}_{1}),\]
all with errors \(O(\epsilon^{5})\). These formulae came from writing out the Taylor expansions and inverting the system of equations, which can also be done by inverting a matrix as the equations are linear in \({\bf f}\) and its derivatives. Calculate \({\bf c}_{2}=-\tfrac{1}{2}J^{-1}{\bf f}^{(2)}{\bf c}_{1}{\bf c}_{1}\) using the first formula above.
#### 5.3.2 Phase Two: Calculating \({\bf c}_{3}\)
The mixed derivative \({\bf f}^{(3)}{\bf c}_{1}{\bf c}_{1}{\bf c}_{2}\) needs a grid with three points in the \({\bf c}_{1}\) direction and two in the \({\bf c}_{2}\) direction for a total of six. Many previous points can be re-used, with the only new point for fourth order in this phase being \({\bf x}+\tfrac{1}{2}{\bf c}_{1}+{\bf c}_{2}\).
\[{\bf f}^{(3)}{\bf c}_{1}{\bf c}_{1}{\bf c}_{2} \simeq (4{\bf f}({\bf x}+{\bf c}_{2})-8{\bf f}({\bf x}+\tfrac{1}{2}{\bf c }_{1}+{\bf c}_{2})+4{\bf f}({\bf x}+{\bf c}_{1}+{\bf c}_{2}))-\] \[(4{\bf f}-8{\bf f}({\bf x}+\tfrac{1}{2}{\bf c}_{1})+4{\bf f}({ \bf x}+{\bf c}_{1}))\] \[{\bf f}^{(2)}{\bf c}_{1}{\bf c}_{2} \simeq (-3{\bf f}({\bf x}+{\bf c}_{2})+4{\bf f}({\bf x}+\tfrac{1}{2}{ \bf c}_{1}+{\bf c}_{2})-{\bf f}({\bf x}+{\bf c}_{1}+{\bf c}_{2}))-\] \[(-3{\bf f}+4{\bf f}({\bf x}+\tfrac{1}{2}{\bf c}_{1})-{\bf f}({ \bf x}+{\bf c}_{1}))\] \[{\bf f}^{(2)}{\bf c}_{2}{\bf c}_{2} \simeq 2{\bf f}_{nl}({\bf x}+{\bf c}_{2}).\]
Again, all errors are \(O(\epsilon^{5})\) and \({\bf c}_{3}=-\tfrac{1}{6}J^{-1}({\bf f}^{(3)}{\bf c}_{1}{\bf c}_{1}{\bf c}_{1} +6{\bf f}^{(2)}{\bf c}_{1}{\bf c}_{2})\) can now be calculated to the required accuracy.
#### 5.3.3 Phase Three: Calculating \({\bf c}_{4}\)
This phase requires the single mixed derivative \({\bf f}^{(2)}{\bf c}_{1}{\bf c}_{3}\), which can be handled analogously to when \({\bf f}^{(2)}{\bf c}_{1}{\bf c}_{2}\) first appeared, by extending the stencil in the \({\bf c}_{3}\) direction with the two points \({\bf x}+{\bf c}_{3}\) and \({\bf x}+{\bf c}_{1}+{\bf c}_{3}\).
\[{\bf f}^{(2)}{\bf c}_{1}{\bf c}_{3} \simeq {\bf f}({\bf x}+{\bf c}_{1}+{\bf c}_{3})-{\bf f}({\bf x}+{\bf c}_{3 })-({\bf f}({\bf x}+{\bf c}_{1})-{\bf f}({\bf x})).\]
Now all values are available to evaluate the fourth order correction \({\bf c}_{4}=-\tfrac{1}{24}J^{-1}({\bf f}^{(4)}{\bf c}_{1}{\bf c}_{1}{\bf c}_{1} {\bf c}_{1}+12{\bf f}^{(3)}{\bf c}_{1}{\bf c}_{1}{\bf c}_{2}+24{\bf f}^{(2)}{ \bf c}_{1}{\bf c}_{3}+12{\bf f}^{(2)}{\bf c}_{2}{\bf c}_{2})\). The full step will be corrected from \({\bf c}_{1}\) to \({\bf c}_{1}+{\bf c}_{2}+{\bf c}_{3}+{\bf c}_{4}\) and the evaluations required are shown in Figure 3.
It is clear that this process could be continued to even higher orders, although the stencils would require more and more points. Practically, automated selection of points and calculations of the stencil coefficients would also be required.
## 6 Numerical Test Problem
The higher-order algorithms were tested on a simple function to verify their performance. This function had to exhibit the 'narrow curving valleys' problem in its sum of squares, so a very anisotropic function \((x,y)\mapsto(x,Ky)\) for \(K\gg 1\) was chosen and then some nonlinearity added in parameter space. The resulting function is
\[\mathbf{f}(x,y)=(x+y^{2},K(y-x^{2})).\]
Typical values of \(K=10^{6}\) were used and the iteration was started from the arbitrary point \((x,y)=(\pi,e)\), moving towards the minimum sum of squares at \(\mathbf{f}(0,0)=(0,0)\).
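The test function, its analytic Jacobian, and the starting point can be transcribed directly; the following NumPy sketch simply restates these definitions.

```python
import numpy as np

K = 1e6  # valley anisotropy; larger K gives a narrower valley

def f(p):
    """Residual vector of the test problem f(x, y) = (x + y^2, K (y - x^2))."""
    x, y = p
    return np.array([x + y ** 2, K * (y - x ** 2)])

def jac(p):
    """Analytic Jacobian of the test function."""
    x, y = p
    return np.array([[1.0,          2.0 * y],
                     [-2.0 * K * x, K      ]])

x0 = np.array([np.pi, np.e])  # starting point used for the iterations
```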
Figure 4 shows the improved performance of the higher order corrected methods on the test problem with \(K=10^{6}\). The error norm in the plot is the value of \(|\mathbf{f}(x,y)|\) after each iteration. There is a slow convergence region for \(0.1\leq|\mathbf{f}|\leq 10\), after which the algorithm converges rapidly. Once the full Newton step using the linear model becomes valid, convergence should be quadratic, with the error norm roughly squaring on each iteration. This rapid convergence appears as the near-vertical descending lines on the graph for \(|\mathbf{f}|<0.1\).
### Varying Valley Anisotropy
Varying \(K\) should show the relative performance of the different order algorithms as the valley gets narrower, while the curvature is kept constant. Table 1 shows the number of iterations required to converge as \(K\) is varied between \(1\) and \(10^{12}\).
A simplified model is that the valley has width \(1/K\) while the \(n^{\text{th}}\) order method has error term \(O(\epsilon^{n+1})\), so this error would push the proposed step out of the valley when \(O(\epsilon^{n+1})=1/K\) or \(\epsilon=O(K^{-\frac{1}{n+1}})\). This would give \(O(1/\epsilon)=O(K^{\frac{1}{n+1}})\) steps to convergence.
Plotting the data on a log-log plot in Figure 5 reveals straight lines in parts of the data that suggest a power law relationship. For \(K\geq 10^{9}\) there is an additional increase in convergence time, which may be from the limits of double precision used for the calculation. Taking the gradient through the last three available points with \(K\leq 10^{8}\) gives power law exponents of \(0.660,0.392,0.265,0.203\) for the first through fourth order methods. This is somewhat similar to the simplified model's \(\frac{1}{2},\frac{1}{3},\frac{1}{4},\frac{1}{5}\) although lower-order methods seem slower, particularly the first order.
Figure 3: Finite difference stencil for calculating the second, third and fourth order corrections \(\mathbf{c}_{2}+\mathbf{c}_{3}+\mathbf{c}_{4}\) at fourth order accuracy.
### Integration with the Levenberg-Marquardt Method
These numerical experiments were performed with a Levenberg-Marquardt method enhanced with the higher-order corrections. Step length \(\epsilon\) is controlled by choosing the damping parameter \(\lambda\geq 0\) in the pseudo-inverse \([J^{-1}]_{LM(\lambda)}=(J^{T}J+\lambda I)^{-1}J^{T}\). The note [10] shows that \(\lambda=0\) gives the full Gauss-Newton step, while \(\lambda\rightarrow\infty\) produces infinitesimal steepest gradient steps, with the values of \(\lambda\) in between producing optimal reductions in the linear model for a given step size.
\begin{table}
\begin{tabular}{l c c c c} \hline
**Anisotropy \(K\)** & **1st order** & **2nd order** & **3rd order** & **4th order** \\ \hline
1 & 8 & 6 & 5 & 5 \\
10 & 15 & 8 & 6 & 5 \\
100 & 47 & 16 & 9 & 8 \\
1000 & 196 & 30 & 18 & 11 \\
10000 & 880 & 68 & 24 & 18 \\
100000 & 4041 & 162 & 50 & 27 \\ \(10^{6}\) & 18733 & 397 & 88 & 43 \\ \(10^{7}\) & \(>\)20000 & 971 & 166 & 70 \\ \(10^{8}\) & \(>\)20000 & 2432 & 312 & 110 \\ \(10^{9}\) & \(>\)20000 & 5828 & 631 & 243 \\ \(10^{10}\) & \(>\)20000 & \(>\)20000 & 2876 & 968 \\ \(10^{11}\) & \(>\)20000 & \(>\)20000 & 10886 & 2706 \\ \(10^{12}\) & \(>\)20000 & \(>\)20000 & \(>\)20000 & 9159 \\ \hline \end{tabular}
\end{table}
Table 1: Test problem convergence times for different order methods as the anisotropy factor \(K\) is varied.
Figure 4: Performance of higher-order corrected Levenberg–Marquardt methods on a test problem.
A suitable value of \(\lambda\) must be chosen at each step. In this study the values
\[\lambda_{n}=\lambda_{\text{old}}10000^{(n/10)^{3}}\qquad\text{for}\quad-10\leq n \leq 10,\]
where \(\lambda_{\text{old}}\) is the value from the previous step, are run in parallel and the one that produces the best reduction in \(|\mathbf{f}|\) is chosen. The initial step uses \(\lambda_{\text{old}}=1\).
The fact these steps are run in parallel means that for the higher order methods, many initial Levenburg-Marquardt steps \(\mathbf{c}_{1}(\lambda_{n})\) are calculated, each of which has higher order corrections \(\mathbf{c}_{i}(\lambda_{n})\). The function \(\mathbf{f}\) is evaluated at all the corrected step points \(\mathbf{x}_{\text{out}}(\lambda_{n})=\sum_{i=0}^{order}\mathbf{c}_{i}(\lambda_ {n})\) and the one with lowest \(|\mathbf{f}|\) and its corresponding \(\lambda_{n}\) is chosen.
The higher order methods enable longer steps and thus smaller values of \(\lambda\) to be used. The scheme above is somewhat wasteful by trying 21 values of \(\lambda\) each step, but on modern computers these can be parallelised, unlike the slow progress along the narrow optimisation valley, which is a serial calculation.
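As an illustration (a sketch only; the parallel evaluation and the higher-order corrections applied to each candidate step are omitted), the damped pseudo-inverse and the grid of candidate \(\lambda\) values described above can be written as:

```python
import numpy as np

def lm_step(J, fx, lam):
    """Levenberg-Marquardt step c1(lam) = -(J^T J + lam I)^{-1} J^T f."""
    n = J.shape[1]
    return -np.linalg.solve(J.T @ J + lam * np.eye(n), J.T @ fx)

def lambda_grid(lam_old):
    """The 21 candidate damping values lam_old * 10000**((n/10)**3) for -10 <= n <= 10."""
    n = np.arange(-10, 11)
    return lam_old * 10000.0 ** ((n / 10.0) ** 3)
```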
## 7 Performance on a Practical Problem
These algorithms have also been used on a more complex optimisation problem (which motivated their development). This problem has 180 parameters and 300 output variables and features levels of successively more difficult narrow curved valleys in its optimisation space.
Figure 5: Test problem convergence times for different order methods as the anisotropy factor \(K\) is varied.
The full details of this problem are not the point of this paper but a brief summary will be given here. An initial distribution of 100 Ca\({}^{+}\) ions is accelerated through a potential of 1 kV and given a \(\pm 2\%\) energy chirp. It is transported through a curved electrostatic channel, where the electric field is produced by 15 rings of 12 configurable electrodes. Each electrode is modelled as a point charge and these 180 charges are the optimisation variables. The output vector whose norm should be minimised contains the \((x,y,z)\) position coordinates of the 100 ions on exiting the channel (so 300 entries in all), with the bunch centroid subtracted. The ions do not interact in this model and their trajectories are calculated by the 4\({}^{\text{th}}\) order Runge-Kutta method [11; 12] with a timestep \(\delta t=10^{-7}\) seconds.
This problem is interesting because focussing the ions to a point in the linear dynamics approximation can be done with standard optics, but optical aberrations at higher order will remain, for example spherical aberration from large angle effects. These higher order aberrations can also be corrected by careful choice of the electrode voltages, although this gets more difficult the smaller the focal point becomes and the more aberrations have to be cancelled simultaneously.
The figure of merit is the RMS focal size of the ion bunch, which is \(\frac{1}{\sqrt{100}}\) of the norm \(|\mathbf{f}|\). Figure 6 shows how this is reduced by the Levenberg-Marquardt optimisation method with various levels of higher order correction.
Figure 6 shows that higher order methods significantly accelerate the optimisation progress while the ion bunch focal size is greater than \(10^{-7}\) metres, with comparative performance analogous to that on the test problem in Figure 4. Once the focal size reduces below \(10^{-7}\) metres, progress in the optimisation slows down greatly for all orders of method.
Figure 6: Performance of higher-order corrected Levenberg–Marquardt methods on a physics problem with 180 parameters and 300 output variables.
The reason for this slow-down is yet to be determined. Lack of numerical precision would produce a similar 'noise floor', although in this study care was taken to calculate the Jacobian \(J\) with automatic differentiation rather than the noisier finite difference schemes.
One potential concern with these higher-order methods is that the additional evaluations of \(\mathbf{f}\) in the stencil will cost too much time to make the method worth using. To measure this effect, the optimised focal size is plotted as a function of calculation time in Figure 7. The differences in total execution time can be seen at the right-hand end of each line, which is after 1000 iterations. The fourth order method takes 44% longer per step but still manages to pull ahead of the other methods in real time before the 'floor' is reached. Table 2 gives the amount of time for each method to reach an RMS focal size of under 1 micron.
\begin{table}
\begin{tabular}{l c c} \hline
**Method Order** & **Iterations** & **Calculation Time (s)** \\ \hline
1 & 532 & 3604.863 \\
2 & 94 & 665.812 \\
3 & 62 & 495.010 \\
4 & 46 & 418.301 \\ \hline \end{tabular}
\end{table}
Table 2: Real time taken to reach a focal size of \(<10^{-6}\,\mathrm{m}\) in the ion focussing problem with 180 parameters.
Figure 7: Calculation time vs. performance for higher-order methods on a physics problem with 180 parameters.
## 8 Conclusion
Methods such as Levenberg-Marquardt are not only for curve fitting, but are also powerful optimisers whenever the function to be minimised is a sum of squares. Like many optimisers, they can get stuck for long periods of time in 'curved narrow valleys'. This paper derives higher-order corrections beyond the already known second order [3; 4] that further accelerate the optimiser performance in these situations. A general formula for deriving the \(n^{\text{th}}\) order correction is given, with suggestions on how to build finite difference stencils to evaluate it.
These successful methods have been derived using the concept of a 'natural pathway' for the optimisation, which is an ordinary differential equation (ODE) that is meant to follow the valleys. The form of this ODE is chosen somewhat arbitrarily here but it appears to work well, perhaps because it is a continuous version of the optimiser's path in the limit where step size \(\epsilon\to 0\). Using a higher-order step method on this ODE thus makes the optimiser behave 'as if' it had done a large number of very small steps.
This link with ODEs also suggests potential future work in applying well-known higher-order methods for ODEs such as Runge-Kutta [11; 12] or Bulirsch-Stoer [13] to difficult optimisation problems, as well as methods for stiff ODEs. In this paper the RK4 method was not preferred because it would have required four evaluations of the Jacobian, which is still much more expensive than the eight additional function points in the stencil of the fourth order method.
|
2304.05901 | Automated computed tomography and magnetic resonance imaging
segmentation using deep learning: a beginner's guide | Medical image segmentation is an increasingly popular area of research in
medical imaging processing and analysis. However, many researchers who are new
to the field struggle with basic concepts. This tutorial paper aims to provide
an overview of the fundamental concepts of medical imaging, with a focus on
Magnetic Resonance and Computerized Tomography. We will also discuss deep
learning algorithms, tools, and frameworks used for segmentation tasks, and
suggest best practices for method development and image analysis. Our tutorial
includes sample tasks using public data, and accompanying code is available on
GitHub (https://github.com/MICLab-Unicamp/Medical-Imaging-Tutorial). By sharing
our insights gained from years of experience in the field and learning from
relevant literature, we hope to assist researchers in overcoming the initial
challenges they may encounter in this exciting and important area of research. | Diedre Carmo, Gustavo Pinheiro, Lívia Rodrigues, Thays Abreu, Roberto Lotufo, Letícia Rittner | 2023-04-12T15:14:41Z | http://arxiv.org/abs/2304.05901v1 | Automated computed tomography and magnetic resonance imaging segmentation using deep learning: a beginner's guide
###### Abstract
Medical image segmentation is an increasingly popular area of research in medical imaging processing and analysis. However, many researchers who are new to the field struggle with basic concepts. This tutorial paper aims to provide an overview of the fundamental concepts of medical imaging, with a focus on Magnetic Resonance and Computerized Tomography. We will also discuss deep learning algorithms, tools, and frameworks used for segmentation tasks, and suggest best practices for method development and image analysis. Our tutorial includes sample tasks using public data, and accompanying code is available on GitHub ([https://github.com/MICLab-Unicamp/Medical-Imaging-Tutorial](https://github.com/MICLab-Unicamp/Medical-Imaging-Tutorial)). By sharing our insights gained from years of experience in the field and learning from relevant literature, we hope to assist researchers in overcoming the initial challenges they may encounter in this exciting and important area of research.
## I Introduction
It is not a mystery that medical images are a great tool in medicine. Besides being non-invasive, they are useful for diagnosing, evaluating, and predicting diseases. Also, many physicians use medical images for research purposes. The history of medical imaging begins in 1895, with the discovery of X-rays by Wilhelm Röntgen. It did not take long before the technique was being used by physicists to analyze medical issues. However, it was only in the late 1960s that Hounsfield, an EMI Limited researcher, started to study X-rays in a 3D form. In 1972, Hounsfield and Dr. James Ambrose were able to diagnose a tumor using Computerized Tomography (CT) due to different tissue contrasts. Around the same time, in 1973, Paul Lauterbur demonstrated that Nuclear Magnetic Resonance (NMR) could be used to create an image [1]. Yet, it was only in 1977 that the first human MR image was acquired, with the acquisition taking 5 hours [2].
With the popularization of imaging methods, research centers are dealing with an increasing amount of data that is costly in both time and money to analyze manually. Using computational methods, engineers and computer science professionals can help physicians diminish these costs. In the early days, these computational methods were mainly for image enhancement, based on classical imaging algorithms such as morphology and filtering. More complex tasks, such as pattern recognition through machine learning algorithms, became possible as the area evolved. Although the first work on artificial intelligence (AI) began in the 1950s, AI only started to be applied to medical issues in the 1970s [3].
By the 1990s, there were several segmentation methods applied to MR images, and it was common to find methods based on image characteristics, such as region-based algorithms. After a while, machine learning (ML) methods started to appear, and it became more usual to extract features from the images and use them in algorithms such as Support Vector Machines or Decision Trees. Finally, deep learning (DL) has become central over the last few years. Unlike the previous methods applied to medical images, DL can compute features from the raw input data without requiring manual feature extraction. The feature extraction step is done by the first layers of the architecture, and it is not necessarily interpretable by humans, since the features are abstract representations of the input image. However, DL algorithms require a large amount of data, which is a challenge in the medical field.
DL algorithms are widely applied to various medical image problems, such as regression, classification, and segmentation. In classification problems, DL classifies a sample into one of the N possible labels. Many applications in the literature
focused on helping medical research, such as brain tumor classification [4, 5] and classification of COVID versus non-COVID affected lungs [6]. In regression problems, the model is trained to predict a continuous value. Bounding box prediction for structure detection [7, 8] and MRI reconstruction from K-space data [9] are examples of regression used on medical images. Finally, segmentation methods contour the border of a specific structure or area to be studied. Physicians may use the segmentation to study brain lesions [10], lung findings [11], and subtle changes in specific structures. Several DL networks have been developed for this purpose [12, 13] and are applied to segment different structures of the brain [14, 15] and body [16, 17].
This paper will focus on Magnetic Resonance (MR) and Computed Tomography (CT) segmentation using DL algorithms. First, we will give a brief overview of the data, its acquisition, and intrinsic characteristics. Then, we will shortly introduce DL for image analysis and the computational environment required. Finally, we will describe the usual workflow and the most common statistical analyses, and give some recommendations and useful tips.
## II Medical images
A medical image can be understood as any image that represents aspects of the biological tissue. Medical images can be classified by the technique that is used for acquisition, also called modalities, such as ultrasound, magnetic resonance, and X-ray computed tomography. They can also be classified by their dimension: planar (or 2D), volumetric (or 3D), time series (4D), or by their range of values: single-channel scalar; multi-channel (e.g. dermatoscopic image) scalar; tensorial (e.g.: diffusion tensor imaging).
Among the great variety of medical image types, the vast majority is composed of scalar measures. The images are made of a collection of voxels (volume elements) representing a single scalar value. Thus, the images are a scalar field in a \(\mathbb{Z}^{3}\) space.
As for any finite discrete scalar field, or array, the images are defined by geometrical parameters that are directly related to the image quality. Field of View (FOV), spatial resolution, voxel size (Fig. 1), and radiometric resolution are some of the most relevant parameters. The FOV is the size of the image in real-world dimensions, the spatial resolution is the number of voxels in each image dimension, the voxel size is the measurement of the voxel dimensions, and the radiometric resolution is the number of possible scalar values that a voxel can assume. This scalar value is the representative of the tissue for each voxel.
As medical images are usually acquired in 2D slices, the 3D image can be seen as a stack of slices, and a 3D image gives us the freedom to look at the image from three views (Fig. 2): axial (or transversal), coronal (or frontal), and sagittal (or longitudinal).
The voxel is a volumetric element expressed as a single scalar value, and it represents an average of what is inside the defined space. Consequently, the voxel value suffers from the partial volume effect, which happens when more than one biological tissue is represented in a single voxel. This effect is minimized by reducing the voxel size and increasing the resolution for the same FOV. The partial volume effect can affect both manual and automatic segmentation methods, since the fuzziness of the values can confuse the algorithms.
Depending on the application, the image acquisition parameters can vary considerably. For example, in the research field, images acquired with research-grade parameters tend to have higher resolution, better contrast, and less noise, at the cost of longer acquisition time or better quality equipment. On the other hand, for clinical purposes, where resources such as time are scarce, the images acquired in clinical settings usually have lower spatial resolution (sometimes even skipping slices) and lower quality in general.
Among all 3D medical imaging types, CT and MR imaging are two of the most popular imaging-based diagnostic modalities used in different clinical conditions for diagnosis, follow-up, image-guided procedures, and medical research. Studying these images allows the analysis and segmentation of different body structures. Annotations can be used for different purposes such as volume measurements, statistics of a population in
Fig. 1: Parameters of a 3D image: Field of View (FOV), resolution, and voxel size.
Fig. 2: Slice view orientation in 3D medical images: Axial, Coronal, and Sagittal
medical research, localization of abnormal tissue according to an underlying pathological process, and disease of the patient [18]. Since manual analysis is very time-consuming and poorly reproducible, there is an interest in automated processing of these images [19].
### _X-ray CT_
In the X-ray based CT modality, the image is reconstructed from various X-ray acquisitions around the patient. Several parameters guide the acquisition and reconstruction of signals recorded by the CT scanner into the final image and can be found in the image's header. These parameters include: the spacing between body slices, which can range from less than 1 mm to more than 1 cm; slice resolution, which is commonly very high, with a pixel representing only 0.5 mm\({}^{2}\); the reconstruction kernel or filter, which controls the frequencies present in the image and can generate from very smooth to very noisy images; and many others. One of the most important parameters to be considered when processing CT images is the Hounsfield Units (HU) window. HU directly maps to specific tissues, air, and water (Fig. 3). Other variants of CT acquisition, such as PET-CT, are out of the scope of this manuscript.
Footnote 2: [https://www.cds.org/](https://www.cds.org/)
Since HU values go beyond the traditional 8-bit (256 values) representation of gray images on a monitor, applying a window (clipping) to the HU values will improve the contrast and visualization of specific tissues. In addition, the values contained in the digital reconstruction may have been rescaled to a different range. This rescaling is represented by a linear mapping \(ax+b\), where \(a\) is the rescale slope and \(b\) is the rescale intercept. These are commonly applied to remove negative numbers from the intensities, allowing unsigned storage. Section IV-A5 will review some recommendations to leverage these properties of CT images and avoid common mistakes when preprocessing and using CT images.
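As a small illustration (a sketch, not code from the original work), converting stored voxel values to HU using the header's slope and intercept and then applying a window could look like the following; the window limits here simply reuse the [-1024, 600] clipping shown later in Fig. 6.

```python
import numpy as np

def stored_to_hu(stored, slope, intercept):
    """Map stored voxel values to Hounsfield Units via the header's linear rescale a*x + b."""
    return stored.astype(np.float32) * slope + intercept

def hu_window(hu, low=-1024, high=600):
    """Clip HU values to a window to improve the contrast of the tissues of interest."""
    return np.clip(hu, low, high)
```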
### _Mri_
MR Imaging is a technique that creates images by the interference between a high-intensity magnetic field, radio frequency pulses, and the field generated by the spin of the protons in the tissues inside the MR scanner. The signal strength measured by the scanner coils is responsible for generating the contrast for each voxel of the three-dimensional image.
MR images play a key role in the clinical environment, as they have proven to be a fast, safe (non-ionizing radiation), and non-intrusive way to look inside the body. To support different types of exams, there are several MR imaging pulse sequences, each one presenting a different image contrast and exploiting specific characteristics of biological tissues. Due to a qualitative similarity to an anatomical slice, the sequence known as T1-weighted is one of the most popular. It is also one of the fastest-acquired sequences and relatively simple to analyze.
Besides T1-weighted images, many other MR image sequences (Fig. 4) are used for various applications, including structural and functional. For example, FLAIR [20] is used for investigating brain lesions, Diffusion sequences are used to measure the water diffusion in the tissue and infer about the fiber organization [21], Functional MR imaging [22] is used to measure brain activity, and spectroscopy [23] to measure metabolites.
### _Medical Imaging Processing_
The field of medical imaging processing has changed considerably over the last few years. At first, methods were based on image characteristics and often mimicked the process done by human experts, such as region-based algorithms. After a while, machine learning (ML) methods started appearing, and extracting features from the images and using them in the algorithms became more usual. Finally, over the last few years, deep learning (DL) has taken the central place [24].
Complementary to previous machine learning methods, Deep Learning can learn how to generate the features automatically. A Deep Learning model can perform well using only raw data as input, without the need for a manually engineered feature extraction stage, while requiring less technical knowledge about the problem in question to achieve baseline results [25].
In the medical imaging context, convolutional kernel parameters are the weights adjusted in training. The application of these convolutional kernels is invariant to translation, which allows learning from different positions throughout the image. Optimizing the kernel weights takes place by minimizing a loss function [26] expressing the model performance, through the application of optimization methods using variations of stochastic gradient descent [27]. In addition, these convolutions are usually supported by normalization layers [28] and
Fig. 4: Examples of MRI images from different acquisition sequences: respectively T1-weighted, FLAIR, Diffusion Tensor Imaging, and Functional MRI. Images from different subjects and non-correspondent slices.
Fig. 3: Hounsfield Unit scale, mapping values to the represented tissue, and an axial slice of a CT scan showing the inside of the lung.
nonlinear activations [29]. Normalization and non-linearity were some of the main breakthroughs that allowed deep learning networks to thrive in many applications, significantly accelerating the convergence of training. The Deep Learning architectures that use these techniques are called Convolutional Neural Networks (CNN). Note that CNNs are not the only way to use deep learning for imaging applications, with Transformers [30] also being a currently competitive architecture paradigm in natural and medical imaging processing.
There is an infinite number of possible combinations of convolutional and other types of layers to define a CNN architecture. However, some combinations, or architectures, are already defined and well-known for certain applications. For example, U-Net [13] is one of the most used architectures in the field of medical imaging for image segmentation, due to its compact encoder-decoder design that propagates multi-resolution features from the encoder to the decoder, allowing for fewer parameters when compared to famous large natural image classification architectures such as ResNet [31], EfficientNet [32], and ConvNext [33]. The encoder acts as a feature extractor and the decoder as a reconstructor of the intended output. It is important to stress that the features automatically extracted by the encoder are not necessarily visually interpretable, since they are abstract representations of the input image that maximize a given objective. From the seminal U-Net paper, many variations of the encoder-decoder segmentation architecture have been proposed [34, 35], and the encoder-decoder design continues to be the de-facto approach for supervised automated medical imaging segmentation.
## III Environment Recommendations
Software and hardware recommendations can be sensitive since different people and groups will have different experiences. In this Section, we will recommend what has worked for us in recent years, focusing on DL for medical imaging processing. However, these recommendations can be extrapolated to different processing methodologies. Firstly, regarding hardware, although having a good amount of RAM and a good CPU helps, we recommend distributing your budget by focusing on the best possible GPU with a decent amount of video memory. We also suggest paying attention to how much storage your work needs, since some medical images can use a large amount of storage. If possible, use high-speed (SSD) storage for preprocessed data and low-speed storage for raw/original data.
From our experience, we recommend using Python and PyTorch as the programming language and DL framework, respectively. PyTorch 1 follows an object-oriented programming approach and is currently being used by a large part of the DL community. A top-level framework that simplifies some of the "engineering" code necessary to use PyTorch is PyTorch Lightning 2. The libraries you will use for processing will vary depending on the needs of your project. Some libraries commonly used for reading and dealing with medical imaging include NumPy, SimpleITK, Nibabel, and Pydicom. In terms of operating systems, both Windows and Ubuntu work well with Python. Jupyter Notebooks are a good tool for a more interactive programming approach, especially for prototyping and proofs of concept. As a workflow, having separate Python scripts that are imported in a Jupyter notebook or a main command line script is interesting for controlling input arguments. Finally, we recommend that you use logging frameworks for experiments and do not implement logging from scratch. Many useful tools are available to log experiment parameters and results, and they are essential for organizing your research and recalling experiments in the future. Some examples include Neptune.ai3 and TensorBoard4.
Footnote 1: [https://pytorch.org/](https://pytorch.org/)
Footnote 2: [https://pytorch.org/tensorflow](https://pytorch.org/tensorflow)
Footnote 3: [https://neptune.ai/](https://neptune.ai/)
Footnote 4: [https://www.tensorflow.org/tensorboard](https://www.tensorflow.org/tensorboard)
Storage requirements are one of the main things to keep in mind when dealing with digital medical imaging. Unlike other imaging processing areas that deal with 2D images, medical imaging is frequently three-dimensional, with high resolution. This results in the uncompressed storage of one volume sometimes using hundreds of megabytes of space. Therefore, you need to reserve gigabytes or even terabytes of space to store both original and preprocessed copies of your data. We recommend that preprocessed data be stored in fast storage, such as SSDs, for faster processing, and original data be stored on slower HDDs or even a separate computer. For deep learning training, using GPUs is mandatory, given that the training process consists of parallel multiplications and sums that can be performed optimally on GPUs. We recommend using a modern Nvidia GPU with CUDA support, as it is currently the norm in the field, with most frameworks using CUDA. For RAM, we recommend a minimum of 16 GB, with more being beneficial, especially for training 3D networks; extra RAM increases the amount of data that can be cached during training and reduces the possibility of bottlenecks in parallel data loading. There is no need to have the most expensive CPU, but a top-tier CPU will benefit data processing and loading during training. It is very important to highlight that GPU usage during training should be close to 100%. If not, data loading is a bottleneck, either due to slow reads from storage, slow or insufficient RAM, or a slow CPU. Always pay attention to hardware cooling, as modern Nvidia GPUs should stay below 83 degrees Celsius while training. Finally, try to use parallel data loading and optimize your code. Badly optimized data loading code can bring even the most expensive hardware to a halt in training.
## IV Workflow
This section will go through recommendations and tips on all steps of the DL for medical imaging analysis workflow (Fig. 5).
### _Data_
Data curation is one of the most important steps of the training workflow. Here, we list five steps you need to consider before training your network.
#### IV-A1 Understanding your problem and your data
The first thing you need to do is to understand the data you are using. For instance, it will be hard to classify lung lesions in CT images if you cannot locate the lung or if you have not heard about Hounsfield units. The more you understand the problem, the easier it is to find computational solutions for it. For example, if you are trying to segment a small structure, you may need to use some specific loss function or pay attention to the depth of the network. In this first step, you need to talk with physicians, radiologists, and specialists about the problem you are working on. You need to understand the primary goal of your project. Are you trying to decrease the processing time of existing methods, to improve metrics regardless of the processing time, or maybe, to develop a more generic application, even if it means a loss in performance? For instance, if your application is meant to work only in one imaging center, always using the same type of acquisition, you may train your network using only one dataset. However, if you intend to develop an application that can be used by different research centers, it is imperative to mix different data sources to increase generalization.
By understanding your problem early in development, you will be able to find the most effective data for your application and avoid wasting time re-training in the future.
#### IV-A2 Labels
Here, you need to define what will be your ground truth. Usually, semi-supervised and supervised applications require reliable labels to deliver good predictions. For medical images, the gold standard is usually defined as a specialist annotation. Also, the ideal scenario is to have more than one specialist annotating your data to avoid bias. However, the cost of manual labels is high in terms of both time and money. So, the first thing you need to do before starting a project is to analyze whether you have both data and annotations.
In terms of segmentation, if you have only a small amount of labeled data, you may try some tricks:
* _Silver Standards_: Manual annotation is time-consuming, so you may create a dataset using established automated methods (Tab. II). To deal with different types of segmentation and reduce label noise, it is recommended to use label-cleaning strategies such as STAPLE and majority voting. Souza _et al._[62] defined this as silver standard labels.
* _Synthetic Images_: Recent studies have proved the generalization ability of a network trained with synthetic images. This means that by creating synthetic datasets using GAN [70] or probabilistic models [71], your network can predict on real images.
#### IV-A3 Checking the data
You defined your problem, talked to physicians, and understood what data you need. Now it is time to look at it. It may sound obvious, but if you are working with medical imaging, you must visually inspect the images before training. There are three moments during your application development when you need to look at a sample of the data. First, before any preprocessing. Check your raw data, see if the labels are coherent, and verify that the images follow the same acquisition orientation. If you are dealing with diverse datasets, check the differences between them. Use this time to understand which preprocessing algorithms you will need to run on your images and to analyze whether you understand the structure(s) you are dealing with. You may use visualization tools such as ITK-SNAP [82], Freeview [72], 3D Slicer [83], or DSI Studio [84] to see your images, or load them in your Python script using libraries such as Nibabel [85], PyDicom 6, SimpleITK [86], or MedPY 7. If you have doubts about the labels, this is the moment to clarify them, or you may have to train your network again. Second, look at your data after preprocessing. You must ensure that the network will receive exactly what you intended to deliver.
Fig. 5: Illustration of a generic workflow for training DL models for medical imaging analysis, from data pre-processing to model training and evaluation.
Finally, check your predictions against the input data, whether the metrics make sense, and whether you need some post-processing algorithm to improve your results.
#### IV-A4 Ethics on data
It is crucial to understand that medical images are sensitive data, especially if you deal with patient images and/or pediatric images. To use the data, you need permission from the patients or from their legal representatives. It is imperative to anonymize the data by hiding personal information. Also, in some cases it is advisable to run a defacing algorithm to prevent face reconstruction from 3D images [87]. If you are using a public dataset, it is important to check whether the data meet the ethics committee requirements. You will also need permission from the owner to use the data.
#### IV-A5 Preprocessing
Before inserting the image into the model, several pre-processing steps can be applied, such as bias field correction, registration, skull stripping (in the case of brain images), normalization, and clipping. Registration is required to place multiple images in the same space, especially when working with multi-modality data, and it can be inter- or intra-subject. For example, T1-weighted and Diffusion MR images usually have different resolutions and sometimes even acquisition directions. The registration process will transport one of the images to the space of the other, making each voxel have an exact correspondence in both images. A few of the most used tools to perform registration are FSL, FreeSurfer, and Dipy. They are able to perform linear registration, which can basically manipulate the scales and orientation of the images, and nonlinear registration, which can elastically deform the images to adjust for localized deformations. Intensity inhomogeneity, or bias field, is a low-frequency noise that represents intensity variability within a tissue and is caused during the acquisition process of the MR image [88]. Hence, as preprocessing, it is often necessary to apply a bias field correction such as the nonuniform normalization (N3) algorithm [89]. Skull-stripping (SS), on the other hand, is a step that separates brain tissue from the skull in MR images of the head. It is often used when the skull worsens the task results, and it is mandatory when comparing brain structure volumes, since the volumes need to
be normalized by the total intracranial volume. When working with CT, clipping is a common pre-processing step. It reduces the gray level window to a specific intensity range, highlighting desired structures (Fig. 3). Finally, the network input is commonly normalized to avoid exploding gradients, usually into the [0,1] or [-1,1] range.
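A minimal sketch of two of these steps is shown below, assuming an illustrative file name and using SimpleITK's N4 filter (a refinement of the N3 algorithm cited above), followed by min-max normalization of the intensities.

```python
import SimpleITK as sitk

# Bias field correction with N4, using an Otsu mask as a rough foreground estimate
image = sitk.ReadImage("t1.nii.gz", sitk.sitkFloat32)   # file name is illustrative
mask = sitk.OtsuThreshold(image, 0, 1, 200)
corrected = sitk.N4BiasFieldCorrectionImageFilter().Execute(image, mask)

# Min-max normalization of the network input into the [0, 1] range
array = sitk.GetArrayFromImage(corrected)
array = (array - array.min()) / (array.max() - array.min() + 1e-8)
```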
### _Model Training_
#### IV-B1 Data Loading
After data checking and pre-processing are performed, you should have your data stored in a preprocessed state somewhere, such as preprocessed 3D volumes or preprocessed 2D images. Since medical images are usually multidimensional and large, and DL requires a significant amount of data, you will need to pay attention to certain optimization tactics in data loading that are essential for training to be feasible, mainly related to how you save and read those preprocessed files. With some machine learning applications, you could have all the data stored as a variable in RAM and fit your model there, but that is not viable with the large size of medical images. Therefore, your data reading logic will need to save some kind of index to your images and read each item in real-time during training. PyTorch recommends having a Dataset class that manages indexing all your data files and applying data augmentation if necessary, and feeding that Dataset class to a DataLoader, which optimizes the loading process using parallelization. The goal in DL training is to use 100% of your GPU at all times; therefore, the parallel loading logic implemented by the DataLoader is sometimes essential for optimal training speeds. Monitoring your GPU usage during training is a good way to check if your data loading is bottlenecking your training.
There are many ways to optimize storing and reading your pre-processed data, and the exact way this will be done depends on the nature of your data, your hardware environment, and your goals. For example, using compressed formats such as _.npz_ can lead to less storage use, but more CPU load in decompressing. Uncompressed _.h5py_ files can make organization easier and have less CPU usage, but it can lead to the need for more storage. Even if you have unlimited storage, using uncompressed formats such as _.npy_ can be slower due to the slow read speeds of slow hard drives. An example of how to store processed data from CT slices for 2D training using compressed _.npz_ is in our case studies (Section VI), but should not be taken as the "correct" way. Experimentation must be done in your environment to determine the best storage and reading format for your case.
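A minimal sketch of this Dataset/DataLoader pattern is shown below, assuming preprocessed 2D slices were saved as compressed .npz files containing 'image' and 'label' arrays; the folder layout and keys are illustrative and not those of the tutorial's repository.

```python
import glob
import numpy as np
import torch
from torch.utils.data import Dataset, DataLoader

class SliceDataset(Dataset):
    def __init__(self, folder):
        # Only index the preprocessed files here; reading happens lazily in __getitem__
        self.paths = sorted(glob.glob(f"{folder}/*.npz"))

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, idx):
        data = np.load(self.paths[idx])  # decompression happens on a DataLoader worker
        image = torch.from_numpy(data["image"]).float().unsqueeze(0)  # (1, H, W)
        label = torch.from_numpy(data["label"]).float().unsqueeze(0)
        return image, label

# num_workers > 0 enables parallel loading so the GPU is not starved
loader = DataLoader(SliceDataset("preprocessed/train"), batch_size=16,
                    shuffle=True, num_workers=4, pin_memory=True)
```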
#### IV-B2 Data Augmentation
When dealing with medical imaging processing, the generalization capacity of the model is of major importance. Since there are several scanner manufacturers and different acquisition protocols, each database has its own particularities, especially when compared with data from different medical centers. One could train a model using only one dataset; however, this model would lose performance when applied to data from different centers. Focusing on increasing image variability and improving the generalization of the methods, most authors resort to data augmentation (Fig. 7). There are several types of augmentation you may use in your method. It is important to understand which transforms suit your problem, since the wrong use of this technique may worsen your results. A few examples of data augmentation focused on medical images are random crop, random rotation, elastic transformation, noise insertion, and intensity transformations (contrast, brightness). When dealing with segmentation, it is important to apply the same transforms to the images and labels. Some data augmentation libraries may help, such as Torchvision, Albumentations, and TorchIO.
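The key point when augmenting for segmentation is that the identical geometric transform must reach both the image and its label, with nearest-neighbour interpolation for the label so it stays binary. A hand-rolled sketch is shown below (the libraries listed above handle this pairing automatically):

```python
import random
import numpy as np
from scipy.ndimage import rotate

def augment(image, label, max_angle=15):
    """Apply the same random flip and rotation to an image and its segmentation label."""
    if random.random() < 0.5:  # random horizontal flip of both arrays
        image, label = np.flip(image, axis=-1), np.flip(label, axis=-1)
    angle = random.uniform(-max_angle, max_angle)
    image = rotate(image, angle, reshape=False, order=1)  # linear interpolation for intensities
    label = rotate(label, angle, reshape=False, order=0)  # nearest neighbour keeps labels binary
    return image, label
```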
#### IV-B3 Deep Networks
Deep Networks are a class of Machine learning methods that are able to extract the features of the data without the need for feature engineering done by a specialist. Instead, the architecture is able to learn not only the task but also the feature extraction on the first convolutional layers.
Usually, for medical images, specialists are focused on the application, not on designing the architecture. Thankfully, there is a large number of CNN architectures available for general and medical imaging use, in both 2D and 3D formats. Although intuitive, the 3D CNN approach is not always the best performer compared to stacking 2D predictions of each slice. It is important to emphasize that even 2D slices have a thickness in addition to the voxel planar dimension, so the information of each voxel comes from a unit of volume.
We could split CNNs into many different groups, but in this tutorial we restrict ourselves to two CNN types, one for segmentation and another for classification. Each type of CNN has its characteristics according to the application it is performing. For example, architectures designed for segmentation, usually fully convolutional (every layer is convolutional), are able to generate an output image with the same dimensions as the input.
Fig. 6: Example of a CT image after clipping and normalization. On the left, the original image. In the center, the image after a [-1024,600] clipping: it is possible to see changes in the image contrast. On the right, the image after min-max normalization: visually, there are no changes; however, the minimum and maximum intensities are different.
Fig. 7: Example of MR images after data augmentation (RandomRotation and RandomCrop). In red, it is possible to see the label overlaying the image.
For this type of CNN, the ground truth is composed of masks of the same size as the input images. On the other hand, architectures designed for classification present a dense layer after the feature extraction section, and these dense sections have one output for each of the considered classes. In these problems, the ground truth is a single value that represents the class of the input image.
In both types of applications, there are many available architectures. For example, in segmentation, some of the most popular or most performing architectures are the U-Net [13], V-Net [90] and QuickNAT [12]. For applications focused on classification, some example architectures are the Resnet [31], Inception [91], DenseNet [92], MobileNet [93] and EfficientNet [32].
#### IV-B4 Loss Function
The loss function is defined by a relation between the current output of the model and the desired output, defined by the ground truth. By computing the loss function for the current state of the model, the training framework can define the gradient directions in which the weights are going to be moved to minimize the prediction error of the model.
The loss function must be properly defined depending on the problem the CNN is solving. For segmentation problems, loss functions based on overlap measures are commonly used [26].
It is important to notice that the loss function must be differentiable (and hence continuous) so that the optimizer can minimize it, thereby minimizing the error expressed by the loss. However, many problems would require metrics that do not fit this requirement, so the loss function must be properly defined depending on the problem the model is solving. For example, some classification problems search for higher accuracy, which is a non-differentiable function. In this case, a loss based on cross entropy is a valid option. For segmentation problems, a similarity metric such as the Dice Coefficient [26] is not differentiable in its original set-based definition. Therefore, most Dice-based losses alter the metric definition to allow probabilistic inputs to be compared with the binary ground truth, which also has the benefit of smoothing convergence. This version of Dice is commonly called the Dice Loss and can be defined as:
\[DiceLoss=1-2\frac{\sum_{i}^{N}p_{i}t_{i}}{\sum_{i}^{N}p_{i}^{2}+\sum_{i}^{N}t_{i}^{2}} \tag{1}\]
where \(p\) is the probabilistic output of the network between 0 and 1, usually from sigmoid or softmax activations, and \(t\) are binary targets. Note that minimizing equation (1) by definition maximizes the value of the Dice metric (Section IV-D).
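A minimal PyTorch implementation of equation (1) could look as follows. This is a sketch; the small constant added to the denominator is a common practical choice to avoid division by zero and is not part of the formula above.

```python
import torch

def dice_loss(probs: torch.Tensor, targets: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Soft Dice loss of equation (1): probs in [0, 1], targets binary, same shape."""
    probs = probs.reshape(probs.shape[0], -1)            # flatten per sample
    targets = targets.reshape(targets.shape[0], -1).float()
    intersection = (probs * targets).sum(dim=1)
    denominator = (probs ** 2).sum(dim=1) + (targets ** 2).sum(dim=1) + eps
    return (1.0 - 2.0 * intersection / denominator).mean()
```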
#### III-B5 Optimizer
Neural network training performs adjustments to the weights \(w\) present in the network, which in the case of a CNN are the convolutional kernel values. These weights are changed based on values returned by the loss function, which measures how "wrong" an output is in relation to a target, assumed to be the expected output. The gradient \(\nabla_{w}\) of a weight represents the change in the loss caused by a change in that weight, in other words, a derivative, and it guides the training process in the direction that minimizes the loss. The gradient for each weight is commonly calculated using backpropagation, e.g., in the form of PyTorch's Autograd [94]. The optimization process is controlled by optimizers, with the most common ones being Stochastic Gradient Descent (SGD) and Adaptive Moment Estimation (ADAM).
With the SGD optimizer, the update of a weight \(w_{t}\) at a discrete time \(t\) as a function of the past weight \(w_{t-1}\) can be expressed as:
\[w_{t}=w_{t-1}-\alpha\nabla_{w}L(O,T) \tag{2}\]
for a loss function \(L\) over outputs \(O\) and targets \(T\), where \(\alpha\) is the learning rate.
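As a small self-contained sketch, one step of equation (2) can be written directly in NumPy; the quadratic loss and the numbers below are arbitrary illustrations.

```python
import numpy as np

w = np.array([0.5, -1.2])      # current weights w_{t-1}
target = np.array([1.0, 0.0])  # toy target
grad = 2 * (w - target)        # gradient of the squared-error loss L = ||w - target||^2
alpha = 0.1                    # learning rate
w = w - alpha * grad           # equation (2)
```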
SGD can also be implemented with momentum, where past updates are also taken into consideration. The learning rate (LR) \(\alpha\) controls the speed of the optimization process. Finding the correct learning rate is a key factor in training a CNN. A high learning rate can make the model skip states where the minimum loss is achieved, whereas a low learning rate can lead to very slow convergence and to the model getting stuck in a local minimum. Some learning rate schedulers change the learning rate during training as a function of the number of epochs passed. Other optimizers try to change the learning rate adaptively, such as ADAM. ADAM computes a learning rate per parameter, instead of using a global learning rate, and takes into consideration a moving average of gradients. More details can be found in its original publication [95]. ADAM and its variations [96] are a good initial choice of optimizer due to their adaptability. A weight update for ADAM can be expressed as:
\[w_{t}=w_{t-1}-\eta\frac{m_{t}}{\sqrt{v_{t}}+\epsilon} \tag{3}\]
where \(\eta\) is the step size, which can vary between iterations. \(m_{t}\) and \(v_{t}\) are bias-corrected first and second moments, respectively. These moments are functions of the gradients and of the squared gradients, also in relation to an input and loss function. More details can be found in the original publication [95]. The usage of momentum and moving averages avoids the next step being completely determined by the current batch of data, since batches can be very randomized. The momentum keeps pointing in a general direction of minimization, while the current gradient points toward the minimization for the current batch. The final step can be defined by a combination of the two.
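In practice these optimizers are rarely implemented by hand; a typical PyTorch training loop simply instantiates them. The sketch below uses a toy linear model and random data in place of a real CNN and dataloader, and the hyperparameter values are illustrative only.

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 2)                                       # toy model standing in for a CNN
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
# optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)    # adaptive alternative, equation (3)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.5)

for epoch in range(30):
    inputs, targets = torch.randn(8, 10), torch.randint(0, 2, (8,))  # dummy batch
    optimizer.zero_grad()                 # clear gradients from the previous step
    loss = loss_fn(model(inputs), targets)
    loss.backward()                       # backpropagation via Autograd
    optimizer.step()                      # apply the weight update
    scheduler.step()                      # change the learning rate as a function of epochs
```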
### _Post processing_
Post-processing is the process of editing the network output to enhance the results. There are several post-processing options depending on the end result you want to achieve. One of them is the analysis of connected components (CC) over the three dimensions: by checking the CCs of a segmentation output, it is possible to remove misclassified voxels (Fig. 8). Threshold analysis is also used to improve a network's outcome. It needs to be done on the validation set, and the main idea is to find the threshold that best improves the final result. For this, it is necessary to vary the threshold while analyzing the metric you want to improve. This may be applied to different applications such as classification and segmentation. Seunglab 8 provides a useful 3D connected-components implementation which supports images containing many different labels, not just binary images. It also supports continuously valued images, such as grayscale microscope images, with an algorithm that joins together nearby values. The benefit of this package is that it labels all connected components in one shot, improving performance by one or more orders of magnitude.
Footnote 8: [https://github.com/seung-lab/connected-components-3d](https://github.com/seung-lab/connected-components-3d)
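As an illustration of CC-based post-processing, the sketch below keeps only the largest connected component of a binary segmentation. It assumes the `connected-components-3d` package linked in the footnote (imported as `cc3d`), whose `connected_components` function labels all components in one pass; the choice of 26-connectivity is an illustrative default.

```python
import numpy as np
import cc3d  # pip install connected-components-3d

def keep_largest_component(binary_seg: np.ndarray) -> np.ndarray:
    """Remove small spurious components (e.g. the prediction noise shown in Fig. 8)."""
    labels = cc3d.connected_components(binary_seg.astype(np.uint8), connectivity=26)
    counts = np.bincount(labels.ravel())
    counts[0] = 0                          # ignore the background label
    if counts.max() == 0:
        return binary_seg                  # empty prediction, nothing to keep
    return (labels == counts.argmax()).astype(binary_seg.dtype)
```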
### _Evaluation Metrics_
Concerning segmentation models, different metrics should be used to evaluate results due to their complementarity in expressing how similar the method's output is to the ground truth [97]. Some examples include the Dice coefficient (Dice), Hausdorff Distance (HD), Average Hausdorff Distance (AVD), and Volume Similarity, among others. Considering **A** as the model prediction and **M** as the label, we may define:
#### IV-D1 Dice Coefficient
The _DC_ is an overlap measure defined as follows:
\[DC=\frac{2\,|M\cap A|}{|M|+|A|} \tag{4}\]
_DC_ is sensitive to small segmentations and does not identify boundary errors. However, it can be used as a measure of reproducibility and is widely used for medical imaging segmentation analysis, being the most used metric in the medical imaging segmentation field [97]. _DC_ results lie in the [0,1] range, where 1 indicates a perfect _DC_.
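A direct NumPy implementation of equation (4) for binary masks could be, as a sketch:

```python
import numpy as np

def dice_coefficient(a: np.ndarray, m: np.ndarray) -> float:
    """DC = 2|M ∩ A| / (|M| + |A|) for a binary prediction a and label m of the same shape."""
    a, m = a.astype(bool), m.astype(bool)
    denom = a.sum() + m.sum()
    return 2.0 * np.logical_and(a, m).sum() / denom if denom > 0 else 1.0
```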
#### IV-D2 Hausdorff Distance
The _HD_ measures the distance between two sets of points.
\[HD(A,M)=max(h(A,M),h(M,A)) \tag{5}\]
where:
\[h(A,M)=\max_{a\in A}\min_{m\in M}\|a-m\| \tag{6}\]
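Equations (5) and (6) can be evaluated with SciPy's directed Hausdorff distance, as sketched below. `prediction_mask` and `label_mask` are assumed to be binary volumes defined elsewhere; using all foreground voxels rather than only surface voxels is a simplification.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

A = np.argwhere(prediction_mask)   # point coordinates of the predicted foreground
M = np.argwhere(label_mask)        # point coordinates of the ground-truth foreground
hd = max(directed_hausdorff(A, M)[0], directed_hausdorff(M, A)[0])   # equation (5)
```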
#### IV-D3 Average Hausdorff Distance
As the name suggests, _AVD_ is the averaged Hausdorff Distance over all points.
\[AVD(A,M)=max(d(A,M),d(M,A)) \tag{7}\]
where:
\[d(A,M)=\frac{1}{N}\sum_{a\in A}\min_{m\in M}\|a-m\| \tag{8}\]
Similar to _HD_, it is also a spatial distance metric that is robust to small structures. However, being an average, it is less sensitive to outliers [97]. The smaller the _AVD_ between the manual and automated segmentation, the better the automated segmentation.
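A corresponding sketch for equations (7) and (8), reusing the point sets `A` and `M` from the previous snippet (for very large volumes a KD-tree would be preferable to the full distance matrix):

```python
from scipy.spatial.distance import cdist

def avg_directed_distance(x, y):
    """d(X, Y) of equation (8): mean over points of X of the distance to the closest point of Y."""
    return cdist(x, y).min(axis=1).mean()

avd = max(avg_directed_distance(A, M), avg_directed_distance(M, A))   # equation (7)
```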
#### IV-D4 Volume Similarity
Finally, the VS calculates the similarity between the two samples.
\[VS=1-\frac{||A|-|M||}{|A|+|M|} \tag{9}\]
where \(|X|\) is the number of elements (foreground voxels) of X. Although it ignores borders and overlaps, _VS_ is a good metric for analyzing the segmentation volume when determining the volume of the structure is the main goal [97]. _VS_ results lie in the [0,1] range, where 1 indicates a perfect _VS_.
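Equation (9) reduces to voxel counts of the two masks; a minimal sketch:

```python
import numpy as np

def volume_similarity(a: np.ndarray, m: np.ndarray) -> float:
    """VS = 1 - ||A| - |M|| / (|A| + |M|), with |X| the number of foreground voxels."""
    va, vm = int(np.count_nonzero(a)), int(np.count_nonzero(m))
    return 1.0 - abs(va - vm) / (va + vm) if (va + vm) > 0 else 1.0
```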
## V Statistical Analysis
Statistical analysis is an important task to assess the reliability of the obtained results. The main idea is to compare groups and verify if they are statistically equivalent or different, thus obtaining a significant answer (or not) for the problem in question.
First, it is important to stipulate which hypotheses are being tested. In a hypothesis test, the Null Hypothesis (\(H_{0}\)) is the equality hypothesis, which argues that the groups being analyzed are statistically equal. The alternative hypothesis (\(H_{1}\)) is the complement of the null hypothesis and states that the effect between the groups under study exists, that is, that the groups are statistically different.
A statistical test can be one-tailed or two-tailed. In a two-tailed test, the alternative hypothesis is a hypothesis of inequality (\(\neq\)), not taking into account whether values are less or greater (\(<\) or \(>\)); in a one-tailed test, the alternative hypothesis specifies the direction of the difference.
There are several statistical tests and the correct choice will depend on the analysis you want to perform. Our focus will be to describe step by step how to perform a statistical test for two groups with quantitative response variables, which are the most used analyses in medical data.
### _Check if the groups are independent or dependent_
Two groups are independent (unpaired) if the sample selected from one of the populations is not related to the sample selected from the second population. For example, if we want to compare patients \(\times\) controls.
Two groups are dependent (paired) if each member of one sample corresponds to a member of the other sample.
Fig. 8: Prediction noise indicated by yellow arrows (left image) can be reduced (right image) using post-processing algorithms such as CC filters.
For example, data collected from the same cohort, before and after treatment.
### _Check data normality_
To perform a confident analysis, it is important to use the statistical test that best represents the data, so before choosing the test it is important to check the distribution of the data. Normality is a characteristic of the data in which the majority (higher frequency) of the sample values are close to the mean value of all samples. Normality can be visualized through the histogram and the box plot. The data are normally distributed if their histogram is bell-shaped. A box plot with many outliers is characteristic of data that do not follow a normal distribution, that is, of non-parametric data. However, it is often difficult to assess normality just by visualizing the data. According to [98], the most powerful statistical test to verify data normality is the Shapiro-Wilk test. The Shapiro-Wilk test uses the following hypotheses:
\(H_{0}\) : data distribution = normal \(\to p>\alpha\)
\(H_{1}\) : data distribution \(\neq\) normal \(\to p\leq\alpha\), where \(\alpha\) is the significance level stipulated a priori, a limit established for rejecting or not the null hypothesis based on the value of \(p\). In the medical area, \(\alpha=0.05\) is often used.
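With SciPy, the Shapiro-Wilk test is a single call; the sketch below uses synthetic data (e.g. Dice scores of one method) and the conventional \(\alpha=0.05\) mentioned above.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sample = rng.normal(loc=0.85, scale=0.05, size=30)   # placeholder scores for one method

stat, p = stats.shapiro(sample)
alpha = 0.05
if p > alpha:
    print(f"p = {p:.3f} > {alpha}: cannot reject normality (a parametric test is appropriate)")
else:
    print(f"p = {p:.3f} <= {alpha}: reject normality (use a non-parametric test)")
```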
### _Use the appropriate statistical test to compare two groups_
Parametric tests require a normal data distribution, while non-parametric tests do not require normal distribution.
#### VI-C1 Independent t-test:
The t-test for two independent samples (also called the independent Student's t-test) allows comparing the means of two independent groups. The dependent variable is numerical with a normal distribution and the independent variable is categorical with two categories. Before performing the independent t-test, it is important to verify whether the variances of the groups are homogeneous or not using the Levene test [99]. The hypotheses of the independent t-test are:
\(H_{0}\) : the means of the groups are the same \(\to p>\alpha\)
\(H_{1}\) : the means of the groups are different \(\to p\leq\alpha\)
#### VI-C2 Mann-Whitney test:
The Mann-Whitney test compares two independent groups. It is a non-parametric alternative to the independent t-test. The Mann-Whitney test is used for data that do not have a normal distribution, that is, when the mean is not a good representation of the data set and the median is the measure that best represents the data [100]. The test hypotheses are:
\(H_{0}\) : the medians of the groups are the same \(\to p>\alpha\)
\(H_{1}\) : the medians of the groups are different \(\to p\leq\alpha\)
An important observation is that the Mann-Whitney test is often presented as a test that compares medians, but strictly speaking it is not just comparing the medians of the two groups, it is comparing the distributions [101]. It may happen that the same median is found in both groups under study and the Mann-Whitney test is still significant; this significance indicates that there is a difference in the distributions.
#### VI-C3 Paired t-test:
The paired Student's t-test is a type of hypothesis test that allows comparing the means of two paired groups. The dependent variable must be numerical, and the independent variable is composed of two paired groups. To use this test the data must follow a normal or approximately normal distribution. The paired t-test hypotheses are the same as those of the independent t-test.
#### VI-C4 Wilcoxon test:
The Wilcoxon test, also known as the Wilcoxon signed-rank test, is based on ranks and allows comparing two paired samples [102]. It is a non-parametric test corresponding to the paired t-test. This method considers the size of the differences in the case under study. The Wilcoxon signed-rank test is used to test differences between paired measurements in a population. It is generally used for data that do not have a normal distribution; this type of data is rarely well represented by the mean, and the median is the measure of central tendency that best represents it. The hypotheses of this test are the same as in the Mann-Whitney test.
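All four tests above are available in `scipy.stats`; the sketch below encodes the selection logic described in this section for two groups of scores (`group_a` and `group_b` are assumed to be 1-D arrays, of equal length in the paired case).

```python
from scipy import stats

def compare_groups(group_a, group_b, paired=False, alpha=0.05):
    """Pick the test according to pairing and normality, and return its p-value."""
    normal = (stats.shapiro(group_a).pvalue > alpha and
              stats.shapiro(group_b).pvalue > alpha)
    if paired:
        test = stats.ttest_rel if normal else stats.wilcoxon
    else:
        # for the unpaired t-test, stats.levene can additionally check variance homogeneity
        test = stats.ttest_ind if normal else stats.mannwhitneyu
    return test(group_a, group_b).pvalue
```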
## VI Case Studies
We have prepared a practical demonstration of most principles showcased in this paper for a CT and an MR image case using public data [103, 104] in the following repository: [https://github.com/MICLab-Unicamp/Medical-Imaging-Tutorial](https://github.com/MICLab-Unicamp/Medical-Imaging-Tutorial).
## VII Conclusion
In this manuscript we have gone through a brief introduction to the field of medical imaging analysis with deep learning, focusing on the segmentation of images. Workflow guidance is provided, with tips and good practices for all phases commonly present in this field of research. In addition, we provide hands-on examples using public data, which will be used in our tutorial session.
## Acknowledgment
We thank MICLab students Alvaro Capelo, Beatriz Vicente, Bruno Santos, Gabriel Dias, Jean Ribeiro, and Joany Rodrigues, for participating in our internal seminars. Livia Rodrigues and Thays Abreu thank the Higher Education Personnel Improvement Coordination (CAPES). Diedre Carmo thanks grant #2019/21964-4, Sao Paulo Research Foundation (FAPESP). Gustavo Pinheiro, Roberto Lotufo, and Leticia Rittner thank the National Scientific and Technological Development Council (CNPq).
|
2310.07051 | Score-Based Generative Models for Designing Binding Peptide Backbones | Score-based generative models (SGMs) have proven to be powerful tools for
designing new proteins. Designing proteins that bind a pre-specified target is
highly relevant to a range of medical and industrial applications. Despite the
flurry of new SGMs in the last year, there has been little systematic
exploration of the impact of design choices in SGMs for protein design. Here we
present LoopGen, a flexible SGM framework for the design of short binding
peptide structures. We apply our framework to design antibody binding loop
structures conditional on a target epitope and evaluate a variety of modelling
choices in SGM-based protein design. We demonstrate that modelling residue
orientations in addition to positions improves not only the quality of the
output structures but also their diversity. Additionally, we identify variance
schedules that result in significant performance improvements and observe
patterns that may motivate the development of better schedules for protein
design. Finally, we develop three novel tests to evaluate whether the model
generates structures that are appropriately conditioned on an epitope,
demonstrating that LoopGen's generated structures are dependent on the
structure, sequence, and position of the epitope. Our findings will help guide
future development and evaluation of generative models for binding proteins. | John D Boom, Matthew Greenig, Pietro Sormanni, Pietro Liò | 2023-10-10T22:26:41Z | http://arxiv.org/abs/2310.07051v3 | # Score-Based Generative Models
###### Abstract
Score-based generative models (SGMs) have proven to be powerful tools for designing new proteins. Designing proteins that bind a pre-specified target is highly relevant to a range of medical and industrial applications. Despite the flurry of new SGMs in the last year, there has been little systematic exploration of the impact of design choices in SGMs for protein design. Here we present LoopGen, a flexible SGM framework for the design of short binding peptide structures. We apply our framework to design antibody binding loop structures conditional on a target epitope and evaluate a variety of modelling choices in SGM-based protein design. We demonstrate that modelling residue orientations in addition to positions improves not only the quality of the output structures but also their diversity. Additionally, we identify variance schedules that result in significant performance improvements and observe patterns that may motivate the development of better schedules for protein design. Finally, we develop three novel metrics to evaluate whether a model generates structures that are appropriately conditioned on a target protein, demonstrating that LoopGen's generated structures are dependent on the structure, sequence, and position of the epitope. Our findings will help guide future development and evaluation of generative models for binding proteins.
## 1 Introduction
Rationally designing proteins _in silico_ has the potential to unlock new treatments for disease, accelerate scientific research, and enable new green manufacturing technologies. Score-Based Generative Models (SGMs) have proven to be effective tools for protein design [8; 21], dramatically outperforming previous methods like generative adversarial networks [16]. While there was an explosion of new models in the last year [21; 8; 15; 22; 23; 2; 13], there has been little systematic exploration of the design choices behind SGMs in this domain. Therefore, we sought to develop a framework for evaluating different model design choices in the context of an important task: generating proteins that bind a specified target. We apply our framework to design binding loop structures for antibodies, a key class of biomolecules widely applied as therapeutics, diagnostics, and research tools [19; 14]. Antibody binding is mediated primarily through interactions between the target and short loop regions called complementarity-determining-regions (CDRs), which often lack secondary structure and can
exhibit extreme structural diversity [6], posing challenges in the generative modelling setting. Designing CDR structures for binding is also challenging due to limited data availability; just under 8000 total antibody structures are available in the PDB, many of which are redundant or lack a binding partner [5]. These challenges, combined with the general utility of antibodies [19; 14], motivate the development of novel methods for generating and evaluating CDR structures.
## 2 Methodology
We present LoopGen, an SGM framework for the generation of binding peptides. Our framework allows for direct comparison of SGM design choices in the context of protein design, such as the generation of residue orientations and coordinates ("frames") versus coordinates alone, different estimator architectures, and different choices of variance schedule. We apply our framework to generate CDR loops conditioned on a target epitope. For our experiments, we use a heterogeneous variant of the Geometric Vector Perceptron (GVP) GNN architecture [9] to estimate scores for CDR residues in each CDR/epitope complex. The complex is represented as a heterogeneous graph with edges drawn between each residue and its \(K=6\) nearest neighbors in both the CDR and epitope. CDR residues are represented as either frames (rotations and coordinates) or coordinates alone, and epitope residues are represented using node features describing their sequence identity and backbone geometry. Training was first conducted using a large dataset of CDR-like fragments obtained from the PDB90 database [1] and subsequent finetuning was performed using a smaller, higher-fidelity dataset (SAbDab) of real CDRs in complex with epitopes [5], filtering for structures with <90% sequence identity. Self-conditioning was performed as in Watson et al. [21] at a rate of 0.5. Generation of novel loop structures was applied to 687 test set CDR/epitope complexes from SAbDab, generating 10 loops conditioned on the epitope structure and of the same length as the ground truth CDR, as well as generating 10 loops for a transformed version of each epitope: permuted (swapped), sequence scrambled, and translated. Please see Appendix A for more information on our methods. Code and data are publicly available at: [https://github.com/mgreenig/loopgen](https://github.com/mgreenig/loopgen).
## 3 Experiments
### Evaluating the Impact of Frames
We first investigated the importance of incorporating orientation information for each residue. We compared sets of CDR structures produced by generative models using rotations and positions (frames) versus models using C\(\alpha\) coordinates alone. Interestingly, the RMSD2 of the generated structures compared to the ground truth was indistinguishable between the models; however, structures generated by the coordinates-only model displayed a more significant deviation from the ground truth distances between adjacent C\(\alpha\) atoms along the chain (Fig. 1, middle). To analyze the diversity of the generated structures, we computed the mean pairwise RMSD between the 10 generated structures against each test set epitope for both models, removing duplicates. The coordinates-only model had dramatically lower diversity as measured by pairwise RMSD (Figure 1, right). These findings suggest that the coordinates-only model produces less biochemically plausible, more homogeneous structures, and, more generally, that RMSD to the ground truth structure fails to capture salient features of generated CDRs. We include examples of generated structures in Appendix B.
Footnote 2: Generally, our structures had much higher RMSD than similar models [15]. We suspect this occurs for many reasons, including not using sequence information, pretraining on a more diverse dataset, and removing CDRs with >90% sequence similarity.
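The diversity measure used above, the mean pairwise RMSD over the 10 loops generated for one epitope, can be sketched as follows. Each structure is assumed to be an array of Cα coordinates in the same (epitope-anchored) frame; whether structures are superposed before comparison is a design choice not specified here.

```python
import numpy as np
from itertools import combinations

def rmsd(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.sqrt(np.mean(np.sum((a - b) ** 2, axis=-1))))

def mean_pairwise_rmsd(structures) -> float:
    """Diversity of a set of generated loops: average RMSD over all unordered pairs."""
    return float(np.mean([rmsd(a, b) for a, b in combinations(structures, 2)]))

loops = [np.random.rand(10, 3) * 10 for _ in range(10)]   # 10 dummy 10-residue loops
print(mean_pairwise_rmsd(loops))
```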
### Exploring how Diffusion Schedules Affect Performance
Score-based generative models of frames for protein design are unusual because noise must be applied over both the orientation and coordinate of each residue. These two processes introduce complexity because the relative position between a residue and its neighbors informs what orientations are possible and vice versa. Variance schedules have been shown to have a large impact on diffusion models for image generation [11]; however, to our knowledge, there is almost no published data on how different noising schedules for translations and rotations affect frame generative models.
RMSD to the ground truth structure is the standard metric for evaluating the quality of generated CDR structures [15; 22]. We found that models with different variance schedules showed minimal variation in ground truth RMSD but varied significantly in their ability to generate physicochemically plausible structures (Table 1), where we defined structural violations using the violation loss functions from AlphaFold2 [10]. Strikingly, ranking the translation variance schedules from the "slowest" destruction of information to the "fastest" results in the exact ordering of performance (Figure 6, Appendix). The quadratic translation schedule combined with a logarithmic schedule for rotations exhibited the lowest rate of structural violations. Despite showing poor correspondence between generated and ground truth CDR structures, the generated loops still satisfy key criteria as both valid loops and potential binders, obeying the correct dihedral angle distribution at the correct distance from the epitope (Figure 2). These findings suggest that RMSD may not be an appropriate metric for assessing generative models of CDR loops, which have notably high structural diversity.
### Evaluating Generative Models for Binding Peptides
Given that ground truth RMSD appears to have many limitations as a metric, we searched for additional metrics to evaluate the quality of the model's output. Ultimately, we determined that a major and neglected objective is that the generated structures should be clearly conditioned on the provided epitope. To evaluate the dependence on the epitope, we generated sets of 10 CDRs after swapping each epitope (aligning its principal components to the original epitope), scrambling the epitope sequence (permutation of residue identities), and translating the epitope 20 A away from the CDR. Then, we computed the mean pairwise RMSD between each of these sets of generated CDRs and the set of CDRs generated for the wild-type epitope. Larger mean pairwise RMSDs indicate greater structural differences between sets of CDRs. As shown in Figure 3, we see that all three perturbations result in a large increase in pairwise RMSD, confirming that, in general, the model's outputs are highly dependent on the epitope's structure, sequence, and positioning.
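The translation perturbation, for instance, amounts to shifting every epitope atom 20 Å along the unit vector from the CDR's centre of mass to the epitope's centre of mass; a sketch with dummy coordinates:

```python
import numpy as np

def translate_epitope(epitope_xyz: np.ndarray, cdr_xyz: np.ndarray, distance: float = 20.0) -> np.ndarray:
    """Move the epitope `distance` angstroms away from the CDR along the line joining their centres of mass."""
    direction = epitope_xyz.mean(axis=0) - cdr_xyz.mean(axis=0)
    direction /= np.linalg.norm(direction)
    return epitope_xyz + distance * direction

epitope = np.random.rand(25, 3) * 10
cdr = np.random.rand(10, 3) * 10
epitope_far = translate_epitope(epitope, cdr)
```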
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline Trans. & Rot. & RMSD (Å) & Internal & Bond & Bond & Epi.-CDR & Any \\ Sched. & Sched. & & Clashes & Length & Angle & Clash & Struct. \\ & & & (\%) & (\%) & (\%) & (\%) & Flaw (\%) \\ \hline Lin. & Log. & 4.98 \(\pm\) 2.14 & 0.3 & 20.6 & 3.9 & 3.3 & 22.4 \\ Quad. & Log. & 4.93 \(\pm\) 2.15 & 0.0 & 6.3 & 0.6 & 2.7 & 8.7 \\ Log. & Log. & 4.85 \(\pm\) 2.08 & 88.9 & 96.2 & 43.9 & 5.7 & 97.1 \\ Sig. & Log. & 4.93 \(\pm\) 2.18 & 0.6 & 6.1 & 1.8 & 3.4 & 9.2 \\ Lin. & Lin. & 5.04 \(\pm\) 2.08 & 1.0 & 29.4 & 4.0 & 3.4 & 30.9 \\ Lin. & Quad. & 5.13 \(\pm\) 2.17 & 0.8 & 18.6 & 1.7 & 3.5 & 20.5 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Choice of Variance Schedule Dramatically Affects Structure Quality
Figure 1: Analyzing how incorporating rotational information improves model performance. From left to right: A comparison of the RMSD of the generated structures compared to the ground truth, A histogram of the distances between C\({}_{\alpha}\), and a comparison of the pairwise RMSD between sets of generated structures (higher values indicate greater structural diversity).
## 4 Discussion and Conclusion
Score-based generative models have facilitated _in silico_ protein design; however, relatively little is known about the key components affecting their performance. In this work we conduct, to our knowledge, the first direct comparison between modelling entire residue frames versus C\(\alpha\) coordinates alone. We hypothesised that modelling frames would provide specific benefits in designing binding peptides, which have highly variable structures and thus significant degrees of freedom in their dihedral angles, and indeed we show that structures generated as frames are significantly more diverse. Next, we compare models trained under different variance schedules and show that, while schedules do not have a significant effect on the commonly-used ground truth RMSD metric, they do significantly influence the rate of structural violations. We also introduce additional metrics to assess the quality of generated binder structures by evaluating the extent to which they are conditioned on the epitope.
Our experiments identify multiple promising avenues for further investigation. While we study the incorporation of orientation information, we only do so for the IGSO(3) diffusion framework [23; 21; 15] (see Appendix A for more details). However, future research may benefit from exploring different forms of generative models over rotations (e.g. [2]). Furthermore, although we showed significant performance variation across both translation and rotation variance schedules, a more extensive evaluation of the entire space of schedule combinations may identify configurations that
Figure 3: Analyzing the dependence of the generated CDR structures on the epitope. A set of 10 structures are generated for 687 test set epitopes, i.e. wild-type (WT) epitopes. Another 10 structures are generated under three types of perturbation to each epitope: permuted (swapping the WT epitope with a random epitope), scrambled (permuting residue identities within the WT epitope structure), and translated (translating the epitope 20 Å away from the CDR along the vector connecting their centers of masses). Pairwise RMSDs are calculated between the set of CDRs generated for the WT epitope and the sets generated under all conditions for each epitope in the test set.
Figure 2: Left: Ramachandran Distribution of the generated CDRs compared to ground truth. Right: The minimum distance between an \(C_{\alpha}\) on the CDR and the epitope for the ground truth and generated CDRs. Stratification by length reveals better performance on shorter CDRs.
yield even better results. Finally, we note that our model alone is not sufficient for _de novo_ CDR design because it only generates CDR structures, independent of sequences. Previous work tested conducting structure and sequence generation concurrently (at the cost of lower structural diversity) [15], and future research would benefit from comparing methods for incorporating sequence generation directly into the SGM framework to performing sequence design post-hoc [21].
|
2305.03081 | Minimal model for the $W$-boson mass, $(g-2)_μ$, $h\toμ^+μ^-$ and
quark-mixing-matrix unitarity | The $SU(2)_L$ triplet scalar with hypercharge $Y=0$ predicts a positive
definite shift in the $W$ mass, w.r.t.~the Standard Model prediction, if it
acquires a vacuum expectation value. As this new field cannot couple directly
to SM fermions (on its own), it has no significant impact on other low-energy
precision observables and is weakly constrained by collider searches. In fact,
the multi-lepton anomalies at the LHC even point towards new scalars that decay
dominantly to $W$ bosons, as the neutral component of the triplet naturally
does. In this article, we show that with a minimal extension of the scalar
triplet model by a heavy vector-like lepton, being either I) an $SU(2)_L$
doublet with $Y=-1/2$ or II) an $SU(2)_L$ triplet with $Y=-1$, couplings of the
triplet to Standard Model leptons are possible. This minimal extension can then
provide, in addition to the desired positive shift in the $W$ mass, a chirally
enhanced contribution to $(g-2)_\mu$. In addition version I) and II) can
improve on $Z\to\mu^+\mu^-$ and alleviate the tension in first-row CKM
unitarity (known as the Cabibbo angle anomaly), respectively. Finally, both
options, in general, predict sizable changes of $h\to\mu^+\mu^-$, i.e.,~much
larger than most other $(g-2)_\mu$ explanations where only $O(\%)$ effects are
expected, making this channel a smoking gun signature of our model. | Andreas Crivellin, Matthew Kirk, Anil Thapa | 2023-05-04T18:00:04Z | http://arxiv.org/abs/2305.03081v2 | (Y=0\) Scalar Triplet Beyond the \(W\) Mass: \((g-2)_{\mu}\), \(h\to\mu^{+}\mu^{-}\) and CKM Unitarity
###### Abstract
The \(SU(2)_{L}\) triplet scalar with hypercharge \(Y=0\) predicts a positive definite shift in the \(W\) mass, w.r.t. the Standard Model prediction, if it acquires a vacuum expectation value. As this new field cannot couple directly to SM fermions (on its own), it has no significant impact on other low-energy precision observables and is weakly constrained by collider searches. In fact, the multi-lepton anomalies at the LHC even point towards new scalars that decay dominantly to \(W\) bosons, as the neutral component of the triplet naturally does. In this article, we show that with a minimal extension of the scalar triplet model by a heavy vector-like lepton, being either I) an \(SU(2)_{L}\) doublet with \(Y=-1/2\) or II) an \(SU(2)_{L}\) triplet with \(Y=-1\), couplings of the triplet to Standard Model leptons are possible. This minimal extension can then provide, in addition to the desired positive shift in the \(W\) mass, a chirally enhanced contribution to \((g-2)_{\mu}\). In addition version I) and II) can improve on \(Z\to\mu^{+}\mu^{-}\) and alleviate the tension in first-row CKM unitarity (known as the Cabibbo angle anomaly), respectively. Finally, both options, in general, predict sizable changes of \(h\to\mu^{+}\mu^{-}\), i.e., much larger than most other \((g-2)_{\mu}\) explanations where only \(O(\%)\) effects are expected, making this channel a smoking gun signature of our model.
+
Footnote †: preprint: PSI-PR-23-11, ZU-TH 20/23
## I Introduction
The Standard Model (SM) of particle physics is the theory describing the fundamental constituents and interactions of matter according to our current state of knowledge. However, it is clear that it cannot be ultimate description of nature. For instance, it cannot account for the existence of Dark Matter established at cosmological scales, nor for the non-vanishing neutrino masses required by neutrino oscillations. Unfortunately, these observations can be addressed in many different ways and within a very wide range for the new physics scale. Therefore, in the absence of confirmed direct signals for new particles, more information on possible extensions of the SM is thus necessary to make progress towards a theory superseding the SM that can be tested at the Large Hadron Collider (LHC) or next-generation experiments. In this context, we can use deviations from the SM predictions in low-energy observables, known as anomalies, as a guide for identifying promising extensions of the SM, within which one can then calculate predictions for future verification (or falsification) of the model. Prominent candidates among these indirect hints for physics beyond the SM (see e.g. Ref [1] for a recent review) are the anomaly in the \(W\) boson mass [2], the anomalous magnetic moment of the muon (\((g-2)_{\mu}\)) [3] as well as the deficit in first-row CKM unitarity, known as the Cabibbo angle anomaly (CAA) [4]. While in the first observable, the CDF II result is in some tension with LHC measurements [5; 6], the significance of the deviation in \((g-2)_{\mu}\) depends on the SM prediction, where inconsistencies between the data-driven method [7] and lattice results [8] exist. However, it is still very interesting and instructive to see which models can give sizable effects in these observables. While several combined NP explanations of \((g-2)_{\mu}\) and the \(W\) mass have been proposed in the literature [9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36], a simple combined explanation of all three anomalies is still missing (to the best of our knowledge).
In this article, we aim at constructing a minimal model that can naturally provide sizable effects in both \(m_{W}\) and \((g-2)_{\mu}\) (and possibly explain the CAA) and investigate its phenomenological consequences. For this, our starting point is the \(\Delta\)SM [37; 38; 39; 40; 41; 42; 43; 44], where an \(SU(2)_{L}\) triplet scalar with hypercharge 0 (\(\Delta\)) is added to the SM particle content. Its vacuum expectation value (VEV) violates custodial symmetry at the tree-level via a positive contribution to the \(W\) boson mass, as suggested by the measurement of the CDF II measurement [45; 46; 47; 48; 49; 50]. Since the neutral component of the triplet scalar can dominantly decay to pairs of \(W\) bosons, while the decay to \(Z\) pairs is suppressed by mixing with the SM Higgs, this model is well motivated by the LHC multi-lepton anomalies [51; 52; 53; 54; 55], including the hint for an enhanced \(W\) pair production at the electroweak (EW) scale [56].
Next, we aim at extending the \(\Delta\)SM to obtain a sizable effect in \(g-2\) of the muon. In fact, there are only two
minimal options that can, as we will show, give rise to chirally enhanced effects in \((g-2)_{\mu}\) (see e.g. Refs. [57; 58; 59; 60; 61; 62; 63] for generic models with chiral enhancement). We can supplement the \(\Delta\)SM by a vector-like lepton
I) \(D\sim(1,2,-1/2)\)\(\qquad\)or\(\qquad\)II) \(T\sim(1,3,-1)\),
where the numbers in the bracket denote their representation under the SM gauge group \(SU(3)_{C}\times SU(2)_{L}\times U(1)_{Y}\). The corresponding Feynman diagrams giving the dominant contribution to \(g-2\) are shown in Fig. 1.1
Footnote 1: Note that contrary to Ref. [64], or the MSSM with R-parity conservation (see Ref. [65] for a review), we do not impose a (effective) \(Z_{2}\) symmetry. This extension allows for a more minimal setup in which only two new fields are needed (instead of 3 in Ref. [64]).
Interestingly, both model versions lead to unavoidable tree-level effects in the dim-6 operator \((H^{\dagger}H)(\bar{\ell}_{L}He_{R})\)[66] contributing to the muon mass and \(h\to\mu^{+}\mu^{-}\) after electroweak (EW) symmetry breaking. In fact, while most other \((g-2)_{\mu}\) explanations only predict effects of the order of a few percent [67; 68; 64], we will see that our model, in general, predicts much larger effects.
## II Model
Our starting point is to add a real scalar \(SU(2)_{L}\) triplet with \(Y=0\) (\(\Delta\)) to the SM particle content, called the \(\Delta\)SM. The most general renormalizable scalar potential involving SM Higgs \(H\) and real triplet \(\Delta\) reads
\[V= -\mu_{H}^{2}H^{\dagger}H+\mu_{\Delta}^{2}\text{Tr}[\Delta^{2}]+ \lambda_{1}(H^{\dagger}H)^{2}+\lambda_{2}\text{Tr}[\Delta^{4}]\] \[+\lambda_{3}(H^{\dagger}H)\,\text{Tr}[\Delta^{2}]+\mu H^{\dagger} \Delta H\,, \tag{1}\]
where the scalar fields are defined as
\[H=\begin{pmatrix}H^{+}\\ H^{0}\end{pmatrix},\ \ \ \ \Delta=\frac{1}{2}\begin{pmatrix}\Delta^{0}&\sqrt{2} \Delta^{+}\\ \sqrt{2}\Delta^{-}&-\Delta^{0}\end{pmatrix}\,, \tag{2}\]
in terms of electric charge eigenstates. The scalar potential has a global \(O(4)_{H}\times O(3)_{\Delta}\) symmetry in the limit \(\mu\to 0\). Therefore, \(\mu\) softly breaks this symmetry and is naturally small. We denote the VEVs as \(\langle H^{0}\rangle=v/\sqrt{2}\) and \(\langle\Delta^{0}\rangle=v_{\Delta}\), with \(v^{2}+4v_{\Delta}^{2}\approx(246\text{GeV})^{2}\), and the minimization conditions are
\[-\mu_{H}^{2}+\lambda_{1}v^{2}-\frac{1}{2}\mu v_{\Delta}+\frac{1}{ 2}\lambda_{3}v_{\Delta}^{2} =0\,, \tag{3}\] \[\mu_{\Delta}^{2}+\frac{1}{2}\lambda_{3}v^{2}-\frac{1}{4}\frac{ \mu}{v_{\Delta}}v^{2} +\frac{1}{2}\lambda_{2}v_{\Delta}^{2} =0\,, \tag{4}\]
which we use to eliminate \(\mu_{H}^{2}\) and \(\mu_{\Delta}^{2}\). The scalar mass matrices, in the basis \((H^{+},\Delta^{+})\) and (Re \(H^{0}\), Re \(\Delta^{0}\)), are
\[M_{+}^{2} =\mu\begin{pmatrix}v_{\Delta}&\frac{v}{2}\\ v&\frac{v^{2}}{4v_{\Delta}}\end{pmatrix}\,, \tag{5}\] \[M_{0}^{2} =\begin{pmatrix}2\lambda_{1}v^{2}&\frac{-\mu}{2}v+\lambda_{3}v\ v _{\Delta}\\ \frac{-\mu}{2}v+\lambda_{3}v\ v_{\Delta}&\frac{\mu v^{2}}{4v_{\Delta}}+\lambda _{2}v_{\Delta}^{2}\end{pmatrix}\,, \tag{6}\]
with mass eigensates
\[G^{+} =\frac{-vH^{+}+2v_{\Delta}\Delta^{+}}{\sqrt{v^{2}+4v_{\Delta}^{2 }}},\ \ \ \ \delta^{+}=\frac{2v_{\Delta}H^{+}+v\Delta^{+}}{\sqrt{v^{2}+4v_{\Delta}^{2}}}\,, \tag{7}\] \[h=\cos\alpha\ \text{Re}(H^{0})+\sin\alpha\ \text{Re}(\Delta^{0})\,,\] (8) \[\delta^{0}=-\sin\alpha\ \text{Re}(H^{0})+\cos\alpha\ \text{Re}( \Delta^{0})\,, \tag{9}\]
where
\[\sin 2\alpha=\frac{\mu v-2\lambda_{3}vv_{\Delta}}{m_{\delta^{0}}^{2}-m_{h}^{2}}\,. \tag{10}\]
Note that the massless eigenstate is the Goldstone boson (\(G^{\pm}\)), eaten up by the \(W^{\pm}\) gauge boson, while the other combination (\(\delta^{\pm}\)) is a physical charged Higgs field. The field \(h\) is to be identified as the SM-like Higgs boson of mass \(125\,\text{GeV}\) and in the limit of a small mixing angle \(\alpha\), the splitting between the charged and neutral component of the triplet field is \(m_{\delta^{+}}^{2}-m_{\delta^{0}}^{2}\simeq v_{\Delta}(\mu-\lambda_{2}v_{\Delta})\), i.e., both components are nearly degenerate. In the limit \(v_{\Delta}\ll v\) we have \(v_{\Delta}=\mu v^{2}/(4m_{\delta^{0}}^{2})\).
As stated in the introduction, the triplet model can be minimally extended by two different vector-like leptons (VLLs) in order to allow for couplings to SM leptons:
I) An \(SU(2)_{L}\) doublet with \(Y=-1/2\) (\(D\)) with Yukawa interactions given by
\[\mathcal{L}_{\text{Y}}^{\text{I}}\supset Y_{L}^{\text{I}}\bar{D}_{R}\ell_{L}\Delta+Y_{R}^{\text{I}}\bar{D}_{L}e_{R}H+Y^{ \text{I}}\bar{D}_{L}\Delta D_{R}+\text{h.c.}\,. \tag{11}\]
Figure 1: Leading one-loop effect contribution to \((g-2)_{\mu}\) from the VLL doublet \(D\) (triplet \(T\)). Note that while the upper diagram involves \(\mu\), the lower diagram is proportional to the VEV of the SM doublet and therefore in general dominant.
II) An \(SU(2)_{L}\) triplet with \(Y=-1\) (\(T\)) with Yukawa interactions given by
\[\mathcal{L}_{\rm Y}^{\rm II}\supset Y_{R}^{\rm II}\,{\rm Tr}[\bar{T}_{L}\Delta]e_{R}Y_{L}^{\rm II}H^{ \dagger}\bar{T}_{R}\ell_{L}+Y^{\rm II}{\rm Tr}[\bar{T}_{L}\Delta T_{R}]+{\rm h.c.}\,. \tag{12}\]
where \(\ell\) (\(e\)) is the SM lepton doublet (singlet) and the VLL \(T\) is defined as
\[T=\frac{1}{2}\begin{pmatrix}T^{-}&\sqrt{2}T^{0}\\ \sqrt{2}T^{--}&-T^{-}\end{pmatrix}\,. \tag{13}\]
Integrating out the new VLL at the tree-level leads to the following effective interactions
\[\mathcal{L}_{\rm eff}= \frac{|Y_{R}^{\rm I}|^{2}}{2m_{D}^{2}}(H^{\dagger}i\overset{ \leftrightarrow}{D_{\mu}}H)(\bar{e}_{R}\gamma^{\mu}e_{R}) \tag{14}\] \[- \frac{3|Y_{L}^{\rm II}|^{2}}{16m_{T}^{2}}(H^{\dagger}i\overset{ \leftrightarrow}{D_{\mu}}H)(\bar{\ell}_{L}\gamma^{\mu}\ell_{L})\] \[+ \frac{|Y_{L}^{\rm I}|^{2}}{16m_{T}^{2}}(H^{\dagger}i\overset{ \leftrightarrow}{D_{\mu}}H)(\bar{\ell}_{L}\sigma^{a}\gamma^{\mu}\ell_{L})\] \[- \left(\frac{(Y_{L}^{\rm I})^{*}Y_{R}^{\rm I}}{m_{D}}+\frac{(Y_{L }^{\rm II})^{*}Y_{R}^{\rm II}}{2m_{T}}\right)\bar{\ell}_{L}\Delta He_{R}\,,\]
which, after EW symmetry breaking, modify the gauge boson couplings to leptons, affect the Higgs decay to lepton pairs \(h\to\ell^{+}\ell^{-}\), and induce couplings of the triplet scalar \(\Delta\) to leptons.
## III Phenomenology
The CDF II collaboration updated their previous measurement of the \(W\) boson mass, finding \(M_{W}=80.4335(94)\,{\rm GeV}\), which leads to the new Tevatron average \(80.4270(89)\,{\rm GeV}\) when combined with the D0 [2] result. However, the recent ATLAS update [6] (superseding their 2017 result [69]) of \(M_{W}=80.360(16)\,{\rm GeV}\), as well the LHCb [5], find significantly smaller values. Together with LEP [70], a naive average gives \(M_{W}=80.406(7)\,{\rm GeV}\). Since the consistency of the data is poor (\(\chi^{2}/{\rm dof}=4.3\)), we inflate the error to get a conservative estimate2 of \(M_{W}^{\rm comb}=80.406(15)\,{\rm GeV}\). Comparing this to the SM prediction of \(M_{W}^{\rm SM}=80.355(5)\,{\rm GeV}\)[72; 73; 74; 75; 76; 77; 78; 79], with \(m_{t}=172.5(7)\,{\rm GeV}\)[79], we see a discrepancy of \(51\,{\rm MeV}\), with a significance of slightly more than \(3\,\sigma\). If instead we disregard the CDF II result, the data agree well amongst themselves, and we find an average of \(M_{W}^{\rm comb\ (w/o\ CDF\ II)}=80.372(10)\,{\rm GeV}\), which would correspond to a discrepancy of \(17\,{\rm MeV}\) with a significance of below \(2\,\sigma\). In the \(\Delta\)SM we have
Footnote 2: Note that these averages agree well with the more sophisticated combinations done by HEPfit [71] prior to the ATLAS update.
\[m_{W}^{2}=\frac{g^{2}}{4}(v^{2}+4v_{\Delta}^{2})\,,\qquad m_{Z}^{2}=\frac{g^{2 }}{4\cos\theta_{W}^{2}}v^{2}\,, \tag{15}\]
which shows that the VEV of the triplet can easily alter the \(W\) mass in the desired direction.
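The error inflation used in the combination quoted above is the standard scale-factor prescription: take the inverse-variance weighted mean and, if the fit quality is poor, scale its uncertainty by \(\sqrt{\chi^{2}/\text{dof}}\). A generic sketch follows; the input values are placeholders for illustration only, not the full list of measurements entering the average in the text.

```python
import numpy as np

def combine(values, errors):
    """Inverse-variance weighted average with a scale factor on the error when chi2/dof > 1."""
    values, errors = np.asarray(values, float), np.asarray(errors, float)
    w = 1.0 / errors**2
    mean = np.sum(w * values) / np.sum(w)
    err = np.sqrt(1.0 / np.sum(w))
    chi2_dof = np.sum(w * (values - mean) ** 2) / (len(values) - 1)
    if chi2_dof > 1.0:            # inflate the error if the measurements scatter too much
        err *= np.sqrt(chi2_dof)
    return mean, err, chi2_dof

# illustrative placeholder inputs (GeV), not the actual set of W-mass measurements
print(combine([80.43, 80.36, 80.41], [0.009, 0.016, 0.020]))
```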
Let us start with the tree-level effects induced by the couplings \(Y_{L}^{\rm II}\) and \(Y_{R}^{\rm I}\), which give rise to modifications of EW gauge bosons couplings to leptons. Here, both \(D\) and \(T\) modify \(Z\mu\mu\) couplings, which are constrained from LEP measurements [80], while \(T\), in addition, modifies the leptonic \(W\) vertex. Therefore, in case II), the extraction of CKM elements is affected by the determination of the Fermi constant \(G_{F}\) from the muon lifetime [81] (dominantly \(V_{ud}\) from beta decays [81]). Furthermore, lepton flavour universality (LFU) measurements in the charged current (see Ref. [82] for an overview) receive new physics contributions. In fact, the CAA, i.e. the deficit in the first row unitarity relation \(|V_{ud}|^{2}+|V_{us}|^{2}+|V_{ub}|^{2}=1\)[83; 84; 85; 86; 87], with a significance at the \(3\,\sigma\) level [88; 89; 90; 91], can be resolved by the VLL triplet [85; 92; 93]. Performing a combined fit using smelli v2.4.0[94; 95]3 (which is built on flavio v2.5.4[97; 98] for the observable calculations and wilson[99] for the renormalization group evolution), we show the region in parameter space favoured by the global EW fit, tests of LFU, the \(W\) mass and CKM unitarity in Fig. 2. The best-fit points are
Footnote 3: For the complete list of observables included in our global fit, we refer the interested reader to Ref. [96], to which we added the tests of LFU Br\((\tau\to e\nu\nu)\), Br\((\tau\to\mu\nu\nu)\), Br\((\pi^{+}\to e\nu)\), and Br\((K^{+}\to e\nu)/\)Br\((K^{+}\to\mu\nu)\).
\[Y_{R}^{\rm I}v/m_{D}=\pm 0.05,\quad v_{\Delta}=4.5\,{\rm GeV}\,, \tag{16}\] \[Y_{L}^{\rm II}v/m_{T}=\pm 0.09,\quad v_{\Delta}=4.8\,{\rm GeV}\,. \tag{17}\]
with pulls relative to the SM of \(3.1\,\sigma\) and \(3.6\,\sigma\), respectively (taking into account two degrees of freedom). Note that the preference for a non-zero coupling \(Y_{R}^{\rm I}\) (\(Y_{R}^{\rm II}\)) is mainly due to Br\((Z\to\mu^{+}\mu^{-})\) (the CKM unitarity deficit).
For \(g-2\) of the muon, the experimental value [100; 3] deviates from the SM prediction [101; 102; 103; 104; 105; 106; 107; 108; 109; 110; 111; 120], resulting in a \(4.2\sigma\) tension
\[\Delta a_{\mu}[e^{+}e^{-}]^{\rm WP}=a_{\mu}^{\rm exp}-a_{\mu}^{\rm SM}[e^{+}e^{ -}]=251(59)\times 10^{-11}, \tag{18}\]
according to the White Paper [7]. However, the significance crucially depends on the value used for hadronic vacuum polarization (HPV). While \(e^{+}e^{-}\) data underlies Eq. (18), this dispersive approach has been challenged by lattice QCD [121; 122; 123; 124; 125; 126; 126], leading to a smaller tension with experiment. The reason for this mismatch is not understood, and also the recent measurement of \(e^{+}e^{-}\to\pi^{+}\pi^{-}\) by CMD-3 [127] differs from previous measurements [128; 129; 130; 131; 132; 133] at a combined level of \(5\sigma\). Therefore, in this article, we consider ourselves agnostic to the exact value and do not aim for any specific range, merely noting two possible options in the figure to guide the reader.
Neglecting scalar mixing, which is naturally small given the preferred range of \(v_{\Delta}\), the leading (chirally enhanced) 1-loop contribution to the anomalous magnetic moment is given by
\[\Delta a_{\mu}^{\rm I}= \frac{m_{\mu}v(Y_{L}^{\rm I})^{*}Y_{R}^{\rm I}}{64\sqrt{2}\pi^{2}m_ {D}^{2}(r-1)^{3}}\] \[\times\Big{(}\frac{4rv_{\Delta}m_{D}}{v^{2}}\left[7+r(r-8)+(r+2) \log r^{2}\right]\] \[+Y^{\rm I}\left[1+r(4-5r)+r(r+2)\log r^{2}\right]\Big{)}\,, \tag{19}\] \[\Delta a_{\mu}^{\rm II}= \frac{m_{\mu}v(Y_{L}^{\rm II})^{*}Y_{R}^{\rm II}}{128\sqrt{2}\pi ^{2}m_{T}^{2}(r-1)^{3}}\] \[\times \Big{(}\frac{4rv_{\Delta}m_{T}}{v^{2}}\left[-1+r(8-7r)+(5r-2) \log r^{2}\right]\] \[-2Y^{\rm II}(r-1)\left[1-r+r\log r\right]\Big{)}\,, \tag{20}\]
for the two cases,4 where \(r=m_{\delta^{0}}^{2}/m_{D,T}^{2}\) for the doublet and triplet case, respectively. The dominant modification of \(h\to\mu^{+}\mu^{-}\) arises already at tree-level resulting in
Footnote 4: We confirmed these results using MatchMakerEFT [134].
\[\frac{\text{Br}(h\to\mu\mu)}{\text{Br}(h\to\mu\mu)^{\rm SM}}=\left|1+\frac{vv _{\Delta}Y_{L}Y_{R}}{N\sqrt{2}m_{\mu}m_{\psi}}\right|^{2}\,, \tag{21}\]
where \(m_{\psi}=m_{D}\) or \(m_{T}\) and \(N=1\) or 2 for the doublet or triplet VLL cases, respectively. The average of the ATLAS [135] and CMS [136] measurements is
\[\frac{\text{Br}(h\to\mu\mu)}{\text{Br}(h\to\mu\mu)^{\rm SM}}=1.21^{+0.36}_{-0. 34}\,, \tag{22}\]
while a precision of around \(10\,\%\) is expected at the HL-LHC with an integrated luminosity of \(3000\,\text{fb}^{-1}\)[137].
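To illustrate the size of the effect, equation (21) can be evaluated numerically. The coupling product below is an arbitrary illustrative choice, not a fit result, while the VEV and mass values correspond to the benchmarks used in the text.

```python
import math

v, v_delta = 246.0, 4.8        # GeV: doublet VEV and triplet VEV (triplet-case best fit)
m_mu, m_psi = 0.1057, 2000.0   # GeV: muon mass and VLL mass benchmark
N = 2                          # triplet VLL case
YLYR = 0.05                    # arbitrary illustrative value of Y_L * Y_R

ratio = abs(1.0 + v * v_delta * YLYR / (N * math.sqrt(2.0) * m_mu * m_psi)) ** 2
print(ratio)  # about 1.2 for this choice; larger coupling products quickly give O(1) effects
```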
Concerning direct LHC bounds, the lower limits on the masses of VLLs which are triplets or doublets of \(SU(2)_{L}\) are around \(700\,\text{GeV}\) for third generation VLLs [138], meaning that we expect somewhat stronger limits for second generation VLLs, and to be conservative we will set the mass to \(2\,\text{TeV}\). Furthermore, since the VLLs induce couplings of the triplet to muons and muon neutrinos, to a good approximation the bounds from slepton searches in the limit of a vanishing neutralino mass apply to the mass of the scalar triplet, and are likewise around \(700\,\text{GeV}\) [139]. Taking into account these constraints, in Fig. 3 we predict \(\text{Br}(h\to\mu^{+}\mu^{-})\) as a function of \((g-2)_{\mu}\) and \(M_{W}\). It is important to note that the numerical value of the loop function entering \((g-2)_{\mu}\) is larger in model II) than in model I). Therefore, in model II) one can obtain a sizable effect with smaller couplings \(Y_{L,R}\), which leads to smaller effects in \(h\to\mu\mu\) compared to model I), as well as to bounds from the perturbativity of \(Y_{L}^{\rm I}\) (if \(Y_{R}^{\rm I}\) is fixed to the best-fit value in Eq. (16)).
## IV Conclusions
In this article, we proposed (two versions of) a minimal model obtained by extending the SM with a scalar triplet with hypercharge 0 and a vector-like lepton that is I) an \(SU(2)_{L}\) doublet with \(Y=-1/2\) or II) an \(SU(2)_{L}\) triplet with \(Y=-1\). This model can:
* Provide naturally a positive definite shift in the \(W\) mass of the size suggested by the current tension.
* Give a sizable effect in \(g-2\) of the muon.
Figure 2: Global fit to EW precision data, the \(W\) mass, CKM unitarity and tests of LFU, for the case of the VLL doublet (left) and the triplet (right). The preference for a non-zero coupling \(Y_{R}^{\rm I}\) (\(Y_{L}^{\rm II}\)) is mainly due to \(\text{Br}(Z\to\mu^{+}\mu^{-})\) (the CKM unitarity deficit).
* Improve on \(Z\to\mu^{+}\mu^{-}\) in case I) or explain the CAA in case II).
For both model versions, effects in \(h\to\mu^{+}\mu^{-}\) of the order of 10% (while most other models on the market only generate \(O(\%)\) effects), as well as \(\mu^{+}\mu^{-}\) plus missing energy signatures at the LHC, are predicted.
###### Acknowledgements.
A.C. thanks Martin Hoferichter for useful discussions concerning the status of the SM prediction for \((g-2)_{\mu}\). Financial support from the SNSF (PP00P21_76884) is gratefully acknowledged. M.K. acknowledges support from a Maria Zambrano fellowship, and from the State Agency for Research of the Spanish Ministry of Science and Innovation through the "Unit of Excellence Maria de Maeztu 2020-2023" award to the Institute of Cosmos Sciences (CEX2019-000918-M) and from PID2019-105614GB-C21 and 2017-SGR-929 grants. The work of A.T. is supported in part by the National Science Foundation under Grant PHY-2210428. A.T. acknowledges the Department of Physics at Washington University in St. Louis for local hospitality during the completion of this work.
|
2310.08362 | Multi-Value Alignment in Normative Multi-Agent System: An Evolutionary
Optimisation Approach | Value-alignment in normative multi-agent systems is used to promote a certain
value and to ensure the consistent behaviour of agents in autonomous
intelligent systems with human values. However, the current literature is
limited to the incorporation of effective norms for single-value alignment with
no consideration of agents' heterogeneity and the requirement of simultaneous
promotion and alignment of multiple values. This research proposes a
multi-value promotion model that uses multi-objective evolutionary algorithms
and decentralised reasoning to produce the optimum parametric set of norms that
is aligned with multiple simultaneous values of heterogeneous agents and the
system. To understand various aspects of this complex problem, several
evolutionary algorithms were used to find a set of optimised norm parameters
considering two toy tax scenarios with two and five values. The
results are analysed from different perspectives to show the impact of a
selected evolutionary algorithm on the solution, and the importance of
understanding the relation between values when prioritising them. | Maha Riad, Vinicius de Carvalho, Fatemeh Golpayegani | 2023-10-12T14:32:27Z | http://arxiv.org/abs/2310.08362v1 | # Multi-Value Alignment in Normative Multi-Agent System: An Evolutionary Optimisation Approach
###### Abstract
Value-alignment in normative multi-agent systems is used to promote a certain value and to ensure the consistent behaviour of agents in autonomous intelligent systems with human values. However, the current literature is limited to the incorporation of effective norms for single-value alignment with no consideration of agents' heterogeneity and the requirement of simultaneous promotion and alignment of multiple values. This research proposes a multi-value promotion model that uses multi-objective evolutionary algorithms and decentralised reasoning to produce the optimum parametric set of norms that is aligned with multiple simultaneous values of heterogeneous agents and the system. To understand various aspects of this complex problem, several evolutionary algorithms were used to find a set of optimised norm parameters considering two toy tax scenarios with two and five values. The results are analysed from different perspectives to show the impact of a selected evolutionary algorithm on the solution, and the importance of understanding the relation between values when prioritising them.
## 1 Introduction
Normative multi-agent systems (NorMAS) have been used effectively to coordinate the behaviour of agents in multi-agent systems (MAS) that model complex applications such as intelligent transport systems [16]. The norms in NorMAS are regulative norms defined by a social group to regulate behaviours [22]. For example, in a traffic system, the norm is to give priority to emergency vehicles. Also, it is the norm that passengers leave front seats in buses for senior people. These examples represent guidelines that might be recommended in some societies, obligatory, or prohibited [22], so when the agents are aware of the norms of the environment they are operating in, they can synchronise their behaviour with other agents, facilitate group decision making and collaborate. However, it is essential to promote human values in MAS as well to reflect real applications.
In the context of this research, the term 'value' refers to motivational values, which represent standards that serve desirable objectives [24]. In other words, a 'value' will represent a preferred state [24], such as equality, health, fairness, etc. [2]. For example (to differentiate between a 'norm' and a 'value'), as a norm, companies give their employees maternity leave if they have a newborn baby. However, if the values of one of the companies support equality between men and women, both men and women can have equal leave [24].
The concept of value-alignment was introduced in [15], [23] and [24] to reflect the alignment of norms and values. Researchers have used several techniques to address this challenge, including reasoning strategies [2], learning methodologies [19], utility-based approaches [10], and genetic algorithms [15]. However, the proposed solutions neglect one or more of the following points. First, they match the norms with only one value or with the preferred sub-set of values, while in the real world, all the values need to be aligned with the norms. Second, these models might not consider heterogeneous MAS, in which different groups of agents support different values, especially when these values are incompatible. For example, supporting both fairness and equality may be conflicting, as ensuring fairness does not necessarily support equality. Third, some works directly derive norms from the values of the system. However, in many systems norms and values may be incompatible and they should be considered independently. For instance, a community can have a value of supporting equality, while at the same time having a norm of giving priority to senior people in queues, or exempting them from paying taxes. In this paper, we address these limitations by proposing the **N**orms **O**ptimisation and **V**alue **A**lignment Model (NOVA), which has three main goals:
1. Choose the _best set of norms_ in NorMAS with heterogeneous groups of agents.
2. _Optimise multiple values_ in NorMAS; these values can be compatible or _incompatible_, and can be defined by _heterogeneous_ groups of agents.
3. Align independent sets of norms and values in NorMAS.
To reach these goals, we formalised the problem as a multi-objective optimisation problem (MOP), in which we represented the values by objectives that need optimisation, and modelled the norms as the decision variables. This allowed us to get the _best set of norms_ (the decision variables) when the values (the objectives) are optimised, and thus aligned the norms and the values. Moreover, solving it as a multi-objective optimisation problem (i) allowed the system to facilitate the optimisation of _multiple values_ defined by _heterogeneous_ groups of agents, and (ii) allowed the simultaneous optimisation of multiple compatible and _incompatible_ values (objectives).
We proposed to solve this problem using multi-objective evolutionary algorithms (MOEAs), as they have been successfully applied to MOPs [3] in several domains, including logistics, redis-sharing [6], environmental/economic dispatch (EED) problems [21], feature selection for machine learning [5], and the optimisation of antibiotic treatments [20]. We applied several MOEAs (NSGA-II, MOEA/DD, SPEA2, and MOMBI2) to different evaluation scenarios to analyse the performance of each algorithm.
Also, since MOEAs produce sets of non-dominated optimum solutions, we extended the agents' logic with a reasoner that allows them to vote for their preferred solution in order to choose a final one.
Accordingly, our proposed model, NOVA, is a multi-value promotion model that uses multi-objective evolutionary algorithms to produce an optimum parametric set of norms aligned with the values of heterogeneous groups of agents. We evaluated NOVA using different scenarios that measure the effect of using different combinations of values. Our contribution is three-fold:
* Multiple-value alignment: we show the capability of choosing the optimum parameter values for a parametric norm set while aligning it with a set of multiple optimised values.
* Incompatible and compatible value alignment: we model the problem as a multi-objective problem to enable the simultaneous optimisation of all values regardless of their compatibility.
* Heterogeneous agent groups' value alignment: we align values from different heterogeneous groups of agents while considering shared system values.
## 2 Problem Formulation
Let us consider a heterogeneous normative multi-value multi-agent system composed of a finite set of regular agents \(Ag=\{ag_{1},ag_{2},...,ag_{n}\}\). Each agent \(ag_{i}\) has a set of values \(V_{ag_{i}}\), a set of properties \(Pr_{ag_{i}}\), a set of actions \(A_{ag_{i}}\), and a set of adopted norms \(N_{ag_{i}}\). There is one regulative agent \(r\) that is responsible for synthesising the norm set \(N\), where \(N_{ag_{i}}\subseteq N\). The norms are parametric norms, i.e. each norm \(n_{j}\) has a set of parameters \(P_{n_{j}}\) that can contain unbounded or constrained elements with discrete or continuous domains. The regulative agent \(r\) also has a set of values \(V_{r}\). In each step (iteration) \(t\), each regular agent \(ag_{i}\) performs actions from \(A_{ag_{i}}\) and applies its set of adopted norms \(N_{ag_{i}}\). The regulative agent also applies actions chosen from its set of actions \(A_{r}\). Corresponding to the agents' new situations, a global state \(s_{t}\) is captured by \(r\). In such a system, \(r\)'s main challenges are: to synthesise the optimum set of norms that ensures the alignment of its own values \(V_{r}\) and each regular agent's values \(V_{ag_{i}}\) (which are shared among a subset of agents), and to optimise the synthesised set of norms even in the case of incompatible values.
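To make the formulation concrete, the sketch below shows one possible plain-Java data model for these entities; the type names (`ParametricNorm`, `RegularAgent`, etc.) are illustrative and are not taken from the authors' implementation.

```
// Illustrative data model for the entities above (Java 16+ record syntax for brevity).
import java.util.List;
import java.util.Map;

/** A parametric norm n_j with a set of bounded parameters P_{n_j}. */
record ParametricNorm(String name,
                      Map<String, double[]> paramBounds,   // parameter -> {lower, upper}
                      Map<String, Double> paramValues) { }

/** A motivational value to be promoted, e.g. equality or fairness. */
record Value(String name) { }

/** A regular agent ag_i with its values, properties, actions and adopted norms. */
record RegularAgent(String id,
                    List<Value> values,                    // V_{ag_i}
                    Map<String, Double> properties,        // Pr_{ag_i}
                    List<String> actions,                  // A_{ag_i} (action identifiers)
                    List<ParametricNorm> adoptedNorms) { } // N_{ag_i}, a subset of N
```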
### Defining the Problem as a Multi-Objective Optimisation Problem (MOP)
As the main aim is to find the best set of parametric norms when the agents' and system's values are satisfied (optimised), we consider the problem as a multi-objective optimisation problem (MOP).
Multi-objective optimisation requires finding solutions which simultaneously consider two or more conflicting objectives to be minimised or maximised [17]. Thus, the optimisation process aims to find a set of solutions that reflects a trade-off between the objectives. MOPs are formulated using: objective functions, constraints, decision variables and their bounds [17].
Accordingly, in NOVA, we formulate the problem identified in Section 2 as a multi-objective optimisation problem: we define the agents' and system's values as the objective functions to be optimised, and the norms as the decision variables.
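As a sketch of this mapping (with illustrative names, not the authors' code), each candidate norm set can be flattened into a bounded real-valued decision vector, and every value in \(V\) contributes one objective function evaluated on that vector:

```
// Sketch of the MOP mapping: norms -> decision variables, values -> objectives.
import java.util.List;

interface ValueObjective {                         // one objective per value in V (to be maximised)
    double evaluate(double[] normParameters);      // level of the value reached under these norms
}

final class NormAlignmentProblem {
    final double[] lowerBounds, upperBounds;       // bounds of every norm parameter
    final List<ValueObjective> objectives;         // V = V_r plus the agents' values

    NormAlignmentProblem(double[] lb, double[] ub, List<ValueObjective> objectives) {
        this.lowerBounds = lb;
        this.upperBounds = ub;
        this.objectives = objectives;
    }

    /** Evaluates one candidate norm set (decision vector) against every value. */
    double[] evaluate(double[] normParameters) {
        return objectives.stream()
                         .mapToDouble(o -> o.evaluate(normParameters))
                         .toArray();
    }
}
```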
## 3 Norms Optimisation and Values Alignment Model (NOVA)
NOVA is a model for norm optimisation and value alignment. Its main responsibilities are to: (i) optimise the values (objectives), (ii) choose the best set of norms, (iii) reason over the non-dominated solutions, and (iv) produce one final optimum solution aligning multiple norms and objectives. As seen in Figure 1, NOVA operates using a main regulative agent \(r\) and regular agents \(Ag\). The regulative agent is responsible for initialising the environment parameters and norms, collecting values from regular agents, and performing the optimisation process using the _Optimiser_ component. After the _Optimiser_ produces the set of non-dominated solutions, the _Main Reasoner_ in \(r\) is triggered to start the reasoning process with the regular agents \(Ag\).
In the next subsections (3.1 and 3.2), we illustrate in more detail the two main processes carried out by NOVA.
### Optimisation Process
The optimisation process is performed by the _Optimiser_ in the regulative agent \(r\) after the _Environment Initialiser_ (i) defines the set of parametric norms and their value bounds, (ii) collects the values of all agents \(V_{ag}\) and integrates them with its own values \(V_{r}\) into a single set of values \(V\), and (iii) maps the norms to decision variables and the values to objectives and sends them to the _Optimiser_.
Accordingly, NOVA synthesises the best set of norms (decision variables) based on the optimisation of the agents' values (objective functions), using the following off-line approach:
**Environment Initialisation:** initially, the parametric norm set \(N\) is initialised by the regulative agent \(r\) with random values within the norms' specified bounds (as seen in Fig. 2). The initial values of the properties \(Pr_{ag_{i}}\) of each agent are also defined. The regulative agent \(r\) sends the initial set of norms \(N\) to the agents. The values of the set of agents \(Ag\) and of the regulative agent \(r\) are consolidated by \(r\) into one set of values \(V\). These initial values are used to calculate the global state \(s_{0}\). Then, \(N\), \(V\), \(s_{0}\), and \(MOEA\) (the type of multi-objective evolutionary algorithm used for the optimisation, as illustrated in the next paragraph) are used as the input parameters of the _NOVA optimisation strategy_ given in Algorithm 1.
Figure 1: NOVA Conceptual Model

**MOEA usage in NOVA.** NOVA uses different Multi-Objective Evolutionary Algorithms (MOEAs) to solve the multi-objective optimisation problem and produce the Pareto front set of solutions (the set of non-dominated solutions). MOEAs are heuristic techniques that provide a flexible representation of the solutions and do not impose continuity conditions on the functions to be optimised. Moreover, MOEAs are extensions of Evolutionary Algorithms (EAs) to multi-objective problems that usually apply the concept of Pareto dominance [1]. Under Pareto dominance, a solution \(sl_{a}\) in the decision space of a MOP is superior to another solution \(sl_{b}\) if and only if \(f(sl_{a})\) is at least as good as \(f(sl_{b})\) in terms of all the objectives and strictly better than \(f(sl_{b})\) in terms of at least one objective; solution \(sl_{a}\) is then said to strictly dominate solution \(sl_{b}\) [1]. In the NOVA _Optimiser_, we use four MOEA algorithms: **NSGA-II** [8], **SPEA2** [26] and **MOMBI2** [11], which differ from each other mainly in the way solutions are ranked at every iteration [25], and **MOEA/DD** [14], which differs in its decomposition technique.
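The dominance relation can be made concrete with a small stand-alone test (all objectives maximised); this is only an illustrative routine, not the jMetal implementation used in the experiments.

```
// Minimal Pareto-dominance test matching the definition above:
// a dominates b iff a is at least as good in every objective and strictly better in one.
final class ParetoDominance {
    static boolean dominates(double[] a, double[] b) {
        boolean strictlyBetterSomewhere = false;
        for (int i = 0; i < a.length; i++) {
            if (a[i] < b[i]) return false;              // worse in one objective -> no dominance
            if (a[i] > b[i]) strictlyBetterSomewhere = true;
        }
        return strictlyBetterSomewhere;
    }

    public static void main(String[] args) {
        System.out.println(dominates(new double[]{0.8, 0.9}, new double[]{0.8, 0.7})); // true
        System.out.println(dominates(new double[]{0.8, 0.7}, new double[]{0.9, 0.6})); // false (incomparable)
    }
}
```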
**NOVA Optimisation Strategy:** as the NOVA optimiser is built on an evolutionary-algorithm (EA) strategy, it takes the following main steps in each iteration \(t\) (see Algorithm 1):
1. Each of the agents in \(Ag\) carries out its actions while applying the norms relevant to these actions. These actions produce a new global state \(s_{t}\). [Lines 3-5]
2. The regulative agent \(r\) performs its actions \(A_{r}\) on \(s_{t}\) considering the current norms \(N\). [Line 6]
3. The regulative agent \(r\) uses the new global state to perform the optimisation process using a multi-objective optimiser \(MOEA\) and produces the new set of norms \(N\) based on the optimised set of values \(V\). [Line 7]
4. The new set of norms \(N\) is communicated to all the agents in \(Ag\). [Line 8]
```
1:Input:\(N\), \(V\), \(s_{0}\), \(MOEA\)
2:for each t do
3:for each\(ag_{i}\in Ag\)do
4:\(s_{t}\Leftarrow ag_{i}.act(N_{ag_{i}},A_{ag_{i}},s_{t})\)
5:endfor
6:\(s_{t}\Leftarrow r.act(N,A_{r},s_{t})\)
7:\(N\Leftarrow r.optimise(s_{t},V,N,MOEA)\)
8:\(r.inform(Ag,N)\)
9:endfor
10:\(N^{\star}\Leftarrow N\)
```
**Algorithm 1** NOVA Optimisation Strategy
### Reasoning Process
The multi-objective optimisation process produces the Pareto front set of solutions \(PF_{known}\) and then sends each solution, with its corresponding norm set, to the _Main Reasoner_ (see Figure 1) as \(Sol\), where \(sol_{j}=\{pf_{j},N_{ag_{j}}^{pf_{j}}\}\). Afterwards, a decentralised reasoning process takes place to produce one final optimum solution \(sol_{best}\).
As indicated in Algorithm 2 and Figure 3, the reasoning process starts by running \(mainReasoner()\) after receiving the Pareto front set and its corresponding parametric norms and formulating \(Sol\). First, in line 3, the reasoner creates an empty list in which to store the votes that will be collected from the regular agents. Each of the reasoning (regular) agents is asked to vote in line 4 by calling the \(getVote\) method. The \(getVote\) method takes as parameters the Pareto front set of optimum solutions \(PF_{Known}\) and the parametric norm values \(N_{ag_{i}}^{PF_{Known}}\) that correspond to these solutions and belong to this agent's group. In line 10, the preferred decision variable (i.e. the norm to be prioritised) is stored in the \(prefVar\) variable. Depending on \(prefVar\), the norm set that promotes \(prefVar\) the most is stored in \(n_{sol_{best}}\). Subsequently, the solution coupled with this norm set is saved as the chosen solution \(pf_{best}\) and added to \(votes[]\) at line 5. After calculating the solution with the maximum number of votes, the main reasoner states the final chosen solution in line 7.
**The Voting Process:** when the regular agents \(Ag\) reason about the best solution, they calculate the fitness \(fit\) of each solution using Equation 1. \(Wg\) is the set of weights defined over the agent's norms, created randomly subject to \(\sum_{i}Wg_{i}=1\). The weights are defined based on the preferred decision variable (norm), by assigning a higher weight (such as 0.8) to the preferred variable and splitting the remaining weight (0.2) among the other variables.
\[fit(PF,Wg)=\max_{s\in PF}\sum_{i=1}^{|Wg|}Wg_{i}\cdot Var_{i}^{s} \tag{1}\]
Finally, as it is expected that different agents choose different solutions, the most voted solution is elected and returned as the final solution.
\[vote(ag)=\sum_{a\in Ag,\,a\neq ag}\mathbf{1}\big[fit(PF,ag.Wg)>fit(PF,a.Wg)\big] \tag{2}\]

\[\max_{ag\in Ag}\,vote(ag) \tag{3}\]
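The sketch below illustrates Equations (1)-(3) as a simple plurality vote, following the textual description: each agent scores every Pareto-front solution with its own weight vector, votes for the highest-scoring one, and the most-voted solution is elected. Class and method names are illustrative, not the authors' code.

```
// Illustrative fitness scoring and plurality voting over a Pareto front.
import java.util.*;

final class ReasonerSketch {
    /** Equation (1): index of the solution in PF that maximises the weighted sum. */
    static int fit(double[][] pf, double[] weights) {
        int best = 0; double bestScore = Double.NEGATIVE_INFINITY;
        for (int s = 0; s < pf.length; s++) {
            double score = 0.0;
            for (int i = 0; i < weights.length; i++) score += weights[i] * pf[s][i];
            if (score > bestScore) { bestScore = score; best = s; }
        }
        return best;
    }

    /** Equations (2)-(3), read as a plurality vote over the agents' preferred solutions. */
    static int vote(double[][] pf, List<double[]> agentWeights) {
        Map<Integer, Integer> ballots = new HashMap<>();
        for (double[] w : agentWeights) ballots.merge(fit(pf, w), 1, Integer::sum);
        return Collections.max(ballots.entrySet(), Map.Entry.comparingByValue()).getKey();
    }

    public static void main(String[] args) {
        double[][] pf = {{0.9, 0.1}, {0.5, 0.5}, {0.1, 0.9}};   // three non-dominated solutions
        List<double[]> agents = List.of(new double[]{0.8, 0.2},
                                        new double[]{0.8, 0.2},
                                        new double[]{0.2, 0.8});
        System.out.println("elected solution index: " + vote(pf, agents)); // prints 0
    }
}
```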
## 4 Tax System Scenario
Figure 2: System Initialisation Process Instantiated by the Regulative Entity

Figure 3: Reasoning Process

For further illustration of NOVA and for evaluation, we will use an adapted tax system toy scenario introduced in [15]. In this scenario,
the regular agent set \(Ag\) represents the set of citizens and the regulating agent \(r\) represents the government. The government collects taxes from the citizens according to their wealth group. There are five wealth groups: the \(1^{st}\) group represents the poorest citizens while the \(5^{th}\) group represents the richest. A percentage of the citizens do not pay taxes and are considered evaders. However, if they are caught by the government, they are punished and must pay the evaded amount plus an extra fine; if they do not have sufficient funds, only the available money is collected, to avoid putting the citizen into debt. After the taxes and fines are collected, a fixed interest rate of 5% is added to the total collected amount. Then, the total collected money \(cr\) is redistributed back to the citizens depending on their wealth group. Initially (i.e. before the simulation), the wealth of each citizen is drawn from a uniform distribution \(U(0,100)\). Agents are then allocated to their corresponding wealth group, with the constraint that the wealth groups have an equal number of citizens. The main characteristics of the system are as follows. First, each citizen has four main properties in its property set, which describe its current state. The properties are:
* Wealth \((w_{i})\): it has a numerical value that represents the amount of money citizen \(i\) currently has.
* Wealth group \((g_{k})\): it represents the wealth group the citizen belongs to according to its wealth \(w_{i}\).
* Evader flag \((e)\): it indicates whether this citizen is an evader who will not pay taxes.
* Primary Wealth \((pw_{i})\): it has a numerical value that represents the wealth of the citizen \(i\) at the beginning of a time-step before taking any action and before its state changes.
Second, each citizen has a set of values \(V_{ag_{i}}\); for simplicity, in this example citizens in the same wealth group share the same fixed set of values. In other words, each wealth group has a set of values \(V_{g_{i}}\), which could represent community values. It is only assumed that wealth group \(g_{2}\) does not hold any value, to simulate citizens with no particular values and to observe how they are affected by the values promoted by others. Third, the government has its own set of values \(V_{r}\) as well, and a set of parametric norms whose parameters are initially assigned random values. The norms are defined in the same manner as in [15]:
* **n1** defines the tax rate \(collect_{j}\) that each wealth group is expected to pay at each time-step. The parametric set of the norm is defined as \(P_{n_{1}}=\{collect_{j}\}_{j=1,\ldots,5}\). The tax rate values are restricted to between 0 and 1.
* **n2** defines the fraction \(redistribute_{j}\) that each wealth group receives from the redistribution amount at the end of each time-step. The parametric set of the norm is defined as \(P_{n_{2}}=\{redistribute_{j}\}_{j=1,\ldots,5}\); the values lie between 0 and 1 and the sum of the fractions is constrained to equal 1.
* **n3** defines the catch rate of evaders. This single parameter is defined as \(P_{n_{3}}=\{catch\}\). Its value is constrained to lie between 0 and \(1/2\) to reflect the difficulty of law-enforcement tasks.
* **n4** defines the extra fine imposed as punishment when an evader is caught. This single parameter is defined as \(P_{n_{4}}=\{fine\}\). However, the total amount to be paid by a caught evader, which is equal to the fine plus the evaded taxes, cannot exceed the evader's total wealth.
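As a sketch, these four norms can be flattened into a single 12-dimensional decision vector for the optimiser. The layout, the repair step for the n2 sum constraint, and the unit upper bound assumed for the fine rate are illustrative choices, not the authors' encoding.

```
// Illustrative flattening of the tax-scenario norms into a bounded decision vector.
final class TaxNormVector {
    static final int GROUPS = 5;

    // layout: [collect_1..5 | redistribute_1..5 | catch | fine]
    static double[] lowerBounds() { return new double[12]; }   // all lower bounds are 0
    static double[] upperBounds() {
        double[] ub = new double[12];
        java.util.Arrays.fill(ub, 1.0);
        ub[10] = 0.5;        // n3: catch rate limited to 1/2
        ub[11] = 1.0;        // n4: fine rate (assumption; total payment is later capped by wealth)
        return ub;
    }

    /** Normalises the redistribute block in place so the fractions sum to 1 (n2 constraint). */
    static void repair(double[] x) {
        double sum = 0.0;
        for (int j = GROUPS; j < 2 * GROUPS; j++) sum += x[j];
        if (sum > 0) for (int j = GROUPS; j < 2 * GROUPS; j++) x[j] /= sum;
    }
}
```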
The main challenge of this system lies in the government's responsibility to optimise the parameter sets \(P_{n_{i}}\) of the four norms defined above, \(N=\{n_{1},n_{2},n_{3},n_{4}\}\), while aligning them with the values of the government as well as those of the regular citizens. The values are defined as follows.
* **Value 1 (Obj1)**: the value of the government is _Equality_, as it aims to treat all the citizens equally without being biased towards any group. _Equality_ is calculated using Equation 4, introduced in [15], where \(GI(s)\) is the Gini index of the global state \(s\). The Gini index [9] is an indicator of inequality, where \(w_{i}\) is the wealth of agent \(ag_{i}\) and \(\overline{w}\) is the average wealth of all agents at state \(s\). \[Equality=1-2\,GI(s),\quad\text{with}\quad GI(s)=\frac{\sum_{i,j\in Ag}|w_{i}-w_{j}|}{2\,|Ag|^{2}\,\overline{w}} \tag{4}\]
* **Value 2 (Obj2)**: the value of citizens in wealth group \(g_{3}\) is _Fairness_. The main aim of this value is to have the highest number of evaders in wealth group \(g_{1}\) (the poorest group). To promote the estimated probability \(P\) of evaders being in \(g_{1}\) at state \(s\), and thus increase fairness, Equation 5 is used, as suggested in [15]. \[Fairness=2\,P[\,g_{i}(s)=1\mid evader_{i}\,]-1 \tag{5}\]
* **Value 3 (Obj3)**: the value of citizens in wealth group \(g_{5}\) is to maximise their _Wealth_. The main aim of this value is to hold the maximum portion of the total wealth. It represents the new wealth of these citizens after an iteration takes place; Equation 6 is used to calculate it. \[Wealth=\frac{\sum_{i\in g_{5}}w_{i}}{\sum_{j\in Ag}w_{j}} \tag{6}\]
* **Value 4 (Obj4)**: the value of citizens in wealth group \(g_{4}\) is to maximise the _Gained Amount_. This value aims to obtain the maximum share of the common amount available for redistribution \(cr\). The gained value is the difference between a citizen's new wealth \(w_{i}\) and its previous wealth \(pw_{i}\) (see the numerator in Equation 7). \[Gained\ Amount=\frac{\sum_{i\in g_{4}}(w_{i}-pw_{i})}{cr} \tag{7}\]
* **Value 5 (Obj5)**: the value of citizens in wealth group \(g_{1}\) relates to the _Collect Portion_. This value aims to obtain the minimum portion of the collect rate out of 1 (the total portion of collect rates). To convert this into a maximisation objective we use Equation 8. \[Collect\ Portion=1-Collect_{g_{1}} \tag{8}\]
The best alignment between the synthesised set of parametric norms and the values is achieved by maximising these 5 values (objectives).
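To make the objectives concrete, the sketch below implements two of them, Equality (Equation 4) and Wealth (Equation 6), over the citizens' wealth vector; the helper names are illustrative, not the authors' code.

```
// Illustrative implementation of two of the five objectives.
final class TaxObjectives {
    /** Equation (4): Equality = 1 - 2*GI(s), with GI the Gini index of the wealth vector. */
    static double equality(double[] wealth) {
        int n = wealth.length;
        double mean = java.util.Arrays.stream(wealth).average().orElse(0.0);
        if (mean == 0.0) return 1.0;                        // degenerate all-zero state
        double sumAbsDiff = 0.0;
        for (double wi : wealth)
            for (double wj : wealth) sumAbsDiff += Math.abs(wi - wj);
        double gini = sumAbsDiff / (2.0 * n * n * mean);
        return 1.0 - 2.0 * gini;
    }

    /** Equation (6): share of the total wealth held by the richest group g5. */
    static double wealthShare(double[] wealth, int[] group) {
        double g5 = 0.0, total = 0.0;
        for (int i = 0; i < wealth.length; i++) {
            total += wealth[i];
            if (group[i] == 5) g5 += wealth[i];
        }
        return total > 0 ? g5 / total : 0.0;
    }

    public static void main(String[] args) {
        double[] w = {10, 10, 10, 10};                      // perfectly equal society
        System.out.println(equality(w));                    // prints 1.0 (Gini = 0)
    }
}
```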
### Applying the Optimisation Process of NOVA
Figure 4: Optimisation Process of NOVA in the TAX System Scenario

In the tax system scenario illustrated in Section 4, NOVA's goal is to find the values of the parametric norms n1, n2, n3 and n4 while optimising the values in \(V\): equality, fairness, wealth, gained amount and collect portion. As seen in Fig. 4, the system is divided into two main divisions, the government and the citizens. Evaders are represented in red as they have a different set of norms and actions than normal citizens. In this model, NOVA first randomly initialises the norms and the wealth of the citizens, who are consequently assigned to their corresponding wealth groups. Second, the norm set \(N\) is communicated by the government (the regulative agent) to the citizens. Third, the citizens start applying the different actions and their corresponding norms: normal citizens pay taxes according to the rate of their wealth group defined by \(n1\); the government then catches evaders according to the catch rate defined in \(n3\); caught evaders pay their taxes plus the fines determined using \(n4\). Afterwards, the government calculates the total amount of money available for redistribution, and each citizen receives its portion according to the redistribution rate defined by \(n2\). The citizens then calculate their new wealth and move to their new wealth groups. Fourth, based on \(s_{t}\), the government uses the optimiser to decide the new values of the norms by optimising the five values in \(V\). This cycle is repeated until NOVA reaches a stopping condition that represents a satisfactory level of optimisation of the values in \(V\).
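A compact sketch of one such time-step is given below. It follows the mechanics described above (tax collection, catching evaders, fines capped by wealth, 5% interest, group-based redistribution); the assumptions that a group's redistribution share is split evenly among its members and that the fine is a fraction of the evader's wealth are illustrative choices where the text does not pin down the details.

```
// Illustrative single time-step of the tax scenario (not the authors' simulator).
import java.util.Random;

final class TaxStepSketch {
    static void step(double[] wealth, int[] group, boolean[] evader,
                     double[] collect, double[] redistribute,
                     double catchRate, double fine, Random rng) {
        double pot = 0.0;                                    // total collected money cr
        for (int i = 0; i < wealth.length; i++) {
            double due = collect[group[i] - 1] * wealth[i];  // n1: taxes owed per wealth group
            if (!evader[i]) {
                wealth[i] -= due; pot += due;                // compliant citizen pays
            } else if (rng.nextDouble() < catchRate) {       // n3: evader caught
                // n4: evaded taxes plus an extra fine, capped by the citizen's wealth
                // (fine modelled here as a fraction of wealth -- an assumption)
                double payment = Math.min(wealth[i], due + fine * wealth[i]);
                wealth[i] -= payment; pot += payment;
            }
        }
        pot *= 1.05;                                         // fixed 5% interest on the collected amount
        int[] groupSize = new int[5];
        for (int g : group) groupSize[g - 1]++;
        for (int i = 0; i < wealth.length; i++)              // n2: redistribution by wealth group,
            wealth[i] += pot * redistribute[group[i] - 1]    // split evenly inside each group (assumption)
                         / Math.max(1, groupSize[group[i] - 1]);
    }
}
```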
### Applying the Reasoning Process of NOVA
In the taxation scenario, the government carries out the tasks of the _Main Reasoner_. Accordingly, after receiving the solution set \(Sol\), it asks the citizens to vote for the best solution \(Sol_{best}\). Each citizen (i.e. each \(ag\in Ag\)) randomly chooses its preferred decision variable \(prefVar\) to prioritise and votes for the solution that gives the highest value for \(prefVar\). If \(prefVar\) corresponds to \(n1\) or \(n2\) (a parametric set of norms), the citizen checks the solutions' values of the norms in \(P_{n1}\) or \(P_{n2}\) that belong to its wealth group. For example, if an agent \(ag_{1}\) has \(n2\) as its \(prefVar\) and belongs to wealth group \(g_{1}\), it chooses the solution that has the best value for \(P_{n2}=\{redistribute_{j=1}\}\). After all the citizens vote, the solution with the maximum number of votes is set as \(Sol_{best}\).
## 5 Experimental Evaluation
We evaluated four algorithms, NSGA-II, MOEA/DD, SPEA2, and MOMBI2, on both a two-objective and a five-objective problem. Both problems are based on the tax scenario defined in Section 4. The two-objective problem includes Value 1 (Equality) and Value 2 (Fairness), and the five-objective problem includes all the values. Further, we compared the results of the two-objective scenario with the state-of-the-art work in [15], which tackles the value-alignment problem using a genetic algorithm but handles only one value per run (i.e. it does not support multiple objectives).
**Experimental Settings:** We used \(200\) agents to represent the citizens and a randomly chosen number of evaders in each iteration. The number of segments representing the wealth groups was set to \(5\). The investment rate was \(0.05\). We used Monte Carlo sampling over \(5000\) iterations, similar to [15], but in our case the Monte Carlo sampling runs after a complete meta-heuristic execution; for this sampling the _path_ was set to \(10\). All meta-heuristics ran for \(500\) generations, with a maximum population size of \(100\) for two objectives and \(210\) for five objectives. For MOEA/DD we followed [14] and set \(Nr=1\), \(\delta=10\) and the probability to \(0.9\). Regarding the evolutionary operators, we followed [4]: SBX crossover and polynomial mutation were employed, with distribution indices \(n_{c}=20\) and \(n_{m}=20\) respectively and probabilities \(p_{c}=0.9\) and \(p_{m}=1/n_{p}\), where \(n_{p}\) is the number of decision variables in the problem. Regarding the reasoning engine, we performed the experiment with 200 agents.
**Implementation Tools:** NOVA was coded in Java JDK 14 using jMetal 5.7 [18] and JMetalHyperHeuristicHelper 1.
Footnote 1: [https://github.com/vinixnan/JMetalHyperHeuristicHelper](https://github.com/vinixnan/JMetalHyperHeuristicHelper)
We discuss our results from three different perspectives. First, Hypervolume and IGD+ averages are compared to understand the performance of the different algorithms in this context. Second, we present the Pareto front obtained by each of the meta-heuristics. Finally, we analyse the best solutions from the problem perspective.
### Hypervolume and IGD+ comparisons
We employed the Hypervolume [13] and IGD+ [12] averages obtained from the 30 executions as the performance comparison criteria. This is necessary because MOEAs produce a set of non-dominated solutions, so each algorithm yields a set of values to be compared; since direct comparisons between solution sets are difficult, we need a single value that summarises an algorithm's performance. With Hypervolume, for example, we calculate the area (or volume) between each solution and a reference point, treating the solutions as points in Cartesian space. For minimisation problems, this reference point is set to the worst possible point. Thus, when algorithm A has a higher Hypervolume value than algorithm B, the solutions from A are farther from the worst point, and hence the Pareto front found is of higher quality. For this purpose, for each problem (two and five objectives) we first joined all the results obtained by all algorithms, found the nadir point (the worst found), which is necessary for the Hypervolume calculation, and took the non-dominated set in order to generate the _known Pareto front_ (\(PF_{known}\)), which is necessary for the IGD+ calculation. Then, we calculated the Hypervolume and IGD+ for each execution and generated averages of both quality indicators for each algorithm. Finally, we compared these averages using the Kruskal-Wallis statistical test with a confidence level of \(99\%\): we first identified which algorithm has the best average according to the quality indicator, then compared all the other algorithms to the best one, generating a set of _p-values_. An algorithm is considered statistically tied with the best when its _p-value_ is greater than the significance level of \(0.01\).
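To make the Hypervolume indicator concrete, the sketch below computes it for a two-objective maximisation front with the reference point placed at the worst (nadir) point; the actual experiments rely on the jMetal implementation, so this is only illustrative.

```
// Illustrative 2-objective Hypervolume (both objectives maximised, ref = worst point).
import java.util.Arrays;
import java.util.Comparator;

final class Hypervolume2D {
    static double hypervolume(double[][] front, double[] ref) {
        double[][] pts = Arrays.stream(front)
                               .sorted(Comparator.comparingDouble(p -> -p[0])) // best f1 first
                               .toArray(double[][]::new);
        double hv = 0.0, lastF2 = ref[1];
        for (double[] p : pts) {                   // sweep: add the rectangle each point contributes
            if (p[1] > lastF2) {
                hv += (p[0] - ref[0]) * (p[1] - lastF2);
                lastF2 = p[1];
            }
        }
        return hv;
    }

    public static void main(String[] args) {
        double[][] front = {{0.9, 0.1}, {0.5, 0.5}, {0.1, 0.9}};
        System.out.println(hypervolume(front, new double[]{0.0, 0.0})); // prints 0.33
    }
}
```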
Table 1 presents the meta-heuristic comparison for the two- and five-objective problems. The mean over 30 executions, the standard deviation (_std_), and the _max_ value among the executions are presented.
Regarding the _max_ Hypervolume for the two-objective problem, SPEA2 found the highest value; however, it also has the highest _std_, which means this high value occurs rarely. NSGA-II has the best average, but when we consider both the mean and the std we can see why the MOEA/DD and SPEA2 results are statistically tied with NSGA-II. Regarding IGD+, NSGA-II is again the best algorithm, this time being the best in terms of _std_, _mean_ and _max_. Finally, all of these algorithms except MOMBI2 are a good option for solving this two-objective problem.
The comparison for the five-objective problem is completely different. MOMBI2 performed poorly for two objectives, but here it found the best _max_ value with regard to Hypervolume. In terms of _std_ and _mean_, MOEA/DD is the best algorithm. These results make MOMBI2 and MOEA/DD the best algorithms considering Hypervolume. However, in terms of IGD+, MOEA/DD stands out as the best algorithm, with a better IGD+ average with a statistical difference; moreover, it also had the smallest _std_ and _max_ values.
Figures 5 and 6 present box-plots of Hypervolume and IGD+, respectively, for the two-objective problem. These figures visually represent the same results shown in Table 1. We can see that MOMBI2 is the algorithm with the most variance (cf. the _std_ in Table 1) while NSGA-II is the one with the least. The reason is that while NSGA-II usually found non-dominated solutions, MOMBI2 often found dominated solutions, which sometimes gives it Hypervolume values near zero. This is also clear when we analyse IGD+, where MOMBI2 always has larger values.
Figures 7 and 8 present box-plots of Hypervolume and IGD+, respectively, for the five-objective problem. Again, these figures visually represent the same results shown in Table 1. For Hypervolume, we can see how MOMBI2 and MOEA/DD perform better than NSGA-II and SPEA2. However, unlike MOEA/DD, MOMBI2 has a large variance: it attains the biggest Hypervolume values (considering single executions) but also minimum values smaller than the maximum values of NSGA-II and SPEA2. MOEA/DD is more stable in its results, even without reaching the maximum value among the algorithms, which makes it the best choice for this problem. This is even clearer when we take the IGD+ values into consideration, where MOEA/DD is clearly the best-performing algorithm in all aspects.
For the five-objective problem, we split the objectives into four groups by combining objectives one and two, which are used in the two-objective problem, with one of the other objectives. Because of space limitations, we plot only two fronts here; the other images can be seen at **this link**.
Figure 10, which shows objectives 1, 2 and 3, gives a view of the Pareto front shape, which is somewhat linear and disconnected. NSGA-II has solutions at the extremes while MOEA/DD is more spread out.
Figure 10: Pareto Front for algorithms for objectives 1, 2 and 3

Figure 11: Pareto Front for algorithms for objectives 1, 2 and 4
| Parameters | Fairness | Equality | Wealth | GainedAmount | CollectPortion |
| --- | --- | --- | --- | --- | --- |
| collect = [70.54%, 12.35%, 27.00%, 44.45%, 37.08%] | | | | | |
| redistribute = [99.79%, 18.99%, 89.39%, 77.21%, 86.50%] | 0.8 | 0.76 | 0.24 | 1.05 | 0.29 |
| catch = 96.05%, fine = 84.3634% | | | | | |

Table 2: NOVA best solution selected by the Reasoner engine for five objectives
| Parameters | Fairness | Equality |
| --- | --- | --- |
| **Montes and Sierra [15]** | | |
| collect = [20%, 29%, 26%, 35%, 27%] | | |
| redistribute = [20%, 22%, 19%, 26%, 13%] | 0 | 0.95 |
| catch = 44%, fine = 61% | | |
| collect = [1%, 30%, 37%, 72%, 66%] | | |
| redistribute = [2%, 33%, 42%, 24%, 9%] | | |
| catch = 45%, fine = 56% | | |
| **NOVA** | | |
| collect = [78.45%, 51.92%, 60.63%, 56.13%, 63.15%] | | |
| redistribute = [52.75%, 69.74%, 47.87%, 51.03%, 54.30%] | | |
| catch = 91.72%, fine = 47.5155% | | |
| collect = [1.96%, 16.38%, 35.72%, 25.76%, 36.19%] | | |
| redistribute = [51.82%, 43.57%, 52.13%, 58.69%, 63.23%] | 0 | 0.93 |
| catch = 99.88%, fine = 22.1505% | | |

Table 3: Best two-objective solutions generated by NOVA according to the Reasoner compared against solutions provided in [15]